Yes, the way every single person in the entire world (statistically) does it is my "personal definition".
that's been KDE's driving philosophy for years
Yeah, but KDE is not Linux, and that driving philosophy is nowhere to be found in "Linux" as a whole. Linux is KDE/GNOME/whatever.
it's a good thing that proprietary vendors have a cost increase with every additional distro supported while free software developers get pretty much all distros for free
Please explain how proprietary vendors have to abide by different rules than open source developers. Also, your statement is quite obviously false, since the vast majority of software vendors are entirely uninterested in developing for Linux.
The latter part of your infantile rant is also proven wrong by the fact that statistically ZERO vendors develop for Linux. I have zero problems developing for, on and under Linux, and I have been doing so for years. In fact, I was part of a team that delivered one of the first commercial applications written in Java, running mainly on Linux (and SunOS and HP-UX) way back in the late 1990s. The fact that there are (statistically) no commercial apps for Linux shows two things, one of them mirrored on Android:

1/ Developing good UI apps for the Linux platform is hard, and it is difficult to choose a technology that will be consistent and functional over time.

2/ Linux users don't buy desktop apps (Android users are far more reluctant than iOS users to pay for apps).
Consider someone like Adobe: they have a vast array of industry-leading apps that are functionally identical on Windows and on OSX. You can be quite sure that Adobe has a clear separation between application logic and display logic, otherwise the apps would have far more platform-specific issues than they do. This means that the majority of something like After Effects is already mostly platform-independent. Porting from OSX to Linux would be anything but trivial, but a reasonably cost-efficient project, if there were a customer base. However, they would have to choose between (for example) KDE and GNOME. The problem is that one year KDE is the hottest thing in Linux land, the next it's GNOME. Who knows which is next? This drives up maintenance cost for a platform where nobody buys software. Business-wise it would be idiotic for Adobe to go that route with anything marginally more complex than what they have on mobile.
If anybody in Linux land ever cared about the desktop, there would be a single UI platform with a consistent and long-lived API. Nobody cares about that though. This is why Linux is a (great) server OS, a mediocre desktop OS, and will never conquer the mainstream desktop platform. Ever. OSX won the Unix-on-the-desktop war, and if you need a decent Unix on the desktop for running something that is not developer tools, you'd be insane to choose anything BUT OSX.
End words: I have been using Linux to develop cross-platform apps since some time in the 1990s. After moving into the mobile space I was "forced" to get a Mac since mobile today == iOS (Android users don't pay for apps) and you are basically required to have a Mac for iOS development (has recently changed a little, but that's another matter). OSX is what Linux could have been 10 years ago if someone had ever cared. Nobody ever did, and today there is only one rational solution for Unix on the Desktop.
is this related to projects that use C++, .NET and other object-oriented programming languages where you have tons of classes, members and files?
Or for the visual type of programming where you design GUIs by drag and drop?
If I said that I have never done that, it would be a tiny bit of a lie; I once used Delphi working like that. I only do server-side and web stuff, and for that Visual Studio is amazing. It is of course best for C#, since the entire C# compiler system is built into the text editor (it compiles the code as you write it, making things like IntelliSense exceedingly accurate and lightning fast). Though it is not as configurable as, for example, Eclipse, it is more than configurable enough, and for speed it blows Eclipse out of the water. IntelliJ a little less so, but still, it's not even close. I would not (obviously) use VS for Java, but for C++, C# and cross-platform mobile development (for example) it can't be beat.
Yeah because just that I believe that most IT stuff can be done faster using a CLI than with a GUI
A tiny tip to clue you in. Just a little bit. 99.999% of the world's population never uses their computer for "most IT stuff". Do you know what the word "mainstream" means? I use Unix variations for development, and grep, awk and all of that is great. It's not "mainstream" though. Not by a mile.
closer to moronic is pretending that it is not possible without even knowing
I do photography. I do software development. I use ImageMagick in a few places to alter photographs. Photo editing is not, and never was, something you can do with ImageMagick. Photo editing is not making global changes to images or cropping a bunch of them the same way. Photo editing is what you do in tools like Lightroom, Photoshop (or Elements) and GIMP. It is not something you do in ImageMagick. "Without even knowing"? I know that ImageMagick is not photo editing software. Attempting to present it as such is moronic by definition. It's like me suggesting Neil Armstrong should have taken his bicycle to the moon in 1969. Moronic. By definition. So, no, it's not my opinion that he's a moron; he is by definition.
calling me a liar because I have used it to do batch processing of images on many occasions
ImageMagick is very good for this. It's not photo editing though.
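To be concrete about what "batch processing" means here, a typical ImageMagick batch job might look like the sketch below. It is shown as a dry run (each command is echoed rather than executed); drop the `echo` to run it for real, assuming ImageMagick's `mogrify` is installed and on PATH, and the file pattern and sizes are illustrative only.

```shell
# Sketch: batch-resize every JPEG in the current directory into out/,
# 1600px wide, 85% JPEG quality. Dry run: echo prints each command
# instead of executing it; remove `echo` to actually run mogrify.
mkdir -p out
for f in *.jpg; do
  echo mogrify -path out -resize 1600x -quality 85 "$f"
done
```

Exactly the kind of uniform, global operation the command line is good at, and exactly not the per-image, by-eye adjustment you do in Lightroom.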
The biggest thing I hope to see change is Apple start publishing iTunes for Linux
Are you serious? iTunes is a horrible piece of shit software. On my Mac. On my Windows 10 box. I wouldn't want it polluting my Linux development environment too.
Yes - (www.imagemagick.org)
No. It doesn't even come close to working for what users do. Try not to be a moron. Remember, it's better to sit quietly in the corner having everybody think you're a retard than to post in public and remove any lingering doubt.
A good terminal (like bash) lets you do stuff faster and easier than any GUI.
Cool, I just shot 10 images in succession while panning. Can you please stitch them together to make a panorama? Did I mention they were RAW images? You need to read the raw, stitch them, add 10% contrast, take exposure down about…
Here is a clue for you: The average person can do basically none of the work they regularly do on a computer from the command line, and if you could cobble together stuff to do some of the above, it would be insanely difficult compared to firing up Lightroom or Capture one.
the project files for Visual Studio is a complete nightmare
And now they are no longer even XML, but JSON. That'll change every single release too
it's easier to tell someone how to do something from a command line than to direct them to do something via the gui.
Really? Wow! Great! Now, can you please tell me how to do this. I have a Sony A7R2 camera and I shoot raw. I would like to open the file, reduce exposure by 1.3 stops, adjust the R and G curves slightly, add some sharpening, set a white point, set a black point, lift the shadows a little, take down the highlights and publish it in aRGB for printing and sRGB for the web. Using the command line if you please.
For the vast majority (like 99.9%) of the work people do on their computers, the command line is utterly useless. Sure, when I set up a new Node project it is great, but then again, 99.9% of computer users don't know what Node is.
Well, there are a few errors here, but..
Not really part of the OS
NTFS - designed to fragment
Yeah, this is true, but there is a reason for it. A large multi-user system running on slow disks will (statistically) benefit from a somewhat fragmented file system. Do the maths. Of course, this hurts WNT on the desktop. The general idea is that one user task will not process an entire file in the time slice allotted for it to run, so when it is pre-empted, the next task will need to read a different file somewhere else on the disk, in other words, moving the read head. Before that task is finished with its file, it will be pre-empted in turn, the read head will be moved again, to another place on the disk, and so on. In such a scenario a fragmented file system will have higher performance than a non-fragmented one, because the read head moves shorter (on average) distances each time.
Imagine two files on a one-platter spindle. One file is on the "inner" side of the spindle, the other on the "outer". Two processes are reading one file each, but are being pre-empted multiple times during the reads. For each task switch, the read head has to move from the outer to the inner part of the spindle, or inner to outer; in other words, for each time slice, the read head moves across the entire disk. If the files are fragmented and the fragments are spread across the disk randomly, the disk head will, on average, only move a fraction of that distance each time (about a third of the full stroke, if the fragments are uniformly scattered). So, at the time of design and implementation, based on the purpose of the OS (both big server and desktop were imagined), an intentionally fragmenting file system made sense. The problem is that one has to live with decisions like that for a long time.
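You can sanity-check the head-travel argument with a toy simulation. The model below is mine, not from any NTFS design doc: the platter is the interval [0, 1], the unfragmented files sit at the two edges, and fragments land at uniformly random track positions.

```python
import random

random.seed(42)
N = 100_000  # number of task switches to simulate

# Unfragmented layout: file A occupies the innermost track (position 0.0),
# file B the outermost (1.0). Every pre-emption drags the head full-stroke.
unfragmented = sum(abs(1.0 - 0.0) for _ in range(N)) / N

# Fragmented layout: each read lands on a fragment at a uniformly random
# track, so each switch moves the head |X - Y| for independent uniform X, Y.
fragmented = sum(abs(random.random() - random.random()) for _ in range(N)) / N

print(round(unfragmented, 3))  # 1.0 (full stroke every time)
print(round(fragmented, 3))    # ~0.333, since E|X - Y| = 1/3 for uniform X, Y
```

Under these (admittedly crude) assumptions the average seek drops to roughly a third of the full stroke, which is the whole point of the design trade-off described above.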
A fundamentally broken and insecure security model
Again, in the OS, no, the model is not broken, but the way Microsoft configured it, it did become broken, mostly because of the elevated privileges needed for the first few years to do just about anything. But still, not a bad feature of the OS. In fact, again in the OS implementation, it beats the woefully simplistic and inadequate Unix security model of the time (and for many, still at this point in time).
Android (has a Linux kernel) has 86.2%
When I develop for Android (I do) I develop for one environment using one set of APIs, where my application will be deployed on a consistent, coherent and fully sane user environment. None of that is true for desktop Linux. The fact that Linux is not having a showing on the desktop has nothing to do with Microsoft and everything to do with Linux/GNOME/KDE/X/(all kinds of other shit).
For the average user, choice is bad, options are bad, configurability is bad. Users don't want options and choice, they want consistency. Microsoft and Apple, even through the Win8-10 debacle, give them that. The fact that nobody in the Linux community has been able to put together a coherent user experience is the ONLY reason Linux has failed, and will continue to fail, on the desktop. Luckily for most, the desktop is becoming less and less relevant.
because when they bought their computers, Windows was already installed
Nonsense. The reason people don't use it is because it is a mess, and the desktop environment(s) were never a priority. If Linux had a single desktop environment that everybody used, and everybody developed to, it would be far more successful on the desktop. As it is now, you'd be almost completely insane to develop mainstream software for Linux.
The only successful Linux for end-users is Android, and Android (mostly) fixes this problem.
Simply put, for the end-user, Linux on the desktop is still garbage with no real software.