If, as you say in your article "Why Open Source Misses the Point of Free Software", "open source is a development methodology; free software is a social movement", why do you advocate not using the term 'open source', particularly when it is being used only in a technical/development-methodology context?
Do you object to the use of the term 'Free/Open Source Software' (FOSS)?
What, if anything, do you think Linux should do to improve network security?
The reason I ask is that runtime environments allow, and in some cases require (depending on the tools you're using), programmers to be experts in memory management and systems programming - but by and large the vast majority are not. This leads to zero-day exploits hiding in various applications, including the application-layer parts of the OS. Is anyone giving thought to prevention, instead of chasing bugs after the fact, and what will that look like in the future?
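To make that concrete, here is the textbook shape of the bug class I mean - a hypothetical C sketch, not taken from any real project:

    /* The fixed-size buffer trusts caller-controlled input; any name
       longer than 15 bytes stomps adjacent stack memory - the raw
       material of many zero-days. */
    #include <stdio.h>
    #include <string.h>

    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);               /* no bounds check: classic overflow */
        printf("hello, %s\n", buf);
    }

    /* The fix is mechanical, but nothing forces a programmer to write it: */
    void greet_safe(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);  /* truncates instead of overflowing */
        printf("hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        greet_safe(argc > 1 ? argv[1] : "world");
        return 0;
    }

Every runtime that hands programmers raw memory makes this mistake possible; prevention would mean making it impossible to write, not finding it after it ships.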
For most of us who have been around Unix and Linux for any length of time, Unix and/or Linux is not the GUI. It is the kernel and all of the other Single Unix Specification/POSIX-compliant parts that make up the command shell, tools, and system programming APIs. The GUI is not specified in the standard - and the variance among GUIs makes that problematic at best, and confusing at worst.
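To illustrate, here is the sort of thing the standard does cover - a trivial sketch that any POSIX-conforming system should compile and run unchanged, no GUI involved:

    /* Pure POSIX: the same program behaves identically on Linux,
       the BSDs, Solaris, macOS, and so on. */
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "written with nothing but POSIX calls\n";
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }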
As a result, I think it reasonable that when someone asks 'what's the best distro to learn Linux?', the response should be a 'no frills' distro - one that lets users dig in and learn the command-line interface and tools - hence the answers seen that focus on that area.
Perhaps for some people the question should be, 'what's the best distro for a desktop user?' - then more feature-rich GUIs might be applicable.
This has nothing to do with elitism. It has to do with a misperception of the asker's goal. Perhaps Linux geeks should take the time to clarify what the person asking really means.
You get what you pay for. 'nuff said.
Interesting - failure of user space in this way is exactly why we have zero-days.
I would like to see this happen - but several things make it improbable:
1. Von Neumann architecture. As long as data and instructions exist in the same address space, poorly written apps will allow it to be abused.
2. Complexity of current software. The more complex the software, the more likely it contains a bug that enables #1. Given how programmers stitch together preexisting modules without understanding what is being done on the underlying system, I only expect that to continue expanding.
It should be instructive that Java was supposed to be that sandboxed layer... and it has so many zero-days it looks like Swiss cheese.
Now - how would we avoid that and make an unhackable userspace?
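One building block we already have is hardware-enforced W^X: a page of memory may be writable or executable, but never both at once. A minimal sketch of the discipline, assuming Linux on x86-64 (mmap and mprotect are the real POSIX calls; the six bytes of machine code just mean 'return 42'):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        /* Allocate one page: writable, but NOT executable. */
        unsigned char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* x86-64 machine code for: mov eax, 42; ret */
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
        memcpy(buf, code, sizeof code);

        /* Flip to read+execute and DROP the write bit, so the code
           can run but can no longer be modified as data. */
        if (mprotect(buf, page, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect"); return 1;
        }

        int (*fn)(void) = (int (*)(void))buf;
        printf("code page returned %d\n", fn());
        munmap(buf, page);
        return 0;
    }

This doesn't make userspace unhackable by itself - it just blunts #1, since injected data can no longer simply be executed as code.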
From the interview, Michael Dell lumped tablets into a bucket called 'PC'. If they are selling more tablets, the cost of their desktop PCs will have to go up given slackening volumes. Volume of demand sets price - particularly if the business becomes a non-volume business.
Let's face it - if you want a desktop system, its price may be less than a server with the same capabilities (with servers you are really paying for redundancy and maintenance), but it will cost more than we are used to paying today, regardless of whether you build your own or buy a name brand (and given the downgraded performance and expansion capabilities of most desktops, you'll probably want to build your own).
I run Linux at work - and here is what I would classify as my Pro Applications:
I can also handle email from my Android based phone.
Our servers are also running Linux.
Therefore the argument that Linux can't get real work done is silly to the point of absurdity.
I perceive what is happening as signaling a sea change. Commercial interests (Red Hat, Canonical, etc.) see now as the time to push their agendas - if they control something, then they can control its evolution. However, they are forgetting one thing: Linux users want results. It is plain to me that most of these large distros don't give a rat's patootie about the average user.
I think the time is ripe for new distros to emerge - and if they address a number of key issues for the users they can cause the big boys problems they may be institutionally incapable of resolving. Time for a shake-up in the Linux space.
I think you're lumping everyone into the same boat. I love Linux - but I'm not really happy with the way most of the UIs on top of it are going. I'm not a fan of Windows either. I also have Macs - and OS X's UI is close - but I'm not really happy about the iron fist Apple holds over it (e.g., I would prefer to hover my mouse pointer and have the window become active - but not be brought to the front - as I've done with every Unix/Linux system I've ever used).
I've finally come to the conclusion that the only solution for me is to build my own distribution that has all the best parts that I do like - and some things that I want that no distro has today.
I'm not broken - I'm just hard to satisfy. Of course, that is what leads to innovation. I guess the difference is, don't just sit there and complain about it - do something about it!
PC roles that other devices can't currently fill:
Cost effective software development/compilation *check*
Cost effective scientific computing *check*
Hard core simulation/gaming (high fidelity/realistic MMO Combat/FPS, flight simulation, etc - where you need more commands than are available on a common game controller - and where the graphics go beyond anything a game console can currently deal with while also providing large maps and large user bases sharing the same spaces) *check*
Can other devices do these things? Yes, if cost or other limiting factors are not an issue for you (Angry Birds on a tablet, or console versions of various FPS titles, are not at all comparable to a complex 3D simulation on a dedicated general-purpose PC in any way, shape, or form). For those of us without silver spoons in our mouths, that is not an option.
For most of us, buying a server-grade system at $5000+ to do hobby coding isn't worth it - nor is springing for an equivalent cloud-based VM to do the same. If it costs over $1000 USD over the course of several years, it is too much.
Lumping desktop/server PCs in with laptops is not useful. Laptops are not meant to run 24/7 or handle infrastructure automation - nightly builds, automatic repository updates, spidering, and so on. Laptops are made to be mobile, and the constraints placed on their energy consumption and processing power keep them from making good servers. As for other devices - thanks to the DMCA, there are no longer legal means of turning them into general-purpose devices. That leaves the PC as the last bastion of general-purpose computing.
Too many people don't realize what they would be giving up if cheap PCs were no longer available: they would be limiting the options of small developers (who historically generate the more creative output - and the next big thing; Linux wouldn't exist if Linus hadn't had access to a general-purpose PC) while strengthening the stranglehold large companies have over software development (app stores' barriers to entry).
I wonder what RMS would do?
The 'reality' that surrounds us is taken in through our limited senses (we frail creatures can't even see radar or thermal emissions) and registered in our consciousness inside the brain, perhaps without full fidelity at all times.
Therefore, everything we perceive as 'reality' could arguably be unreal when compared against a video record of the same events. Imagination has a strong influence on what we perceive (ask 10 different witnesses to a crime to report what they saw, and you'll get 10 different realities - even though they were observing the same event), as do conditions that trick our senses (mirages and sleight of hand).
Is cyberspace real? As real as anything else we take in through our senses, and think we know about the world around us.
Computer networks are not just about communications, like radio or telephone systems; the computers in the network also allow for persistence at unique addresses on the packet-switched network. Persistence allows virtual spaces to form at these network locations. They can be as simple as a threaded message board, where conversations form a complex web of shared history and culture, or as complex as 3D multimedia simulations that mimic space as humans perceive it in the 'real' world - and in both, multiple participants can form a community. To the participants, these virtual spaces hold as much importance as the other spaces in their lives - perhaps more so with the demise of the public spaces, the local bars, parks, and so on that formed a 'third place' (the first being home, the second being work), whose easy access was lost with the advent of suburbs and the fast-food drive-through (borrowing heavily from ideas put down in Howard Rheingold's "The Virtual Community: Homesteading on the Electronic Frontier").
These virtual spaces allow people to quickly find like-minded people, form alliances, and get things done. These spaces have significance - they can spill over into the real world, as with the 'Arab Spring', and change the face of countries and the world. They can also be misused, leading to group-think and the victimization of members (ask Manti Te'o about that - or your local troll).
The value of cyberspace outweighs the desire of lawyers, regulators and governments to find simple answers to complex issues.
The more code you can see, and the better you can see it, the more productive you will be.
I imagine all your workers running 13" monitors - with coke-bottle-thick glasses - cursing you behind your back. Cheap Charlie!!!