Journal: Linux as a desktop OS

Linux can indeed make it in the desktop market. So far, though, every good Linux distribution has seemed to try too hard at making Linux mimic other Unix platforms. There's a reason we aren't all using SGIs as our desktop computers these days, and why AIX just hasn't quite made it into the standard business PC market.

Today, Linux is not usable as a generic desktop platform. Yes, I know many of you do use Linux as your preferred desktop OS; I do as well. But the usability of Linux cannot be accurately measured by those of us who are already on the bandwagon, as we have the motivation to understand how the OS works. Casual PC users only wish to get on, perform their desired task, and move on. Thus, Linux will have to conform to a few standard ideas before it can ever become mainstream as a desktop platform.

#1. Installs have to be easy
Most software you find for Linux is either in a package made by the distribution creator, or must be compiled and installed manually. The brutal reality is, this isn't good enough. Most package management systems are still too complicated or daunting for casual users to understand. RPM systems do work well, but they require a user wishing to uninstall something to sift through system packages in order to find the product they want to remove, giving them more than enough chances to really screw up their OS by removing the wrong packages. As sad as it is to say, compiling yourself is no better. Casual users do not understand configure, make, make install. Not only does this method provide no easy uninstall, but in the (more likely than is preferable) event that some part of the compiling process fails, the user does not have the knowledge to fix the problem. Frankly, most do not wish to learn, either. The general idea is, "it runs out of the box". If it doesn't, then there is something wrong with it, or at least that's what they've always been taught.

So the first issue to address is putting together a simple installer to manage the software that has been installed on the system. Windows has the Windows Installer and InstallShield; Linux needs something similar. The key is to get the base install in place, and then treat everything else as a separate application. Common Unix tools that we can't live without and that don't change often (if at all anymore) can be considered part of the install base, or part of a selectable Unix utilities software package. (This includes things like grep, awk, sed, less, more, cat, sort, uniq, etc.) There should also be a generic 'start menu'-like directory construct so that applications can insert themselves into the user's menus. Otherwise the user would have to add the entry manually, which is undesirable since we are trying to cater to the less apt.
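To make the idea concrete, here is a minimal sketch (in shell, with entirely hypothetical paths and program names — no such installer exists) of what such an installer's bookkeeping could look like: every file it copies gets recorded in a manifest, so uninstalling is just removing what the manifest lists and nothing else.

```shell
# Hypothetical manifest-based installer sketch. PREFIX and "myprog"
# are made-up examples, not an existing tool.
PREFIX="${PREFIX:-/tmp/myprog-demo}"
MANIFEST="$PREFIX/.manifest"

install_file() {
    # Copy one file into the install tree and record it in the manifest.
    src="$1"; dest="$PREFIX/$2"
    mkdir -p "$(dirname "$dest")"
    cp "$src" "$dest"
    echo "$dest" >> "$MANIFEST"
}

uninstall_all() {
    # Remove exactly the files we installed -- no sifting through
    # system packages, no chance to delete the wrong thing.
    while IFS= read -r f; do rm -f "$f"; done < "$MANIFEST"
    rm -f "$MANIFEST"
}
```

The point of the manifest is that uninstall never has to guess: a user can remove an app without ever seeing the package database.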

#2. Too many library dependencies; binary-only installs must be feasible and reliable.
Linux has a lot of libs. That's fine. The real problem occurs when applications start using unusual libs to perform certain operations. For instance, there is a lib for working with JPEGs. You'd be surprised how many non-paint-related programs use this lib, all because they don't want to link it statically into their code, or because they don't want to use bitmaps (which are a lot easier to work with anyway). Once the lib is updated, every actively developed program that uses it will compile against the new version, and the end user will have to upgrade the lib the next time they update the program. This is unacceptable, as it breaks the 'works out of the box' requirement.

The solution is to package all required libs with your program, or to link statically. I recommend statically linking where it makes sense to do so, and packaging the other required libs with the program using the installer app. Packaged libs should reside in a subdirectory of the application's install directory so that they can be loaded with a relative path. You don't know what the user has already installed; don't assume that you can put your library in the standard lib path, or you may end up overwriting something you shouldn't! System libs should have their entry points looked up by symbol, so their version shouldn't matter so long as symbols are not removed. Compatibility must remain intact at all costs. No one wants to see their applications stop working when they upgrade their OS.

Commercial applications need to become easy to deploy on Linux. To do so, vendors must be able to reliably compile their applications and distribute them in binary-only form. Not everyone is so quick to jump onto the open source bandwagon. That's fine if they don't; there's room for both followings in this world. By making binary-only installs easy for software companies to produce, we open the doors to a much larger user base, which will inevitably help us in the end regardless of how we get them.
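One common way to realize the "package your libs, load them by relative path" idea is a small launcher wrapper. The sketch below assumes a hypothetical app installed in /opt/myprog with its bundled libraries in a libs subdirectory; it prepends that directory to the dynamic loader's search path before starting the real binary.

```shell
# Sketch of a launcher wrapper for a hypothetical app installed in
# /opt/myprog, with its bundled libraries in /opt/myprog/libs.
APPDIR="${APPDIR:-/opt/myprog}"

# Prepend the bundled lib dir so the app's own copies win over
# whatever versions the system happens to have installed.
LD_LIBRARY_PATH="$APPDIR/libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH

# A real wrapper would finish with: exec "$APPDIR/myprog.bin" "$@"
```

Because the wrapper derives everything from the install directory, the app keeps working no matter where the user chose to put it, and nothing is written into the system lib path.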

#3. Put your application in the proper place.
This is partly the responsibility of the installer app, but it should be outlined for those who wish to take on the endeavor of writing installer apps. Applications should be installed to a single place in the filesystem as much as possible. If the user says you go into /opt/myprog, then your app's binary goes in /opt/myprog, and any other packaged files go into /opt/myprog or a subdirectory thereof. If your application is not a system supplement or service (like Samba), then don't put a config file in /etc. Your app doesn't really need to know where it's been installed; if there's a config file involved, keep it in the same dir as the binary. You'll be able to find it pretty easily that way. The user's environment will be available to your app, so $HOME is where you can find the user's home dir. If you need to save user-specific data, put it in a hidden subdirectory of $HOME.

It is very important that rule #3 is followed as much as possible. Knowing where all of an application's files are makes it much easier to uninstall the app if the installer database dies for any reason. It also adds the benefit that system admins can easily export specific applications via NFS to various user systems in business environments. If all employees need the word processor, but only tech support needs the web browser and email, then it is simple to export the word processor to all PCs, and the internet tools only to the PCs in the support department. (Yes, you could also export the entire dir tree to all and restrict access using group permissions, but NFS is far from secure when it comes to pretending that you have a different UID than you really do.)
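Rule #3 really boils down to three locations, sketched here with a hypothetical app name; the app touches nothing outside these paths.

```shell
# Where a hypothetical app "myprog" keeps its files under rule #3.
APPDIR="${APPDIR:-/opt/myprog}"      # everything installed lives under here
CONFIG_FILE="$APPDIR/myprog.conf"    # config sits next to the binary, not in /etc
USER_DATA="$HOME/.myprog"            # per-user data in a hidden dir under $HOME

# The app can create its per-user data dir on first run:
mkdir -p "$USER_DATA"
```

With this layout, wiping /opt/myprog uninstalls the app completely even if the installer database is gone, and an admin can NFS-export /opt/myprog on its own.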

#4. X11 must die.
Before you get all upset about this, hear it out, as it will make sense. X11 was designed to be a network-ready display method, particularly useful for dumb terminals that don't have enough processing power to run their own applications. The problem is, this was really only useful a decade ago. Computer components are extremely cheap, and businesses are already used to the idea of dumping at least $500 into a computer system for each employee who needs one. Dumb terminals never really caught on, and they aren't likely to make a comeback, so why drag this out any longer?

By dumping X11, we are able to provide Linux (and Unix in general) desktops with more efficient displays. There's no reason for X to use 20MB of memory just to load all the widgets for an app in case it needs to redraw them at some point. By utilizing a Windows-like GDI, we'll be able to keep the memory footprint lower by only storing displayed graphics in video memory. If part of the screen needs to be redrawn, each affected application will be notified of the area it needs to refresh, and the application will be responsible for making the GDI calls to keep itself displayed properly.

I'm not saying that X11 needs to be cut out entirely. There will always be a need for remotely displayed applications in certain situations. The replacement for X11 should also have the ability to run in a remote display mode, where GDI calls are transmitted across the network to a receiving machine. The main point is, this will not be the default purpose of the display manager. X11 support will not be native, and will require an extension application to provide X11 capabilities for backward compatibility. Not all Unixes will be worth porting the new display manager to, and the X11 compatibility application will be helpful in remote displaying from those systems.

Obviously this change will require display drivers to be rewritten. This is a chore, so there will need to be a generic X11 driver available for those devices that have X servers but do not yet have drivers for the new system. Obviously this will be slightly slower, but it is better than nothing until the manufacturer or a volunteer writes the driver.

#5. Inter-app Messaging
So far Windows has better drawing routines, and it has a better messaging system as well. Messages can be sent and waited on, or sent and left on their own. The popular desktop managers seem torn between two messaging systems; both have their pros and cons, and neither is willing to give in to the other's design. The best solution in that case is to make a Windows-like messaging system integrated into the display manager. Both GUIs can maintain their own systems, but future applications will likely use the Windows-like message system, as it will work regardless of which GUI the user is running. Commercial applications that are already written for Windows will be easier to port to Linux when the messaging system is the same. A clone of MFC will also need to be developed to support those applications that were written using that framework.

#6. (My personal peeve) Get apps out of the $PATH
Core OS apps like grep, vi, cat, more, and whatnot can remain in the path. That is fine. But Netscape? GNOME? What is Gnumeric doing in the path? These are graphical applications that are only usable from within the GUI desktop. There is no reason for them to be in the $PATH. They should be startable from a menu provided by the desktop manager. If a user wishes an application to be in their path, they can put a symbolic link to the application's binary in a directory that is in the path, like /bin or /usr/bin or /usr/local/bin. Better yet, they can put a bin dir in their home directory, add that to the path, and link from there. Either way, this keeps the $PATH variable small while allowing the user to customize the CLI to their own tastes. Applications should not make this decision for the user. They may present the option during the install, but it should default to 'no' in case the user does not know what it means.
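The opt-in approach sketched in shell (the app name and install location are hypothetical): the user creates a personal bin directory, adds it to $PATH once, and symlinks in only the apps they actually want on the command line.

```shell
# User-driven, not installer-driven: one personal bin dir, one symlink
# per app the user wants on the CLI. "/opt/myprog/myprog" is made up.
mkdir -p "$HOME/bin"
ln -sf /opt/myprog/myprog "$HOME/bin/myprog"

# Added once to ~/.profile, rather than by every installer:
PATH="$HOME/bin:$PATH"
export PATH
```

Removing the app from the CLI is then just deleting one symlink; $PATH itself never grows beyond that single personal entry.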

Yes, these are a lot of changes, and they are very large changes in many respects. However, many of them have been needed for a long time. Windows is simple for the average user to operate. This was not a mistake: Microsoft has spent millions of dollars on usability research to find ways to make computers easy enough for average people to use. It would be foolish for Linux to ignore this data and remain a cult following. Windows is also simple for software vendors to write for. This was not a mistake either. Software titles attract users, end of story. If the OS can't run the software users want, it will never become mainstream. We need to make Linux attractive to software vendors so that it is considered a good target platform. Once the user base starts to increase a little, the platform will become more attractive, more software will be written for it, and so on.

Please do give me your comments. Try to restrain yourself from saying something negative until you've thought about it for at least a day. My goal is to make Linux mainstream. These are the methods that I feel will greatly help it get there.
