New Way To Grade Decay of Computer Installations

skojt writes: "I saw this link in Dr. Dobb's Journal (the paper edition) about the behaviour of a slowly decaying computer installation. It refers to a Windows installation, but as the author writes, 'there will shortly be ports to Linux, Mac OS X, and other Unices; we are confident these OSes are just as prone.'"
  • by The World Will End ( 595617 ) on Monday July 29, 2002 @06:24PM (#3974628) Homepage
    That's why you use a package manager.

    If you use rpm, then use checkinstall: it generates RPMs out of a tar.gz easily. You run it instead of "make install".
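
    A minimal sketch of that workflow (flags from memory, so double-check; "foo-1.0" is a stand-in for the usual autoconf-style tarball):

        tar xzf foo-1.0.tar.gz && cd foo-1.0
        ./configure && make
        checkinstall -R    # runs "make install" under watch and packages the result as an RPM
        rpm -e foo         # later: clean removal, since rpm now tracks every file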
  • by sterno ( 16320 ) on Monday July 29, 2002 @06:32PM (#3974685) Homepage
    While Linux is prone to falling into dependency hell, it doesn't suffer from the same performance degradation that you get in Windows. In Windows, you seem to have to periodically re-install everything just to get your system to load in a reasonable amount of time. You might get into a dependency nightmare in Linux when trying to install something new, but the system performance doesn't seem to suffer from cruft-related degradation.

    I've found in my Linux experience that if I try to be experimental and cutting edge, I end up, eventually, in situations where it becomes a major nightmare to upgrade. On the other hand, if I leave my system relatively stock, tools like red-carpet, up2date, or apt-get do a pretty damn good job of hiding the dependency hell from me.

    All things considered, I'd rather have it be a pain to install a piece of software than have it be easy and slowly cause my system to become unusable for no apparent reason.
  • Well... (Score:3, Informative)

    by RinkSpringer ( 518787 ) <[rink] [at] [rink.nu]> on Monday July 29, 2002 @06:37PM (#3974729) Homepage Journal
    ...the upside of an open-source OS is that you can browse through the source and figure out *why* it is messing itself up... :) And most likely, fix it while you're at it.

    That's the power of open source.
  • Plug, plug (Score:5, Informative)

    by Phexro ( 9814 ) on Monday July 29, 2002 @06:47PM (#3974793)
    Use Debian [debian.org]. I'm not saying that it's immune to cruft, but the fact that they have close to 9000 packages which all comply with the Debian Policy [debian.org] (as well as the FHS [pathname.com]) means that everything plays nice together, and if it doesn't, it's a bug. There's even a tool called Cruft [debian.org], which will locate cruft on your system.
  • Re:BSOD (Score:3, Informative)

    by man_ls ( 248470 ) on Monday July 29, 2002 @06:56PM (#3974852)
    Botched upgrades to the kernel itself and hardware glitches (video driver isn't right... SCSI controller craps out... drive fails while in use... NIC unseated during installation of cable...) have been the only problems I've seen.

    When I had low-quality (I mean LOW quality) video hardware, I got IRQL_NOT_LESS_OR_EQUAL errors while playing games all the time, but with the upgrade to a brand new GeForce3, there have been no problems.

    Win2K seems to have gotten software stability down pretty well.
  • I used to keep my Macs working for 3-4 months before having to reinstall the whole shebang. I have only reinstalled Mac OS X once since October. Macs are quite easy to keep clean; after some time you know where "cruft" accumulates. If anyone's interested, Aladdin sells a product called Spring Cleaning [aladdinsys.com], which I don't use. I clean my Mac by hand. Seriously, on Mac OS X the only messy places are ~/Library and /Library. If you put your personal mess in your home folder, that is.

    My GNU/Linux distro of choice is Debian. If you use Debian, you know how quickly apt installs those libraries. Have a look at deborphan [debian.org], which "finds 'orphaned' packages on your system. It determines which packages have no other packages depending on their installation, and shows you a list of these packages. It is most useful when finding libraries, but it can be used on packages in all sections". I run apt-get remove `deborphan` about once a month.

    Another great tool for the GNU/Linux user is cruft [debian.org], which, as the name says, tries to find the cruft on your system. It generates many false positives (e.g. /vmlinux), so use it with plenty of grep and caution :-).
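
    Roughly what my monthly pass looks like (an untested sketch; removing one batch of orphans can expose new orphans, hence the loop, and the cruft invocation is from memory):

        while [ -n "$(deborphan)" ]; do
            apt-get -y remove --purge $(deborphan)
        done
        cruft > cruft-report    # then grep the report by hand; expect false positives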

    Which tools do you guys use to keep your system clean?

  • by g4dget ( 579145 ) on Monday July 29, 2002 @07:00PM (#3974879)
    Windows decays because its package management and system resource databases suck. Sorry, but there is no more polite word for it. The registry is a prime example of those gee-whiz solutions ("why don't we put all this information into a 'real' database") that look neat but just don't work well in practice; Microsoft seems susceptible to implementing those kinds of systems.

    MacOS's preferred installation method ("drag-and-drop") doesn't suffer from quite the same problems as Windows. It's clean, simple, and easy to understand, and it doesn't leave junk all over the disk in mysterious places. But some applications install differently, and there is no single software update mechanism. Still, so far, OS X is holding up well on my systems, showing no signs of decay. But maintaining applications at the latest versions is a significant amount of work compared to Linux.

    For Linux distributions, it depends on the installation and update method. Debian systems can be updated for years without "decay". In fact, I haven't seen one "decay" yet, either ones that are updated regularly or ones that aren't. Because all packages come from a single source, they are all integrated, cross-checked, and tested together, a luxury that neither Windows nor MacOS has.

    The fact that, in Linux, each program has its own configuration files, often a system-wide one and one in the user's home directory, also makes Linux enormously more robust. There is no single point of failure, and if some program's defaults get corrupted, it's trivial to fix and trivial to tell users how to fix it ("rm .foobar" and you should be fine).
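
    A slightly safer variant of that fix, in case the old settings turn out to matter after all (".foobar" standing in for whatever the app calls its dotfile):

        mv ~/.foobar ~/.foobar.broken    # the app recreates sane defaults on next start
        diff ~/.foobar ~/.foobar.broken  # later: see exactly what had gone bad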

  • by reverse flow reactor ( 316530 ) on Monday July 29, 2002 @07:26PM (#3975036)

    Disk images are great for most applications, and I used a simple image for quite some time. But the best parts about VMware are twofold:

    1) you can store multiple different images and boot them as you please

    2) the little message when you shut down the virtual machine: "Commit changes to disk?". If you liked a software package you installed, you just say yes. If you didn't like it, say no. You don't have to update the image yourself; it just did it for you.

    Mind you, VMware and the host OS do take up system resources. In some cases you want the guest OS to have as many resources as possible, and the disk image is the better solution. Or you run a public computer lab, and every morning the computers load the latest image from the server and boot that.

  • by Burdell ( 228580 ) on Monday July 29, 2002 @07:48PM (#3975182)
    Most every Unix variant is just as prone to this as Windows, depending on how the system is managed. If you use your package management system religiously (be it rpm, deb, SysV pkg, etc.), then you can track everything back down.

    However, the first time that you do a blind make install on some random software package, you start to lose control. Most problems I've run into managing Unix systems are because of this.

    One server where I work now started out (as I understand it, anyway) as OSF/1 3.0, then was upgraded to 3.2, then Digital Unix 4.0[ABCDE], then Compaq Tru64 Unix 4.0F. Lots of additional software had been installed over the years with little or no thought to tracking what was what (how do you figure out, three years after the fact, where /usr/local/lib/libfoo.so came from?). When it came time to upgrade it to Tru64 5.1A, I did not take the upgrade path. Instead, I sat down with a spare server and did a clean (and documented) install of 5.1A. Then I got a port of RPM 4.0 for Tru64 working (I'm a Red Hat guy; if I used Debian I'd have used dpkg) and packaged every bit of non-Compaq software we use in RPMs (with our configs built in). There is a well-defined directory for local scripts that has very little in it; every other file on the system can be tracked via either RPM or Tru64's native setld package management system.

    It was a lot of work the first time, but when I upgraded the second server, it didn't take much time at all. Also, when I've needed to upgrade software, it was much easier to start with a source RPM of the previous version with our local changes set up as patches.
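
    For anyone who hasn't packaged software before, the spec files involved are short. A bare-bones sketch (names, paths, and the patch are all hypothetical; real specs want proper dependencies and a BuildRoot):

        # foo.spec -- minimal sketch of a locally packaged tool with our config patched in
        Name: foo
        Version: 1.0
        Release: 1
        Summary: Local build of foo with our configs baked in
        License: GPL
        Source0: foo-1.0.tar.gz
        Patch0: foo-local-config.patch

        %prep
        %setup -q
        %patch0 -p1

        %build
        ./configure --prefix=/usr/local
        make

        %install
        make install DESTDIR=$RPM_BUILD_ROOT

        %files
        /usr/local/bin/foo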

    The moral of the story: use your package management system! It may seem like it gets in the way sometimes, but it can save your butt (or your successor's, in the case of a company server) down the road.

  • Maybe, maybe not (Score:4, Informative)

    by Restil ( 31903 ) on Monday July 29, 2002 @07:51PM (#3975199) Homepage
    Linux MAY be prone to SOME of these problems, but I'm willing to trust that the great majority of what causes Windows systems to go nuts on a regular basis simply won't affect Linux, not because it's immune, but because it's not used in the same way.

    First of all, I'm willing to take at face value the fact that a 2K/XP system running only well supported, stable drivers on stable hardware running only a small set of vital application programs will be unlikely to encounter any serious problems. I have no personal evidence to support this, but a few people I know swear by it, so I'm willing to accept it under these conditions.

    However, 2K/XP might have gotten it right, but it took MS 20-some-odd years to get around to getting it right. And it requires a fairly new computer to be useful. Win95 runs great on old (read: CHEAP) hardware as far as performance goes, but it has serious stability problems. If I want to run a 7-year-old version of Linux, I'm willing to bet that the last release in the 1.0.x series is just as stable in a production environment as the latest 2.4.x release is. Sure, it might not be as feature-packed, and might not have the extensive driver support, but if it serves the purposes I require, it will work flawlessly.

    As for drivers, Windows virgin installs come with a set of drivers for a lot of legacy hardware. If your system is a couple years older than the version of Windows you're installing, it probably has the drivers for all your hardware. For any other hardware, you'll have to use vendor supplied drivers. If these drivers are unstable, Windows can misbehave, and it wouldn't necessarily be the fault of Windows. Certainly Linux must have the same problem, right?

    The simple fact of the matter is that those who support Linux tend to support the same software methodology. The drivers, like the kernel, are all open source. They're heavily peer-reviewed, and those that are integrated into the kernel are solid. And if bugs are found, they're fixed. If the original programmer doesn't/can't/won't fix it himself, there are countless others who can. In many, if not most, cases the drivers aren't even written by the manufacturers of the hardware, but by kernel hackers, on their own time. These guys have no interest in being first to market. They have no desire to play the "just get it working, we can fix it later" game. Their only interest is in releasing solid, efficient code, because if they don't, they know someone else will be tearing it apart.

    Therefore, the drivers used on Linux systems tend to be rock solid. So you have a rock-solid kernel and drivers. Now for the applications. Applications for Linux-based operating systems tend not to overwrite system libraries with their own versions. General-purpose applications are not generally run as root. The worst a normal user can do on a Linux box by running buggy applications is to cause them to crash. Certainly, he can send the machine into thrashing or fill up the hard disk, but there are ways the administrator can restrict the type of activities that cause such outcomes (see the sketch below). 2K/XP have methods to prevent these same problems, but many of the problems involved with installing misbehaving applications simply shouldn't be a problem in the first place.
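
    The usual knobs for that kind of restriction on a stock Linux box (a sketch; the limits.conf item names are from memory, so check the pam_limits docs):

        ulimit -v 262144      # per-shell: cap virtual memory at 256MB
        ulimit -u 100         # per-shell: cap the number of processes (blunts fork bombs)

        # or system-wide via /etc/security/limits.conf:
        #   someuser  hard  nproc  100
        #   someuser  hard  as     262144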

    As for adding cruft to the operating system, my Linux box has the same number of directories off of / as it did the day I installed it, with three extras added, one for each mounted HD on the system. My /home directory has one directory for every user on the system. My personal home directory, as I suspect others' might be as well, is an organizational nightmare. But all that "cruft" is isolated. I know where the mess is, and I know only where the mess is. I don't have /etc, /bin, and other important directories littered with files that have no business being there. And no rogue application is likely to change that fact. Sure, an application might add a directory to /usr/local and leave a large bloated mess under there, but if I decide later that I want to remove it, I can do a recursive delete of one directory and it's gone. There aren't any mystery registry values that are going to cause me fits the next time I boot the system. There might be some entries in /etc/rc or crontab, but they won't hurt anything and can be removed later as they're discovered.

    I suppose it's possible that a poorly managed Linux box can cause massive problems, just as a perfectly managed Windows box might work flawlessly. But all I can say is this: it's been 154 days since my last power failure, and my Linux server has been up for 154 days. None of my Windows boxes have that track record.

    -Restil
  • by bedessen ( 411686 ) on Monday July 29, 2002 @07:52PM (#3975212) Journal
    I decided to build a new system some time in the fall of 2000, but prior to that I had been running the original Windows 95 install that I did some time in mid-1996. There were some hardware upgrades, sure, but I never resorted to reinstalling. My systems are highly customized; I like to set everything just the way I like it. So to me a reinstall is not something I do lightly. The system was not unstable at all; it was quite a workhorse. Sure, every now and then it would have a lockup of some sort, but we're talking once every few weeks. Now that I run Win2K it's very rare indeed.

    You can manage the cruft in Windows. It's not impossible, even if you install/uninstall a lot of stuff. The important things are to know what's running (task list, services, run at startup, etc.) and to get to know the registry. You must babysit poor installation programs. Often they will add crap to startup, or icons on the desktop, or other weird things, which I would always delete. You also have to help some of them wipe their ass when you uninstall, as a lot of them leave junk behind. You have to be willing to go into the Windows system directory and examine questionable DLLs. There are a lot of tools to help with this. I recommend everyone who is interested go to www.sysinternals.com [sysinternals.com]. There you will find programs such as REGMON and FILEMON, which show you every registry access or file access in realtime, with the ability to filter. Also very useful is LISTDLLS, which shows you which DLLs are loaded by every process in memory. If there is a file that's locked, you can often find out who is using it with this program. The 2K resource kit has a free utility called Dependency Walker which will show you the library dependencies of any .EXE, sort of like ldd. You must also be familiar with certain areas of the registry, such as the part where stuff is loaded on boot, the "pending file rename" section, the section where apps install their preferences, etc.
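
    For reference, the command-line ones look roughly like this (invocations from memory, so check each tool's docs before trusting the flags):

        listdlls winword          # every DLL loaded by processes matching "winword"
        listdlls -d msvcrt.dll    # every process that currently has msvcrt.dll loaded
        depends /c myapp.exe      # Dependency Walker in console mode: dump the import tree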

    I find a lot of times when I use someone else's Windows machine I am appalled by the amount of crap they have loaded, which most of the time they aren't even aware of. Programs that load stuff on startup without being very clear about it and asking you first really peeve me. I patrol the startup folder and registry entries very strictly, and keep the task list small.

    You of course have to make sure your hardware is stable and you have to go through the process of finding a driver combination that is suitable. It can be very frustrating to mess with crap drivers and a ton of strange BIOS settings. But if you stick with it you can eventually find a combination that is bulletproof and will yield stability. If you don't put in the effort to do this, though, you will forever be messing with strange crashes.

    It can be done, but it is not for the faint of heart.

  • Mac OS 9 (Score:4, Informative)

    by Phroggy ( 441 ) <slashdot3.phroggy@com> on Monday July 29, 2002 @08:24PM (#3975388) Homepage
    Certainly a system that has been in use for a long period of time can become less stable due to increased complexity as new software is added. However, the real question is, how easy is it to clean up the mess and return to a smoothly running system, without reinstalling the entire operating system?

    The problem with Windows is the Registry. Practically nobody, including Microsoft's own programmers, knows exactly what to clean up in the Registry to get the system running as good as new, without breaking something important. In Mac OS, however, it's really quite simple. Granted, you do have to have an understanding of how the system works, so I wouldn't expect a novice to know how to do this intuitively, but I'd expect far less of a Windows user.

    The most obvious thing is the Desktop file (actually a couple of files now). This is the closest thing the Mac OS has to a Registry, and it's not close at all. Every six months or so, reboot while holding the Command and Option keys (technically, you just have to hold the keys while the Finder is loading) and it will ask if you want to rebuild the Desktop file for each mounted volume (filesystem). A couple minutes later, good as new.

    The next thing is extensions and control panels. Perhaps you've downloaded some cheezy shareware thing that's conflicting with some other cheezy shareware thing. Open the Extensions Manager, and have a look. Usually you can easily identify where most things came from; if you don't recognize something, you can turn it off, reboot, and see what happens. You can create multiple extension sets to experiment with if you want.

    Finally, preferences. Some app misbehaving? Trash the Preferences file. Everything reverts to defaults, but nothing is really broken.

    And of course, if you want to uninstall an app, usually you just need to trash the folder the app is in. Sometimes it may come with control panels or extensions; just trash those too (they're easy to identify). If you want to be thorough, trash the prefs too, although it won't hurt to leave 'em.

    I have yet to see anything easier to maintain.
  • by Darth_Burrito ( 227272 ) on Monday July 29, 2002 @09:10PM (#3975588)
    Real Player has a "feature" which allows it to protect the extensions associated with it against other programs like Media Player and Winamp, which I believe have similar "features". With Real Player I think it is turned on by default. Basically, it is not enough to register a file type with Winamp; you must also unregister it with RealPlayer through their settings wizard... or find some way to kill their Scheduler. Here is an excerpt from a RP help file...

    "Ask": During installation RealOne Player will ask you for permission to become the default media player for media types that may be assigned to other programs on your computer. Using a feature called Scheduler, RealOne Player will periodically check to ensure that your media playback preferences have not been overridden by another program, even when RealOne Player is not in use. Any media type that you have assigned to RealOne Player will be reclaimed automatically when another program attempts to override your choices.

    For example, if another program decided for you that it should be your default media player for a given media type, RealOne Player would silently and automatically correct the change to protect your original choice. If you wish to change the media types that you have associated with the RealOne Player you can follow these steps: On the Tools menu, select 'Preferences', 'Media Types', then select the media types you want RealOne Player to be associated with. Select the "OK" button to save your changes.

    You can configure the Scheduler to operate only when RealOne Player is in use by following these steps: On the Tools menu, select 'Preferences', 'Connection', 'Internet Settings', then select "Only perform automatic services while RealOne Player is in use". Select the "Yes" button when the confirmation dialog appears.

    Please Note: We will always reassociate media types with RealOne Player that are unique to RealNetworks' products and cannot be played by other applications (such as RealAudio and RealVideo).

  • by Jucius Maximus ( 229128 ) on Monday July 29, 2002 @09:24PM (#3975642) Journal
    FYI: An install of MSFT Visual Studio adds 4 MB to the windows9x registry.
  • by johnlcallaway ( 165670 ) on Monday July 29, 2002 @09:32PM (#3975672)
    Most cruft can be attributed to users who do not take the time to learn about their computers and what it takes to maintain them. How many people go out and buy a new hard drive when they run out of disk space, instead of going through Add/Remove Programs in Windows or the RPM manager in Linux, or wandering through directories checking for things no longer needed?

    I have lived with 10GB for two years now just by pruning cruft whenever I get less than 300MB free. I would love to spend $100 on 80GB, but that would only lead to more cruft.

    Linux/Unix does hold one advantage over Windows: there is no single directory that becomes crufted. (Please... I know everything falls off of slash... work with me here...) How big is your WinNT directory? Mine is 1.24GB, and contains 9,191 files. That is 12% of my hard drive space, and 10% of all files, including my p0rn! Linux/Unix doesn't put all of its eggs into one basket, making it a little easier to prune the cruft that builds up, or at least a little less dangerous.

    Face it, unless you and I are willing to spend many hours pruning the cruft on a regular basis, it is often easier to delete and rebuild. Oh yeah ... another thing Linux/Unix has in its favor. If I put all the user directories on a separate partition, I don't lose all my settings when I reinstall Linux.

    Bad registry...evil registry...corrupted registry...
  • by crucini ( 98210 ) on Monday July 29, 2002 @09:43PM (#3975708)
    Of course Linux has the nice problem of scatter-components-across-10,000-directories. I use Linux as a server platform instead of a desktop platform for precisely this reason. I can *never* find all the parts of some installs, and I despise when a program places itself into 4-5 different directories.
    If you installed from RPM, try rpm -qlp some.rpm. If you installed from source, try make -n install.
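
    And for going the other direction (figuring out which package owns a mystery file), something like:

        rpm -qf /usr/local/lib/libfoo.so    # RPM: which installed package owns this file?
        dpkg -S /usr/local/lib/libfoo.so    # the Debian equivalent
        rpm -ql somepackage                 # list every file an installed package put down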
  • by homer_ca ( 144738 ) on Monday July 29, 2002 @09:50PM (#3975741)
    Storing the configs in lots of little .ini files or conf files in /etc is more robust and fail-safe than two huge registry files. Let's say the computer has a hard poweroff, maybe from a power outage or a hard lockup from buggy drivers. Despite claims of NTFS being journaled, there will be filesystem corruption, which brings out two big problems with the registry:

    - All the eggs in one basket: With .ini and config files, only a few files are likely to be open at one time and likely to be corrupted. This limits the damage. With the registry files, you're outta luck if restoring from the .bak files doesn't work. Admittedly I haven't seen many such errors on Win2K, but Win95 was a crapshoot every time you installed a new driver.

    - Opacity of binary config files: With a text config file you can go look at the files reported to be damaged, and it's pretty obvious if they're corrupt; they'd be truncated or garbled. Filesystem corruption happens a sector at a time. What can you do with the registry, assuming the system even boots up?
  • by Anonymous Coward on Monday July 29, 2002 @09:56PM (#3975764)
    Especially if you get in the habit of compiling programs from source without a package. I don't mean compiling deb or rpm source packages, or source that can generate a package too. I mean where you just download the tgz and do a "./configure && make && make install". Those generate true cruft.
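
    If you must build from tarballs, there are halfway houses. GNU stow, for instance, gives each build its own tree and symlinks it into place, so removal is one command (a sketch, assuming stow is installed and /usr/local/stow exists; "foo-1.0" is made up):

        ./configure --prefix=/usr/local/stow/foo-1.0
        make && make install
        cd /usr/local/stow && stow foo-1.0     # symlink the tree into /usr/local
        # later: stow -D foo-1.0 && rm -rf foo-1.0   (all the cruft, gone in one shot)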

    However, I think that even though Unix can have more cruft, it's also easier to get rid of it. Windows, and especially Windows XP, has the registry, which can easily load up with junk. Who knows what the hell needs what, especially when you've uninstalled programs that don't like to remove their registry entries. Windows XP will even protect itself with backup copies of the registry, and you can only remove some files in a special way; otherwise XP will just replace the removed or user-replaced file with its own backup. It has backups of backups, too.

    Anyway, since Unix generally doesn't have a registry (for better or worse), it's also easier to remove the cruft. If you strictly follow your distribution's packaging system, cruft in Unix should be fairly low, because you have a way to track it.

    Debian (and maybe RH too) has a way to reduce cruft even further: if you modify files after installation of a package, you can have it rebuild that package with your changes included.
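
    On Debian that dance is roughly (a sketch; assumes deb-src lines in sources.list, and "foo" is a stand-in package name):

        apt-get source foo                # fetch and unpack the maintainer's source
        cd foo-*/
        # ...apply your local changes...
        dpkg-buildpackage -us -uc         # rebuild the .deb, unsigned
        dpkg -i ../foo_*.deb              # install it; dpkg still tracks every file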

    Debian rules and Red Hat drools!
  • by rusty0101 ( 565565 ) on Monday July 29, 2002 @10:35PM (#3975917) Homepage Journal
    Being accepted as a Debian package means that your package does follow the Debian policy. That means that there are over 9000 packages where the developer was concerned enough about the policy to follow through with what put you to sleep.

    The dselect, apt-get, dpkg, and gnome-apt installers do just what you are asking a package installer to do. When you build a package, using make or other build tools that support Debian packages, your package does identify what files are needed and what independently developed packages are required. It also handles uninstall very well.

    Is it perfect? Nope. But in comparison to Windows software installers, it is light years ahead.

    Of course, BSD users will brag about how their installer works for any platform that has a C compiler... and for which drivers exist for the hardware... Sounds like a really lousy way to be set up to uninstall software later, but I'm not judging the system; I don't use it.

    -Rusty
  • user mode linux (Score:2, Informative)

    by asteinberg ( 521580 ) <ari.steinberg@st ... edu minus author> on Monday July 29, 2002 @10:47PM (#3975967) Homepage
    But there will shortly be ports to Linux, Mac OS X, and other Unices; we are confident these OSes are just as prone.

    Aside from all the other comments made in defense of these other OSes here, most of which I wholeheartedly agree with, I'd also like to point out that I think this is something that User Mode Linux [sourceforge.net] will help to avoid. UML makes it a bit safer to play around with installing software that could potentially add cruft. You can have a UML filesystem image that has programs you're experimenting with, and then, once you're confident that the programs work well and that you won't later decide you don't need them, install them to your main Linux installation.
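
    The handy trick here is UML's copy-on-write files: boot against a pristine root image and every experimental install lands in a throwaway COW file (a sketch; "linux" is the UML kernel binary, the file names are made up, and the exact options are from memory):

        linux ubd0=experiment.cow,root_fs mem=64M    # all writes go to experiment.cow
        rm experiment.cow                            # hated it? root_fs is untouched
        uml_moo experiment.cow merged_fs             # loved it? merge COW + backing into a new image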

  • by John Hasler ( 414242 ) on Monday July 29, 2002 @10:50PM (#3975978) Homepage

    And all it does so far as I can tell is tell an application designer how to play nice with everyone else.

    No. It tells a Debian maintainer who chooses to add an application (of which he is not usually the designer) to the archive what he _must_ do.

    Until operating systems have a generic installer

    Debian has one.

    and application designers don't have to do any more than tell this installer "here are my files, I need to store this config info, and these are my dependencies, do what you will"

    That is what the Debian package management system does. It is the job of the Debian maintainer, not the program author, to package the program so that it complies with Debian policy and functions properly with the package management system. Familiarity with Debian policy is one of the requirements for becoming a Debian maintainer.

    let the one who knows the details be the one to handle them.

    That would be the Debian maintainer. There are about a thousand of us.
  • Re:Don't use it! (Score:3, Informative)

    by bnenning ( 58349 ) on Monday July 29, 2002 @11:43PM (#3976135)
    Quicktime is particularly bad because it asks you if you want to upgrade EVERY SINGLE TIME you play a file or stream.

    That is a pain. But there's a 30 second workaround: set your clock forward many years, launch the QuickTime player, click "Later", quit, and reset your clock. It won't bother you again as long as the time is earlier than what you set it to.

  • Re:Maybe, maybe not (Score:2, Informative)

    by Anonymous Coward on Monday July 29, 2002 @11:50PM (#3976162)
    [Windows] 2K/XP might have gotten it right, but it took MS 20-some-odd years to get around to getting it right.

    Those who are relatively new to the microcomputer field often have this attitude. Without historical knowledge of the PC architecture to provide the context, the architecture of Windows 9x is indeed baffling. Put into context, however, it makes perfect sense.

    The single most important thing to understand is that the '386 was the first 'modern' x86 processor, which is to say the first one capable of running a 32-bit, virtual-machine OS. The reason NT/2000/XP and PC UNIX are stable is that each process runs in a virtual machine, and the hardware allows the OS to decide precisely what that process can and cannot do.

    Older, pre-386 PCs simply lacked the hardware to support a virtual-machine OS. If you look at the 8086, there were no restricted modes of operation (i.e. everything ran in 'kernel mode'), nor was there any memory-management hardware (i.e. no virtual memory, nothing to prevent processes stomping all over each others' memory).

    MS-DOS and Windows were written for these old, pre-386 PCs. More importantly, virtually all software for MS-DOS and Windows was written with the assumption that it could do anything it wished with the hardware (because it could).

    In the late 1980s, modern RISC chips and the '386 were taking off, offering the prospect of micros capable of running real OSes (which had previously required expensive minis). Microsoft therefore began developing a new OS for these systems. Originally called NT OS/2, it had become Windows NT by the time it was released in 1993 (the OS/2 API could be swapped for a Windows-like one because the OS kernel had been designed to support multiple OS personalities). With NT, Microsoft had a minicomputer-class OS for 386 and RISC micros, theoretically ending the reliability problems of the PC. However, because PC software had been written expecting direct hardware access, it in many cases didn't work, or worked poorly, in a virtual machine on NT, where the OS was trapping these attempts to access the hardware and converting them to system calls (with the OS ultimately dealing with the hardware).

    The nub of it is that much of the installed base of PC software either didn't work, or didn't work well enough, on NT. The solution to this was a compromise: Windows 95. Windows 95 sacrificed robustness for compatibility, allowing most old software to run reasonably well -- but if you allow software to do things like access hardware directly, reliability immediately goes out the window, and this isn't because the OS is 'buggy'. At the same time, Windows 95 offered a subset of the NT Win32 API, which developers could use to write new software for both OSes.

    Windows 95 was always an interim solution, from its inception to the ultimate triumph of NT in Windows XP, but it was essential to keep the installed base of DOS/Windows users (unlike, say, Apple, Microsoft would never consider throwing away an installed base for the sake of 'architectural purity'). Through the years, Win32 was expanded with APIs like DirectX, to allow games (the most stubborn 'we own the hardware' software) to be ported to NT, in addition to 'compatibility modes' to simulate older versions of the OS, and all manner of things designed to make as much old software as possible work with it.

    With most software now written for Win32 instead of the PC hardware, NT/XP is able to finally replace 9x. It's still imperfect with respect to running legacy software, but it's good enough that the market will accept it. A loss of the ability to run a small amount of legacy software is an acceptable exchange for a robust system.

    At the end of the day, Microsoft had a robust OS for the PC in 1993, which isn't too long after the requisite hardware became popular. The next decade or so was spent weaning PC developers off of direct-hardware-access and onto that system.

  • by darqchild ( 570580 ) on Tuesday July 30, 2002 @02:13AM (#3976515) Homepage
    It's true that {Li,U}nix machines do build up cruft. I have 4 machines running 24/7.

    Machine 1: DHCP/DNS/NIS/SYSLOG server for my lan. This has been sitting at CF1 for 3 years. I log into it about once every 2 months, to add or remove a user.

    Machine 2: Firewall and mail/http/ftp server. This was probably at CF 3 after 2 years, up until last week when I moved from Red Hat 5.2 to Slackware 8.0. It's seen a fair bit of tweaking and frobbing, and I'm not disappointed. After all, Windows has a CF 4 out of the box.

    Machine 3: My laptop. It's at my girlfriend's apartment; I'm not sure, but it's probably at CF 3 or 4. It's running a late-model Red Hat distro, which comes with the cruft pre-installed.

    Machine 4: My desktop. My poor desktop. It gets a full reinstall every 6 months. What can I say? Unix was designed to be configured and left alone. Not something I can do (well, I could, but it's no fun). When my machine reaches CF 5, it's an excuse to finally upgrade to the latest release of my favorite distro of the week.
  • Re:BSOD (Score:2, Informative)

    by _xeno_ ( 155264 ) on Tuesday July 30, 2002 @10:15AM (#3977739) Homepage Journal
    I can reliably generate an IRQL_NOT_LESS_OR_EQUAL by quitting a game that used IPX. It's quite annoying, really, because if I ever want to play StarCraft multiplayer, I know I'm in for a reboot. For added fun, playing Diablo II over Battle.net also means random IRQL_NOT_LESS_OR_EQUAL popping up.

    It seems to be a bug with something that Blizzard is doing, because I have yet to BSOD my system on anything but Blizzard games. When StarCraft caused the system to BSOD, I figured it was a problem with the IPX drivers, but since Diablo II can do it on Battle.net, I'm beginning to doubt that. (I've also seen Java manage to reboot my machine randomly - hasn't happened recently, but somehow Cocoon [apache.org] managed to reboot my Windows machine with the 1.3 JDK. Don't ask how, it just did...)

    I'd guess I have network card issues; something's probably wrong with my LinkSys card drivers ("Works with Linux! Download drivers from our website! Uh, you aren't planning on using the Ethernet for Internet access, right?" - later versions of their driver disk come with Linux drivers, and Linux kernel 2.4.x has the appropriate drivers, but Linux 2.2.x at the time I got the card didn't - meaning a quick boot to Windows before I could get Linux up and running...)

    Other than randomly rebooting with JDK 1.3 and the occasional multiplayer Blizzard game BSODing me, Win2K's been rock solid. Although I use Mozilla as my browser, solving the "Explorer bringing the system down" liability that Win2K has. (Mostly when some page causes IE to start chewing through resources, or when some app manages to crash the Explorer desktop instance and it gets screwed up when it's auto-restarted.)

    My only beef is that I occasionally get "unkillable" processes - processes that Windows claims are being debugged. Ah well - I can deal with it.

    It's better than my current Linux Gnome 2 install, which steadfastly refuses to use any window manager except twm... I'll get around to fixing it eventually, but since I mostly use my desktop for games and Java development, I really don't find myself wanting to go back to Linux. Sorry guys...
