Making Operating Systems Faster 667

mbrowling writes "In an article over at kernelthread.com, Amit Singh discusses 'Ten Things Apple Did To Make Mac OS X Faster'. The theme seems to be that since you won't run into 'earth-shattering algorithmic breakthroughs' in every OS release, what're you gonna do to bump your performance numbers higher? Although the example used is OS X, the article points out that Windows uses the same approach."
This discussion has been archived. No new comments can be posted.

  • Faster? (Score:4, Interesting)

    by AsnFkr ( 545033 ) on Thursday June 03, 2004 @10:33AM (#9325370) Homepage Journal
    You've got to be kidding me. XP is CRAZY slower than 2k. I suppose that's what happens when you add a Microsoft Plus! package to Windows 2000. Wanna make it faster? Disable all the useless services and shut off the ugly eye candy. *sigh*.
  • One word: (Score:5, Interesting)

    by swordboy ( 472941 ) on Thursday June 03, 2004 @10:34AM (#9325384) Journal
    Hard Drive

    Largest bottleneck in any modern system. If you've never had the opportunity to use a 15krpm (or something faster) system, do it now. It flies... I don't care if it is Windows or what... it doesn't matter when you've got usable bandwidth to the biggest chunk of storage out there.
  • by Anonymous Coward on Thursday June 03, 2004 @10:34AM (#9325392)
    Same thing with XP... I get much better performance if I shut off all the fancy transparency effects. Sure, they look cool... but are they really necessary?

    OS designers should also cut down on bloatware and on trying to 'integrate' everything into the OS...
  • by torpor ( 458 ) <ibisum.gmail@com> on Thursday June 03, 2004 @10:35AM (#9325402) Homepage Journal
    .. but I thought that the primary 'reason' for OSX slowness was that Apple's binary format is designed to maintain 'compatibility' with the register set of the 68k processors, and in fact they're not using all the PPC registers in the most efficient way?

    I haven't looked into it for a while (mod me down for being uncertain if you like), but I seem to recall that there were serious leaps and bounds still left in OSX performance, with a change to the ABI register use, potentially, in the future ...
  • Re:One word: (Score:0, Interesting)

    by Anonymous Coward on Thursday June 03, 2004 @10:37AM (#9325421)
    RAID or more RAM will solve your problem just as well. You don't need to have a faster (read: runs hotter) HD in your machine.
  • by Paulrothrock ( 685079 ) on Thursday June 03, 2004 @10:40AM (#9325443) Homepage Journal
    I've been using OS X since the public beta, and every upgrade has been considerably faster, even on my four-year-old G4/400. I expect to be using that machine as a server well into the future, mostly because Apple is doing such a good job making operating systems work well on older machines.

    And the fact that I won't be discouraged from keeping 10.3 or 10.4 on that system if the next version doesn't support my hardware through annoying EULAs.

  • by Stevyn ( 691306 ) on Thursday June 03, 2004 @10:40AM (#9325446)
    I wish these were incorporated into linux more. I don't care what anyone says, comparing windows and linux on the same machine has always shown to ME that windows seems a lot faster. Applications take longer to load in linux. Mozilla for example, takes longer to load than it did in windows on the same computer. Other applications that I can't compare directly seem to take a while when they're just small apps.

    Apparently, windows caches a bunch of stuff and has a bunch of other little hacks that allow this. So why can't linux and the kde people do this? They've copied everything else, why not this?

    Before you mod me as flamebait or troll: I switched over to linux a while ago and I have no intention of going back to windows. I'm not some ms fanboy bitching about my 10 minute experience with linux. All I'm saying is that here are some points where linux annoys me.
  • by mac-diddy ( 569281 ) on Thursday June 03, 2004 @10:43AM (#9325484)
    Sure, prebinding does speed up loading, but it also breaks everything from tripwire to backups. Since the file is changed out from under you, all traditional unix tools that use checksums or file sizes to detect file changes break.

    Apple and other system vendors need to consider these types of management issues when making a change. Speed improvements are only good if they are "management friendly".
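To illustrate the parent's point: a tripwire-style integrity checker records a checksum of each binary at install time and raises an alarm when it changes. A minimal Python sketch of why an in-place rewrite like prebinding trips such a checker (the file contents here are stand-in bytes, not real Mach-O binaries):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """Return the SHA-1 hex digest of a file's contents."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate an installed binary that the OS later rewrites in place,
# the way prebinding does.
fd, path = tempfile.mkstemp()
os.write(fd, b"original executable bytes")
os.close(fd)

baseline = file_digest(path)          # tripwire records this at install time

with open(path, "wb") as f:           # prebinding updates the file on disk
    f.write(b"prebound executable bytes")

# The integrity checker now flags the file, even though nobody tampered
# with it in the security sense.
assert file_digest(path) != baseline
os.remove(path)
```

The same mismatch hits any backup tool that uses size or checksum to decide whether a file needs re-copying: every prebound binary looks "changed" again.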

  • um, no (Score:3, Interesting)

    by millahtime ( 710421 ) on Thursday June 03, 2004 @10:45AM (#9325523) Homepage Journal
    Problem with this is that these are things the user needs to do. The article is about what Apple did that is independent of the user.
  • That's 2 words. (Score:5, Interesting)

    by Moderation abuser ( 184013 ) on Thursday June 03, 2004 @10:45AM (#9325527)
    Anyway... You are completely correct but...

    My 2 words are RAM DRIVE. You think you can't justify 4Gb of RAM? Course you can.

    Dedicate 2-3Gb of it to a ram drive and mount it as your root, /usr, or /opt partition, whichever one you have all of your applications installed on. Copy the hard drive to the ram drive at bootup. dd can do it quickly if you just zap the whole partition across. I think there are mount options to tell the Linux filesystem buffer not to cache a particular filesystem.

    The difference in performance can be stunning.

  • Re:One word: (Score:5, Interesting)

    by AviLazar ( 741826 ) on Thursday June 03, 2004 @10:49AM (#9325566) Journal
    Agreed. My Pentium III 800 MHz SCSI computer (SCSI hard drive, DVD player, CD-ROM) with 512MB RAM and a Hercules 64MB video card runs games like Diablo II MUCH MUCH faster than my 2.2 GHz laptop with 512MB RAM, a better video card (Nvidia GeForce 4 Go), and a "faster" IDE DVD-ROM. A better test: when I upgraded from IDE to SCSI I performed a DOS-level copy. The screen would scroll and periodically pause when reading from the IDE drive. The IDE drive was 7200 RPM; the SCSI HD is 15k. When it wrote to the SCSI drive, it FLEW! Never once did it pause. While SCSI is expensive, runs extremely hot (meaning you need more fans), and is fickle at best - when it works it does WONDERS.... For those people who like to have a RAID system - SCSI is still faster as it reads & writes faster... but again it is more expensive (usually about double to triple) -A
  • by G4from128k ( 686170 ) on Thursday June 03, 2004 @10:51AM (#9325595)
    Early versions of some film scanner software that I worked on were terribly slow. A quick profile of the running code showed that about 10% of the time was spent in a little piece of code called TtoF(). This code parsed and converted text into floats.

    The earliest versions of the software did not convert key preference/calibration/setup files into internally stored numerical values -- instead, anytime the code needed a calibration/setup value, it went to the file, read it, and converted it. Needless to say, that "feature" was quickly corrected.

    That's not as bad as an early VAX image processing program that prepped newly allocated file space by setting all the bytes to zero, one byte at a time.
  • Re:One word: (Score:1, Interesting)

    by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Thursday June 03, 2004 @10:52AM (#9325603) Homepage
    Is it really the harddrive? Sure, it might be the hardware part that could give the best boost in today's systems, but how about fixing the boot issue at the software level? I mean, boot speed hasn't been much of a big issue in the past, yet today it suddenly is, even while harddrives and CPUs and RAM are much faster than a few years or even decades ago.

    What went wrong on the software level that systems now take so long to boot? Is it generic kernels that probe for a lot of nonexistent hardware and thus basically just spend their time waiting for timeouts? Or is it today's software with all its dynamic linking and its dependencies (I heard the slowness of KDE was in large part a fault of ld.so)? Or is it just bad organisation of boot scripts, i.e. lack of parallelism while starting them, useless stuff first so that the important stuff has to wait, and such?

  • by poptones ( 653660 ) on Thursday June 03, 2004 @10:53AM (#9325607) Journal
    I know what you mean. In fact, I wrote a "call for help" a very long time ago about just this, as I had purchased (at a very low price) a bunch of old vectras for use as "giveaway desktops" and I was looking to make the most of their 200MHz pentium mmx cpus. I tried several different linux distros with minimal window managers (like blackbox) and none of them felt as snappy as the same machine running windows 2000.

    So, I know what you mean. And I've even noticed the same thing when trying ootb installs of mandrake 7,8,9,10, redhat 6,7 etc. on my 1600 athlon xp.

    Until I tried SuSE 9.1. I'm not a fan of kde but this distro looks really nice and it feels snappy in a way I've never known from linux in the half dozen or so commercial distros I've tried over the years. Between the snappy desktop, the eye candy and yast, it sets a REALLY high bar for every other desktop. You might give it a try and see if you don't agree.

    And no, I don't work for novell...

  • Re:Missing Step (Score:4, Interesting)

    by lpangelrob2 ( 721920 ) on Thursday June 03, 2004 @11:03AM (#9325727) Journal
    I'm not sure one can say on a general level that even the majority of users consider speed to be important. I'll take up OS X because I remember reading a quote on an Apple webpage -- Why did we do it [fancy graphics *everywhere*]? Because we could.

    I'll simplify the comparison quite a bit, but I think Apple decided to trade speed for distinguishing features. It must've worked, because people noticed.

  • Re:One word: (Score:2, Interesting)

    by October_30th ( 531777 ) on Thursday June 03, 2004 @11:04AM (#9325736) Homepage Journal
    While scsi is expensive, runs extremely hot (meaning you need more fans), and is fickle at best

    I'm curious. Why do you say that SCSI is fickle?

    I remember a time when one had to be careful not to exceed a certain cable length when daisy-chaining external devices, but other than that SCSI has been nothing but rock-solid on my systems.

    As far as the performance goes, you're absolutely right. I've got a RAID5 array of four 10 krpm U320-SCSI drives on my dual Opteron. It was almost scary to watch how fast it compiled stuff during Gentoo stage 1 installation. ;-)

  • by kryptkpr ( 180196 ) on Thursday June 03, 2004 @11:06AM (#9325750) Homepage
    You missed an issue.

    c) Have a shutdown script that will always run on shutdown. From what I understand, Windows has more than one shutdown (there are at least 2: the "slow" shutdown you get from Start -> Shutdown, and the "fast" shutdown you get from pushing the soft power button on your case).
  • by Anonymous Coward on Thursday June 03, 2004 @11:09AM (#9325768)
    ...apparently not as well as Apple. Gee, didn't see that coming.

    Every revision of OS X has run (or at least "felt") faster than its predecessor did on the same hardware. Panther will run fine on a five year-old G4, assuming you've added RAM to what the machine shipped with in 1999.

    You absolutely cannot say that about Windows. Nobody sane would even consider trying to run XP on a PC they bought new in 1999. One of Microsoft's growing problems is that people are getting off the upgrade treadmill-- they've begun to REFUSE to upgrade to version n+1 because they know their computer will feel slower than it does with version n.

    I've got a 733MHz G4 running OS X 10.2.8, and a home-built (with *quality* parts) Athlon XP 2600 system running XP, and there's no comparison... the XP box feels terribly slower, even heavily optimized with all the XP eye candy shit turned off, unneeded services disabled, and spyware etc. ruthlessly prevented/exterminated. OS X running on a machine with 1/3 of the horsepower thoroughly embarrasses it in terms of user-perceived speed. That's just plain pathetic.
  • by Efialtis ( 777851 ) on Thursday June 03, 2004 @11:15AM (#9325876) Homepage
    I worked there for years, through the development of Win95 OSR2, Win98, Win98 SE, Win ME (but that one wasn't my fault), Win 2k, Win XP and into the first little bit of Longhorn... Longhorn will be as slow as or slower than the current XP systems, even when properly configured. We don't call it "Bloatware" for nothing.

    One way to make it faster is to cut out all the crap. If someone wants to install Solitaire, FANTASTIC, let them choose to do so, but for crap sake, DON'T install it by default... Fix the file tables: FAT32 was good, NTFS is better, and they say the new schema for Longhorn will be better still, if they can ever get it working... If a user wants the colors and blinking things, then let them set it that way... don't make that the default...

    Just because a processor can hit 3.2 GHz DOES NOT mean you have to use every Hz of speed... Just because hard disks are now in the hundreds of GB does not mean you must fill them up with an OS... Just because memory is "cheap" and some systems can handle 2 gig or more does not mean you must use the whole thing to manage your OS...

    The system requirements for Longhorn are ridiculous at best... when Longhorn ships, Linux will finally get the break it needs!
  • Looking elsewhere (Score:5, Interesting)

    by aking137 ( 266199 ) on Thursday June 03, 2004 @11:17AM (#9325901)
    This is technically offtopic, but often much of the 'slowness' we still experience on our computers which people often blame on their 'operating system' isn't really down to the operating system (i.e. kernel), but more the higher level stuff that runs on top of it. It seems that lots of efforts are going into making operating systems more efficient, since there's lots of interest in this area, but that efficiency is more than lost further up. (Not that I should be complaining, since I'm just another person not doing anything about it.)

    Try running Windows NT on a new Intel system (say 2-3GHz) for example - it'll run blazingly fast, and with software versions from around the same time it'll still do much of what everyone wants to do - email, web, office, graphics manipulation - but really much faster - things will load practically instantly, rather than after five or ten seconds, and it's all still nice and graphical and everything, just like people want.

    Many (but not all) XP machines I meet still seem to take 2-10 seconds even to do basic things such as open My Computer, Internet Explorer or a properties dialog, which makes one wonder whether the extra functionality is worth the wait - basically lots of drivers, a couple of extra bundled programs and supported file formats, minor changes to the interface and the other couple of things I'll get flamed for forgetting. Microsoft have no doubt made some improvements to the kernel between releasing NT and releasing XP, but most machines still seem to be no faster to use, if not slower.

    I maintained a school network up until last year which still ran NT and KDE2 on around 2/3 of systems, and then when my replacement went and wiped everything out and replaced it with new machines running XP (with an enormous cost to them), many staff told me that there were lots of things that didn't work any more, and there'd be frequent outages of the entire network.

    On a Linux+X system, running X on its own (i.e. just the one program you want) or with a light window manager (fvwm or whatever) is again noticeably faster than running Gnome or KDE. Loading Mozilla or OpenOffice.org means loading the entire frameworks they run in, and often we're loading up a great deal of functionality we don't want in that particular situation. I think a good example is Dillo [dillo.org], a web browser written entirely in C that just does the basics (launches in around 0.7 seconds on this Athlon 700 system, compared to Mozilla, which takes around 5, and Mozilla Firefox, which isn't far off that) - it'd be interesting to see if they could add things like CSS or SSL support and still keep it fast.
  • Re:One word: (Score:5, Interesting)

    by Smallpond ( 221300 ) on Thursday June 03, 2004 @11:21AM (#9325948) Homepage Journal
    Improved relatively little? Average overall seek times are:

    5400 RPM 11 ms
    7200 RPM 8 ms
    10K RPM 5 ms
    15K RPM 4 ms

    Name another common mechanical device that has nearly tripled in speed in that period. (Source: seagate.com, all numbers are for 3.5" disks)
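Part of those figures is simple physics: on average the head waits half a revolution for the right sector to come around. A quick back-of-the-envelope in Python (this computes only the rotational-latency component; the quoted Seagate numbers also include head-seek time):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return 60.0 / rpm / 2.0 * 1000.0

for rpm in (5400, 7200, 10000, 15000):
    print("%6d RPM: %.1f ms" % (rpm, avg_rotational_latency_ms(rpm)))
# →  5400 RPM: 5.6 ms
#    7200 RPM: 4.2 ms
#   10000 RPM: 3.0 ms
#   15000 RPM: 2.0 ms
```

So going from 5400 to 15000 RPM cuts the rotational component by almost two-thirds on its own, which lines up with the trend in the table above.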
  • by bheer ( 633842 ) <rbheer AT gmail DOT com> on Thursday June 03, 2004 @11:23AM (#9326001)
    -1, Misinformation. Office and IE don't keep "portions" resident in memory in either the DOS TSR sense *or* in the Mozilla Quickstart (or whatever it's called now) sense.

    The cases of mshtml.dll, shdocvw.dll and urlmon.dll are a little different. These are *system DLLs* which can be used by any app, including IE (iexplore.exe) -- and the shell (explorer.exe). Explorer in particular will load urlmon if you visit FTP or WebDAV sites.

    IIRC after login on a fresh Windows 2000 install, none of mshtml, shdocvw or urlmon are loaded.

    Note that Working Set Detection/Maintenance on Windows can change this over time, but it will do so even for Firefox or any other non-MS app.

    Btw, the real reason IE and Office start up quickly is because they are better engineered than the competition -- which is typically cross-platform portable code that is not particularly optimized for Windows. Reducing startup time is not necessarily a black art:
    [...] Startup time is all about minimizing disk I/O. So analyze your startup code to death: Track every page fault and work to get rid of it. Delay initialization of everything that can be delayed. (The fastest code is code that doesn't run at all.) Take all the functions that are called at startup and put them near each other in memory so you take fewer page faults. Use the /ORDER switch to do this. If you have a large function and only half of it is used at startup, break it into two functions, the part used at startup and the part that isn't. Reorder your data so all the memory used by startup is kept near each other in memory. With CPUs as fast as they are, disk I/O is the limiting factor in app startup. [link [msdn.com]]


    The true measure is how fast the app runs, not how fast it opens.

    Not sure what your point is, but Open Office and Mozilla both run slower (_and_ open slower) than Office and IE on comparable hardware. Admittedly, Firefox opens slower than IE, but it is almost as fast in use for most common tasks, which lets me use it for day-to-day browsing.
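The "delay initialization of everything that can be delayed" advice quoted above is language-independent. A minimal sketch of the pattern in Python (LazyResource and the loader are made-up names for illustration, not from any real API):

```python
class LazyResource:
    """Defer an expensive load until the first time it is actually used."""
    def __init__(self, loader):
        self._loader = loader
        self._value = None
        self._loaded = False

    def get(self):
        if not self._loaded:            # pay the cost only on first access
            self._value = self._loader()
            self._loaded = True
        return self._value

loads = []
def expensive_load():
    loads.append(1)                     # stand-in for heavy disk I/O
    return "spell-check dictionary"

resource = LazyResource(expensive_load)
assert loads == []                      # nothing loaded at "startup"
resource.get()
resource.get()
assert loads == [1]                     # loaded exactly once, on demand
```

The startup path never touches the resource at all; the cost moves to the first real use, where the user is already doing something and is less likely to notice it.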
  • FS Journaling (Score:4, Interesting)

    by mslinux ( 570958 ) on Thursday June 03, 2004 @11:27AM (#9326056)
    Filesystem journaling does not make the filesystem faster, and it's silly to suggest that it does.

    In fact, journaled filesystems are generally noticeably (one might say significantly) slower than non-journaled ones.

    The only 'performance' gain one gets from journaling is after an unclean dismount (a crash or power outage). The system will boot up much quicker, but that's it.
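That trade-off can be shown with a toy model (an illustration of the idea only, not how any real filesystem is implemented): every update costs two writes in steady state, because the intent is logged before the data is updated in place, but crash recovery becomes a cheap replay of the small journal instead of a full fsck-style scan.

```python
class JournaledStore:
    """Toy write-ahead journal: log the intent, then update in place."""
    def __init__(self):
        self.data = {}
        self.journal = []
        self.writes = 0

    def put(self, key, value):
        self.journal.append((key, value))  # write 1: intent hits the journal
        self.writes += 1
        self.data[key] = value             # write 2: the in-place update
        self.writes += 1

    def recover(self):
        # After a crash, replay the (small) journal instead of scanning
        # everything: this is the fast-boot win, and the only one.
        for key, value in self.journal:
            self.data[key] = value

store = JournaledStore()
store.put("inode42", "new contents")
assert store.writes == 2  # double the writes of a non-journaled update
```

Real filesystems batch and coalesce journal writes to soften the penalty, but the basic accounting is why journaled filesystems tend to benchmark slower in normal use.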
  • Windows... (Score:1, Interesting)

    by Anonymous Coward on Thursday June 03, 2004 @11:29AM (#9326075)
    I know this is an unpopular view (hence AC), but Windows XP is very fast for a lot of things.

    One of the reasons I don't use Linux as my desktop is the office packages. Under windows, I can type ctrl+alt+w (my shortcut for MS Word) and it starts INSTANTLY! Not in 5 or 10 seconds, like OpenOffice. Call it bloated or whatever, but it works the way I like.

    For example, run any program and look in %windir%\Prefetch - and try run it again and look at the speed difference.

    There was a discussion about this on the kernel development mailing list a couple of months ago I think, and the approach used by windows was considered.

    I also use Maya a lot, and the other operating systems it has been ported to (I haven't used IRIX) have problems with various graphics cards and/or sound. OS X has worse performance than windows for the same price when running Maya as a workstation or a rendernode.

    All the eyecandy and other crap is very easy to turn off... things work a lot better on a wider variety of hardware than any other OS. Linux has a lot of driver support, but nowhere near that of Windows still :(
  • Re:Missing Step (Score:3, Interesting)

    by Draknor ( 745036 ) on Thursday June 03, 2004 @11:30AM (#9326092) Homepage
    While I agree with you, there's a very good reason Microsoft won't change their defaults: it's more important for Microsoft to get version differentiation / recognition by the unwashed masses than it is to get the best performance. You set a novice computer user in front of Windows XP (with the fugly XP shell) and Windows 2000, and John Doe *immediately* knows which one is XP. So when John Doe is ordering a machine from Dell, or standing in Best Buy or CompUSA, he knows he's getting XP (which he wants because that's what the 12 year old neighbor kid said he should get and that's what everyone else uses).

    That, and if they shipped with the eye candy off by default, it would never get turned on by 95% of users, and so Microsoft's investment in that development would be wasted (you could argue it's wasted now, and I wouldn't disagree :-)

    As it stands, the people who know enough about their computers can find magazine articles & online guides to tweak for performance, and those that don't probably don't know the speed they are missing anyway. And if it's getting too slow, they just buy a new computer! Everybody (== Microsoft) wins!
  • You want fast! (Score:2, Interesting)

    by Anonymous Coward on Thursday June 03, 2004 @11:33AM (#9326137)
    Then avoid bloated multi-library dependent C++ GUIs.

    To me the ultimate example of this is Damnsmall Linux [damnsmalllinux.org], nothing but lean and mean apps!

    If a computer is 10x more powerful than it was 7 years ago it should be doing 10x the amount of work; instead we get more and more eye candy.
  • by Ath ( 643782 ) on Thursday June 03, 2004 @11:34AM (#9326151)
    I have a nice utility from Logitech called iTouch. What does it do? It handles key mappings for their keyboards that have some custom keys.

    Application memory space during runtime? 15MB.

    I remember when Borland spent a lot of effort optimizing their Quattro Pro spreadsheet so that it monitored its own memory usage down to 512-byte increments. It would start discarding portions of itself that it no longer needed.

    Those days are over, for sure.

  • by Anonvmous Coward ( 589068 ) on Thursday June 03, 2004 @11:36AM (#9326171)
    "Sure, they look cool.. but are they really necessary?"

    Ugh I hate this question. "Is it really necessary?"... is the type of question you can ask if you really want to make anything go away. "Is a >500mhz processor really necessary? Is a color monitor really necessary? Is being connected to the net 24/7 really necessary? Is a color printer really necessary when B&W is cheaper?" Who really cares so long as you can choose?

    I'll answer your question, though: The more your UI gives you, the better reflexes you can build while using your machine. Have you ever reacted to a screen refresh? (Particularly in the olden days when the CPU had to fight harder...) Ever notice change in window focus simply by spotting the change in titlebar color? Etc.

    I have no problem with people turning the fancy stuff off to boost performance, but the "is it really necessary" argument does not apply. The question is really "Do I want it?"

  • Re:That's 2 words. (Score:1, Interesting)

    by Anonymous Coward on Thursday June 03, 2004 @11:37AM (#9326176)
    I did exactly this, but only with 768MB. The whole point was not performance, but noise from the hd.
    So I had a boot script to load a teeny root disk (25MB, for X & xmms), then cram the rest of memory to the brim with music.
    Combined with a fanless VIA epia, it's a noiseless (less than 0dB) pc based hi-fi.

    Now I've got a quiet enough maxtor disk, thus mooting the whole project, but it's no longer "noiseless". I'm just waiting for 80G of Ram on a passively cooled amd-64...

    --
    Laurence Darby
    btw I'm ac cos usually I only lurk here
  • OS/2 Warp (Score:4, Interesting)

    by ToasterTester ( 95180 ) on Thursday June 03, 2004 @11:37AM (#9326182)
    Do like IBM did with OS/2's big revision, Warp. All the changes to Warp slowed performance down in general, so IBM used smoke and mirrors: they worked on speeding up screen I/O as much as possible. End users raved about how fast Warp was. Looks faster, feels faster, but any program that required much processing was getting slower and slower. But joe user thought he had a speed demon because the screen painted real fast.
  • 1. turn on apple II series box.
    2. Press Ctrl+break (? it's been a looong time since I used one).
    3. You're done.
    It takes under 2 seconds. Show me a "new" machine (desktop, server or notebook from the last 5 years) that actually boots that fast, please! (not just turns on the monitor)
  • by Greedo ( 304385 ) on Thursday June 03, 2004 @11:40AM (#9326211) Homepage Journal
    "do you want the 'pretty version'? Be warned that it may affect system performance."

    That's going to scare away non-technical users though.

    MS, love 'em or hate 'em, is doing it right: appeal to the largest market segment with the default settings. Those people who want to improve performance are still able to, but need to make the adjustments post-install.
  • by merdark ( 550117 ) on Thursday June 03, 2004 @11:41AM (#9326231)
    Try running LinuxPPC on your mac some day, and you will see a huge difference in general snappiness.

    But then the OS is 'doing less' too, so that's not really a good comparison. Linux GUIs are not as advanced as OS X at this point. They don't use a Display PostScript-like system yet, don't yet have the same level of integration in terms of pluggable software frameworks, etc.

    This is separate from AltiVec, which is an instruction set, not just a register setup ...


    I'm still not really clear how it's different from AltiVec, since you could easily build binaries using different registers as well. But then I don't have all the details either. It could well be true, I suppose. Not sure why the ABI would require that binaries not use some registers; seems weird.
  • Re:Windows... (Score:2, Interesting)

    by Anonymous Coward on Thursday June 03, 2004 @11:45AM (#9326289)
    Under windows, I can type ctrl+alt+w (My shortcut for MS Word) and it starts INSTANTLY!

    No, it only seems like it starts instantly because half of the shit it needs loaded got loaded at boot time and just sat there wasting resources until you decided to launch Word.

    I don't know about you, but my PC is no slouch hardware-wise and is very well-maintained software-wise, but it still takes an infuriatingly long time between the desktop, icons, and mouse cursor appearing (i.e. looking like it's ready for me to use it) and the computer actually being ready for me to use it.
  • Sleep vs Hibernate (Score:3, Interesting)

    by Hiroto. S ( 631919 ) on Thursday June 03, 2004 @11:46AM (#9326300) Journal
    10. Instant-on

    Apple computers do not hibernate. Rather, when they "sleep", enough devices (in particular, the dynamic RAM) are kept alive (at the cost of some battery life, if the computer is running on battery power). Consequently, upon wakeup, the user perceives instant-on behavior: a very desirable effect.

    I don't know how they can be proud of not hibernating. Windows can sleep OR hibernate. Despite ours being a Mac household, hibernation is one reason I MIGHT consider Windows for my next laptop. Being able to come back to everything you left open after your laptop has been hibernating unplugged for a few days, and still have full battery power when you open it up, is VERY nice.

  • by ReciprocityProject ( 668218 ) on Thursday June 03, 2004 @11:51AM (#9326375) Homepage Journal
    I'm surprised by how much people are ignoring this. Every single time Apple releases a new version of MacOS X they cut out a bunch of Aqua special effects. The most notable thing was when they took the striping away from the dock, which made that critical UI element pop up much faster. These aren't really optimizations so much as "taking away features to make it go faster."

    For a comparison you can run X with fvwm (not in rootless mode) on MacOS X and see the difference. Or turn on terminal transparency and wiggle the terminal and watch the whole computer slow to a crawl.

    That is the real reason OS X seems to go faster on slower computers with each release. On faster computers, I forget what it's called, but they pipe Aqua through the video card to take away the overhead, which is a major optimization. I don't think, by comparison, that any of the other effects they mention in the article count for much in terms of between-release improvements.
  • by GooberToo ( 74388 ) on Thursday June 03, 2004 @11:53AM (#9326407)
    KDE apps have downright painful startup times, especially if you don't run KDE. That being said, I find parity in application start-ups between the two and generally find Linux to be faster for most everything I do, save only for playing games (like NWN).

    Some efforts have been going into making KDE and KDE applications start up faster. Just the same, if it bothers you that much, don't run KDE or KDE applications. There are many window managers to pick from. Even GTK+ applications tend to load much faster than KDE applications (C versus C++, which is the root of one of the speed issues).

    The overall performance of X and Linux will be faster and more responsive as the 2.6 kernel starts to become more common. A typical desktop user should see something like 20%-40% better performance and responsiveness. Even servers typically see 20%-30% improvement in almost all areas. Improvements like these, make applications like apache and samba, which already blew the doors off of Windows, that much more impressive.

    Beyond that, start up time, in my mind, is a complete waste of time to worry about. Unlike Windows, Linux does not become unstable as you load more applications into memory. Start your computer and all of your applications (memory is cheap; tune your swappiness as needed) and never have to load them again. I find that application crashes are rare; well, the ones I run. This means rarely needing to restart your applications. As such, restart time is lost in the noise. Furthermore, system stability can easily be measured in months or years as long as you're not running a closed source 3rd party driver (*cough* nvidia, ati).

    Long story short, while I hear you and think you have a valid point, the long of it is, it's completely lost in the noise and really doesn't matter.
  • by Anonymous Coward on Thursday June 03, 2004 @12:01PM (#9326518)
    The main reason this is the case with Apple is that the first couple of revs of OSX were HIDEOUSLY slow. The IO was so insanely slow I could not believe it. The OS was like 1/10th the speed of OS 9. It was a huge disappointment.

    Eventually they fixed the problems. Windows did not have that problem to begin with, so its "speed increases" are not as huge.

    OSX only became a real OS with the release of 10.3. Before that it was really nothing more than a beta.

    (I won't even go into its original lack of the color management, printing, and file handling mechanisms that made OS X such a publishing favorite to begin with!)
  • by ViolentGreen ( 704134 ) on Thursday June 03, 2004 @12:07PM (#9326602)
    I'm using Mandrake 10. I decided on this distro because it's the easiest I've found, and I wanted to introduce myself to Linux with my hands held. I've heard of prelinking and I've googled for it, but I haven't found anything that easily explains how it works and how it's used. Most of what I've read just says to "use it." If anyone knows much about it, or can point me (and I'm sure many other people) in the right direction, that would be appreciated.

    I have always found Mandrake to be very slow. I started with Mandrake 7 or so. I recently tried the previous (9.x) release before 10 and still found it to be unbearably slow.

    That being said, I still say that that is the best distro for someone to start with. I have since moved on to Gentoo. My Linux install blows away anything else I have used in respect to speed, however I still use XP quite a bit (though not as much since I got my PowerBook). I would suggest putting a new partition on your machine and giving Gentoo a shot. The install is a little difficult but you'll learn so much about how to administer your machine through the install.

    I'm sure Mandrake could be sped up a little, but I'm not the person to ask. I think a lot of it boils down to getting the correct drivers for your hardware, especially your video card, and getting rid of services that you don't need or want.
  • by GooberToo ( 74388 ) on Thursday June 03, 2004 @12:09PM (#9326630)
    I've used both it and 2K on several machines, and XP boots up ~30 seconds faster

    Microsoft tends to spend more time figuring out ways to trick their users into *thinking* that things are faster, even though it's actually taking as long as, if not longer than, previous versions. In this case, you've been tricked. Microsoft moved more of the startup work to after the user is logged on. In other words, your system is still doing all of the things it used to do, plus probably more; it's just that you think it's done.

    This is the difference between reality and perception. Microsoft tries very hard to address a user's perception, even at the cost of making reality slower. As in the above-cited example, Microsoft gives you a login screen at which you can do very little to nothing, but you're satisfied thinking it's done, in spite of the fact (reality) that it's not. This means that attempting to do things right after the login screen will, more than likely, take much longer than expected. They further hide this fact by making application startup and caching part of the OS boot sequence. Non-cached application startup, following initial login, will more than likely be painfully slow for non-trivial applications, at least until XP actually finishes its startup.

    Good or bad, you decide.
  • by ebcdic ( 39948 ) on Thursday June 03, 2004 @12:18PM (#9326736)
    Whenever you install new software, you have to wait while the system "optimizes" it, which in fact means checking for applications that need their prebinding redone. On a 700MHz iMac, less than 2 years old, this sometimes takes 15 minutes or more. Since I bought it, I've wasted hours, if not days, waiting for installations to complete because of this, which is far longer (and more frustrating) than the total time saved starting programs.

    I don't understand why it doesn't just leave the prebinding to be done the first time the program is run.
  • by rworne ( 538610 ) on Thursday June 03, 2004 @12:23PM (#9326805) Homepage
    Apple removed striping from everywhere in Panther. Quite a bit of it was replaced by brushed-metal. Even so, all it is doing is replacing one bitmap with another. The only possible gain is if they do not need to use alpha for transparency. Yet not all of this is by "removing" stuff. Quite a bit of tweaking is being done to speed up the OS; the most recent software update resulted in quite a few reports of faster system operation, and there was no discernible change in the featureset or operation of the UI.

    The reason X runs slowly compared to Aqua is that Apple optimizes Aqua, allows hardware acceleration (Quartz Extreme), and offloads lots of tasks to the GPU. I know of no X windowing system (aside from Apple's own implementation) that does this in OS X.

    10.0 and 10.1 were dog-slow. Especially when you had a couple of hundred files in a folder. Jaguar was a huge increase in speed and performance. Quite a bit of that was due to the Quartz Extreme, but even my lowly 500MHz dual-USB iBook saw quite a boost from Jaguar and it was not able to use QE at all. Panther did very little to the iBook, except make it take forever to boot. I need to check on that bootcache issue.

    My dual 800MHz Quicksilver is now almost three years old and I am still very happy with its performance. I expected to be wanting to replace it after two years, or after clock speeds have doubled, which is what I did when I used Wintel systems. Instead, I am considering keeping it around for the 10.4 release and at least another year or two. I attribute quite a bit of this to Apple's tweaks and performance enhancements of the OS.
  • by ktulu1115 ( 567549 ) on Thursday June 03, 2004 @12:30PM (#9326883)
    Eh.. I'd argue differently.

    IMHO, the next major revolution in OS design (and performance) will come from an exokernel [mit.edu] architecture. For those who aren't familiar with them, it's a completely radical and different approach to kernel design; the main idea behind it is to separate protection from management. If you really think about it, who (I use that term loosely) would know better what resources, scheduling, etc. an application will need - the kernel, or the application itself?

    Traditional kernel designs give (pretty much) the entire management of resources to the kernel itself and hide it behind a HAL (hardware abstraction layer), allowing the application little to no say in the matter. Exokernels throw that idea out the window, taking the completely opposite view on the issue. Once you give that power to the application, it opens up a whole new world of OS design.

    It's really quite interesting, for more information on different kernel designs you can check out the Microkernel entry [thefreedictionary.com] at thefreedictionary.com
  • by Sloppy ( 14984 ) * on Thursday June 03, 2004 @12:32PM (#9326907) Homepage Journal
    The boot cache seems like a neat idea. Read-ahead caching is normally predictive but the predictions are just guesses. But if there's a sequence of events that goes the same way every time, yeah, I guess it makes sense to log it and use that info for reading ahead next time.

    One thing I just remembered that annoys me about the Linux distros I have used is that all the startup scripts are executed in sequence, even if they aren't dependent on one another. On my Amiga, I remember I used to have the startup script execute all sorts of things asynchronously in parallel, so that the CPU never idled while waiting on disk.

    Maybe Unix-like OSes should do that. I mean, there's no reason /etc/init.d/postfix and /etc/init.d/apache can't run at the same time, so that if one of 'em blocks on some I/O (disk or network or whatever) then the CPU(s) can work on the other one. That would ultimately result in a faster boot.

    Sure, there are some dependencies (I guess you want network interfaces started up before servers start binding to ports, for example) but there are ways of dealing with that. Hm.. maybe there's already a tool that sort of handles the complexity of dependencies and can execute things in parallel when appropriate: make. Hmm... Anyone already doing this?
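    The make-style idea above can be sketched in a few lines of Python: declare each service's dependencies, then let independent ones start concurrently while dependent ones block until their prerequisites are up. (The service names and dependency map are purely illustrative, not any real distro's init configuration; a real implementation would exec the init scripts instead of appending to a list.)

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical service -> dependencies map (illustrative only).
DEPS = {
    "network": [],
    "syslog": [],
    "postfix": ["network", "syslog"],
    "apache": ["network", "syslog"],
}

started = {name: threading.Event() for name in DEPS}
order = []
order_lock = threading.Lock()

def start_service(name):
    # Block until every dependency has finished starting...
    for dep in DEPS[name]:
        started[dep].wait()
    # ...then "start" the service. Independent services (network,
    # syslog) run concurrently; a real script would exec here.
    with order_lock:
        order.append(name)
    started[name].set()

with ThreadPoolExecutor(max_workers=len(DEPS)) as pool:
    for name in DEPS:
        pool.submit(start_service, name)

print(order)
```

    Whatever order the independent services finish in, the dependent ones always come after their prerequisites, which is exactly the property make gives you.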

  • by Anonymous Coward on Thursday June 03, 2004 @12:45PM (#9327053)
    Every time I work on a system for my clients, I turn most all of it off. Before I leave, I make them try it. Without exception, I always hear "Wow, what did you do? It seems 3X faster!" I explain it to them and then ask "Do you want me to put it back the way it was?" The answer is always NO!
  • by Anonymous Coward on Thursday June 03, 2004 @12:50PM (#9327105)
    Try using a faster window manager.

    Fluxbox, for example. =D
  • by tbuskey ( 135499 ) on Thursday June 03, 2004 @01:33PM (#9327520) Journal
    Palm Pilot.

    My Handspring Visor has:
    8MB ram 8MB flash ram backup
    68000 CPU at 25MHz (?)
    160x320(?) screen w/ 4 greyscale

    My Sony Clie SJ22 has
    16MB ram 128MB memory stick
    faster 68000 at 33MHz
    higher res screen in color

    Compare to a Macintosh SE:
    68000 cpu at 8MHz
    4MB ram
    1.4MB floppy
    Maybe a 20MB hard drive
    512x348(?) screen in black & white

    The PDAs do lots more than the SE. I can get Word & Excel compatibles for the Palm too.

  • Re:Windows... (Score:1, Interesting)

    by Anonymous Coward on Thursday June 03, 2004 @01:46PM (#9327669)
    I have 2GB of RAM in my system and tested your theory, very simply:

    1. Use a simple program to allocate 2GB and randomly read/write to memory - to prevent it from all being swapped.

    2. Run word.

    It may not be a very accurate result (I only tried it 3 times), but each time Word started in under 1 second and I could start typing within 2 seconds, while OpenOffice 1.0 took more than 16 seconds before I could start typing.

    I think the grandparent has a point.
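    Step 1 of the test above can be sketched in Python; this is my own guess at what the "simple program" looked like, scaled down to 64 MB so it runs anywhere (the parent used ~2 GB, and a compiled program would do the same thing with malloc). The page size is an assumption:

```python
import random

PAGE = 4096                  # assumed page size
SIZE = 64 * 1024 * 1024      # 64 MB here; scale up to ~2 GB for the real test

buf = bytearray(SIZE)        # allocate the block

def touch_pages(buf, n_touches=10_000):
    """Randomly read and write one byte per touched page so the VM
    system sees the pages as active and keeps them resident rather
    than swapping them out. Returns the set of page offsets hit."""
    npages = len(buf) // PAGE
    touched = set()
    for _ in range(n_touches):
        off = random.randrange(npages) * PAGE
        buf[off] = (buf[off] + 1) & 0xFF   # write...
        _ = buf[off]                       # ...and read it back
        touched.add(off)
    return touched

touched = touch_pages(buf)
print(len(touched), "pages touched")
```

    Run that in a loop in one terminal, then time the application launches in another.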
  • by tgibbs ( 83782 ) on Thursday June 03, 2004 @01:59PM (#9327797)
    Whenever you install new software, you have to wait while the system "optimizes" it, which in fact means checking for applications that need their prebinding redone. On a 700MHz imac - less than 2 years old - this sometimes takes 15 minutes or more. Since I bought it, I've wasted hours, if not days, waiting for installations to complete because of this, which is far longer (and more frustrating) than the total time saved starting programs.

    Why wait? Usually, I just switch to another application and work on something else while the prebinding is going on. The fact that even major installs do not monopolize the computer is one of the things I appreciate about OS X. I certainly want it deferred until the next time I'm in a hurry for that particular application to start up.
  • by prockcore ( 543967 ) on Thursday June 03, 2004 @02:20PM (#9328058)
    And that's why KDE and Gnome are slow. They aren't slow, they just feel slow.

    Try LinuxPPC. Gnome 2.6 really flies on LinuxPPC, especially compared to Panther. My entire desktop is noticeably more responsive under Linux than it is under Panther, on both my dual 1.25GHz G4 and my crappy 400MHz Pismo PowerBook.
  • Re:One word: (Score:1, Interesting)

    by Anonymous Coward on Thursday June 03, 2004 @02:28PM (#9328131)
    But it is interesting that the times are almost equal to the maximum rotational latency at all of those rpms.
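    A quick sanity check of that observation, assuming "maximum rotational latency" means one full revolution (the worst case, where the sector you want has just passed under the head):

```python
def max_rotational_latency_ms(rpm):
    """Time for one full platter revolution, in milliseconds."""
    return 60.0 / rpm * 1000

# Common spindle speeds and their worst-case rotational latency.
for rpm in (5400, 7200, 10000, 15000):
    print(rpm, "rpm ->", round(max_rotational_latency_ms(rpm), 2), "ms")
```

    A 15krpm drive's worst case is 4 ms per revolution versus about 11 ms at 5400rpm, which lines up with the access times quoted above.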
  • by dsouth ( 241949 ) on Thursday June 03, 2004 @02:48PM (#9328310) Homepage
    For example, OS X by default installs some 20 languages for everything, including tutorials and help files. Removing these afforded me 10 gig of space.
    I call bs.

    My current OS X 10.3 install, plus some additional apps, plus some source tarballs of projects, plus the Xcode environment, plus an archive copy of 10.2, is using 10.61 GB of space. The lproj files are not very large. While it is true that you can save space by not installing all the language support, it isn't 10GB of space. Languages can be selected/deselected during the initial install.

    The poster is correct though, that the granularity of control over install is much rougher than with Gentoo or Debian. Given the target audiences, this shouldn't be surprising.

  • barebones install? (Score:2, Interesting)

    by morethanapapercert ( 749527 ) on Thursday June 03, 2004 @03:00PM (#9328471) Homepage
    for a real lean install, take a look at this! http://www.litepc.com/products.html
  • by B1ackDragon ( 543470 ) on Thursday June 03, 2004 @04:26PM (#9329301)
    RAM is cheaper than programmer time. Particularly since the developers don't have to buy the RAM...

    Sad, but true. Though there really is no excuse for the iTouch problem; then again, there is no reason to spend 6 months trying to fit a word processor into 640K (like they had to in the "old days" my prof likes to talk about so much). There really must be a happy medium, and I think most apps and OSes are at it.
