Making Operating Systems Faster (667 comments)

mbrowling writes "In an article over at kernelthread.com, Amit Singh discusses 'Ten Things Apple Did To Make Mac OS X Faster'. The theme seems to be that since you won't run into 'earth-shattering algorithmic breakthroughs' in every OS release, what're you gonna do to bump your performance numbers higher? Although the example used is OS X, the article points out that Windows uses the same approach."
This discussion has been archived. No new comments can be posted.

  • Reduce Bloat (Score:5, Insightful)

    by Anonymous Coward on Thursday June 03, 2004 @10:34AM (#9325382)

    why does my 3ghz p4 choke on spellchecking a 50k doc with a 500mb text editor (Word2k3) ?

    why does explorer choke on listing 10,000 files ?

    why should i ever upgrade my word processing applications ? or can they type for me now ?

    bah, innovation is dead, shame

  • pretty much (Score:4, Insightful)

    by LBArrettAnderson ( 655246 ) on Thursday June 03, 2004 @10:34AM (#9325393)
    So pretty much, Mac and Windows are made faster by using resources that aren't already being used. Not a genius idea, but the hard part is figuring out how to do that, which is what the article discusses.
  • 10 steps (Score:0, Insightful)

    by Anonymous Coward on Thursday June 03, 2004 @10:38AM (#9325428)
    1. remove bloat
    2. ...
    3.
    4.
    5.
    6.
    7.
    8.
    9.
    10.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Thursday June 03, 2004 @10:39AM (#9325440)
    Comment removed based on user account deletion
  • by wombatmobile ( 623057 ) on Thursday June 03, 2004 @10:40AM (#9325455)

    After the government changes in the US and the DOJ is free to investigate monopolism in software again...

    How hard would it be to make the case that consumers would be advantaged by gaining access to just a basic o/s?

    It mightn't be easy because the courts are legal organs not technical forums, but with a disciplined argument based on metrics derived from the types of performance issues noted in the article... an articulate, intelligent lawyer might get this done.

    Right?

  • by garcia ( 6573 ) * on Thursday June 03, 2004 @10:41AM (#9325457)
    I don't mind that they're available as an option. What I don't want to see is them enabled/installed by default.

    Right now you have to go through a bunch of settings to tweak for "optimum performance" or whatever. Those performance settings should be the defaults. The fancy stuff should be easy to enable, but it should be up to the user to decide if it's turned on.
  • Re:One word: (Score:1, Insightful)

    by Anonymous Coward on Thursday June 03, 2004 @10:41AM (#9325461)
    RAID or more RAM will solve your problem just as well. You don't need to have a faster (read: runs hotter) HD in your machine.

    yeah, because 2 or 3 10krpm drives are much cooler than 1 15krpm drive. oh yeah, and if something's bound to disk I/O, RAM will not solve your problem.

  • Re:Faster? (Score:5, Insightful)

    by KoriaDesevis ( 781774 ) <{koriadesevis} {at} {yahoo.com}> on Thursday June 03, 2004 @10:42AM (#9325466) Journal

    XP is CRAZY slower than 2k.

    XP is faster to come up to the desktop. However, it is still busy accessing the hard drive and loading stuff in the background. You still have to wait for the OS to quit loading itself before you can use anything. Microsoft's claim that XP is faster than 2K was based on the time to desktop, apparently not time to usability.

    Once loaded, XP has an annoying habit of wanting to refresh the desktop from time to time. That slows things down even more.

  • Finally.... (Score:1, Insightful)

    by Anonymous Coward on Thursday June 03, 2004 @10:42AM (#9325470)
    ...someone who RTFA and can summarize it for us lazy people. That's exactly what Apple did.
  • by millahtime ( 710421 ) on Thursday June 03, 2004 @10:42AM (#9325478) Homepage Journal
    Upgrading from 2K to XP on the same hardware will slow you down. Upgrading from OS X 10.2 to 10.3 on the same hardware will give you speed improvements a majority of the time.

    I can see how they can write an article about how Apple did this, but I don't see how the claim extends to Microsoft. Unless Microsoft has improvements but the new things they add slow it down so much that the gain is outweighed by the loss.
  • by Cyclopedian ( 163375 ) on Thursday June 03, 2004 @10:45AM (#9325528) Journal
    What takes genius is getting every ounce of speed from a Linux or Windows box that can be a conglomeration of different motherboards, CPUs, graphics cards, hard disks, etc.

    No. What takes genius is taking every combination of different motherboards, CPUs, graphics cards, hard disks, etc. and making it *ALL* work flawlessly and without any configuration at all. Just plug it in, turn it on and it's ready.

    No updating drivers. No having to check for incompatibilities between different mobos and wifi chipsets (or anything). It. Just. Works.

    -Cyc

  • Apparent Speed (Score:4, Insightful)

    by rf0 ( 159958 ) <rghf@fsck.me.uk> on Thursday June 03, 2004 @10:50AM (#9325570) Homepage
    Of course one could argue that it is worth making the GUI faster to give an apparent speed increase whilst allowing improvements in CPU/disk to carry the rest of the OS. Then again, of course, I know nothing about system design

    Rus
  • by joshds ( 768748 ) on Thursday June 03, 2004 @10:50AM (#9325572)
    Hard Drive is the bottleneck........ Has anyone tried using a RAMdisk as their OS drive? I've read a lot and heard of people trying, but never come across a comprehensive how-to + review. With the amount of ram we can have nowadays (new pc's coming with 6 banks for dual-channel DDR), I'd pay $250 for an extra 2GB of ram in order to have my OS + key apps run off of that. Other solutions? (CF too slow?)...
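The RAM-disk idea floated above can be tried without special hardware on Linux, where /dev/shm is usually an already-mounted tmpfs. This is only a sketch under that assumption (a dedicated RAM disk would normally be created with `mount -t tmpfs`, which needs root), and the paths are illustrative:

```shell
# Use the existing tmpfs at /dev/shm as a makeshift RAM disk (Linux-specific).
RAMDIR=/dev/shm/ramdisk-test
mkdir -p "$RAMDIR"

# Copy a binary into RAM; subsequent reads are served from memory, not disk.
cp /bin/ls "$RAMDIR/ls"
cat "$RAMDIR/ls" > /dev/null
```

Putting the whole OS there is another matter: everything in tmpfs vanishes on power loss, which is exactly the synchronization problem raised further down the thread.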
  • by benzapp ( 464105 ) on Thursday June 03, 2004 @10:51AM (#9325585)
    BTW, Windows 3.1 sitting on MSDOS 6.2 ran like shit off a stick on my old P133. I wonder if/how it would run on a modern system?

    I don't know, but I ran Windows 3.1 on top of OS/2 3.0 and on a P133 and it worked perfectly, and its speed was acceptable. It must have run significantly faster on native DOS.
  • Re:One word: (Score:5, Insightful)

    by mbbac ( 568880 ) on Thursday June 03, 2004 @11:00AM (#9325689)
    Yes, but a 10,000 RPM SATA drive is so expensive! A 73.4GB Western Digital "Raptor" 10,000 RPM is the same price as a 250GB Maxtor MaXLine Plus II 7200 RPM.

    Maybe the 10,000 RPM model would make a good boot drive, with all of the home folders on the 250GB 7200 RPM drive. Then again, most file access would probably be from the slower drive. Eh.
  • by RAMMS+EIN ( 578166 ) on Thursday June 03, 2004 @11:01AM (#9325695) Homepage Journal
    Rewrite it!

    This holds especially for applications, but it definitely applies to operating systems as well. Most modern software is simply bloated beyond belief.

    BeOS, by all accounts, is a full-fledged OS, and it takes a Pentium (not a Pentium 4, but an original Pentium) 15 seconds to boot it, including the GUI. What's up with Windows and OS X taking over a minute on hardware that is several times faster?! On Linux, you could at least skip most of the init stuff and boot in seconds (what remains is likely mostly pauses you have to keep for flaky PC hardware).

    Then there's the libraries. glibc is well over 5 megabytes. You are not going to convince me that isn't bloatware. If all that code doesn't eat CPU time, it at least eats memory, which could lead to more swapping. GTK is also typical - ever resize a GTKWindow? It's visibly slow! That doesn't happen to Windows 3.11 on my grandpa's 486! What is that code doing?!

    Applications... Firefox is what? 10 megabytes installed size? And that's a lightweight browser. What? We need 10 megabytes on top of libc, X, and GTK for parsing a simple markup language and rendering those widgets? Excuse me! Even lynx is hundreds of kilobytes, and it mostly just reads data from a socket, strips the tags, and spits it straight out. What the fsck? Say "OpenOffice.org" or Java and I'll explode.

    All we have today is bloatware. I'm *really* tempted to roll my own OS and applications, and I am going to have a shot at it this summer.
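The sizes quoted above are easy to check for yourself. A rough sketch for gauging "bloat" on a Linux system (the paths and tool availability are assumptions; `size` ships with binutils):

```shell
# On-disk size of a binary and the shared libraries it drags in at runtime.
ls -lh /bin/ls
ldd /bin/ls

# text/data/bss breakdown of the binary, if binutils is installed.
size /bin/ls 2>/dev/null || true
```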
  • by Anonymous Coward on Thursday June 03, 2004 @11:14AM (#9325846)
    > What I don't want to see is them enabled/installed by default.

    Let me guess, you don't sell OS's right? To move software, you have to have all the pretty stuff that makes it look nice ON by default. Because that's what the general population cares about. They'll look at it and say "Wow, that's ugly, what a crappy OS." ... and never buy it.

    When it's pretty, *you* will say "Wow, that's pretty, but it's slowing it down, let me go into control panels, and registry settings, and god knows what else to tweak my settings while I overclock the damn thing and stick it in a freezer." Then you'll bitch about it on Slashdot. Which is exactly what's supposed to happen.

    Because *they* don't know how to turn it on, and *you* do know how to turn it off. So the burden, by default, is on you. It sucks, but hey, what else is new?
  • by ianscot ( 591483 ) on Thursday June 03, 2004 @11:21AM (#9325961)
    People have said this before, but maybe you didn't catch it: Successive releases of OS X have actually been noticeably faster, even on older machines.

    Don't take my word for it -- take Ars Technica's [arstechnica.com] review of Panther for example:

    Here's another way to look at Panther's performance. For over three years now, Mac OS X has gotten faster with every release -- and not just "faster in the experience of most end users", but faster on the same hardware. This trend is unheard of among contemporary desktop operating systems.

  • by mallardtheduck ( 760315 ) <stuartbrockman@h ... inus threevowels> on Thursday June 03, 2004 @11:26AM (#9326035)
    But it does have that annoying dog (what's his name?) 'search assistant'.
  • by Anonymous Coward on Thursday June 03, 2004 @11:28AM (#9326063)
    Following in Microsoft's footsteps would produce dramatic results in speed. Quite simply, all Apple needs to do is double the system requirements for every new release. This is much simpler and cheaper than tweaking the GUI.
  • by LiquidCoooled ( 634315 ) on Thursday June 03, 2004 @11:32AM (#9326118) Homepage Journal
    I mentioned the eye candy slowness recently, and somebody came back with a reply that made sense:


    Windows's idea of eye candy was that menus (and submenus) would all slowly fade in. The process of navigating deep into hierarchical menus was maddeningly slow--at least until everyone turned it off.

    In osx, menus appear immediately, and then fade out after you select something. This is not only pretty, but functional: it gives you visual confirmation that you've selected a menu item, which can be helpful if whatever you've asked for doesn't produce obvious or instant results.



    the thread is Here [slashdot.org]
  • by garcia ( 6573 ) * on Thursday June 03, 2004 @11:33AM (#9326131)
    When it's pretty, *you* will say "Wow, that's pretty, but it's slowing it down, let me go into control panels, and registry settings, and god knows what else to tweak my settings while I overclock the damn thing and stick it in a freezer." Then you'll bitch about it on Slashdot. Which is exactly what's supposed to happen.

    There are easier ways to enable these "features" than creating a ton of hoops for BOTH sides of users.

    Instead of clicking through a bunch of menus, finding the options, selecting radio buttons, etc, just disable it by default and ask at install/setup time "do you want the 'pretty version'? Be warned that it may affect system performance."

    I think that eliminates the problems.
  • by Greedo ( 304385 ) on Thursday June 03, 2004 @11:37AM (#9326179) Homepage Journal
    The problem, if Windows came "bare-bones", is that no one would buy it.

    If Joe Public doesn't see "improvements" in the next generation of OS (like transparent windows, integrated internet browsing, etc.), then MS isn't going to convince many people to upgrade.

    (And yes, the typical /. crowd may not see those things as improvements, but MS isn't selling to the typical /. user.)
  • Re:Faster? (Score:4, Insightful)

    by GooberToo ( 74388 ) on Thursday June 03, 2004 @11:41AM (#9326224)
    Ya, benchmark after benchmark showed all of XP's IPC mechanisms to be much, much slower than previous releases. IIRC, several other subsystems were found to be slower as well. By those in the know, XP is widely regarded as Microsoft's slowest OS release in a long while. The only reason it's not widely realized is that machines constantly get faster and more memory is being used which hides the additional bloat.

    Anyone that thinks MS' OS, as a whole, is getting faster with each release is simply not living in our reality.
  • Re:One word: (Score:3, Insightful)

    by Moderation abuser ( 184013 ) on Thursday June 03, 2004 @11:44AM (#9326267)
    We're talking desktops here.

    When I click on OpenOffice or Netscape, the CPU and I have to wait for the disk to finish the transfer before we can work. A 15k drive does it faster. The CPU cycles are wasted because on a desktop they're rarely used for something else. I'd agree with you if we were talking about a server.

  • by poptones ( 653660 ) on Thursday June 03, 2004 @11:46AM (#9326308) Journal
    On my windows2000 box (which is also my SuSE box) I use mozilla because I can move my single profile easily between systems. And in windows, when I've had mozilla minimized for a long time while doing something else it takes damn FOREVER to reopen. I don't hear a lot of disk activity, it just takes a really long time for windows to switch back to the task (I don't use the tooltray "always on" feature because this makes my desktop more likely to crash).

    In linux, one of the things that makes it seem really lethargic is the lack of operator feedback. With even recent MDK and RH installs I notice the mouse cursor is frequently just sitting there doing nothing at all while the machine thrashes away at a task. Last week I was mutzing around with DiskDrake - I told it to create a 160GB encrypted partition and mount it. After several seconds the cursor stopped animating and the window became completely non responsive. I knew it hadn't crashed; it was just busy waiting for the process to end, and if I let it go it would eventually come back. About five minutes later it returned, filled in the empty white box and reported the task complete.

    This kind of behavior in windows means "the task is dead, ctrl-alt-del and see if you can end the task." In linux it may not mean that at all - it may just mean "wait a minute I'm not done." But in either case it lessens the user experience and, in some cases, is downright confusing. And in most every case it's extremely frustrating.

    This is the sort of thing I was talking about with suse. I'm not sure what switches were set where, but I've never seen the busy cursor lose its animation nor have I seen a busy window just quit responding. Even when the task takes a few minutes it remains well behaved on the desktop. This is the sort of polish that makes a computer feel "professional" and even "fast" - it doesn't have to get done this very second, but "at least act rational while you're doing it."

  • Re:You want fast! (Score:3, Insightful)

    by BenjyD ( 316700 ) on Thursday June 03, 2004 @11:57AM (#9326467)
    That's the way to get work done - don't install any apps! Who needs LaTeX?
    There's no point removing features to reduce the mythical "install bloat" if you can't actually do anything with the system.
    Relying on shared libraries rather than stand alone binaries actually improves performance, by reducing memory usage when lots of processes use the libraries, and allows optimisations of the libraries to speed up all the apps that depend on them.
    Small does not necessarily imply fast. For example, a project I work on was taking forever to open files (upwards of a minute for large files). So I implemented a custom memory manager that optimised block allocation for the application. The size of the program increased by 15% or so. Agghh! Bloat - must be slow, right? No, time to open files was reduced by a factor of 6.
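The shared-library benefit described above (one in-memory copy of a library serving many processes) can be seen directly on Linux by looking at a process's memory mappings; this sketch assumes /proc is available:

```shell
# List the distinct shared objects mapped into this shell's address space.
# Each of these .so files is mapped once in RAM and shared, page for page,
# by every process that links against it.
grep -o '/[^ ]*\.so[^ ]*' "/proc/$$/maps" | sort -u
```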
  • Re:That's 2 words. (Score:3, Insightful)

    by Rich0 ( 548339 ) on Thursday June 03, 2004 @12:21PM (#9326783) Homepage
    I have 4Gb of RAM, 2Gb of it as disk. My system doesn't swap, it still has 2Gb of RAM used as RAM and the performance is sensational.

    And how is the performance compared to a system with 4GB of RAM in which the VM is left to its own devices?

    There is no question that adding RAM makes a system faster. However, what is under debate is whether using RAM as a RAM drive instead of as cache is a better solution.

    I liked another poster's suggestion of preloading the cache by cat'ing selected binaries to /dev/null. The system might be sitting at a kdm login screen, but an intelligent system designer would realize that there is a significant likelihood that half of KDE will get loaded sometime in the near future. Of course, Apple has the right solution in making the behavior smart and configured per-user. While you might have gdm running with the expectation that the whole of gnome will be loaded when somebody logs in, maybe my computer is a dedicated webserver which runs gnome only for rare administration - in which case it is safe to swap out just about everything associated with it to make room for apache processes and disk cache for fetching webpages.

    I think there is plenty of room for improvement in the linux VM - however I must say I'm generally in awe about how smart it is already...
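The cache-preloading trick mentioned above is literally this simple. The file list here is a hypothetical example; on a real system you would preload whatever the login session is likely to touch:

```shell
# Read binaries once so later launches hit the page cache, not the disk.
for f in /bin/ls /bin/cat /bin/grep; do
    [ -r "$f" ] && cat "$f" > /dev/null
done
```

Some distributions later automated the same idea with readahead services run early in boot.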
  • by Anonymous Coward on Thursday June 03, 2004 @12:35PM (#9326944)
    you almost had a shred of credibility until you said -09
    -fomit-instructions is an old joke, but -09 marks you as a clueless ShitHead.

    Most of the other stuff I agree with.
    Have you seen the latest gcc optimisation? It's -fnew-ra; it uses a graph-coloring register allocator. It's a bit buggy and only meant for testing, but I've benchmarked it on some simple enough fp code, and it does make it faster. It consistently cut the loop's run time from about 20s to 18s.

    life on the bleeding edge eh? Is it useful?
    probably not, when the time wasted doing this shit won't be recovered by a faster running program...
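For anyone who wants to check compiler-flag claims like these themselves, the honest way is to benchmark. A minimal sketch, assuming a C compiler is installed as `cc` (`-fnew-ra` was an experimental GCC 3.x flag and has long since vanished, so only the standard `-O` levels are compared here):

```shell
# A small floating-point hot loop to compare optimisation levels on.
cat > /tmp/loop.c <<'EOF'
#include <stdio.h>
int main(void) {
    double s = 0;
    long i;
    for (i = 1; i < 50000000; i++) s += 1.0 / i;
    printf("%f\n", s);
    return 0;
}
EOF

cc -O0 -o /tmp/loop0 /tmp/loop.c
cc -O2 -o /tmp/loop2 /tmp/loop.c

# Compare wall-clock times; the -O2 build should be no slower.
time /tmp/loop0 > /dev/null
time /tmp/loop2 > /dev/null
```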
  • by TioHoltzman ( 709089 ) on Thursday June 03, 2004 @12:41PM (#9327014) Homepage
    I am kind of surprised no one has mentioned this.
    GCC

    From my experience, as well as other articles I have read (there was a Dr. Dobbs article comparing compile performance and generated-code performance across MSVC6/7, BCC, Digital Mars, Open Watcom, and GCC, and GCC was near the bottom on most benchmarks), GCC just ain't that great at producing really fast binary code, whereas MS has spent considerable effort to make their compilers produce very fast code for Windows.
    I'll bet that if a major effort were made to improve GCC code, then this might make a big difference.
  • by mjh ( 57755 ) <mark@ho[ ]lan.com ['rnc' in gap]> on Thursday June 03, 2004 @01:01PM (#9327209) Homepage Journal
    The problem, if Windows came "bare-bones", is that no one would buy it.
    Why? Microsoft has a monopoly on operating systems. People don't buy windows because it looks pretty, performs better, has the correct API set. They buy windows because it came on the computer they bought, and that's the computer that they know will run the software they have.

    I'm sure there are some consumers who buy windows based on other criteria, but the vast majority of windows purchases are as a consequence of compatibility. If the actual statistics showed only 99% of retail windows purchases were as a result of pre-installation, that's about 0.999% less than I would have expected.

    $.02

  • by tgibbs ( 83782 ) on Thursday June 03, 2004 @01:26PM (#9327459)
    Microsoft tends to spend more time figuring out ways to trick their users into *thinking* that things are faster even though it's actually taking as long, if not longer than previous versions. In this case, you've been tricked. Microsoft moved more stuff after the user is logged on. In other words, your system is still doing all of the things it used to do, plus probably more, it's just that you think it's done.

    Perception is what matters. I enjoy working at a computer that feels fast and responsive. If a developer can hide time consuming activities so they occur at a time when I don't notice them, that is a significant improvement.
  • Two kinds of speed (Score:5, Insightful)

    by Erik Hensema ( 12898 ) on Thursday June 03, 2004 @01:39PM (#9327581) Homepage

    There are two kinds of speed: things that are fast and things that feel fast.

    The article and the comments here on /. are mainly talking about true benchmarkable speed. Things that are fast.

    But some apps don't really need to be fast. They just have to feel fast. This holds true for most interactive applications. It's all about psychology with this one.

    Ever wondered why Windows Explorer builds up its icons from the bottom right to the top left? It doesn't matter in real speed, but it just feels faster. Your brain just isn't used to this flow: usually you read from the top left to the bottom right, or from the top right to the bottom left. Your eyes immediately focus on the spot where your brain expects the icons to appear. But instead they appear in the opposite corner. By the time your brain figures out it has been tricked, the window is already full of icons.

    More tricks: ever wondered why windows wastes memory by trying to have some free memory ready all the time? It makes starting new apps faster. But on average the system is slower.

    In the Unix world there is only raw, benchmarkable speed. And that's why KDE and Gnome are considered slow. They aren't slow, they just feel slow.

  • OS are not slow (Score:3, Insightful)

    by AmericanInKiev ( 453362 ) on Thursday June 03, 2004 @01:42PM (#9327622) Homepage
    This discussion is pedantic.

    Sure - speed is good,

    But the speed requirement for applications is simply this - they must be fast enough to be tolerable - no faster.

    customers are not going to choose a product which makes drastic speed enhancements at the expense of features - provided those features can be run at reasonable speeds on available hardware.

    Rather - there are features out there waiting for hardware speeds to see the limelight.

    Voice recognition is often touted as waiting for higher CPU speeds.

    So is live rendering (watching a movie by rendering each frame in real time from the actor and motion files alone).

    Add to this teleconferencing, cryptography, etc

    selling software amounts to a compromise of features to speed - and the right compromise is as close to the edge as you can get away with.

    The guy with a two-feature database that runs like bloody hell is not going to beat Access - even if Access is occasionally slower.

    AIK

  • by rainman_bc ( 735332 ) on Thursday June 03, 2004 @02:08PM (#9327923)
    You're comparing apples to oranges and they mod u up as insightful?

    You're comparing a major revision to a minor revision; 10.2 -> 10.3 isn't a major revision as I read it.

    Upgrading from 2K to XP is like going from MacOS 9.x to OSX.

    Going from OSX 10.2 to OSX 10.3 is like going from XP SP1 to XP SP2.

    Upgrading from MacOS 9 to OSX on the same hardware will slow you down. Upgrading from 2k to XP on the same hardware will slow you down.

    So what's the point of your observation exactly?
  • Re:That's 2 words. (Score:3, Insightful)

    by misleb ( 129952 ) on Thursday June 03, 2004 @02:23PM (#9328095)
    Great, but how do you synchronize changes to the RAM disk to the hard disk? What do you do when you want to install a new app or apply an OS patch or whatever? Sounds like a big PITA to me. I'd rather just stuff my machine with RAM and let the VM do all the work. The performance gain is about the same and it is way more efficient overall.

    -matthew
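One hedged answer to the synchronization question: treat the RAM copy as the working set and mirror it to disk periodically. A minimal sketch (the paths are hypothetical; assumes rsync is installed and a tmpfs is mounted at /dev/shm):

```shell
RAMDISK=/dev/shm/appdata-demo   # fast working copy in RAM
BACKING=/tmp/appdata-backing    # persistent copy on disk

mkdir -p "$RAMDISK" "$BACKING"
echo "application state" > "$RAMDISK/state.txt"

# Mirror RAM -> disk; --delete removes files that no longer exist in RAM.
# A real setup would run this from cron and again at shutdown.
rsync -a --delete "$RAMDISK/" "$BACKING/"
```

The comment's point still stands: next to simply adding RAM and letting the page cache do this automatically, it is a lot of moving parts.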
  • by Reapy ( 688651 ) on Thursday June 03, 2004 @02:59PM (#9328454)
    Games.
  • by DunbarTheInept ( 764 ) on Thursday June 03, 2004 @04:26PM (#9329296) Homepage
    They are fine so long as they remain optional. There are times when a transparent window has functionality beyond just looking cool. The ability to see what's printed in the window behind the one you're typing into is useful when reading a manual (in the form of on-line help or a web page), and using that manual to decide what to type into an editor or shell prompt. (This is the same reason I hate systems that force the keyboard focus window to always be the topmost window. Ever since I first felt what it was like to have the two decoupled, using Sun's openView system in 1992, I never wanted to go back.)

    What really bothers me, and it is the main reason I have stopped using Gnome, is this: developers often assume that the moment computers get fast enough to respond to fancy graphic requests using 100% of the CPU time, all reasonable people will stop complaining about the time they take up and be happy to have the little graphic toys unconditionally turned on at all times. This I call "bullshit". It's only when the fancy graphic requests end up taking a teeny, tiny fraction of the CPU time that it starts to become acceptable to leave them unconditionally on.

    I don't just want fast response from my UI when the system is under light load. I also want fast response from my UI when there's a runaway process I need to find and kill, or when I'm calculating some big raytrace in the background. So, yes, even in this day and age where you can't find a new computer with less than a Gigahertz clock rate, it is STILL worth it to provide the user with the ability to turn off features that require a good amount of CPU usage.

    It's up to the owner of the computer to decide what to spend their CPU time on, not the maker of the UI.

  • by mduell ( 72367 ) on Thursday June 03, 2004 @05:50PM (#9330050)
    Actually, Quartz does an extremely good job of displaying 6.2 megapixel images on the desktop even on slow and old Macs.

    Dare I ask what is the point of putting a 6.2Mpix pic on a 2.3Mpix (for the 23", in reality most macs are in the .5-1.5Mpix range) screen?
  • by HeghmoH ( 13204 ) on Thursday June 03, 2004 @07:03PM (#9330582) Homepage Journal
    The point is that it's easier to have the computer automatically resize it than it is to do so manually; after all, this is the kind of thing that computers are for: doing boring tasks behind your back so you don't have to think about them.
  • by dave420 ( 699308 ) on Friday June 04, 2004 @04:28AM (#9332955)
    I use windows because I want to play games on my PC, and watch videos without having to compile anything.

    I'm all for linux, but it's much easier to get windows to do what I want than linux. You can convince yourself the only people with windows are mindless sheep, but it's a very usable OS for lots of people. I use computers all the time (I'm a professional open-source-based developer), and I only use windows as my desktop (granted, my servers are linux). Every once-in-a-while I'll see how the alternative apps on linux are doing, but they're still behind. Heck, I'm using homesite 4.5.2 from 2000 and it's better than any editor I've found in linux.

    I'm not having a go at linux, or trolling, but trying to make people understand that even though people here hate windows, it's still a very functional operating system. My desktop machine at work is up months at a time, rock-solid. It does dual-display (twin 19" tfts on one geforce4) out of the box. I know you can do everything it does on linux, but it takes longer and is more difficult.

    I'm rambling. I'll shut up now.
