Linux Software

New Linux 2.5 Benchmarks 244

Sivar writes "Andrew Morton of EXT3 fame has posted benchmarks of Linux 2.5.47 prerelease compared with the latest from the current 2.4 series. With some tasks more than tripling in performance, the future looks very promising."
This discussion has been archived. No new comments can be posted.

  • Triple? (Score:5, Funny)

    by sheepab ( 461960 ) on Saturday November 16, 2002 @04:53PM (#4687279) Homepage
    With some tasks more than tripling in performance, the future looks very promising

    Damn, I wish my video card had kernel updates :-(
    • Re:Triple? (Score:4, Informative)

      by Alan ( 347 ) <.gro.seifu. .ta. .xeretcra.> on Saturday November 16, 2002 @05:47PM (#4687548) Homepage
      Well, there was an nvidia driver update that advertised an increase in performance of something like 25%, which isn't that bad....
      • Re:Triple? (Score:1, Interesting)

        by Anonymous Coward
        What really amazes me is that _every_ major release of nVidia drivers has a huge performance gain for an already-great card. Hell, ATi can't even get their own cards to work right!
  • I'd be happy to get a 3x kernel speed-up out of my 16-way Hammer.... Speaking of which, I'm still waiting for it, Kandy... {grin} E-mail me ;)
  • by T-Kir ( 597145 )

    I'm glad you put the "of EXT3 fame" bit, I was worried the article might be talking about the infamous author [amazon.com].

    Although he might end up on the front page of /. if he writes an unauthorized biography of Mr. Gates; what kind of juice could be dredged up from the past... I wonder?

  • 2.5 (Score:5, Funny)

    by Anonymous Coward on Saturday November 16, 2002 @04:55PM (#4687293)
    Will it make the internet faster?
    • Of course it will, with linux servers everything should be snappy :P
    • by Fefe ( 6964 )
      No, you will need a Pentium 4 for that.
    • It looks like, if you are talking about large internet operations, large CPU-intensive tasks, and large I/O tasks all at the same time... then YES! It will make the internet faster.

      So maybe this will help some people stand up to being /.'ed.

  • by zanerock ( 218113 ) <zane&zanecorp,com> on Saturday November 16, 2002 @04:58PM (#4687309) Homepage
    Nice to see Linux doing well on big machines with standard packages and such. I love Linux, and it's the only thing I use at home for anything serious, but commercial software has always had the edge on *big* things (big disks, large processes, etc.). With recent advances in process management, and now this, a lot more people will be able to use Linux top to bottom.

    I think one interesting thing that could come out of this is that IBM (and others) will be pushed more and more towards a pure service or application only niche. They won't always be able to say, "Sure Linux is great for the workstation, but what about your 8 TB database?" There's a ways to go, but a lot of the features are falling into place.

    Having a unified OS from your palmtop to your TB file server will open up a lot of possibilities for people. My personal interest is in a next level of integration which is more natural to use and easier to develop, and we're getting close.
    • IBM is also going to stay in the high-end hardware department; it'll be "Sure, commodity hardware is great for the workstation, but what about your 8 TB database that has to survive even if someone saws it in half down the middle?" This also puts them in, essentially, the BIOS department for these machines (you want to run your web site off of whatever portion of your database machine isn't actually being used by the database, without risking problems if the web server gets hacked).
      • I don't disagree at all. I said that this would begin to push others more toward services. With each new thing that you can get for free that works just as well as what others charge for, you capture a little bit more of the market. This alone, and what has been developed to date, will not push IBM out, nor has everything been done that would need to be done for the scenarios you describe. But it's a step.

        I'm not talking now, nor even tomorrow, but in 5-10 years, I think we could see a very different landscape in how old school commercial software and hardware companies (or, in IBM's case, departments) work.

        If you can spend $1 million on developing your whizzy new file system, or you can use something that's freely available (or spend $100,000 to tweak it), then the economics of it start to push people out of commercial development in some areas, especially around OS and OS functionality. Instead, you just consult, or deploy, or support and such.
  • Excellent (Score:3, Troll)

    by PhysicsScholar ( 617526 ) on Saturday November 16, 2002 @05:01PM (#4687327) Homepage Journal
    This is a major reason that 2.5 is, put simply, needed by any and all serious Linux users.

    Based on this image (0202_lab_xp_4.gif [interex.org]), one can see that large volumes of asynchronous I/O are, as the author puts it, the "Achilles' heel" of Linux.

    The Linux kernel itself, in all versions prior to 2.5, serializes disk input/output with a single spinlock.

    (The yellow line is the Windows XP box; the green line is the data for the SuSE Linux PC.)
    • If I pulled a gif completely out of context as proof of anything, would you trust me?
    • How about this image [interex.org], from the same article [interex.org]. Note how green, which is SUSE Linux, is winning :)

      Needless to say, context is everything.
      • He never said that Linux is worse, just that Linux has an Achilles' heel.
        • He never said that Linux is worse, just that Linux has an Achilles' heel.

          On the contrary, you stated that there was a reason why serious Linux users needed the 2.5 kernel. But if Linux and XP perform the same overall, why do serious Linux users need the 2.5 kernel? They don't.

          I'm not saying that this particular article doesn't make certain conclusions about asynchronous I/O. It's a simple fact that it does. But you made a conclusion based on those conclusions, and your conclusion is what I disagreed with.

          I also pointed out that presenting a single image without understanding the full context of the article is silly. The tests in the article you quoted were run on laptop computers. What kind of "serious Linux users" with great need for asynchronous I/O use a laptop for that purpose? The image I quoted showed performance in a more typical laptop use pattern (according to the article's author). For that matter, would they use ReiserFS, or would they use XFS, JFS, or another filesystem?
    • Hello moderators! This is not a troll. He pointed out something that has a nice graph to support what he's saying. Not only that, but it's very well known. AIO on Linux has never been stellar...but should be soon enough.

      Someone, please mod the parent back up. He wasn't trolling and was simply stating fact!
      • Then it's just karma whoring then? Look, he pulled the gif out of his ass and tried to pass himself off as an expert. You might look at some of his other posts.
        • If you say so.

          Meanwhile, I do recall seeing a graph much like that one, if not that exact graph, from Open Bench Labs (the name may not be correct -- it's been a while). Not only that, but the graph EXACTLY matches known expectations for a common Linux AIO implementation (which is purely userland). This is why kernel-level AIO implementations have been underway for some time now.

          In other words, you may not like the message, but it EXACTLY matches the current state of some AIO implementations on Linux. Call it a lie if you like; meanwhile, those of us who know will simply nod, move on, and occasionally laugh at those who continue to hide their heads in the sand.

          Believe it or not, Linux isn't the be-all, end-all of OS's. If it were, there wouldn't be a need for the 2.6 kernel.
  • by pardasaniman ( 585320 ) on Saturday November 16, 2002 @05:05PM (#4687354) Journal
    For a guy such as myself, who does all his daily tasks on a Linux box, what does this mean? Will it mean faster loading times and better stability, or will it make little difference at all?
    • It means that you won't see too much speed-up on your desktop machine. But if you run a big server that runs multiple processes at once, say Oracle, you could see significant performance gains.
      • The use of "big server" is somewhat misleading.

        Fact is, anyone that heavily uses their Linux box will see some difference. It's just that the heavier your box gets used, the bigger difference you'll see. :)

        Those that do little serious multitasking may see "smoother" multitasking but little more. Those that perform concurrent compiles, run heavy CPU or I/O database servers, big time-share systems, etc., will see larger and larger noteworthy gains.

    • by iabervon ( 1971 ) on Saturday November 16, 2002 @05:30PM (#4687483) Homepage Journal
      You'll get better interactive performance under load. So if you're encoding an mp3 and writing your home directory to a CD, your mouse cursor won't stick and your windows will refresh reasonably well. Unless you're doing something kind of disk/processor intensive, you won't notice the difference, because 2.4 is too good already for there to be much improvement. If you try to encode 32 mp3s at the same time, 2.6 will actually do worse than 2.4, but at least it won't make ls quite so slow.

      The main goals are interactivity (input gets handled quickly), low latency (your mp3 player gets a chance to send the next second of audio to the sound card before this second is over), and fairness (every program makes at least a little progress after a short amount of time).
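
      If you want to see that for yourself, a crude probe like the sketch below (my own, not anything from the benchmark article) reports how late a 10 ms sleep wakes up; run it alongside a kernel compile on 2.4 and on 2.5 and compare the worst-case overshoot.

      /* Rough wakeup-latency probe -- a sketch, not from the article: sleep
       * 10 ms in a loop and report the worst overshoot past the requested
       * sleep. Run it while compiling a kernel or burning a CD. */
      #include <stdio.h>
      #include <time.h>
      #include <sys/time.h>

      int main(void)
      {
          struct timeval before, after;
          struct timespec req = { 0, 10 * 1000 * 1000 };  /* 10 ms */
          long elapsed_us, late_us, worst_us = 0;
          int i;

          for (i = 0; i < 1000; i++) {
              gettimeofday(&before, NULL);
              nanosleep(&req, NULL);
              gettimeofday(&after, NULL);

              elapsed_us = (after.tv_sec - before.tv_sec) * 1000000L
                         + (after.tv_usec - before.tv_usec);
              late_us = elapsed_us - 10000;          /* overshoot past 10 ms */
              if (late_us > worst_us)
                  worst_us = late_us;
          }
          printf("worst wakeup overshoot: %ld us\n", worst_us);
          return 0;
      }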
    • by Azar ( 56604 ) on Saturday November 16, 2002 @05:31PM (#4687486) Homepage
      Overall throughput has not increased (actually, it is believed to have decreased). So the overall speed of the system is relatively equal to the 2.4 series of kernels. You probably won't see any major performance speedups in any apps you use.

      However, the overall responsiveness of the system is improved. Most people who have used it have claimed that it felt much faster than the 2.4 series. You won't have starved processes.

      This means if you're running XMMS and you compile a kernel, XMMS won't just hang until the compilation is done. The kernel developers have done a great job in improving -fairness- between processes.

      Mostly, the results will be seen on Big Iron and server applications, but the overall desktop experience is expected to improve.
  • by Dacmot ( 266348 ) on Saturday November 16, 2002 @05:08PM (#4687369)
    I'm a huge Linux fan and I love to brag about how much better than Windows it is, etc. However, I don't think it's right to make false claims like "linux 2.6 will be 3 times faster!!!!!" KernelTrap mentions that:
    Most significant gains can be expected at the high end such as large machines, large numbers of threads, large disks, large amounts of memory etc. [...] For the uniprocessors and small servers, there will be significant gains in some corner cases. And some losses. [...] Generally, 2.6 should be "nicer to use" on the desktop. But not appreciably faster.

    Some of the biggest improvements for desktop responsiveness can be found (for Kernel 2.4.x) at Con Kolivas' web site of performance linux patches [optusnet.com.au].

    • However I don't think it's right to say false truth like "linux 2.6 will be 3 times faster!!!!!"
      That would be why I said: "With some tasks more than tripling in performance,..."

      I doubt any semi-knowledgeable person is going to take that statement to mean that kernel 2.6 makes a Linux system three times faster, but depending on what they use that system for, it may do just that. The performance figures are very respectable on their own, but when you consider that the kernel hasn't even been frozen yet and that tuning hasn't begun, as I said, the future looks very promising.
  • When I first heard about some of the things going on in the 2.5 branch, such as the newly tuned VM system, improved filesystem code, and especially Ingo Molnar's O(1) scheduler project, I was ecstatic. The promise of an average workstation computer handling 100,000 threads with as much grace as it handles 100 sounded too good to be true. And alas, it was. There are a number of serious problems with Linux 2.5's scalability push, trading performance on normal tasks for better behavior at more esoteric ones, and many of these can be traced back to the new O(1) scheduler.

    A month ago, I downloaded the 2.5.44 kernel, and have been benchmarking it extensively on one of the Pentium 4 2GHz workstations in the computer lab. For a control, I ran a stock 2.4.19 kernel on the Athlon XP 2000+ machine next to it. My test consisted of running an increasing number of parallel processes each executing a for(;;) loop repeatedly nanosleep(2)ing for 10ms, thus yielding the scheduler every time they awake. This made sure that the scheduler was more or less the only thing running on the system, and that I could get an accurate count of the average task-switching time.
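
    Each sleeper process was essentially the trivial loop sketched below (a rough reconstruction of the test as described, not the original harness):

    /* One sleeper process: wake every 10 ms so the scheduler has to run it.
     * Fork N of these to load up the run queue. A sketch of the described
     * test, not the original benchmark code. */
    #include <time.h>

    int main(void)
    {
        struct timespec ts = { 0, 10 * 1000 * 1000 };   /* 10 ms */
        for (;;)
            nanosleep(&ts, NULL);
        return 0;
    }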

    By gradually increasing the number of threads on each machine in parallel, I was able to graph the comparative performance of the two schedulers. The results do not bode well for the new scheduler: (forgive my somewhat clumsy approximation, text is not the best medium for graphic content)

    S |
    c |              .
    h |            .    <-- O(n) scheduler (2.4.19)
    e |          .
    d |--------.------- <-- O(1) scheduler (2.5.44)
      |      . |
    T |    .   |
    i |  .     |
    m |        |
    e |________|________
               p
          No. of Threads
    As you can see, the new scheduler is in fact O(1), but it adds so much initial overhead that task switching is slower than under the old scheduler until you have a certain number of threads, labeled p above. My benchmarking experiments put p at around 740 threads.

    Now, this is obviously good for high-end applications that run thousands of processes concurrently, but the average Linux user rarely has more than 100 processes on his machine at a time. The vast majority of servers rarely exceed more than 250 concurrent processes. In these cases, the O(1) scheduler is cutting their computer's performance almost in half.

    What we're seeing here is the likes of IBM and Sun putting their needs before those of the hobbyist hackers and users who make up the majority of the Linux user base. While the O(1) scheduler project is a noble cause and should certainly be made available as an option for those few applications that benefit from it, outright replacing the old scheduler at this point is a folly.

    • by be-fan ( 61476 ) on Saturday November 16, 2002 @05:19PM (#4687430)
      Um, doing benchmarks between an Athlon XP and a Pentium 4 is folly. The P4 has notoriously slow context-switching performance. Also, if you are running a small number of threads, your computer isn't spending a whole lot of time thread switching anyway, so the hit doesn't really affect you. When you have lots of threads, scheduling becomes far more important, and so the increase is much more noticeable.
      • The P4 has notoriously slow context switching performance.

        The Pentium IV has notoriously slow performance in some areas, but a processor being slow in context switching doesn't make sense. Depending on the context (English context, not computer context), context switching is either the system switching from kernel mode (running kernel code) to user mode (user applications) or vice versa, OR it is simply moving from one execution path to another (as scheduled by the, um, scheduler).

        The processor has nothing to do with it. Context switching in BOTH instances is handled entirely by the operating system. While Windows NT 3.1 may have "slow context switching" and Linux with the O(1) scheduler may have "fast context switching", the Pentium IV cannot "have fast or slow context switching" because it doesn't have anything to do with the Pentium IV.

        One might theorize that the original poster's comment was referring to the Pentium IV being particularly slow at the actual instructions used in context switching. Regarding the discussion of the kernel scheduler, the meaning of "context switching" that we are using probably refers to switching between tasks (AKA multitasking), so the important instructions would simply be jump instructions like "jmp", which AFAIK are not particularly slow on the Pentium IV, unlike, say, bit shifting (which is glacially slow on the Pentium IV).
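
        For what it's worth, when people quote "context switch time" numbers they usually mean something like the classic pipe ping-pong measurement below (a rough sketch in the spirit of lmbench's lat_ctx, my own code, not anything from the article):

        /* Crude context-switch timer: two processes bounce a byte back and
         * forth over a pair of pipes, forcing a switch on every read. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/time.h>
        #include <sys/wait.h>

        #define ROUNDS 100000

        int main(void)
        {
            int p1[2], p2[2];
            char c = 'x';
            struct timeval start, end;
            double usec;
            int i;

            if (pipe(p1) || pipe(p2)) { perror("pipe"); exit(1); }

            if (fork() == 0) {                       /* child: echo everything back */
                for (i = 0; i < ROUNDS; i++) {
                    read(p1[0], &c, 1);
                    write(p2[1], &c, 1);
                }
                _exit(0);
            }

            gettimeofday(&start, NULL);
            for (i = 0; i < ROUNDS; i++) {           /* parent: ping, then wait for pong */
                write(p1[1], &c, 1);
                read(p2[0], &c, 1);
            }
            gettimeofday(&end, NULL);
            wait(NULL);

            usec = (end.tv_sec - start.tv_sec) * 1e6
                 + (end.tv_usec - start.tv_usec);
            /* each round trip is roughly two context switches */
            printf("~%.2f us per switch\n", usec / (2.0 * ROUNDS));
            return 0;
        }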

        • I think some processors have multiple register sets, so threads do not have to thrash the same set of registers for every thread context switch.
        • by be-fan ( 61476 ) on Saturday November 16, 2002 @08:47PM (#4688274)
          The instructions involved in the context switch are slow on the Pentium 4. The P4 has a long internal pipeline to flush, and a huge amount of internal state to synchronize, which makes context switches slow. For example, an interrupt/return pair take 2000 clock cycles on the P4!
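
          If you want to get a feel for kernel entry/exit cost on a given CPU, a quick-and-dirty sketch (my own, timing a trivial syscall with rdtsc rather than a real interrupt/iret pair) looks like this:

          /* Time a trivial system call with the CPU's cycle counter.
           * x86-only sketch; the number is per-CPU, not a claim about
           * any particular kernel. */
          #include <stdio.h>
          #include <unistd.h>
          #include <sys/syscall.h>

          static inline unsigned long long rdtsc(void)
          {
              unsigned int lo, hi;
              __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
              return ((unsigned long long)hi << 32) | lo;
          }

          int main(void)
          {
              const int iters = 100000;
              unsigned long long start, end;
              int i;

              start = rdtsc();
              for (i = 0; i < iters; i++)
                  syscall(SYS_getpid);             /* force a real kernel round trip */
              end = rdtsc();

              printf("~%llu cycles per syscall\n", (end - start) / iters);
              return 0;
          }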
        • The Pentium IV has notoriously slow performance in some areas, but a processor being slow in context switching doesn't make sense.

          Well, those of us who actually design CPUs and stuff rather than pretend we know about them use the term "context switch" to describe dumping the current CPU state (to memory, other registers, whatever) then loading a new state, or something logically equivalent. This can be for a thread switch, interrupt handling, whatever.

          The processor has nothing to do with it.

          A CPU level context switch is part of what happens during an OS level context switch, and therefore has a significant effect on OS performance.

    • You'd notice that the respectable Prof. Collins is one of the more sophisticated trolls. He generally garners quite a few responses.
    • Well, this guy is apparently a troll, but just for the sake of argument... Anyone repeating his test would probably find very similar results. HZ (the constant controlling how often the scheduler runs) has been changed from 100 to 1000, improving smoothness for many things (multimedia apps especially) at the cost of making the scheduler overhead 10 times what it was before.

      Luckily, it was very small before, and it's still very small. Maybe it went from taking 0.001% of your CPU power to 0.01% :-). The *only* times the scheduler was really a problem before were a) when it made bad choices and b) when there were gazillions of tasks. The rest of the time, it was totally negligible.

      So, even if the scheduler did slow down by a factor of 2 as he claimed (and in fact, it would have slowed down by a factor of 10 due to the HZ changes, so his claim would leave O(1) five times faster than the old scheduler), it really wouldn't matter to an ordinary desktop/server. The scheduler time is too small to be important on normal machines.
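
      The back-of-the-envelope version, with the per-tick cost as a purely made-up illustrative number:

      /* Back-of-the-envelope scheduler-tick overhead. The per-tick cost is
       * a made-up assumption, not a measurement. */
      #include <stdio.h>

      int main(void)
      {
          double tick_cost_us = 0.1;              /* assumed cost of one timer tick */
          int hz[] = { 100, 1000 };               /* old and new HZ values */
          int i;

          for (i = 0; i < 2; i++)
              printf("HZ=%4d -> %.3f%% of one CPU\n",
                     hz[i], hz[i] * tick_cost_us / 1e6 * 100.0);
          return 0;
      }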
  • inexperience (Score:1, Flamebait)

    by sstory ( 538486 )
    I don't know much about Linux, but seeing benchmarks that suggest performance is improving confuses me. I recently tried Linux again when Mandrake 8 came out, and on a hellafast computer it was taking a long time for basic things to be accomplished. I thought WinXP was slow compared to Win2k, but this Mandrake was taking even longer to do comparable things like open Konqueror and open text-editing programs. Is there a simple explanation?
    • Re:inexperience (Score:1, Informative)

      by Anonymous Coward
      I believe part of the slow load times is due to the fact that glibc on most modern distros does not include object preloading technology. The latest glibc has this, I believe, and the only distros I know of that use the latest glibc are currently in beta. I think around the Redhat 9.2 timeframe you will see a Linux that is more than suitable for the desktop.
    • There was at least one performance bug with Mandrake 8 that resulted in extremely slow X performance. I don't remember the details but maybe someone will share them...
    • Re:inexperience (Score:2, Informative)

      by myz24 ( 256948 )
      The short answer is that KDE is written in C++.

      The long answer is that anything written in C++ on Linux will load slowly (but should run fairly quickly once loaded) because of something to do with loading the C++ libraries and some other compiler gook. I can't remember where I read it, or how I found it on Google, but apparently this will be fixed soon in glibc.

      Of course, I could be WAY off, so if someone could back me up...
    • I tried Mandrake 9 recently, and it turned me off linux for weeks. This is what you do: get a different distro. RedHat is good, and KNOPPIX is great too (knoppix.de).
      • Most distributions go with a more or less specific kernel (586/686/Athlon, etc.) but only i386 applications. Newer processors only really sing with specially compiled code.

        A distribution such as Gentoo [gentoo.org] may not be the easiest to install, but you get the whole gubbins: X, Gnome or KDE, and the apps, compiled for your system.

        Microsoft tends to distribute generic code, and if you are lucky you may get a model-specific DLL. What Microsoft cannot do is distribute code that gets compiled for a specific model, at least not until they deliver code that gets compiled during setup. Note, this could be done with optimisable intermediate code rather than source, but it wouldn't be easy.

  • For all of you who have issues compiling 2.5.x let me remind you that 2.5.x are development kernels.
    They aren't perfect.
    They may have issues building in certain configurations, because they are development kernels.

  • Why did they get rid of the old make xconfig? It sucks now; it uses gconf or kconfig. Stupid.... It makes life harder. I wish they had never changed it; the old system rocked. Now I have to have either GNOME or KDE installed, or use make menuconfig from the terminal! Argh! Come on, go back to normal! But good work, Linus! Keep up the great work!
  • by Anonymous Coward on Saturday November 16, 2002 @09:55PM (#4688477)
    I'm a big-time VMware user (I use it for testing and Windows). I usually have 2 or 3 VMware machines running at any given time and I have plenty of memory (usually 1GB, sometimes more). However, the disk buffer (or disk caching) of Linux sucks ass. I'm not kidding: if I have 1GB of memory, 900+ megs will be used for disk buffers and my very important interactive VMware processes will be swapped out to the slow disk swap file. Just using one of the VMware processes causes a lot of disk I/O, and all that I/O gets loaded into the disk buffers in memory; then when I go to use another VMware process, it has to come out of swap. Linux is pretty bad about this with normal processes, but VMware exacerbates the problem.

    To boil it down: the disk buffering in 2.4 is way, way too aggressive and I haven't figured out a way to fix it. I need to be able to either limit the total amount of memory the buffers will use, or, a better method, tag certain processes so that they will never be moved into swap to make room for disk buffers (moving to swap "normally" is OK, just not for disk buffers). Or maybe just make it never swap out any process for disk buffers.

    It seems Windows uses a more reasonable disk buffering technique and VMware works better there (especially when using several instances). I don't want to use Windows as my primary OS though because I like the built-in disk encryption and network security of Linux (the ip filter stuff is much better than Windows).

    Anyone know if 2.5 has got any better disk buffering?
    • > would be to tag certain processes so that they
      > will never be moved into swap for disk buffers

      I believe that this is what the sticky bit was intended for. Before I go about explaining what it is and how to use it, does anyone know if Linux actually *honors* the sticky bit, or does it just have it for compatibility?
      • Linux does not honor the sticky bit.

        man chmod: ...and the Linux kernel ignores the sticky bit on files. Other kernels may use the sticky bit on files for system-defined purposes. On some systems, only the superuser can set the sticky bit on files.
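
        The closest thing I know of to "never swap this process out" is mlockall(2). A minimal sketch (needs root), not a VMware-specific fix:

        /* Pin all of this process's pages (current and future) into RAM so
         * they can't be pushed to swap. Requires root privileges. */
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");
                return 1;
            }
            printf("memory locked; this process will not be swapped out\n");
            /* ... do the memory-sensitive work here ... */
            return 0;
        }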
        --
        Matt
    • Disable your swap. (Score:3, Informative)

      by Effugas ( 2378 )
      Buy more RAM and disable swap. Or just disable it -- at 1GB, you're close to what you need anyway.

      I'm serious. With another gig costing a hundred dollars -- maybe less -- the overhead of disk-based VM is just no longer justified.

      WinXP benefits from this optimization even more than Linux.

      Yours Truly,

      Dan Kaminsky
      DoxPara Research
      http://www.doxpara.com
    • by sheepman ( 459396 ) on Sunday November 17, 2002 @09:41AM (#4690300)
      Configure kswapd.
      For example, add the following to /etc/sysctl.conf

      vm.kswapd = 12800 512 8

      When memory runs low, kswapd will free more memory than it does by default.
  • It may be very interesting to run the same tests on various other free operating systems, especially BSD.
  • Inspired by the numbers and new "snappiness" under load, I decided to download and compile the 2.5.47 kernel and see for myself. Disappointed is all I can say:
    2.4.19 with preempt and low-latency is snappier by quite a bit than 2.5.47. My test isn't quite as numeric as the story's... I simply start ripping a DVD (oops, did I say that...) to avi and compiling something (in this case xmms), then grab my term window, open limewire, and drag the term window around on the maximized limewire window. As I drag the term window, it acts as an eraser on the limewire window until that window is redrawn. Under 2.4.19 I can never get the whole window grey; I can maybe grey out about one term window's worth of area in the limewire window before it is redrawn. Under 2.5.47 I can easily grey out the entire limewire window, normally for 2 or 3 seconds before it redraws...

    Of course, it states in the story that 2.5 has not really been tuned at all yet, so hopefully this will improve, but for now I'm sticking with 2.4.19 with preempt and low-latency.
    • You wouldn't happen to know where I can get a pre-patched 2.4.19 kernel that has O(1), low latency, preempt and XFS all rolled together would ya?

      • I use Gentoo; the gentoo-sources kernel has low-latency and preempt pre-patched, but it doesn't have XFS. They have an XFS kernel as well, but I don't know if it has low-latency and preempt already; I seem to remember something about low-latency and/or preempt causing problems with the XFS kernel, but I might just be smoking crack.
        Check forums.gentoo.org. OK, I just did, and yeah, preempt + XFS is a bad idea: the patches fight with each other (XFS trying to journal, preempt trying to let something else use the CPU), and the result is massive instability. So no, I don't think you can get all three to play nice, but you can run low-latency + XFS or low-latency + preempt; you just can't throw preempt in with XFS... Gentoo is nice and patches the kernel automatically; if you're not running Gentoo, you'd have to patch the kernel yourself...
        • Okay. Thanks.

          I've tried manually patching XFS, O(1), and a couple of other odds and ends (latency, preempt) and wind up with something that doesn't even boot, or crashes/panics right afterwards.

          So thanks...that pretty much confirms that it's not something I'm doing... ;)

  • Older boxen? (Score:2, Interesting)

    by soupforare ( 542403 )
    I'm still mulling over which kernel to use with my old 486s.
    Right now, they're running 2.2.10, iirc; whatever Debian stable had on her boot disks.
    I'm not going to compile any kernels until my dual ppro is fixed, because compiling a kernel on a PoS 486 portable is not fun :P

    Anyone have any comments/recommendations on if/which new kernels are good to run on old shite?
    • I've just installed NetBSD+Apache on a 33MHz 486 laptop with 8MB RAM. I did recompile the kernel on another machine, though. This was essential because the generic laptop kernel was taking too much memory. The end result is nice and quite smooth.

      Of course there are lighter webservers such as Boa, but I needed PHP, and the Apache process is taking less than a megabyte of memory.

  • I'd LOVE to try out the 2.5 series, but because LVM is still not in there (it wasn't as of a week ago, at least), and I have all my data (movies, oggs, etc.) on LVM, I'm unable to use it... :(

    Does anyone have a clue when there will be LVM for 2.5?
