Hardware Virtualization Slower Than Software? 197

Jim Buzbee writes "Those of you keeping up with the latest virtualization techniques being offered by both Intel and AMD will be interested in a new white paper by VMware that comes to the surprising conclusion that hardware-assisted x86 virtualization often fails to outperform software-assisted virtualization. My reading of the paper is that this counterintuitive result comes from the fact that hardware-assisted virtualization relies on expensive traps to catch privileged instructions, while software-assisted virtualization uses inexpensive software substitutions. One example given is compilation of a Linux kernel under a virtualized Linux OS. Native wall-clock time: 265 seconds. Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem."
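Working the quoted times into overhead percentages (just back-of-the-envelope arithmetic on the numbers above, sketched in C):

    #include <stdio.h>

    int main(void) {
        /* Wall-clock times quoted in the summary (seconds). */
        double native = 265.0, software = 393.0, hardware = 484.0;

        /* Roughly 48% and 83% slower than native, respectively. */
        printf("software-assisted overhead: %.0f%%\n", 100.0 * (software - native) / native);
        printf("hardware-assisted overhead: %.0f%%\n", 100.0 * (hardware - native) / native);
        return 0;
    }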
Comments Filter:
  • by mnmn ( 145599 ) on Sunday August 13, 2006 @04:48AM (#15897487) Homepage
    If you search back on VMware vs. XenSource, you'll see VMware is doing everything it can to discredit Xen and hardware hypervisors. Instead of saying "it doesn't work," it's more effective to say "it works, we have it too, but it fails on its own, so it needs our software as well." From everything I've read about hypervisors, including the Power CPU hypervisors from IBM (which have been functional for years) and the original Cambridge paper that created Xen, hypervisors really do outperform software solutions. You do need a software mini-OS as the root on top of which you install the guest OSes, which is better than using Windows as the root OS.

    But VMware's agitation is understandable. They're about to lose it all to an open source project. Where have I seen this before?
  • by Anonymous Coward on Sunday August 13, 2006 @05:02AM (#15897513)
    See title... VMware makes software virtualisation products. Of course they're going to try to find that software methods are better.

    Disclaimer: I work for VMware.

    1. VMware already supports VT, but it's not enabled by default because for normal workloads it's slower. If VT really were faster, do you really think we'd be choosing to use a slower approach and making customers unhappy?
    2. Even Intel admits the first generation of VT hardware wasn't so great and now claims that they were aiming for correctness instead of performance:
  • by arivanov ( 12034 ) on Sunday August 13, 2006 @05:09AM (#15897531) Homepage
    While they offer software virtualisation products, they are also interested in those products having hardware assistance. The AMD and Intel specs were designed with input from them (among other vendors).

    As far as the results go, there is nothing surprising here. This has happened before. Fault-driven emulation of the 80287 was nearly 50% slower than compiled-in emulation. There were quite a few other x86 examples, all of which revolve around the fact that x86 fault handling in protected mode is hideously slow. The last time I looked at it in asm was in the 386 days, and the numbers were in the 300-clock-cycle range for most faults (assuming no waits on memory accesses). While the 486 and Pentium improved things a bit in a few places, the overall order of magnitude remains the same (or is even worse, due to memory waits). Anything that relies on faults on x86 is bound to be hideously slow.
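    To put a rough number on how expensive a fault round trip still is, here is a sketch (assumes x86-64 Linux and GCC; it measures a user-level fault delivered back as a signal, so signal delivery and sigsetjmp/siglongjmp overhead are included and the figure is only an upper bound on the raw fault cost):

        #include <setjmp.h>
        #include <signal.h>
        #include <stdio.h>
        #include <x86intrin.h>

        static sigjmp_buf env;

        static void handler(int sig) {
            (void)sig;
            siglongjmp(env, 1);              /* jump back past the faulting access */
        }

        __attribute__((noinline)) static void empty_call(void) {
            __asm__ volatile("");            /* keep the call from being optimized away */
        }

        int main(void) {
            struct sigaction sa = {0};
            sa.sa_handler = handler;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGSEGV, &sa, NULL);

            volatile int *bad = NULL;
            unsigned long long t0 = __rdtsc();
            if (sigsetjmp(env, 1) == 0)
                (void)*bad;                  /* fault -> kernel -> signal -> back here */
            unsigned long long t1 = __rdtsc();
            empty_call();                    /* an ordinary call/ret for comparison */
            unsigned long long t2 = __rdtsc();

            printf("fault round trip: ~%llu cycles\n", t1 - t0);
            printf("plain call:       ~%llu cycles\n", t2 - t1);
            return 0;
        }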

    Not that this matters much, as none of the VM technologies is particularly frugal with resources. They are deployed because there is an excess of resources in the first place.
  • No not really (Score:3, Informative)

    by Sycraft-fu ( 314770 ) on Sunday August 13, 2006 @06:11AM (#15897629)
    In the end, the software instructions are actually executed on hardware, and that hardware imposes limits on what they can do. In the case of virtualization, the problem comes with privilege levels. Intel processors have four levels of privilege, called Rings 0-3, of which nearly all OSes use only two: 0 and 3. The kernel and associated code run in Ring 0, everything else in Ring 3. The ring you are in controls which instructions the processor will allow you to execute and which memory you can access. So if software in Ring 3 tries to execute certain instructions, the processor will simply refuse and generate a fault.
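    To see that refusal in action, here is a minimal sketch (assumes Linux and GCC on x86; the kernel reports the general protection fault to the process as SIGSEGV):

        #include <signal.h>
        #include <unistd.h>

        static void on_fault(int sig) {
            (void)sig;
            static const char msg[] = "CPU refused the ring-3 'cli' and raised a fault\n";
            write(STDOUT_FILENO, msg, sizeof msg - 1);
            _exit(0);
        }

        int main(void) {
            signal(SIGSEGV, on_fault);   /* Linux delivers the #GP to us as SIGSEGV */
            __asm__ volatile("cli");     /* privileged: only ring 0 (or IOPL 3) may disable interrupts */
            write(STDOUT_FILENO, "not reached\n", 12);
            return 0;
        }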

    Virtualization software has to deal with this: when the machine it's virtualizing wants to execute such an instruction, it can't just hand the instruction off to the processor. It has to handle the instruction itself, translating it into instructions that can be executed and virtualizing the effect, hence the name virtualization.
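    Not how VMware's actual binary-translation engine works, but a toy sketch of the substitution idea (the mini instruction set and emulation routine below are made up for illustration): anything privileged is swapped for a call into the monitor, so no fault is ever taken.

        #include <stdio.h>

        /* Hypothetical mini guest "instruction set", for illustration only. */
        typedef enum { OP_LOAD, OP_ADD, OP_WRITE_CR3, OP_HALT } op_t;
        typedef struct { op_t op; long arg; } insn_t;

        static long vcpu_acc;            /* toy accumulator                  */
        static long vcpu_shadow_cr3;     /* toy shadow of a control register */

        /* What the monitor does instead of letting the guest touch CR3. */
        static void vmm_emulate_write_cr3(long val) {
            vcpu_shadow_cr3 = val;       /* update shadow state, not the real register */
            printf("VMM: emulated write of 0x%lx to shadow CR3\n", (unsigned long)val);
        }

        /* Unprivileged ops run as-is; privileged ones are substituted. */
        static void run_translated(const insn_t *code) {
            for (; code->op != OP_HALT; code++) {
                switch (code->op) {
                case OP_LOAD:      vcpu_acc  = code->arg; break;            /* safe: run directly   */
                case OP_ADD:       vcpu_acc += code->arg; break;            /* safe: run directly   */
                case OP_WRITE_CR3: vmm_emulate_write_cr3(code->arg); break; /* privileged: replaced */
                default: break;
                }
            }
        }

        int main(void) {
            insn_t guest[] = { {OP_LOAD, 5}, {OP_ADD, 7}, {OP_WRITE_CR3, 0x1000}, {OP_HALT, 0} };
            run_translated(guest);
            printf("guest accumulator: %ld, shadow CR3: 0x%lx\n", vcpu_acc, (unsigned long)vcpu_shadow_cr3);
            return 0;
        }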

    The idea with hardware support like VT is that the processor itself takes a more active hand. Virtual machines can actually execute Ring 0 instructions on the processor, because they aren't really running in the main Ring 0; the processor creates a separate, isolated privilege space for them.
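    Whether your CPU advertises those extensions at all is easy to check (a sketch assuming GCC's <cpuid.h>; note that firmware can still disable the feature even when the bit is set):

        #include <stdio.h>
        #include <cpuid.h>

        int main(void) {
            unsigned int eax, ebx, ecx, edx;

            /* Intel VT-x shows up as CPUID.01H:ECX bit 5 (VMX). */
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                printf("Intel VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

            /* AMD-V shows up as CPUID.80000001H:ECX bit 2 (SVM). */
            if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
                printf("AMD-V (SVM):      %s\n", (ecx & (1u << 2)) ? "yes" : "no");

            return 0;
        }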

    A simpler analogy is basic math. Suppose you want to multiply two numbers, and suppose your processor only has an add instruction. You'd have to do the multiplication in software, i.e. with an add loop. Now suppose a new version of that processor adds a multiply instruction that drives a dedicated multiplication unit. Now you are doing it in hardware: not only is it less code, it's faster because there's a dedicated unit for it.
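    The analogy in code (a toy, nothing more): the "software" path loops over additions, the "hardware" path uses the multiply instruction the CPU provides.

        #include <stdio.h>

        /* "Software" multiply: repeated addition, the only option if the CPU had no MUL. */
        static long mul_by_adding(long a, long b) {
            long result = 0;
            for (long i = 0; i < b; i++)
                result += a;
            return result;
        }

        int main(void) {
            long a = 1234, b = 5678;
            printf("add loop: %ld\n", mul_by_adding(a, b)); /* thousands of ADDs */
            printf("hardware: %ld\n", a * b);               /* one MUL instruction */
            return 0;
        }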

    It's not like companies just whack instructions onto their CPUs for the fun of it; the instructions command different parts of the hardware to do different things. SSE, 3DNow!, etc. don't just have the processor run little add or multiply loops; they actually kick in separate sections of hardware designed for SIMD. Hence why they get the results they do.
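    For example (a sketch assuming GCC or Clang on an SSE-capable x86; a modern compiler may auto-vectorize the scalar loop anyway, but written out explicitly the difference looks like this):

        #include <stdio.h>
        #include <xmmintrin.h>   /* SSE intrinsics */

        int main(void) {
            float a[4] = {1, 2, 3, 4};
            float b[4] = {10, 20, 30, 40};
            float scalar[4], simd[4];

            /* Scalar path: four separate additions through the ordinary FP/ALU path. */
            for (int i = 0; i < 4; i++)
                scalar[i] = a[i] + b[i];

            /* SSE path: one ADDPS adds all four lanes in the SIMD unit. */
            __m128 va = _mm_loadu_ps(a);
            __m128 vb = _mm_loadu_ps(b);
            _mm_storeu_ps(simd, _mm_add_ps(va, vb));

            for (int i = 0; i < 4; i++)
                printf("%g %g\n", scalar[i], simd[i]);
            return 0;
        }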
  • Look to IBM (Score:3, Informative)

    by dpilot ( 134227 ) on Sunday August 13, 2006 @06:40AM (#15897665) Homepage Journal
    IBM has been shipping virtualization since before many of these newcomers were even born. What do you think the 'V' in MVS or VM stands for? I wonder how well IBM's expired patents compare to modern virtualization. Of course in this case it helps to own the hardware, instruction set, and operating system.
  • by julesh ( 229690 ) on Sunday August 13, 2006 @06:46AM (#15897673)
    From everything I've read about hypervisors including the Power CPU hypervisors from IBM (which have been functional for years) and the original Cambridge paper that created Xen, Hypervisors really outperform software solutions.

    Note that Xen's original hypervisor implementation *is* a software solution -- it relies on rewriting the guest operating system kernel so that the kind of hardware traps VMware is talking about here are unnecessary. Note that it worked flawlessly before the virtualisation technology VMware is testing (e.g. Intel VT) was available.
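    The shape of that idea, not Xen's actual hypercall ABI (the entry point below is a made-up stand-in): a paravirtualized guest kernel calls the hypervisor directly where an unmodified kernel would execute a privileged instruction and take a trap.

        #include <stdio.h>

        static unsigned long shadow_cr3;

        /* Hypothetical stand-in for a hypercall entry point; the real Xen ABI
           (hypercall page, call numbers) is not shown here. */
        static void hypervisor_set_pagetable(unsigned long pt) {
            shadow_cr3 = pt;                    /* hypervisor updates its own bookkeeping */
            printf("hypervisor: page table base set to 0x%lx\n", pt);
        }

        /* Unmodified guest kernel:     mov %rax, %cr3  ->  fault/VM exit  ->  emulate.
           Paravirtualized guest kernel: the source was changed to call the
           hypervisor directly, so no fault ever happens. */
        static void guest_switch_pagetable(unsigned long pt) {
            hypervisor_set_pagetable(pt);       /* "hypercall" instead of mov-to-CR3 */
        }

        int main(void) {
            guest_switch_pagetable(0x12345000UL);
            printf("hypervisor shadow CR3: 0x%lx\n", shadow_cr3);
            return 0;
        }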
  • by Anonymous Coward on Sunday August 13, 2006 @07:42AM (#15897766)
    VMware fully supports VT, but it's not enabled by default.
  • This won't be the first time software beats hardware.

    The original Stacker product was a combination of a hardware card and software. Think of the hardware card as an accelerator for doing the compression/decompression.

    The hardware was faster on the oldest machines, but on anything above a 286/12 (I had a 286/20 at the time), or almost any 386, it ran faster without the hardware card. And on every 486, the card was useless.

    So, while you may want to "consider the source" of this news, this is only one factor to weigh. As time goes on, I'm sure we'll see more studies, benchmarks, etc.

    Remember, there are three things that are inevitable in a programmer's life: death, taxes, and benchmarks.

  • Re:Bullshit (Score:1, Informative)

    by Anonymous Coward on Sunday August 13, 2006 @02:24PM (#15898879)
    Just because VMware is incompetent and can't implement hardware VT correctly doesn't mean VT is slower. Try Parallels or Xen to see VT implemented correctly. VMware has broken VT.

    Do you have any evidence or benchmarks to back that up? Did you read Keith's paper? If there are flaws in the reasoning and testing methodology, please point them out.

    My understanding is that our performance data indicates that in apples-to-apples comparisons (same host OS on the same hardware), VMware-with-binary-translation is faster than Parallels-with-VT for normal workloads.

    And note that I'm not in VMware's performance group, nor do I work on low-level virtualization stuff, so I'm not too familiar with the specific performance metrics. However, I'll add that it logically doesn't make sense for VMware to put its head in the sand by biasing its internal performance comparisons: if we were doing worse, we'd be quite upset about it and would be working hard to correct any such performance anomaly.
