Hardware Virtualization Slower Than Software?
Jim Buzbee writes "Those of you keeping up with the latest virtualization techniques offered by both Intel and AMD will be interested in a new white paper by VMware that comes to the surprising conclusion that hardware-assisted x86 virtualization often fails to outperform software-assisted virtualization. My reading of the paper is that this counterintuitive result stems from the fact that hardware-assisted virtualization relies on expensive traps to catch privileged instructions, while software-assisted virtualization uses inexpensive software substitutions. One example given is compiling a Linux kernel under a virtualized Linux OS. Native wall-clock time: 265 seconds. Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem."
Re:Sponsored by VMWare.. what do you expect? (Score:5, Informative)
But VMware's agitation is understandable. They're about to lose it all to an open source project. Where have I seen this before?
Re:Sponsored by VMWare.. what do you expect? (Score:5, Informative)
Disclaimer: I work for VMware.
Re:Sponsored by VMWare.. what do you expect? (Score:5, Informative)
As far as the results go, there is nothing surprising here. This has happened before: fault-driven emulation of the 80287 was 50% or more slower than compiled-in emulation. There were quite a few other examples on x86, all of which revolve around the fact that x86 fault handling in protected mode is hideously slow. The last time I looked at it in asm was in the 386 days, and the numbers were in the 300-clock-cycle range for most faults (assuming no waits on memory accesses). While the 486 and Pentium improved things a bit in a few places, the overall order of magnitude remains the same (or even worse, due to memory waits). Anything that relies on faults on x86 is bound to be hideously slow.
Not that this matters much, as none of the VM technologies is particularly frugal with resources. They are deployed where there is excess capacity in the first place.
No not really (Score:3, Informative)
Virtualization software has to deal with this: when the machine it's virtualizing wants to execute such an instruction, it can't just hand the instruction off to the processor. It has to handle it itself, translating it into instructions that can safely be executed and virtualizing the result, hence the name virtualization.
The idea with hardware support like VT is that the processor itself takes a more active hand. Virtual machines will actually be able to execute Ring 0 instructions on the processor, because they won't really be running in the main Ring 0; the hardware creates a separate, isolated privilege space for them.
A simpler analogy is basic math. Suppose you want to multiply two numbers, and suppose your processor only has an add instruction. You'd have to do the multiplication in software, as an add loop. Now suppose a new version of that processor adds a multiply instruction that drives a dedicated multiplication unit. Now you are doing it in hardware: not only less code, but faster, because there's a dedicated unit for it.
It's not like companies just whack instructions onto their CPUs for the fun of it; different instructions command different parts of the hardware. SSE, 3DNow!, etc. don't just have the processor run little add or multiply loops; they actually kick in separate sections of hardware designed for SIMD. That's why they get the results they do.
Look to IBM (Score:3, Informative)
Re:Sponsored by VMWare.. what do you expect? (Score:5, Informative)
Note that Xen's original hypervisor implementation *is* a software solution -- it relies on rewriting the guest operating system kernel so that the kind of hardware traps VMware is talking about here are unnecessary. It worked flawlessly before the virtualisation technology (e.g. Intel VT) that VMware is testing was available.
This HAS happened before - with Stacker (Score:3, Informative)
This won't be the first time software beats hardware.
The original Stacker product was a combination of a hardware card and software. Think of the hardware card as an accelerator for the compression/decompression.
The hardware was faster on the oldest machines, but on anything above a 286/12 (I had a 286/20 at the time), or almost any 386, it ran faster without the hardware card. And on every 486, the card was useless.
So, while you may want to "consider the source" of this news, this is only one factor to weigh. As time goes on, I'm sure we'll see more studies, benchmarks, etc.
Remember, there are three things that are inevitable in a programmer's life: death, taxes, and benchmarks.
Re:Bullshit (Score:1, Informative)
Do you have any evidence or benchmarks to back that up? Did you read Keith's paper? If there are flaws in the reasoning and testing methodology, please point them out.
My understanding is that our performance data indicates that in apples-to-apples comparisons (same host OS on the same hardware), VMware-with-binary-translation is faster than Parallels-with-VT for normal workloads.
And note that I'm not in VMware's performance group, nor do I work on low-level virtualization stuff, so I'm not too familiar with the specific performance metrics. However, I'll add that it doesn't make sense for VMware to put its head in the sand by biasing its internal performance comparisons: if we were doing worse, we'd be quite upset about it and would be working hard to correct any such performance anomaly.