Hardware Virtualization Slower Than Software?

Jim Buzbee writes "Those of you keeping up with the latest virtualization techniques being offered by both Intel and AMD will be interested in a new white paper by VMWare that comes to the surprising conclusion that hardware-assisted x86 virtualization oftentimes fails to outperform software-assisted virtualization. My reading of the paper says that this counterintuitive result is often due to the fact that hardware-assisted virtualization relies on expensive traps to catch privileged instructions, while software-assisted virtualization uses inexpensive software substitutions. One example given is compilation of a Linux kernel under a virtualized Linux OS. Native wall-clock time: 265 seconds. Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem."
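One way to picture the trap-versus-substitution difference is a toy C sketch of a single privileged instruction (cli, disable interrupts) under each technique. The helper names, comments, and structure below are illustrative assumptions, not code or measurements from the VMWare paper.

    /* Toy sketch: cost of one privileged guest instruction under each technique.
     * The emulation work itself is trivial; the trap round-trip is what hurts. */
    #include <stdio.h>

    struct vcpu {
        int interrupts_enabled;   /* virtual interrupt flag kept by the monitor */
    };

    /* Hardware-assisted path: guest cli -> VM exit -> decode -> emulate ->
     * VM entry.  The world switch in and out of the monitor dominates. */
    static void cli_via_trap(struct vcpu *v)
    {
        /* vm_exit(): save guest state, switch to the monitor (expensive)   */
        /* decode the faulting instruction                                  */
        v->interrupts_enabled = 0;            /* the actual work            */
        /* vm_entry(): restore guest state, resume the guest (expensive)    */
    }

    /* Binary-translation path: the monitor already rewrote cli into a direct
     * store to the virtual flag, so nothing traps at run time. */
    static void cli_via_translation(struct vcpu *v)
    {
        v->interrupts_enabled = 0;            /* in-line substitution       */
    }

    int main(void)
    {
        struct vcpu v = { 1 };
        cli_via_trap(&v);
        cli_via_translation(&v);
        printf("interrupts_enabled = %d\n", v.interrupts_enabled);
        return 0;
    }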
  • by thegrassyknowl ( 762218 ) on Sunday August 13, 2006 @04:21AM (#15897444)
    See title... VMWare make software virtualisation products. Of course they're going to try and find that software methods are better.
  • by cp.tar ( 871488 ) <cp.tar.bz2@gmail.com> on Sunday August 13, 2006 @04:25AM (#15897452) Journal

    Even so, they may be at least partially right.

    Besides, if a hybrid approach is necessary, VMWare will need to adjust as well. Or am I missing something?

  • by zerogeewhiz ( 73483 ) on Sunday August 13, 2006 @04:26AM (#15897453)
    Haven't read it, but I wonder if they were using VT/Pacifica chipsets or no...

    It's like Apple's claim that their Intel jobbies are 5x faster - a bit silly and very, very specific...

    And yes, VMWare are hardly likely to mention that Xen-style virtualisation is going to be better now, are they?
  • by MrFlannel ( 762587 ) on Sunday August 13, 2006 @04:31AM (#15897461)
    Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem.
    So, um, a hybrid approach is better because it will take 439* seconds? Why?

    * - I imagine in real life it's not a 1:1 ratio, but for the sake of argument, work with me.
  • by njdj ( 458173 ) on Sunday August 13, 2006 @04:37AM (#15897469)

    The correct conclusion is not that virtualization is better done entirely in software, but that current hardware assists to virtualization are badly designed. As the complete article points out, the hardware features need to be designed to support the software - not in isolation.

    It reminds me of an influential paper in the RISC/CISC debate, about 20 years ago. Somebody wrote a C compiler for the VAX that output only a RISC-like subset of the VAX instruction set. The generated code ran faster than the output of the standard VAX compiler, which used the whole (CISC) VAX instruction set. The naive conclusion was that complex instructions are useless. The correct conclusion was that the original VAX compiler was a pile of manure.

    The similarity of the two situations is that it's a mistake to draw a general conclusion about the relative merits of two technologies, based on just one example of each. You have to consider the quality of the implementations - how the technology has been used.

  • by cp.tar ( 871488 ) <cp.tar.bz2@gmail.com> on Sunday August 13, 2006 @04:43AM (#15897478) Journal

    I suppose there are certain things hardware virtualisation does better.

    The trick is, I'd guess, to find out which works better in which circumstances.

You can see that people suspect this white paper because of its origin, and they are right to, if only because just one type of test has been performed; surely not all computing tasks behave the same way as a kernel compile.
    This suggests that VMWare have found the example which supports their claims the best; the question is, of course, whether this is the only such example.

    So if we suppose that there are certain types of problems where hardware virtualisation outperforms software virtualisation, hybrid solutions seem to be the right way to go.

    P.S. I don't really know what I'm talking about...

  • by toolz ( 2119 ) on Sunday August 13, 2006 @04:44AM (#15897479) Homepage Journal
    When are people going to figure out that "hardware solutions" are really software running on hardware, just like any other solution?

Sure, the instructions may be hardcoded, coming out of ROM, or whatever, but in the end it's instructions that tell the hardware what to do. And those instructions are called "software", no matter how the vendor tries to spin it. And if the solution performs badly, it is because the software is designed badly. Period.

  • by XMLsucks ( 993781 ) on Sunday August 13, 2006 @04:53AM (#15897496) Journal
    VMware sells both hardware-accelerated and software virtualization products. They implemented full support for VT (how else would they benchmark it? Plus they were the first to support VT). If you run VMware on 64-bit Windows, then you use VMware's VT product. But because VMware's original software method is faster than the VT method on 32-bit, they continue to use the software approach.

VMware's paper is a typical research paper, published at a peer-reviewed conference. This means that they have used the scientific method. The chances are 99.9999% that you will easily reproduce their results, even if you change the benchmarks.

    I, on the other hand, am smart enough to see that they are stating the obvious. If you read the Intel VT spec, you'll see that Intel does nothing for page table virtualization, nor anything for device virtualization. Both are extremely expensive, and besides sti/cli, are the prime candidates for hardware assists. Intel will likely solve this performance issue in future revs, but right now, VT isn't fast enough.

    Hmmm, virtualisation? Do you happen to work on Xen?

     
  • Re:Bias? (Score:4, Insightful)

    by RegularFry ( 137639 ) on Sunday August 13, 2006 @04:56AM (#15897502)
    Insisting on third-party verification of results is hardly damning either of them... It's just scientific. You (and everyone else) are absolutely right to be sceptical, and not just because VMware have a vested interest in this case. They might just be wrong. Or not.
  • wrong (Score:4, Insightful)

    by m874t232 ( 973431 ) on Sunday August 13, 2006 @05:04AM (#15897519)
    Hardware virtualization may be slower right now, but both the hardware and the software supporting it are new. Give it a few iterations and it will be equal to software virtualization.

    It may or may not be faster eventually, but that doesn't matter. What matters is that small changes in the hardware make it possible to stop having to depend on costly, proprietary, and complex software--like that sold by VMware.
  • by rwhiffen ( 141401 ) on Sunday August 13, 2006 @05:25AM (#15897555) Homepage Journal
I don't see how that tracks. How is the 2% impact going to save me a bundle? Moving to Linux will supposedly save me money whether I virtualize or not; I don't see how its being virtualization friendly improves things. Are you saying I'll spend less on hardware by switching to Linux? Migrating to Linux isn't free (man-hours wise), so the hardware savings had better be pretty damn substantial to offset it.

    I should be sleeping.

    Rich
  • by graf0z ( 464763 ) on Sunday August 13, 2006 @05:28AM (#15897560)
Paravirtualization (running hypervisor-aware guest kernels, e.g. a patched Linux on Xen) is faster than both binary translation and "full" virtualization (rough sketch below). And you don't need CPUs with the VT extension.

    g
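    P.S. Here's the rough sketch promised above of what "hypervisor-aware" means in practice. This is hypothetical C of my own, not Xen's actual hypercall ABI; the stub stands in for a real hypercall so it compiles on its own.

    #include <stdio.h>

    typedef unsigned long pte_t;          /* page table entry, simplified */

    /* Stand-in for a real hypercall: the paravirtualized guest asks the
     * hypervisor to perform and validate the privileged update. */
    static int hypervisor_update_pte(pte_t *ptep, pte_t val)
    {
        *ptep = val;                      /* hypervisor would validate first */
        return 0;
    }

    /* Unmodified guest: writes the entry directly, so a full-virtualization
     * monitor has to catch this with a trap or with translated code. */
    static void set_pte_native(pte_t *ptep, pte_t val)
    {
        *ptep = val;
    }

    /* Paravirtualized guest: patched at source level to make one explicit
     * (and batchable) request instead of touching the structure itself. */
    static void set_pte_paravirt(pte_t *ptep, pte_t val)
    {
        hypervisor_update_pte(ptep, val);
    }

    int main(void)
    {
        pte_t entry = 0;
        set_pte_native(&entry, 0x1000 | 1);
        set_pte_paravirt(&entry, 0x2000 | 1);
        printf("final pte: %#lx\n", entry);
        return 0;
    }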

  • by julesh ( 229690 ) on Sunday August 13, 2006 @06:58AM (#15897687)
    Because if you actually RTFA it shows that the hardware virtualization is faster for some benchmarks (e.g. processing system calls) and slower for others (e.g. performing I/O requests or page-table modifications); if you combine the best features of each you should be able to get a virtual machine that is faster than both.
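    Purely as an illustration (the categories and the choices are my own guesses from that pattern, not anything in the paper), a hybrid monitor could pick a technique per class of operation, something like:

    #include <stdio.h>

    enum vmm_mode { MODE_HARDWARE_VT, MODE_BINARY_TRANSLATION };
    enum guest_op { OP_SYSCALL, OP_PAGE_TABLE_UPDATE, OP_DEVICE_IO };

    /* Rule of thumb from the benchmark pattern described above: system calls
     * were cheaper under VT, while page-table and I/O heavy work was cheaper
     * under binary translation. */
    static enum vmm_mode choose_mode(enum guest_op op)
    {
        switch (op) {
        case OP_SYSCALL:           return MODE_HARDWARE_VT;
        case OP_PAGE_TABLE_UPDATE: return MODE_BINARY_TRANSLATION;
        case OP_DEVICE_IO:         return MODE_BINARY_TRANSLATION;
        }
        return MODE_BINARY_TRANSLATION;
    }

    int main(void)
    {
        const char *name[] = { "hardware VT", "binary translation" };
        printf("syscalls     -> %s\n", name[choose_mode(OP_SYSCALL)]);
        printf("page tables  -> %s\n", name[choose_mode(OP_PAGE_TABLE_UPDATE)]);
        printf("device I/O   -> %s\n", name[choose_mode(OP_DEVICE_IO)]);
        return 0;
    }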
  • by interiot ( 50685 ) on Sunday August 13, 2006 @07:03AM (#15897702) Homepage
It just seems like many people who try to move away from Windows want to at least have the option to use Windows once in a while.... The Mac-moving-to-Intel thing was met with a lot of excitement because of this, a lot of Linux people seem to say this, and it seems like in a lot of companies employees must be productive with specific document formats. Certainly Windows isn't the only point of virtualization, but it seems like it's a really big one, especially for desktop users.
  • by andreyw ( 798182 ) on Sunday August 13, 2006 @07:44AM (#15897769) Homepage
If VMWare's solution still needs a host OS (I remember them using a stripped-down Linux for their server offering), then no... they might use a subset of VT, but it's not a true hypervisor.

    And by the way... yes... device virtualization is still not there, but your page tables claim is bullshit. If you read the VT (and the SVM) docs, you would realize that you can implement shadow page tables RIGHT NOW. The hardware assists are there.
  • by Sycraft-fu ( 314770 ) on Sunday August 13, 2006 @09:16AM (#15897902)
It's not that people don't look to old mainframe solutions for things, they do; it's that often what was feasible on those wasn't on normal hardware, until recently. There was no reason for chip makers to waste silicon on virtualization hardware on desktops until fairly recently; there just wasn't a big desktop virtualization market. Computers are finally powerful enough that it's worth doing.

It's no surprise that large, extremely expensive computers get technology before home computers do. You give me $20 million to build something with, I can make it do a lot. You give me $2000, it's going to have to be scaled way back, even with economies of scale.

You see the same thing with 3D graphics. Most, perhaps even all, of the features that come to 3D cards were done on high-end visualization systems first. It's not that the 3D companies didn't think of them, it's that they couldn't do it. The original Voodoo card wasn't amazing in that it did 3D; it was much more limited than other things on the market. It was amazing in that it did it at a price you could afford for a home system. 3dfx would have loved to have a hardware T&L engine, AA features, procedural textures, etc.; there just wasn't the silicon budget for it. It's only with more developments that this kind of thing has become feasible.

So I really doubt Intel didn't do something like VT because they thought IBM was wrong on the 360; I think rather they didn't do it because it wasn't feasible or marketable on desktop chips.
  • by Hal_Porter ( 817932 ) on Sunday August 13, 2006 @11:16AM (#15898244)
    > I'm curious why making a VAX fast is such a problem?

    I read that the calling convention specified a general call instruction which was architected to do a lot of stuff - build a stack frame, push registers and so on, so even an efficient implementation will be slow. Much of the time, you could get away with something much simpler.

    >I would have thought the 16(if memory serves) orthogonal registers would have made a nice
    >target for compilers, contrary to the ridiculous number of (non-orthogonal) registers on x86..

    x86 is orthogonal in protected mode, and register renaming helps with the low number of architectural registers. And if you're doing something intensive, you have SSE registers to use too. And x86-64 has more architectural registers anyway. So most of the architectural problems with the 8086 have been solved or mitigated.

    I guess if the VAX had been as popular, something similar would have happened of course.
  • Re:Say whaa??? (Score:2, Insightful)

    by Anonymous Coward on Sunday August 13, 2006 @06:07PM (#15899669)
So in what way is it different for VMWare? It's also free! And in addition it lets you run unmodified kernels.
