
Hardware Virtualization Slower Than Software? 197

Posted by Hemos
from the the-jury-is-still-out dept.
Jim Buzbee writes "Those of you keeping up with the latest virtualization techniques being offered by both Intel and AMD will be interested in a new white paper by VMware that comes to the surprising conclusion that hardware-assisted x86 virtualization oftentimes fails to outperform software-assisted virtualization. My reading of the paper says that this counterintuitive result is often due to the fact that hardware-assisted virtualization relies on expensive traps to catch privileged instructions, while software-assisted virtualization uses inexpensive software substitutions. One example given is compilation of a Linux kernel under a virtualized Linux OS. Native wall-clock time: 265 seconds. Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem."
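The trap-versus-substitution tradeoff in the summary can be sketched with a toy cost model. All cycle counts below are invented for illustration, not taken from the paper:

```python
# Toy cost model contrasting the two approaches described above.
# Cycle costs are illustrative assumptions, not measured values.

TRAP_COST = 300     # hardware trap + VMM entry/exit (assumed)
EMULATE_COST = 10   # the emulation work itself (assumed)
CALL_COST = 5       # direct call into the VMM from translated code (assumed)
NATIVE_COST = 1     # an ordinary unprivileged instruction

def run_hardware_assisted(instructions):
    """Every privileged instruction traps into the VMM."""
    cycles = 0
    for op in instructions:
        if op == "priv":
            cycles += TRAP_COST + EMULATE_COST
        else:
            cycles += NATIVE_COST
    return cycles

def run_binary_translated(instructions):
    """Privileged instructions were rewritten ahead of time into
    direct calls to emulation routines -- no trap needed."""
    cycles = 0
    for op in instructions:
        if op == "priv":
            cycles += CALL_COST + EMULATE_COST
        else:
            cycles += NATIVE_COST
    return cycles

# A kernel-heavy workload: 5% privileged instructions.
workload = (["priv"] + ["norm"] * 19) * 1000
```

With these assumed costs the trap overhead dominates even though only one instruction in twenty is privileged, which is the shape of the result the paper reports.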
This discussion has been archived. No new comments can be posted.

  • by thegrassyknowl (762218) on Sunday August 13, 2006 @03:21AM (#15897444)
    See title... VMWare make software virtualisation products. Of course they're going to try and find that software methods are better.
    • Even so, they may be at least partially right.

      Besides, if a hybrid approach is necessary, VMWare will need to adjust as well. Or am I missing something?

      • by mnmn (145599) on Sunday August 13, 2006 @03:48AM (#15897487) Homepage
        If you search back on VMware vs XenSource, you'll see VMware is doing everything to discredit Xen and hardware hypervisors. Instead of saying 'it doesn't work', it's more effective to say 'it works, we have it too, but it fails on its own, so it needs our software too'. From everything I've read about hypervisors, including the Power CPU hypervisors from IBM (which have been functional for years) and the original Cambridge paper that created Xen, hypervisors really outperform software solutions. You do need a software mini-OS as the root, on top of which you'd install the OSes, which is better than using Windows as the root OS.

        But Vmware's agitation is understandable. They're about to lose it all to an open source project. Where have I seen this before?
        • by julesh (229690) on Sunday August 13, 2006 @05:46AM (#15897673)
          From everything I've read about hypervisors including the Power CPU hypervisors from IBM (which have been functional for years) and the original Cambridge paper that created Xen, Hypervisors really outperform software solutions.

          Note that Xen's original hypervisor implementation *is* a software solution -- it relies on rewriting the guest operating system kernel so that the kind of hardware traps that VMware are talking about here are unnecessary. Note that it worked flawlessly before the virtualisation technology (e.g. Intel VT) that VMware is testing was available.
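The rewriting approach described here can be sketched as a minimal model (class and hypercall names are invented; real Xen hypercalls go through a hypercall page and a trap into the hypervisor, not a Python method call):

```python
# Sketch of paravirtualization: the guest kernel is modified at the
# source level so privileged operations become explicit hypercalls,
# avoiding the hardware trap entirely.

class Hypervisor:
    def __init__(self):
        self.interrupts_enabled = True

    def hypercall(self, name):
        # Dispatch table of operations the guest may request.
        if name == "disable_interrupts":
            self.interrupts_enabled = False
        elif name == "enable_interrupts":
            self.interrupts_enabled = True
        else:
            raise ValueError(f"unknown hypercall: {name}")

class ParavirtGuestKernel:
    """A guest whose privileged operations were rewritten at port
    time into hypercalls -- no traps involved."""
    def __init__(self, hv):
        self.hv = hv

    def enter_critical_section(self):
        self.hv.hypercall("disable_interrupts")   # was: cli

    def leave_critical_section(self):
        self.hv.hypercall("enable_interrupts")    # was: sti

hv = Hypervisor()
guest = ParavirtGuestKernel(hv)
guest.enter_critical_section()
```

The cost of this design is the one the thread keeps returning to: the guest kernel must be modified, so an unmodified OS cannot run.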
          • This won't be the first time software beats hardware.

            The original Stacker product was a combination of a hardware card and software. Think of the hardware card as an accelerator for doing the compression/decompression.

            The hardware was faster on the oldest machines, but on anything above a 286/12 (I had a 286/20 at the time), or almost any 386, it ran faster without the hardware card. And on every 486, the card was useless.

            So, while you may want to "consider the source" of this news, this is only one f

            • I think it has to do with the age of the tools involved - in the case of the current virtualization issue, I believe it's simply because the software virtualization companies have spent years grinding down performance lag, whereas the tools have not yet been properly refined for use with the virtualization capabilities of the processors. Of course, this is a guess, as I have never looked closer at how it's done; I've been happy as long as it worked.

              The Stacker thing is simply because the other hardware
        • Where have I seen this before?

          Citrix?
          Not an open source product, and they didn't lose it to an open source product, but they made a product that has been largely made superfluous because MS built it right into the OS.
        • Where have you seen VMware discrediting XenSource? I haven't seen that. Can you back this up with some links? Searching for "VMware vs Xensource" was fruitless for me. And searching for "VMware discredits XenSource" was also fruitless.

          But Vmware's agitation is understandable. They're about to lose it all to an open source project. Where have I seen this before?

          I'll let you in on a secret: if you consider all costs, and return on investment, using VMware is a competitive advantage over using Xen.

          • I'll let you in on a secret: if you consider all costs, and return on investment, using VMware is a competitive advantage over using Xen.

            Xen: free. Linux: free. I don't understand where I would spend any money turning 1 Linux server into 2 Linux servers with Xen. We don't use Windows on anything but the domain controllers, but Xen doesn't support Windows (nor would I want to virtualize our DCs...)
        • But Vmware's agitation is understandable. They're about to lose it all to an open source project. Where have I seen this before?

          Reports of VMware's demise have been greatly exaggerated. They're just reacting to a new threat. VMware is an EMC company, and I doubt EMC is going to let virtualization die. The future of EMC's 23.9 billion dollar empire depends on their ability to virtualize and cluster their machines. This is the quiet before the storm in the storage market...

        • If you search back on Vmware vs Xensource, you'll see Vmware is doing everything to discredit Xen and hardware hypervisors. But Vmware's agitation is understandable. They're about to lose it all to an open source project. Where have I seen this before?

          --- Um, I dunno, where? Oh, you mean how open source Linux and BSD have killed off proprietary OS's like Windows. Yeah ... I've seen it before...or are you referring to something else? :-)

          Excuse my ignorance, but last I heard [dated info], was that Xen re

    • Haven't read it, but I wonder if they were using VT/Pacifica chipsets or no...

      It's like Apple's claim that their Intel jobbies are 5x faster - a bit silly and very, very specific...

      And yes, VMWare are hardly likely to mention that Xen-style virtualisation is going to be better now, are they?
    • by XMLsucks (993781) on Sunday August 13, 2006 @03:53AM (#15897496) Journal
      VMware sells both hardware-accelerated and software virtualization products. They implemented full support for VT (how else would they benchmark it? Plus they were the first to support VT). If you run VMware on 64-bit Windows, then you use VMware's VT product. But because VMware's original software method is faster than the VT method on 32-bit, they continue to use the software approach.

      VMware's paper is a typical research paper, published at a peer-reviewed conference. This means that they have used the scientific method. The chances are 99.9999% that you will easily reproduce their results, even if changing the benchmarks.

      I, on the other hand, am smart enough to see that they are stating the obvious. If you read the Intel VT spec, you'll see that Intel does nothing for page table virtualization, nor anything for device virtualization. Both are extremely expensive, and besides sti/cli, are the prime candidates for hardware assists. Intel will likely solve this performance issue in future revs, but right now, VT isn't fast enough.
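What the missing page-table assist forces the VMM to do can be sketched as a toy shadow-paging model (the class and the write-interception mechanism are simplified assumptions; real VMMs write-protect guest page tables and fix up the shadow on the resulting fault):

```python
# Toy model of shadow page tables: the guest maintains its own page
# table, while the VMM keeps a "shadow" copy that the real MMU uses.
# Every guest page-table write must be intercepted and propagated --
# that interception is the expensive part the first VT spec did
# nothing to accelerate.

class ShadowPagingVMM:
    def __init__(self):
        self.guest_pt = {}    # guest-virtual -> guest-physical
        self.shadow_pt = {}   # guest-virtual -> host-physical
        self.gpa_to_hpa = {}  # guest-physical -> host-physical
        self.fixups = 0       # intercepted guest page-table writes

    def map_guest_memory(self, gpa, hpa):
        self.gpa_to_hpa[gpa] = hpa

    def guest_pt_write(self, gva, gpa):
        # The guest's write faults into the VMM, which must update
        # both its bookkeeping and the shadow the MMU actually walks.
        self.fixups += 1
        self.guest_pt[gva] = gpa
        self.shadow_pt[gva] = self.gpa_to_hpa[gpa]

vmm = ShadowPagingVMM()
vmm.map_guest_memory(gpa=0x1000, hpa=0x9000)
vmm.guest_pt_write(gva=0x4000, gpa=0x1000)
```

Every fixup in this model corresponds to a trap on real hardware, which is why workloads that modify page tables heavily (like a kernel compile forking many processes) are the worst case.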

      Hmmm, virtualisation? Do you happen to work on Xen?

       
      • I see you mentioned most of what I was 'clarifying' in my post. Sorry, I didn't read the whole thing ;-). So about the only new info is that VMware runs both 32 and 64-bit AMD VMs in software mode.
      • If VMware's solution still needs a host OS (I remember them using a stripped-down Linux for their server offering), then no... they might use a subset of VT, but it's not a true hypervisor.

        And by the way... yes... device virtualization is still not there, but your page tables claim is bullshit. If you read the VT (and the SVM) docs, you would realize that you can implement shadow page tables RIGHT NOW. The hardware assists are there.
        • Incorrect. Read the specs, and you'll see that VT's original spec makes no mention of shadow page tables and SVM lists it as an implementation-dependent feature. BTW, implementation of nested page tables (AMD's version) was slated for RevF, but slipped and last I saw (from The Register) was slated for RevH. It's on Intel's road map, I don't know how far out.
    • by Anonymous Coward on Sunday August 13, 2006 @04:02AM (#15897513)
      See title... VMWare make software virtualisation products. Of course they're going to try and find that software methods are better.

      Disclaimer: I work for VMware.

      1. VMware already supports VT, but it's not enabled by default because for normal workloads it's slower. If VT really were faster, do you really think we'd be choosing to use a slower approach and making customers unhappy?
      2. Even Intel admits the first generation of VT hardware wasn't so great and now claims that they were aiming for correctness instead of performance:
      • Is AMD's Pacifica virtualisation system any better?
        • by Morgaine (4316) on Sunday August 13, 2006 @09:27AM (#15898072)
          Is AMD's Pacifica virtualisation system any better?

          Apparently, yes, and by a good margin.

          There are several documents and articles out there which point out VT's problems and how Pacifica is quite dramatically better. Here's an excerpt from "AMD Pacifica turns the nested tables" [theinq.net], part 3 of an informative series of articles:

          • The basic architecture of the K8 gives AMD more toys to play with, the memory controller and directly connected devices. AMD can virtualise both of these items directly while Intel has to do so indirectly if it can do so at all.

            This should allow an otherwise identical VMM to do more things in hardware and have lower overhead than VT. AMD appears to have used the added capability wisely, giving them a faster and as far as memory goes, more secure virtualisation platform."

          So, it looks like AMD are ahead on hardware virtualization at the moment.

          If I read it correctly, this is because Intel's VT actually requires a lot of software intervention, so it's not actually a very strong hardware solution at all.
          • It does seem like an advantage but there are still no benchmarks to be found although the hardware has been out there for more than a month.
            • Nested page tables isn't in the current generation of AMD chips (and thus, they will do no better than Intel's VT). I am VERY eagerly awaiting the next generation of chips.

              Pacifica has a slight advantage in that it supports ASIDs (Address Space IDs, see your OS textbook's section on page tables), a long-overdue x86 feature. But even theoretically, that's not going to make up the difference.
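The ASID idea can be modeled as a tag on each TLB entry (a simplified sketch with invented names; real TLBs are set-associative hardware, not a dict):

```python
# Toy model of an ASID-tagged TLB: entries are keyed by
# (asid, virtual_page), so switching between address spaces (or VMs)
# just changes the current ASID instead of flushing every entry --
# the flush being what pre-ASID x86 had to do on every switch.

class TaggedTLB:
    def __init__(self):
        self.entries = {}   # (asid, vpage) -> ppage
        self.flushes = 0

    def insert(self, asid, vpage, ppage):
        self.entries[(asid, vpage)] = ppage

    def lookup(self, asid, vpage):
        return self.entries.get((asid, vpage))

    def flush(self):
        # What an untagged TLB must do on every context switch.
        self.entries.clear()
        self.flushes += 1

tlb = TaggedTLB()
tlb.insert(asid=1, vpage=0x10, ppage=0xA0)
tlb.insert(asid=2, vpage=0x10, ppage=0xB0)  # same vpage, different VM
```

Both mappings of the same virtual page coexist, so neither VM's working set is evicted when the hypervisor switches between them.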

      • now claims that they were aiming for correctness instead of performance

        But hey, let's hear it for correctness!
      • by Anonymous Coward
        I designed one of the x86 h/w virtualization offerings. It's obvious that outside of device emulation, the biggest overhead of virtualization is the s/w emulation of what amounts to two levels of address translation (especially hairy in multiprocessor systems due to the brain-dead x86 page table semantics that do not require explicit invalidation). So clearly you want nested-paging support in h/w. However, that support is a little more complex than a few microcode changes to trap selected privileged instruct
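The two levels of translation this poster describes can be sketched as two table lookups (page numbers are illustrative):

```python
# Two-level address translation: a guest-virtual address must pass
# through the guest's page table and then the VMM's mapping of
# guest-physical to host-physical memory. With nested paging the
# hardware walks both tables; without it, software must pre-compose
# them into a single shadow table and keep that composition current.

guest_page_table = {0x10: 0x20}   # guest-virtual page -> guest-physical page
nested_page_table = {0x20: 0x30}  # guest-physical page -> host-physical page

def translate(gv_page):
    gp_page = guest_page_table[gv_page]    # level 1: guest's own mapping
    hp_page = nested_page_table[gp_page]   # level 2: VMM's mapping
    return hp_page
```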

      • Actually the reason why VMware will use a slower approach is because of Xen.

        If there's any condition where HW virtualization is slower than SW virtualization, you need to use it or lose to XenSource.

        Don't get me wrong, I work with VMware in Argentina and the guys that sell VMware here are really great to work with. I love the technology and how easy it is to manage. But certainly Xen+HW virtualization changes everything....

        The worst thing that VMware has is the EULA p
    • by arivanov (12034) on Sunday August 13, 2006 @04:09AM (#15897531) Homepage
      While they offer software virtualisation products, they are also interested in these products having hardware assistance. The AMD and Intel specs were designed with input from them (amidst other vendors).

      As far as the results go, there is nothing surprising here. This has happened before. Fault-driven emulation of the 80287 was 50% or more slower than compiled-in emulation. There were quite a few other examples on x86, which all revolve around the fact that x86 fault handling in protected mode is hideously slow. The last time I had a look at it in asm was in the 386 days, and the numbers were in the 300-clock-cycle range for most faults (assuming no wait on memory accesses). While the 486 and Pentium improved things a bit in a few places, the overall order remains the same (or even worse, due to memory waits). Anything that relies on faults in x86 is bound to be hideously slow.

      Not that this matters, as none of the VM technologies is particularly frugal with resources. They are deployed because there are excess resources in the first place.
    • More importantly they sell virtualization products that do not support VT, and their primary competitor does.

    • Their measurements may be accurate. The question for me is: what are they measuring? The slowest things about virtualisation for me are: a) swapping and memory use, because I tend to want LOTS of virtualisation, or none; b) peripheral hardware sharing issues, such as 3D video card acceleration; c) handling many users or workloads, so that each doesn't slow the others to a crawl.

      If hardware solutions can do a better job of compressing the memory that's not in use (unlikely) or virtualising 3D video, so tha
    • I just found out the hard way that Xen isn't quite ready to do hardware virtualization either. It does support the VT instruction set, but it doesn't handle disk IO well at all, to the point where you can get up to 50% performance loss. They say that this will eventually be fixed, but that doesn't change the fact that I spent time looking for the right hardware virtualization solution and it still doesn't perform. Software paravirtualization under Xen is probably still better than VMware though.

      So don't be
  • The real question is what type of test was performed... It would make sense that different applications would function differently in a variety of contexts. How about some variance? I dig VMWare, but come on...
    • So why don't you actually read the paper? It has quite a good explanation of what they did. FWIW, it wasn't a clear win for software; there were things the hardware implementation did better, but they're things that don't seem to be quite so important for real-world applications.
  • Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem.
    So, um, a hybrid approach is better because it will take 439* seconds? Why?

    * - I imagine in real life it's not a 1:1 ratio, but for the sake of argument, work with me.
    • I suppose there are certain things hardware virtualisation does better.

      The trick is, I'd guess, to find out which works better in which circumstances.

      You see that people suspect this white paper because of its origin; they are right to do so, at least because only one type of test has been performed; surely not all computing tasks perform the same way as a kernel compile.
      This suggests that VMWare have found the example which supports their claims the best; the question is, of course, whether this is th

    • It probably won't work that way at all. This could be more of an additive thing.

      For example, say you have a boat powered by a 393-horsepower engine and a 484-horsepower engine. If you run them both at the same time, the net power is not going to be 439 hp.

      Software+hardware won't add in nearly the same way, but I wouldn't be surprised if a hybrid approach was 50% faster than either method alone.
    • Because if you actually RTFA it shows that the hardware virtualization is faster for some benchmarks (e.g. processing system calls) and slower for others (e.g. performing I/O requests or page-table modifications); if you combine the best features of each you should be able to get a virtual machine that is faster than both.
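The best-of-both argument can be sketched with a toy dispatcher. The per-operation costs below are invented, chosen only to mirror the win/lose pattern described (hardware wins on system calls, software wins on I/O and page-table work):

```python
# Illustrative per-operation virtualization costs (assumed, not from
# the paper), for a hardware-assisted and a software (binary
# translation) implementation.
COSTS = {
    "syscall":   {"hw": 5,  "sw": 20},   # hardware assist wins here
    "io":        {"hw": 40, "sw": 15},   # software wins here
    "pagetable": {"hw": 50, "sw": 25},   # software wins here too
}

def total(workload, mode):
    """Cost of running the whole workload with one technique."""
    return sum(COSTS[op][mode] for op in workload)

def total_hybrid(workload):
    """A hybrid VMM that picks the cheaper technique per operation."""
    return sum(min(COSTS[op].values()) for op in workload)

workload = ["syscall"] * 100 + ["io"] * 50 + ["pagetable"] * 30
```

By construction the hybrid can never cost more than the better of the two pure strategies, which is the arithmetic behind "combine the best features of each".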
  • by njdj (458173) on Sunday August 13, 2006 @03:37AM (#15897469)

    The correct conclusion is not that virtualization is better done entirely in software, but that current hardware assists to virtualization are badly designed. As the complete article points out, the hardware features need to be designed to support the software - not in isolation.

    It reminds me of an influential paper in the RISC/CISC debate, about 20 years ago. Somebody wrote a C compiler for the VAX that output only a RISC-like subset of the VAX instruction set. The generated code ran faster than the output of the standard VAX compiler, which used the whole (CISC) VAX instruction set. The naive conclusion was that complex instructions are useless. The correct conclusion was that the original VAX compiler was a pile of manure.

    The similarity of the two situations is that it's a mistake to draw a general conclusion about the relative merits of two technologies, based on just one example of each. You have to consider the quality of the implementations - how the technology has been used.

    • The Intel processor design has been a pile of manure ever since the first 8086. On the other hand, the IBM zSeries range of computers has been doing virtualization since the 1960s, and presumably the hardware has been designed to get it right. Can anyone give comparable performance figures for programs running in a virtual machine or the bare metal for a zSeries machine?
      • You can't run on bare metal on zSeries. The whole architecture just isn't designed to work that way. It has to have the virtualisation layer, or at least something that provides nearly all of the functionality of what would traditionally be considered a virtualisation layer.
        • You can't run on bare metal on zSeries. The whole architecture just isn't designed to work that way. It has to have the virtualisation layer, or at least something that provides nearly all of the functionality of what would traditionally be considered a virtualisation layer.

          Linux can run on the bare metal (the only OS on the entire system), as a first-level image in an LPAR (the LPAR is actually managed by a lightweight hypervisor), and on top of z/VM (itself on top of the bare metal or an LPAR) which is

      • by TheRaven64 (641858) on Sunday August 13, 2006 @06:09AM (#15897712) Journal
        The easiest architecture to virtualise is the Alpha. It had a single privileged instruction, and all that did was shift to a higher privilege mode (which had a few shadow registers available) and then jump to an address in firmware. The firmware could be replaced by using one of these calls. If you wanted to virtualise it, then you could do so trivially by replacing the firmware with something that would check or permute the arguments and then vector off into the original firmware.

        It also had a few other advantages. Since you were adding virtual instructions, they all completed atomically (you can't pre-empt a process in the middle of an instruction). This meant you could put things like thread locking instructions in the PALCode and not require any intervention from the OS to run them. The VMS PALCode, for example, had a series of instructions for appending numbers to queues. These could be used to implement very fast message passing between threads (process some data, store it somewhere, then atomically write the address to the end of a queue) with no need to perform a system call (which meant no saving and loading of the CPU state, just jumping cheaply into a mode that could access a few more registers).
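The queue-append idea can be loosely modeled in Python (this is an analogy, not the Alpha mechanism: `deque.append` is thread-safe under CPython's GIL, standing in for the atomic PALCode instruction; names are invented):

```python
# Modeling PALCode-style atomic queue append: because the whole
# append is one uninterruptible operation, threads can pass messages
# without taking a lock or making a system call.
from collections import deque
import threading

mailbox = deque()

def producer(items):
    for item in items:
        mailbox.append(item)   # one atomic "instruction" -- no lock

threads = [threading.Thread(target=producer, args=(range(1000),))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Four producers race to append with no synchronization and no item is lost, which is the property the VMS queue instructions provided in hardware.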

        • These could be used to implement very fast message passing between threads (process some data, store it somewhere, then atomically write the address to the end of a queue) with no need to perform a system call (which meant no saving and loading of the CPU state, just jumping cheaply into a mode that could access a few more registers).

          I'm confused, why would you need a system command to pass messages between threads? Isn't that what atomic read-and-write ASM commands are for? That plus thread-shared memory
      • The IBM iSeries (identical to the pSeries hardware) also have a hardware HyperVisor.

        Their entry models (10k US$) are slow as shit though. Can't say anything about the more expensive machines, but anything that requires around 12 hours to upgrade its operating system can't be trusted.
    • >The naive conclusion was that complex instructions are useless. The correct conclusion was that the original VAX compiler was a pile of manure.

      Note that the 'naive conclusion' and the 'correct conclusion' are not contradictory: I remember an article recently where it was shown that the Alpha had three times the power of a corresponding VAX, which nicely made the point that CISC is shit.

      Now as Intel has shown, given enough efforts and money even x86 the poorest CISC ISA ever (VAX ISA was much nicer than x
      • '' Now as Intel has shown, given enough efforts and money even x86 the poorest CISC ISA ever (VAX ISA was much nicer than x86 ISA: more registers, orthogonal design) can be competitive and sofware compatibility makes the rest.. ''

        This was heavily discussed a while ago on comp.arch. Conclusion: VAX instruction set was an absolute nightmare for hardware designers; while today the problem of making x86 fast in spite of the instruction set is basically solved, making a VAX fast would have taken superhuman effor
        • I'm curious why making a VAX fast is such a problem?

          Sure, some VAX instructions such as 'list management' cannot really be made fast, but the x86 also has such instructions; those instructions are irrelevant, though, as they can be trapped and handled by microcode, and compiler writers avoid them because they know they are slower than doing it 'by hand'.

          I would have thought the 16(if memory serves) orthogonal registers would have made a nice target for compilers, contrary to the ridiculous n
          • > I'm curious why making a VAX fast is such a problem?

            I read that the calling convention specified a general call instruction which was architected to do a lot of stuff - build a stack frame, push registers and so on, so even an efficient implementation will be slow. Much of the time, you could get away with something much simpler.

            >I would have thought the 16(if memory serves) orthogonal registers would have made a nice
            >target for compilers, contrary to the ridiculous number of (non-orthogonal) reg
      • The "naive conclusion" was unsupported by the facts due to the "correct conclusion". Your additional argument, that Alpha was 3x more "powerful" than a corresponding VAX, is also meaningless. What constituted a "corresponding VAX"? Alphas were to be used in future VAXes, so there were inherent generational differences in play. Finally, there is no reason to believe that VAXes were representative of the best of breed in CISC design.

        All that limiting instructions to gain performance means is that the inst
    • by itsdapead (734413)

      The naive conclusion was that complex instructions are useless. The correct conclusion was that the original VAX compiler was a pile of manure.

      Perhaps the intended conclusion was that it was feasible to write an efficient compiler using only a small subset of the instruction set, intelligently chosen with compiler optimization in mind. Perhaps the fact that the original compiler was (as you assert) "a pile of manure" was not unconnected to the fact that it tried to achieve speed by exploiting the entir

      • "PS: If you think RISC lost the war, then remember that modern x86 processors consist of a RISC core with a translator stage to handle all those pesky, legacy CISC instructions."

        The "core" of modern x86 processor is nothing like RISC processors from back when the arguments raged. All modern processors implement designs similar to x86 rather than execute their instruction sets directly. Claiming that these internal engines are "RISC" is preposterous.

        The "IS" in RISC and CISC stands for "Instruction Set" an
  • by toolz (2119) on Sunday August 13, 2006 @03:44AM (#15897479) Homepage Journal
    When are people going to figure out that "hardware solutions" are really software running on hardware, just like any other solution?

    Sure, the instructions may be hardcoded, coming out of ROM, or whatever, but in the end it's instructions that tell the hardware what to do. And those instructions are called "software", no matter how the vendor tries to spin it. And if the solution performs badly, it is because the software is designed badly. Period.

    • I don't see how this is a significant distinction. The question, in terms you might prefer, is how virtualization using specialized hardware compares to doing the same thing in general purpose hardware. There doesn't seem to be any semantic difference. Are you just pointing out that a hardware implementation's performance is predicated on its design?
    • No not really (Score:3, Informative)

      by Sycraft-fu (314770)
      In the end, the software instructions are actually executed on hardware, and that hardware imposes limits on what they do. In the case of virtualization, the problem comes with privilege levels. Intel processors see 4 levels of privilege called Ring 0-3, of which two are used by nearly all OSes, 0 and 3. The kernel and associated code runs in Ring 0, everything else in Ring 3. Now, the ring you are in controls what instructions the processor will allow you to execute, and what memory you can ac
      • Yes, it is all well and good that hardware virtualization gives you tools that allow you to do virtualization more efficiently. The problem is, why in these tests did software virtualization come out ahead of hardware virtualization? You can dispute the methodology as giving misleading or inappropriate results, but unless they are lying (which is not impossible), you still have the issue that software virtualization performed better.

        Imagine a man with a computer and a man with a pen and paper both tasked
        • Re:No not really (Score:4, Interesting)

          by Sycraft-fu (314770) on Sunday August 13, 2006 @08:01AM (#15897876)
          I haven't read the results, and I doubt I have the technical knowledge to analyze them properly. However, if I were to guess as to why this might be the case, I'd say it's because they didn't do it right. This is a new and fairly complex technology; I somehow doubt it's easy to get right on the first try.

          I am not willing, based on a single datapoint, to make any conclusions. That's tangential to my point anyhow; my point was that doing something in hardware and software are quite different.
          • However if I were to guess as to why this might be the case I'd say it's because they didn't do it right.
            Holy crap, you just bloated "They're wrong." into 26 words. Do you work as a government advisor in your free time?
        • The problem is, why in these tests did software virtualization come out ahead of hardware virtualization?


          Presumably because Intel did a lousy job on the implementation. It's not like it hasn't happened before (see: x86 ISA, SSE, memory buses for multiprocessors, etc). Hopefully they will do better for the next version; if not there is always AMD's implementation.

  • It sounds to me like a hybrid approach may be the best answer

    As in so many cases before, a hybrid has proven to be the optimal solution. The good thing is that we have all these alternatives, and every VM company will try to evaluate, then optimize, which will lead to better-performing software VMs; and because hardware is slower to catch up, software VMs will probably be better for a while.

  • wrong (Score:4, Insightful)

    by m874t232 (973431) on Sunday August 13, 2006 @04:04AM (#15897519)
    Hardware virtualization may be slower right now, but both the hardware and the software supporting it are new. Give it a few iterations and it will be equal to software virtualization.

    It may or may not be faster eventually, but that doesn't matter. What matters is that small changes in the hardware make it possible to stop having to depend on costly, proprietary, and complex software--like that sold by VMware.
    • Yes, let's move it to our costly, proprietary, and complex hardware instead!

      Not to say that you're wrong or that hardware should be free-as-in-freedom, but the irony was too great to resist.
    • Not just the CPU (Score:5, Interesting)

      by kripkenstein (913150) on Sunday August 13, 2006 @04:55AM (#15897610) Homepage
      What matters is that small changes in the hardware make it possible to stop having to depend on costly, proprietary, and complex software--like that sold by VMware.

      I am 100% in favor of cheap and open solutions. But I don't agree that this will soon be the case for virtualization. VMWare and the few other major vendors do a lot more than software virtualization of a CPU (which is all TFA was talking about). To have a complete virtualization solution, you need to also virtualize the rest of the hardware: storage, graphics, input/output, etc. In particular graphics is a serious issue (attaining hardware acceleration in a virtual environment safely), which from last I heard VMWare were working hard on.

      Furthermore, virtualization pairs well with software that can migrate VMs (based on load or failure), and so forth. So, even if hardware CPU virtualization is to be desired - I agree with you on that - it won't suddenly make virtualization as a whole a simple task.
      • In particular graphics is a serious issue (attaining hardware acceleration in a virtual environment safely), which from last I heard VMWare were working hard on.

        Actually, the people who have made the most headway are Microsoft. The Vista driver model is designed with support for virtualisation in mind. This means that the OS has access to video driver commands for things like saving and restoring GPU state. As far as I know, other operating systems currently lack this; Linux has a problem even switchi

        • Actually, the people who have made the most headway are Microsoft.

          Actually, Microsoft is just trying to catch up. Linux already has a very flexible driver model in place, and its GUI uses an architecture that is ideally suited to virtualization.

          Linux has a problem even switching between virtual consoles

          Linux has no problem switching between virtual consoles if you use the correct drivers for your hardware.
      • Most virtualization is for servers or Linux desktops, and they don't require more than virtual disks and networks, plus minimal console emulation; all that code already exists in open source form.

        VMware's big thing was a JIT-like x86 engine, a complex piece of software that is no longer needed. That really is a big deal.
      • To have a complete virtualization solution, you need to also virtualize the rest of the hardware: storage, graphics, input/output, etc.

        Kripkenstein mentions the dirty little secret that Intel doesn't tell you: virtualization of the CPU is just that, CPU virtualization. The memory management, peripherals, and I/O hardware are still managed mostly by a host OS.

        So, software is faster until the CPU vendor includes a BIOS and chipset that are more virtualization-oriented.

        VMware's main advantage is that they provide the host OS for enterprise users without modifying the guest OS, whereas Xen requires the guest to be modified.
        • VMware's main advantage is that they provide the host OS for enterprise users without modifying the guest OS, whereas Xen requires the guest to be modified.

          Yes, and the reason is that the old x86 had a few non-virtualizable instructions; an efficient workaround was a lot of effort, and Xen chose the simpler route of simply disallowing those instructions in the guest OS.

          Kripkenstein mentions the dirty little secret that Intel doesn't tell you: virtualization of the CPU is just that, CPU virtualization. The memory management, peripherals, and I/O hardware are still managed mostly by a host OS.
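          The non-virtualizable instructions mentioned above are the "sensitive but unprivileged" ones; POPF is the textbook case, since in user mode it silently drops changes to the interrupt-enable flag instead of faulting. A toy model (not real x86 semantics) of why that defeats classic trap-and-emulate:

```python
# Toy model of why classic x86 broke trap-and-emulate: a "sensitive but
# unprivileged" instruction (modeled loosely on POPF) silently ignores
# changes to privileged state in user mode instead of trapping, so a
# hypervisor running the guest kernel deprivileged never gets a chance
# to emulate it.

class ToyCPU:
    def __init__(self):
        self.ring = 0      # current privilege level (0 = kernel mode)
        self.IF = True     # interrupt-enable flag (privileged state)
        self.traps = 0     # how many times a hypervisor would be invoked

    def popf(self, new_if):
        """POPF-like semantics: in ring 0 the flag is written; in ring 3
        the privileged bit is silently dropped -- no trap is raised."""
        if self.ring == 0:
            self.IF = new_if
        # ring 3: no fault, no trap, no effect on IF

cpu = ToyCPU()
cpu.ring = 3        # guest kernel runs deprivileged under a VMM
cpu.popf(False)     # guest believes it just disabled interrupts
print(cpu.IF, cpu.traps)   # True 0 -- state unchanged, hypervisor never ran
```

Binary translation (VMware) and paravirtualization (Xen) are two different ways of rewriting or forbidding such instructions before they execute.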

    • Hardware virtualization may be slower right now, but both the hardware and the software supporting it are new. Give it a few iterations and it will be equal to software virtualization.

      It may or may not be faster eventually, but that doesn't matter. What matters is that small changes in the hardware make it possible to stop having to depend on costly, proprietary, and complex software--like that sold by VMware.


      Maybe I'm crazy, but I just don't see that happening anytime soon in the mainstream. When the
  • by graf0z (464763) on Sunday August 13, 2006 @04:28AM (#15897560)
    Paravirtualization (running hypervisor-aware guest kernels, e.g. patched Linux on Xen) is faster than both binary translation and "full" virtualization. And you don't need CPUs with the VT extension.

    g
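    A minimal sketch of the paravirtualization idea described above - the guest calls the hypervisor directly instead of faulting into it - with all names invented for illustration (this is not Xen's actual hypercall ABI):

```python
# A hypervisor-aware guest replaces a privileged instruction with a direct
# hypercall: one dispatch, no fault taken, no instruction decoding. Full
# virtualization must instead trap the faulting instruction, decode it,
# and only then emulate it. All names here are made up for the sketch.

hypervisor_state = {"timer_deadline": None}

def hc_set_timer(deadline):
    hypervisor_state["timer_deadline"] = deadline
    return 0

HYPERCALLS = {"set_timer": hc_set_timer}

def hypercall(name, *args):
    # paravirtualized path: direct dispatch into the hypervisor
    return HYPERCALLS[name](*args)

def trap_and_emulate(opcode, operand):
    # full-virtualization path for the same operation:
    # fault into the VMM, decode the faulting instruction, then emulate
    decoded = {"0x0F31": "set_timer"}[opcode]   # decode step
    return HYPERCALLS[decoded](operand)         # emulate step

assert hypercall("set_timer", 1000) == 0
assert hypervisor_state["timer_deadline"] == 1000
```

Both paths end up in the same hypervisor service routine; the paravirtualized guest just skips the expensive fault-and-decode detour, which is where the speed difference in the thread comes from.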

    • The whole point of virtualization is so you can run your favorite OS most of the time, and only switch over to Windows when you want to run games, isn't it?

      Well, I suppose if you choose to stay in the Windows world most of the time, the whole point of a VM is to try to keep malware off your computer... But either way, you're not getting a FLOSS paravirtualized Windows kernel any time soon.

      • Eerrr ... tautologically true, yes: if there is no paravirtualized version of the OS you want to use, paravirtualization is not an option. But there are many scenarios where you are only interested in running lots of paravirtualizable unixish OSes, eg server farming.

        Your windows desktop is not the whole point.

        g

        • It just seems like many people who try to move away from Windows want to at least have the option to use Windows once in a while... The Mac-moving-to-Intel thing was met with a lot of excitement because of this, a lot of Linux people say the same, and in a lot of companies employees must be productive with specific document formats. Certainly Windows isn't the only point of virtualization, but it seems like a really big one, especially for desktop users.
        • Your windows desktop is not the whole point.

          Very true, but a lot of the folks that have been deriding VMware as being inferior to Xen are failing to notice that Linux server farming is not the whole point either. At work, I'm limited to using Windows on my desktop, but I still have quite a bit of Linuxy stuff to do and don't have the space or budget to set up additional machines. I also need to have a flexible networking environment in which to test, so I run VMware Workstation. VMware has proven to
  • Look to IBM (Score:3, Informative)

    by dpilot (134227) on Sunday August 13, 2006 @05:40AM (#15897665) Homepage Journal
    IBM has been shipping virtualization since before many of these newcomers were even born. What do you think the 'V' in MVS or VM stands for? I wonder how well IBM's expired patents compare to modern virtualization. Of course in this case it helps to own the hardware, instruction set, and operating system.
    • Re:Look to IBM (Score:4, Interesting)

      by pe1chl (90186) on Sunday August 13, 2006 @06:02AM (#15897699)
      IBM's VM also started as a software product that had to cope with virtualisation problems in the hardware.
      Just like what is happening now, they added specific support to the hardware to make VM perform better.
      This all happened before the development of today's architectures. But in the early days of microcomputing, IBM had the position that Microsoft has today: they were the big company with 90% of the market, and in the eyes of the newcomers everything they did was by definition wrong. So nobody bothered to look at 360 mainframes and VM to see how it had been done before designing their own processors.
      (this would be similar to telling a Linux geek to look at how certain problems are solved in Windows... it is Windows, it is Microsoft, so it has to be the wrong solution)
      • True. But actually IBM's experience is a pretty accurate analog to this thread.

        VM370 was a dog. Why? Because they relied on hardware traps and software simulation of CCWs (channel command words) to run the guest operating system "perfectly."

        A hack to this, used by National CSS and other timesharing vendors (because, remember that CP/CMS was open source software and VM370 was just one implementation of it), was to replace CCW's inside CMS with specific traps for OS services. The result was that National
      • by Sycraft-fu (314770) on Sunday August 13, 2006 @08:16AM (#15897902)
        It's not that people don't look to old mainframe solutions for things; they do. It's that what was feasible on those machines often wasn't feasible on normal hardware until recently. There was no reason for chip makers to waste silicon on virtualization hardware on desktops until fairly recently; there just wasn't a big desktop virtualization market. Computers are finally powerful enough that it's worth doing.

        It's no surprise that large, extremely expensive computers get technology before home computers do. You give me $20 million to build something with, I can make it do a lot. You give me $2000, it's going to have to be scaled way back, even with economies of scale.

        You see the same thing with 3D graphics. Most, perhaps even all, of the features that come to 3D cards were done on high-end visualization systems first. It's not that the 3D companies didn't think of them; it's that they couldn't do them. The original Voodoo card wasn't amazing in that it did 3D - it was much more limited than other things on the market. It was amazing in that it did 3D at a price you could afford for a home system. 3dfx would have loved to have a hardware T&L engine, AA features, procedural textures, etc.; there just wasn't the silicon budget for it. It's only with later developments that this kind of thing has become feasible.

        So I really doubt Intel skipped something like VT because they thought IBM was wrong on the 360; I think rather they didn't do it because it wasn't feasible or marketable on desktop chips.
    • Re:Look to IBM (Score:3, Interesting)

      by TheRaven64 (641858)
      IBM contribute to Xen. I was at a talk last year by one of the IBM Xen guys. He made the point that IBM has a real advantage in virtualisation because, when they get stuck, they can pop along the hall to the grey-bearded mainframe guys and say 'hey, you remember this problem you had twenty years ago? How did you solve it?'
  • by Anonymous Coward
    I don't doubt their numbers, they've been creating virtualized systems very effectively for years.
    I think that any kind of "full virtualization" is going to be subject to these issues. If you want to see performance improvements, then you should modify the guest OS.

    VMware's BT approach is very effective and their emulated hardware and bios are efficient, but that won't match the performance of a modified OS that KNOWS it's virtualized and cooperates with the hypervisor rather than getting 'faked out' by some
  • A friend of mine works at Intel, and he flat out told me (several months ago) that Vanderpool/Pacifica will be slower than VMWare-only for the 1st generation. However this will change in a few years.
  • Parallels on Mac OS? (Score:3, Interesting)

    by akac (571059) on Sunday August 13, 2006 @09:28AM (#15898080) Homepage
    Well OK. But it could also mean that VMWare doesn't yet know how to properly build a hardware-virtualized VM.

    Parallels on OS X switches between software and hardware virtualization, and using hardware virtualization it runs at about 97% of native speed all around (consider that virtualization on current Yonah CPUs is limited to a single core). Software virt on Parallels is much slower - on par with running Windows Virtual PC on the same box using Windows XP (not Mac Virtual PC).
  • There's no reason that a system using hardware virtualization (which still requires a lot of software anyhow) can't employ the same sort of code modifications used by all-software virtualization. But the all-software approach must scan ALL code before it is executed to find the trouble spots, while the hardware-software combo can simply wait for a trap, then modify the code that caused the trap.
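    The two strategies in the comment above can be sketched with a toy instruction stream (the opcodes are invented; real binary translation works on x86 basic blocks):

```python
# Toy "VMM" contrasting the two strategies: a pure software VMM scans
# every instruction for privileged opcodes before running the code, while
# a hybrid VMM runs the code directly and patches only the sites that
# actually trap. The instruction set here is invented for illustration.

SAFE, PRIV = "safe", "priv"        # toy opcodes
CALL_VMM = "call_vmm"              # patched-in replacement for PRIV

def scan_and_patch_all(code):
    """Software approach: translate everything up front."""
    return [CALL_VMM if op == PRIV else op for op in code]

def run_hybrid(code):
    """Hybrid approach: execute until a PRIV opcode 'traps', then patch
    just that site so the next execution is a cheap direct call."""
    code = list(code)
    traps = 0
    for i, op in enumerate(code):
        if op == PRIV:             # hardware trap fires here
            traps += 1
            code[i] = CALL_VMM     # patch the trapping site in place
    return code, traps

program = [SAFE, PRIV, SAFE, SAFE, PRIV]
assert scan_and_patch_all(program) == run_hybrid(program)[0]
assert run_hybrid(program)[1] == 2   # only the two PRIV sites trapped
```

Both end up with the same patched code; the hybrid simply defers the work until a trap proves a site actually needs it, at the cost of paying for that first trap.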
  • I was using Parallels on Linux on my Core Duo laptop and it was _fast_. I was very impressed. Then I tried vmware-server on the same machine and it ran just as fast if not faster. Later on I discovered vmware-server was actually not even using Intel's VT instructions, it was being done in software. Something to think about.
  • You just have to read the paper; it spells it out. Other than VMWare making a hybrid software/hardware mode, it's not going to get any faster. Every time data is received or sent there is an expensive context switch for hardware VM. So databases, web servers, and file servers will waste more cycles virtualized than Seti or Folding programs will. Clicking your heels together is not going to change the result. These are first-generation virtualization results. How many SSE and MMX revisions
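    A back-of-the-envelope model of that claim - I/O-heavy guests pay a world switch on every device interaction, compute-bound ones almost never - using made-up round numbers rather than measured costs:

```python
# Rough model of hardware-VM exit overhead. The cycle figures below are
# assumed round numbers for illustration, not measurements from the paper.

CPU_HZ = 2_000_000_000     # 2 GHz core (assumed)
EXIT_COST = 4_000          # cycles per VM exit + re-entry (assumed)

def overhead_fraction(exits_per_second):
    """Share of the core spent on world switches instead of guest work."""
    return exits_per_second * EXIT_COST / CPU_HZ

web_server = overhead_fraction(50_000)   # network interrupts, disk I/O
compute_job = overhead_fraction(100)     # almost no device activity

print(f"web server: {web_server:.1%}, compute job: {compute_job:.1%}")
# the I/O-heavy guest loses ~10% of the core to exits; the compute job, ~0%
```

Under these assumptions the I/O-bound guest pays three orders of magnitude more exit overhead, which is the thread's point about databases and web servers versus Seti or Folding.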
