Hardware Virtualization Slower Than Software?
Jim Buzbee writes "Those of you keeping up with the latest virtualization techniques being offered by both Intel and AMD will be interested in a new white paper by VMware that comes to the surprising conclusion that hardware-assisted x86 virtualization oftentimes fails to outperform software-assisted virtualization. My reading of the paper is that this counterintuitive result is often due to the fact that hardware-assisted virtualization relies on expensive traps to catch privileged instructions, while software-assisted virtualization uses inexpensive software substitutions. One example given is compilation of a Linux kernel under a virtualized Linux OS. Native wall-clock time: 265 seconds. Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem."
Not just the CPU (Score:5, Interesting)
I am 100% in favor of cheap and open solutions, but I don't agree that this will soon be the case for virtualization. VMware and the few other major vendors do a lot more than software virtualization of a CPU (which is all TFA was talking about). For a complete virtualization solution, you also need to virtualize the rest of the hardware: storage, graphics, input/output, etc. Graphics in particular is a serious issue (safely attaining hardware acceleration in a virtual environment), one which, last I heard, VMware was working hard on.
Furthermore, virtualization pairs well with software that can migrate VMs (based on load or failure) and so forth. So even if hardware CPU virtualization is desirable - and I agree with you on that - it won't suddenly make virtualization as a whole a simple task.
Re:Sponsored by VMWare.. what do you expect? (Score:2, Interesting)
I'll let you in on a secret: if you consider all costs, and return on investment, using VMware is a competitive advantage over using Xen. But I don't care whether you believe me, because if you don't, you'll be at a competitive disadvantage, which is to my benefit.
Re:Look to IBM (Score:4, Interesting)
Just like what is happening now, they added specific support to the hardware to make VM perform better.
This all happened before the development of today's architectures, but in the early days of microcomputing IBM had the position that Microsoft has today: they were the big company with 90% of the market, and in the eyes of the newcomers everything they did was by definition the wrong thing. So nobody would bother to look at System/360 mainframes and VM, and how it had all been done before, when designing their own processors.
(this would be similar to telling a Linux geek to look at how certain problems are solved in Windows... it is Windows, it is Microsoft, so it has to be the wrong solution)
Re:The correct conclusion is more limited (Score:5, Interesting)
It also had a few other advantages. Since you were adding virtual instructions, they all completed atomically (you can't pre-empt a process in the middle of an instruction). This meant you could put things like thread-locking instructions in the PALcode and not require any intervention from the OS to run them. The VMS PALcode, for example, had a series of instructions for appending entries to queues. These could be used to implement very fast message passing between threads (process some data, store it somewhere, then atomically write the address to the end of a queue) with no need to perform a system call (which meant no saving and loading of CPU state, just a cheap jump into a mode that could access a few more registers).
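For a rough modern analogue of what those atomic queue instructions bought you, here is a sketch using C11 atomics: an insertion that other threads can never observe half-done, with no lock and no system call. This uses an insert-at-head for simplicity (the VMS instructions also handled tail insertion), and all names are mine, not the PALcode's.

    /* Sketch of an interlocked queue insert using C11 atomics.
     * Analogy only; the real PALcode instructions were single
     * atomic instructions, not CAS loops. */

    #include <stdatomic.h>
    #include <stdio.h>

    struct node {
        struct node *next;
        long         payload;   /* e.g., address of processed data */
    };

    static _Atomic(struct node *) queue_head = NULL;

    /* Atomically link a node into the list: loop until the
     * compare-and-swap publishes our node. No lock, no syscall --
     * the cheap message-passing pattern described above. */
    static void interlocked_insert(struct node *n) {
        struct node *old = atomic_load(&queue_head);
        do {
            n->next = old;
        } while (!atomic_compare_exchange_weak(&queue_head, &old, n));
    }

    int main(void) {
        static struct node a = { .payload = 42 };
        interlocked_insert(&a);
        printf("head payload = %ld\n", atomic_load(&queue_head)->payload);
        return 0;
    }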
Re:The correct conclusion is more limited (Score:3, Interesting)
Note that the 'naive conclusion' and the 'correct conclusion' are not contradictory: I remember a recent article showing that the Alpha had three times the power of a corresponding VAX, which nicely made the point that CISC is shit.
Now, as Intel has shown, given enough effort and money even x86, the poorest CISC ISA ever (the VAX ISA was much nicer than the x86 ISA: more registers, orthogonal design), can be competitive, and software compatibility does the rest.
I smell a straw man... (Score:2, Interesting)
Perhaps the intended conclusion was that it was feasible to write an efficient compiler using only a small subset of the instruction set, intelligently chosen with compiler optimization in mind. Perhaps the fact that the original compiler was (as you assert) "a pile of manure" was not unconnected to the fact that it tried to achieve speed by exploiting the entire, eclectic VAX instruction set (wonder how they worked the famous polynomial instruction in?) instead of sticking to a subset and applying generalised optimization techniques.
PS: If you think RISC lost the war, then remember that modern x86 processors consist of a RISC core with a translator stage to handle all those pesky, legacy CISC instructions.
Re:No not really (Score:4, Interesting)
I am not willing, based on a single data point, to draw any conclusions. That's tangential to my point anyhow; my point was that doing something in hardware and doing it in software are quite different.
Re:The correct conclusion is more limited (Score:1, Interesting)
Grandparent post is correct. As of the z990 series/model (I think), LPAR (the lowest level of virtualization) is required; there is no longer an option for 'bare metal' operation. IOW, what used to be 'bare metal' is now a single LPAR, the difference being that the hypervisor is always engaged. But this is a fairly recent development, and I can't dis the parent post for not knowing.
As pertains to TFA and this subject overall, it took IBM many years to get hardware virtualization 'right', and it's under constant refinement, even now. There were incremental hardware (and OS, if we want to talk z/VM) improvements all the way across the zSeries (and predecessors) line to support it, dating all the way back to the 1970s.
I don't expect Intel or AMD to have gotten it right on the first shot; efficient h/w virtualization is not as easy as it might sound. If the benchmarks are correct, I'm not too surprised. Getting there may take Intel and AMD many more years, depending on commercial (read: paying customers with $$$ on the table) demand for efficiency.
If I were on the VMWare or Xen staff, I wouldn't be losing any sleep -- at least for a while, yet.
And then there's paravirtualization (Score:2, Interesting)
I think that any kind of "full virtualization" is going to be subject to these issues. If you want to see performance improvements, then you should modify the guest OS.
VMware's BT (binary translation) approach is very effective, and their emulated hardware and BIOS are efficient, but that won't match the performance of a modified OS that KNOWS it's virtualized and cooperates with the hypervisor rather than getting 'faked out' by some emulation.
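Here is a toy C sketch of that difference. The hypercall number and mechanism are made up purely for illustration; real paravirtual ABIs (e.g., Xen's) look different.

    /* Illustrative sketch of the paravirtual idea: instead of
     * executing a privileged operation and being trapped and
     * emulated, a modified guest calls the hypervisor directly. */

    #include <stdint.h>

    #define HCALL_SET_TIMER 4   /* hypothetical hypercall number */

    /* Full virtualization: guest writes a (virtual) device register;
     * the hypervisor traps the access and emulates a timer chip. */
    static void fullvirt_set_timer(volatile uint32_t *emulated_timer_reg,
                                   uint32_t ticks) {
        *emulated_timer_reg = ticks;   /* trap -> decode -> emulate */
    }

    /* Paravirtualization: the guest knows it's virtualized and asks
     * the hypervisor for exactly what it wants in one cheap call. */
    static long hypercall(long nr, long arg) {
        /* a real guest would use a trap/vmcall gate here; stubbed out */
        (void)nr; (void)arg;
        return 0;
    }

    static void paravirt_set_timer(uint32_t ticks) {
        hypercall(HCALL_SET_TIMER, ticks);
    }

    int main(void) {
        static volatile uint32_t fake_reg;
        fullvirt_set_timer(&fake_reg, 1000);   /* "faked out" path */
        paravirt_set_timer(1000);              /* cooperative path */
        return 0;
    }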
CMS/370 (Score:1, Interesting)
You really want to look at the other guest OSes, like MVS, and what VM did to manage performance on them: things like various microcode VM assists, and dedicating hardware to guests so that no virtual-to-real hardware translation had to be performed.
Re:Sponsored by VMWare.. what do you expect? (Score:0, Interesting)
It's no surprise that large, extremely expensive computers get technology before home computers do. You give me $20 million to build something with, and I can make it do a lot. You give me $2000, and it's going to have to be scaled way back, even with economies of scale.
You see the same thing with 3D graphics. Most, perhaps even all, of the features that come to 3D cards were done on high-end visualization systems first. It's not that the 3D companies didn't think of them; it's that they couldn't do it. The original Voodoo card wasn't amazing in that it did 3D (it was much more limited than other things on the market); it was amazing in that it did 3D at a price you could afford for a home system. 3dfx would have loved to have a hardware T&L engine, AA features, procedural textures, etc.; there just wasn't the silicon budget for it. It's only with later developments that this kind of thing has become feasible.
So I really doubt Intel passed on something like VT because they thought IBM was wrong on the 360; I think rather that they didn't do it because it wasn't feasible or marketable on desktop chips.
Yes, AMD Pacifica seems to be far better (Score:4, Interesting)
Apparently, yes, and by a good margin.
There are several documents and articles out there which point out VT's problems and how Pacifica is quite dramatically better. Here's an excerpt from "AMD Pacifica turns the nested tables" [theinq.net], part 3 of an informative series of articles:
"This should allow an otherwise identical VMM to do more things in hardware and have lower overhead than VT. AMD appears to have used the added capability wisely, giving them a faster and, as far as memory goes, more secure virtualisation platform."
So, it looks like AMD are ahead on hardware virtualization at the moment.
If I read it correctly, this is because Intel's VT still requires a lot of software intervention, so it's not a very strong hardware solution at all.
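A toy model of why nested page tables matter: every guest-virtual memory access conceptually needs two translations, guest-virtual to guest-physical (the guest's tables) and guest-physical to host-physical (the VMM's). With shadow paging the VMM merges these in software, at the cost of traps whenever the guest touches its page tables; with Pacifica's nested paging the hardware walks both. The C sketch below flattens each stage to a single-level 4 KiB table purely for illustration; real page tables are multi-level.

    /* Toy two-stage address translation. Simplified: one table level,
     * 4 KiB pages, no fault handling. */

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define NPAGES     16

    static uint64_t guest_pt[NPAGES];    /* guest-virtual  -> guest-physical */
    static uint64_t nested_pt[NPAGES];   /* guest-physical -> host-physical  */

    static uint64_t translate(uint64_t gva) {
        uint64_t off = gva & ((1u << PAGE_SHIFT) - 1);
        uint64_t gpa = guest_pt[gva >> PAGE_SHIFT];   /* stage 1: guest table  */
        uint64_t hpa = nested_pt[gpa >> PAGE_SHIFT];  /* stage 2: nested table */
        return hpa | off;
    }

    int main(void) {
        guest_pt[1]  = 3 << PAGE_SHIFT;  /* guest page 1 -> guest-physical 3 */
        nested_pt[3] = 7 << PAGE_SHIFT;  /* guest-phys 3 -> host-physical 7  */
        printf("gva 0x1234 -> hpa %#llx\n",
               (unsigned long long)translate(0x1234));  /* prints 0x7234 */
        return 0;
    }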
Parallels on Mac OS? (Score:3, Interesting)
Parallels on OS X switches between software and hardware virtualization, and using hardware virtualization it's about 97% of the speed of native hardware all around (consider that virtualization on current Yonah CPUs is limited to one core). Software virtualization on Parallels is much slower: on par with running Windows Virtual PC on the same box under Windows XP (not Mac Virtual PC).
I designed h/w virtualization (Score:3, Interesting)
Note that this doesn't make all the other stuff in VT/SVM useless; there are lots of places on the x86 where pure s/w virtualization has to go to great lengths of complexity just to get things correct. As a simple example: there's no way on "old" x86 h/w to save & restore segment descriptors (which you need to do on world switch) --- all you get is the selector, and if the guest O/S has overwritten the in-memory copy, you're out of luck. "Fixable" in s/w (obviously; VMWare does it), but just plain grody. So a major advantage of SVM/VT is that it becomes a lot *easier* to write a VMM (opening up the market to more players; this is starting to show in the Macintosh market) --- eventually, it should become faster, too.
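For the curious, here is what "all you get is the selector" looks like in practice: a small GCC-inline-asm sketch (x86 only, illustrative) that reads the visible half of a segment register. The hidden descriptor cache, which has no read-back instruction, is exactly the state that SVM/VT now save and restore for you.

    /* Sketch of the pre-VT/SVM problem: the segment selector is
     * readable, but the descriptor the CPU cached when the selector
     * was loaded (base/limit/attributes) is hidden state. */

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t ds_sel;

        /* The visible part: trivially readable. */
        __asm__ volatile ("mov %%ds, %0" : "=r"(ds_sel));
        printf("DS selector = %#x\n", ds_sel);

        /* The hidden part (cached base/limit/rights) cannot be read
         * at all. A software VMM must reconstruct it from the
         * in-memory descriptor table, which the guest may have
         * overwritten since the load; VT/SVM instead save it in the
         * VMCS/VMCB on every world switch. */
        return 0;
    }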
On a separate note, over the next few years expect h/w assistance for device emulation as well (and not just from the CPU vendors).