An Overview of Virtualization Technologies

PCM2 writes "Virtualization is all the rage these days. All the major Linux players are getting into the game with support for Xen, while Sun has Solaris Containers, Microsoft has Virtual PC, and VMware arguably leads the whole market with its high-end tools. Even AMD and Intel are jumping on the bandwagon. InfoWorld is running a special report on virtualization that gives an overview of all these options and more. Is it just a trend, or will server virtualization be the way to go in the near future?"
  • by Flying pig ( 925874 ) on Friday July 07, 2006 @04:24AM (#15673833)
    Yesterday's mainframe, today's rackmount server, tomorrow's desktop. As computers get faster, software functions at ever higher levels of abstraction. The holy grail is when you have the array of blade servers which you can grow or shrink on the fly, the sea of running operating systems, and the application that spreads itself across the lowest loaded operating systems as needed. Fault tolerance, load balancing, all out of the box.

    With the growing evidence of the human brain's ability to rewire itself and route around failures on the fly, and the effective virtualisation of perception (why do I appear to see a three-dimensional picture of the world when I have only two curved arrays of photosensors?), we are probably just following a well-trodden evolutionary path.

  • The business plan? (Score:3, Insightful)

    by rangeva ( 471089 ) on Friday July 07, 2006 @04:31AM (#15673851) Homepage Journal
    I remember the MS vision of making light operating systems which are basically terminal computers, virtualizing the OS on powerful remote servers. This way the user would pay monthly/yearly fees to use his computer. Upsides: the OS is automatically upgraded with security patches and new features; data is backed up and can be accessed from any computer. Downsides: well, basically the monthly payment and the fact that MS got your base ;)
    I think many companies are looking for a way to monetize software through monthly or yearly fees - this could be their way...
  • by ptitvert ( 183440 ) on Friday July 07, 2006 @04:39AM (#15673877)
    What kind of article is that?

    They talk about VMware, Intel/AMD, the future Solaris on E10000, and other things... but where is IBM?
    They have been doing virtualization for at least three years with their Regatta technology (P670, P690 (POWER4), P530, P550, P560, P570, P575, P590, P595 (POWER5)) and their OS, AIX 5L.

    They can allocate a few percent of a CPU to a virtual server, and with their Virtual I/O Server they can also virtualize network and disks. They can do workload management between virtual servers, and add or remove disks, CPUs, and memory in real time.

    etc...

    So for a complete overview of virtualization in the industry: IBM is now a big player, and they are surpassing Solaris and HP in the "closed" Unix world.

    So for me this overview is not complete and should not have gone past the "draft" stage until someone looked at the actual, running alternatives.

    L.G.

  • by interiot ( 50685 ) on Friday July 07, 2006 @04:41AM (#15673883) Homepage
    Not a huge percentage of people dual-boot. But hopefully virtualization will increase the ease of use of Linux and all the other alternative operating systems as well. There are hundreds of home-grown OS's out there, and it would be cool if virtualization were easy enough that people could just download an OS and run it to try it out, making OS's as easy to try as applications.
  • by jkrise ( 535370 ) on Friday July 07, 2006 @04:59AM (#15673912) Journal
    Virtualisation is a disruptive technology... in that it requires a lot of intellectual investment on the part of the sysadmin. The reason Unix and Windows servers have gotten by without adding many features, yet retained market share, is simple... admin lethargy and apathy.

    Microsoft does not seem to like virtualisation.. hell, they didn't like Terminal Services.. so they crippled it in NT4, added extra licensing restrictions with Win2K, and made the WinXP / MetaFrame XP combination a non-starter. In Microsoft's world, users must only license MS's servers, and everything needs a separate server / client.

    Now that the virtualisation market has grown IN SPITE OF the apathy of these software vendors... and the tremendous mindshare of Open Source technologies, these old chaps are trying to make money without doing anything themselves.. witness the recent MS licensing options for virtual setups, acquisitions of IP, Intel's hypervisor efforts, AMD's efforts, etc.

    If virtualisation succeeds, it could spell the end for DRM and Treacherous Computing initiatives... since these need collective collusion by all parties involved. Looks like the firms mentioned will try their damnedest to sidetrack virtualisation.. just like terminal services and thin clients never reached their full potential. Open Source firms and nerdy sysadmins might well have the last laugh...
  • by jeswin ( 981808 ) on Friday July 07, 2006 @05:19AM (#15673953) Homepage
    Well, the fact is that virtualization is a workaround for poorly written and designed OSes and applications. Virtualization is succeeding because we cannot build OSes that:
    1. Prevent applications from littering and destroying public space
    2. Do a decent migration without re-installs
    3. Can scale without re-installing and re-configuring
    4. Do better throttling and pooling
    And we cannot build applications that:
    1. Know how to co-operate with other applications, or at least are aware that the system cannot be monopolized
    2. Install in a private space
    Some time back I wrote a blog post, Virtualization, isn't it a Diversion? [process64.com] Summary: virtualization looks like a necessary evil because we are incompetent at writing better OSes and applications, and virtualization is the easier route. Just wait until it reaches critical mass, gets everywhere, and brings its own share of problems. I would have preferred a better, from-the-ground-up OS any day. Hurd, or even better, Singularity!
  • by Savage-Rabbit ( 308260 ) on Friday July 07, 2006 @05:27AM (#15673967)
    Virtualization is one of the best things since sliced bread and I believe it's here to stay. First of all, it spells an end to multi-booting. I have erased my secondary OSs and I run them in VMs under my main system. A performance hit definitely occurs, but I am willing to pay that price for the greater ease of use. Secondly, just think of the possibility of moving server images from one physical server to another, literally freezing an image here and awakening it over there - InstaScaleOut(tm) must be a server admin's wet dream.
    Of course, as with all abstraction layers, it introduces complexity and takes a toll in performance - but we all know abstraction layers have been piling up since the beginning of time.


    Mostly I agree with that, but there are a few pitfalls. What tends to happen is that people go wild setting up VMs, and whenever an old machine needs to be retired, whatever is running on that OS doesn't get migrated to a new machine with a new OS any more. Why bother, when you can just turn the half a dozen old web/mail/file servers you need to get rid of into VMs, complete with their OSes, move them all to a single new computer, and thus save loads of rackspace?

    Well yes, VM'ing is nice. I love using it for development test setups, rescue migrations for OS instances running on faulty hardware, and lots of other things, but it isn't more than a temporary substitute for migrating and merging the web/mail/file servers (or whatever other servers you are using) when that is appropriate. Even though migrations can be quite problematic to implement, there are situations when you will be better off merging and migrating, for example, a few old webservers onto a single new webserver on a new OS instance rather than just VM'ing all the old webservers. Convincing PHBs of this can be difficult. Some of them don't always seem to immediately understand that if you just collect VM'ed OS instances and only reduce rackspace, the growing number of OS instances will eventually become a burden. PHBs also tend to have strange notions about how many VMs you can run concurrently on a single computer and how heavily you can load those VMs.
  • by Anonymous Coward on Friday July 07, 2006 @05:29AM (#15673972)
    Or, if you could build an OS that did all that, people would call it a virtual machine monitor. Go figure.
  • by IDontLinkMondays ( 923350 ) on Friday July 07, 2006 @05:36AM (#15673988)
    Well, first of all, I'd like to point out that I've run on virtualized systems for the entire extent of my career. Not specifically in the sense in which we run them now, but in the sense that back in the old days we ran IBM mainframe operating systems on IBM systems that actually were virtual machines. They included features such as segmentation and all the good stuff that is just coming around now.

    Thanks to other technologies I've run similar systems for ages. It is entirely common for me to develop a file system driver while keeping Mac OS X, Windows, Linux, and DOS running on the same system. I've done this for a long time as well. The difference is that the operating systems would be virtualized by running system emulators instead of using CPU technologies for system segmentation. I did this in the old days under DOS using Quarterdeck DESQview and a CPU emulator.

    The first thing that people really need to understand at this point is that virtualization as we're using it today is little more than a method for launching operating systems as "processes" under another operating system. This is not magic; for the most part it's something that any operating system developer should be capable of. The issue is more one of grinding. It takes the right kind of people to sit and grind through each of the problems that come up with running like this. It's the same idea as writing a Windows-compatible API stack: you start off with simple programs you have the source for and work your way up through more complex applications that require direct hardware access. It's a matter of intercepting the calls and handling them as if you were the real thing.

    So here's the deal. As a system-level developer, I am more interested in what these guys are actually doing in order to make it happen. Let's face it, although Intel and AMD are adding virtualization technologies to their processors, the actual task of switching between CPU contexts is hardly an issue. The real issue is how they handle hardware emulation.

    See, me, I focus on high-performance, workstation-related tasks. Servers are cool and great, but in reality it's how it performs on the desktop that is truly important to me. What I want to see is a vendor grinding a little harder on this issue.

    VMware has classically written device drivers that handle hardware interfacing with better performance than others. So instead of simply emulating the VESA BIOS extensions and providing access to an SDL-style frame buffer, they have written drivers that allow graphics acceleration. What I really want to see is them taking it a step further....

    I want more than just accelerated BitBlt functions. Of course, in the 2D desktop world high-performance frame buffer moves are not optional but required, since the bus bandwidth needed to copy large frame buffers around is outrageous. But in a day when OS X uses OpenGL and Windows Vista uses DirectX, I want drivers that interpret 3D contexts as well.

    So here's what I'm thinking... write a 3D driver for Windows, Mac OS X, and X11. The driver should of course offer frame buffer handling, but this shouldn't be the focus, since the frame buffer isn't used for much more than boot and text-mode processing. When an OpenGL context is created, instead of creating the context natively in the virtual machine, the context should be created on the host operating system and managed there. The only interpretation should occur when the graphics driver informs the guest operating system of the top-level context.

    For DirectX, well, I've seen at least one virtual driver in the past which implemented DirectX on OpenGL. For professional graphics, DirectX is typically seen as a toy, although in reality it's in many ways more powerful than OpenGL (don't argue, it has to do with what's more important to hardware vendors, so their drivers are optimized for game-based testing). So, since most professional graphics packages are OpenGL based, the virtualization software vendor should simply implement a translation layer ov
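
    A minimal sketch of that forwarding idea, assuming a made-up marshalling channel and made-up names (GuestGLStub, HostGLExecutor) - this is not any vendor's actual driver interface, just an illustration of intercepting guest GL calls and replaying them against a host-side context:

        # Hypothetical sketch: a guest-side stub marshals GL calls over a
        # virtual channel; the host side replays them against a real,
        # hardware-accelerated context. All names here are made up.
        import json
        import queue

        channel = queue.Queue()   # stands in for the virtual device / shared ring

        class GuestGLStub:
            """Runs inside the guest; marshals GL calls instead of touching hardware."""
            def __getattr__(self, gl_call):
                def forward(*args):
                    channel.put(json.dumps({"call": gl_call, "args": args}))
                return forward

        class HostGLExecutor:
            """Runs on the host; a real implementation would invoke the host GL driver."""
            def __init__(self):
                self.replayed = []
            def pump(self):
                while not channel.empty():
                    msg = json.loads(channel.get())
                    self.replayed.append((msg["call"], tuple(msg["args"])))

        gl = GuestGLStub()
        gl.glClearColor(0.0, 0.0, 0.0, 1.0)   # the guest app thinks it is talking to a GL driver
        gl.glClear(16384)                     # GL_COLOR_BUFFER_BIT
        host = HostGLExecutor()
        host.pump()
        print(host.replayed)
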
  • by Anonymous Coward on Friday July 07, 2006 @05:37AM (#15673991)
    I am not sure that UML is a better overall idea. Xen is more efficient and easier to maintain. Plus, for a hosting company, being able to load-balance by moving clients' VMs around must be pretty damn nice.
  • by Colin Smith ( 2679 ) on Friday July 07, 2006 @05:40AM (#15673999)
    Mainframes have been using virtualisation for decades. It's not going away, it's simply too useful.

     
  • by KiloByte ( 825081 ) on Friday July 07, 2006 @08:20AM (#15674364)
    Do I misunderstand, or is there a real advantage to running product X in one VM and product Y in another (or even a second instance of product X)? What is the advantage of that scenario over simply running X and Y (or two Xs) on the same box? If the answer is that the software doesn't properly handle binding to particular IPs, or that it requires exclusive access to a single file, then the software is crap.

    Security. Modularization. Having one part fall down without taking down everything else.

    For example, in my setup there are two servers:

    * the old one: mysql, postgres, apache
    * the new one: Xen
        * pound (reverse http proxy)
        * postgres
        * mysql, apache
        * subversion+backups
            + viewvc running as a different user with read-only access to the repositories
        * a VM hosted for someone else

    When I break the dev apache, the production one stays up. When apache goes down, subversion stays up. When any of my VMs goes down, the one hosted for someone else stays up, and vice versa.

    And when someone pwns anything other than the dom0 (which runs just Xen and ntpd), they take over just that single part.

    Sure, I could run everything without virtualisation. But I don't think I have to say why I prefer the way I've chosen.
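
    For reference, a single-purpose guest like the subversion domU above can be described in a few lines of Xen 3.x-style config (the xm format is plain Python). The kernel path, disk image, and bridge name below are made up for illustration, not taken from the poster's setup:

        # Hypothetical Xen domU config for a small, single-purpose guest.
        kernel = "/boot/vmlinuz-2.6-xenU"
        memory = 128                          # MB; plenty for subversion + backups
        name   = "svn"
        vcpus  = 1
        vif    = ["bridge=xenbr0"]            # one virtual NIC on the host bridge
        disk   = ["file:/srv/xen/svn.img,xvda1,w"]
        root   = "/dev/xvda1 ro"
        extra  = "console=xvc0"
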

    And you can't claim that Citrix is a good product. Slapping a GUI on a server and "network efficiency" don't belong in the same sentence.
  • by m874t232 ( 973431 ) on Friday July 07, 2006 @08:21AM (#15674370)
    Various people in this thread have claimed that virtualization is a workaround for not being able to write a decent operating system. I think that's wrong. Different operating systems are legitimately different in the way they present high-level interfaces and abstractions of low-level hardware features.

    What virtualization really provides is a long-overdue standardization of a set of APIs that exist in many operating systems but have remained hidden. By finally exposing them, we gain functionality that didn't exist previously.
  • by TheRealFixer ( 552803 ) on Friday July 07, 2006 @08:35AM (#15674426)
    Even VMware will tell you that virtualization is not a solution for everything, and that not every server or application should be virtualized. For instance, heavy-duty database servers (like Oracle) are not really a good idea, nor are servers that do an extreme number of disk writes (especially if you're using shared LUNs between multiple ESX servers), because you still have to deal with SCSI locks. Microsoft supposedly won't support you if you put your AD controllers in a VM (even on their own Virtual Server!).

    However, we have NEVER had an issue with the network and VMware. If you design your host correctly and put in plenty of NICs to bond, so you can scale to your needs, it seems to work just fine. And make sure that all the servers are using VMware Tools and the VMXNET interface. I'm also interested in the corrupt-VM statement. What exactly got corrupted?
  • by basil montreal ( 714771 ) on Friday July 07, 2006 @08:52AM (#15674508) Homepage
    True, but current blade servers are an unimpressive implementation of an excellent idea.

    Only because of their initial expense. I do pre-sales technical support for IBM storage, so I asked my server counterpart, and the reason they're priced higher than rack or tower servers is that they cost less to cool. Over a year you get the price difference back in your AC bill.
  • by trevor-ds ( 897033 ) on Friday July 07, 2006 @10:02AM (#15674954)
    I use VMware for testing and it is great, but virtualisation tends to be used in production to compensate for software that doesn't cooperate well with other software.

    I remember this kind of argument from Mac devotees in the pre-OS X days when the Mac didn't have real protected memory, and still used cooperative multitasking. People would say that pre-emptive multitasking was just a crutch, that cooperative multitasking was cleaner and potentially more efficient, and that "good" programs would consistently yield processor time in tight loops to let other programs run.

    It turns out that putting yield statements in every inner loop of every program you run is a huge hassle, and that pre-emptive multitasking solves the problem elegantly; so elegantly that everyone does it. Not yielding CPU time is not "bad code"; it's just leaning on an abstraction that you know exists.
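
    To make that concrete, here is a toy sketch (Python generators standing in for classic cooperative tasks, not any real scheduler) of what the "yield in every inner loop" chore looks like, and how one forgetful task hogs a cooperative scheduler:

        # Toy cooperative scheduler: a task only gives up the CPU when it
        # chooses to yield, so one missing yield stalls everyone else.
        def well_behaved_task():
            total = 0
            for i in range(1_000_000):
                total += i
                if i % 10_000 == 0:
                    yield          # the chore: remember to yield in every long loop

        def selfish_task():
            total = 0
            for i in range(1_000_000):
                total += i         # no yield here: monopolizes the "CPU"
            yield

        def run_cooperatively(task_fns):
            tasks = [fn() for fn in task_fns]
            while tasks:
                task = tasks.pop(0)
                try:
                    next(task)     # runs until the task voluntarily yields
                    tasks.append(task)
                except StopIteration:
                    pass

        run_cooperatively([well_behaved_task, selfish_task])
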

    This same pattern of argument has been used to downplay high level languages ("optimizing compilers are just a crutch--quality software has hand-scheduled instructions"). Now we'd legitimately have to call the x86 ISA a crutch, since modern processors effectively process x86 instructions in emulation.

    Don't fear abstraction! It's good for you.

  • by mikearcher ( 987438 ) on Friday July 07, 2006 @10:41AM (#15675305)
    Let's say you have a farm of VMware servers. You have application A in one VM, and application B in another VM on the same physical server. For whatever reason, the load on application A takes a sharp spike upwards. In the scenario of A and B installed directly on the same physical hardware, you are pretty limited in your options: call some poor engineer in the middle of the night, pray you have a spare server, get app B installed on the new server, and hope everything works. In the VM world, you just grab the VM for either app A or B and move it to a lower-utilization server, with no significant downtime. I believe that with VMware 3.0 this process can even be automated to a large degree.
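
    The decision itself is simple enough to sketch. The host names, load numbers, and migrate() call below are hypothetical stand-ins for whatever the management layer actually does when it rebalances:

        # Hypothetical rebalancing sketch: pick the least-loaded host other
        # than the one the busy VM currently shares, then "migrate" to it.
        def pick_target(hosts, exclude):
            candidates = [h for h in hosts if h["name"] != exclude]
            return min(candidates, key=lambda h: h["cpu_load"])

        def migrate(vm, source, target):
            # A real implementation would call the hypervisor's migration API.
            print(f"moving {vm} from {source} to {target['name']} (load {target['cpu_load']:.0%})")

        hosts = [
            {"name": "esx01", "cpu_load": 0.92},   # host where app A is spiking
            {"name": "esx02", "cpu_load": 0.35},
            {"name": "esx03", "cpu_load": 0.60},
        ]
        migrate("app-B-vm", "esx01", pick_target(hosts, exclude="esx01"))
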
  • Trendy!!!!! (Score:3, Insightful)

    by fm6 ( 162816 ) on Friday July 07, 2006 @08:20PM (#15680523) Homepage Journal
    First of all, it spells an end to multi-booting. I have erased my secondary OSs and I run them in VMs under my main system.
    Well, yes, if you're a geek who likes to play with a dozen OSs, you'd much rather open a new VM than reboot your machine. But as usual, we're confusing geekworld with the real world. The use of desktop VMs is pretty limited outside geekworld — mostly Mac folks who have one or two Windows apps they can't live without. That doesn't do a lot to explain why so many heavy hitters are interested in the technology.
    Secondly, just think of the possibility to move server images from a physical server to another one, literally freezing it here and awakening it over there...
    Cloning systems is hardly new. Of course that's different from what you're talking about, which is cloning a running system. Still, is that really something you have to do very often?
    Is it just a trend...
    Of course it's a trend. I won't play language nazi here. I'll just suggest that everybody stop and think about how they've heard "trend" used in other contexts. I think the word PCM2 was looking for was "fad".

    And of course it's not a fad. There are already a lot of server farms out there that are highly dependent on virtualization. It allows them to provide specific OSs, and even OS versions (notice that Sun is mainly interested in letting folks run multiple versions of Solaris), without dedicating a machine to each installation. Less expensive, more flexible. No big mystery there.

    And it's not even a new idea, though this particular implementation of it is. For years, supercomputer companies have sold software that divided their multiprocessor systems into "cells", each running its own OS. Virtualization is pretty much the same thing, only it doesn't require a dedicated CPU — or the purchase of an expensive supercomputer.
