An Overview of Virtualization Technology

Jane Walker writes to tell us that TechTarget has a short writeup on virtualization and some of the ins and outs of using this technology effectively. From the article: "Virtualization is a hot topic in the enterprise space these days. It's being touted as the solution to every problem from server proliferation to CPU underutilization to application isolation. While the technology does indeed have many benefits, it's not without drawbacks."
  • by tinkertim ( 918832 ) * on Wednesday April 12, 2006 @05:28AM (#15112180)
    From TFA:
    >>>>>
      Novell is investing lots of effort in optimizing Xen specifically for running a virtualized copy of NetWare on top of Linux. The company's goal is to provide its customers with a migration path over to the Linux platform without giving up NetWare.
    >>>>>

    One of the many unsung uses for Xen is as a Swiss-army SAN. I'm glad to see someone touch on this.

    >>>>>
    If you want to use Linux as your host OS, you'll definitely have to go with VMware.
    >>>>>

    That wasn't so cool. I appreciate that there are just too many products available to touch on everything in one short summary article/writeup, and the majority of the article was informative even to the layperson, but you need to end a sentence like that with a 'Because... [summary]'. That's a really broad and sweeping statement to make.

    Or perhaps even "I recommend VMWare" would have been better.

    It looks like the author lost interest in what they were writing near the end of the article. They talk about IRC or newsgroups being the only support options available for open source products [another sweeping statement] - but have you checked out the wiki at xensource.com [xensource.com] lately?

    Just seems like TFA lost coherency after 'What's best?' It went from really informative to misleading rather quickly. If you're going to move to a virtualized platform, you owe it to yourself to spend a month trying each candidate to see what works best for you, not for the author of whatever article you read :) This is not a pro-Xen rant, but I'd like to point out that Xen installs effortlessly on most Debian systems in under an hour; TFA sort of indicated otherwise.

    • Just seems like TFA lost coherency after 'What's best?' It went from really informative to misleading rather quickly. If you're going to move to a virtualized platform, you owe it to yourself to spend a month trying each candidate to see what works best for you, not for the author of whatever article you read :) This is not a pro-Xen rant, but I'd like to point out that Xen installs effortlessly on most Debian systems in under an hour; TFA sort of indicated otherwise.

      That's when the phone call from Redmond

    • by Anonymous Coward
      I found the entire article to be almost content-free. This is the sort of thing that PHBs like to read to feel they're "staying in touch" with technology.

      For example, there was no mention of the reason behind the performance differences between VMware (i.e., you're emulating everything, right down to the CPU) and Zones (i.e., you're running one kernel and only jailing processes).

      Fortunately, it was short enough that I could get to the end without wasting too many seconds of my life. But the /. keepers really
    • by IO ERROR ( 128968 ) * <errorNO@SPAMioerror.us> on Wednesday April 12, 2006 @06:59AM (#15112417) Homepage Journal

      Looks like you took at least one of those quotes out of context. Here's the context:

      If you're a developer looking for a flexible way to test your application in multiple environments, you'll probably want to go with either Virtual PC or VMware Workstation. If you want to use Linux as your host OS, you'll definitely have to go with VMware.

      Virtual PC doesn't run on a Linux host, so you'll definitely have to go with VMware.

      • I re-read it a few times just to be sure. It's not very well constructed, but I think I got the context right in my post... I think. Read it this way, swapping a space for a new line:

        If you're a developer looking for a flexible way to test your application in multiple environments, you'll probably want to go with either Virtual PC or VMware Workstation.

        If you want to use Linux as your host OS, you'll definitely have to go with VMware.

        The way that reads to me is, if you want to use Linux as your host (root) O
        • by Torne ( 78524 ) <torne@wolfpuppy.org.uk> on Wednesday April 12, 2006 @08:39AM (#15112735)
          You can't run Xen on Linux - Xen is a freestanding hypervisor that runs directly on the metal. So the statement in the article seems perfectly reasonable. (This is, incidentally, one of the (numerous) advantages of Xen over VMware in the performance stakes: being able to control the hardware directly, instead of having to mess around with what the host OS will let you do, is quicker.)

          Every OS on a Xen system is a guest OS. Some of them just have permission to create new OS instances, or access particular bits of real hardware directly.
          • by Hercynium ( 237328 ) <HercyniumNO@SPAMgmail.com> on Wednesday April 12, 2006 @09:31AM (#15113009) Homepage Journal
            Xen is a freestanding hypervisor that runs directly on the metal.
            I feel I should get a little pedantic with this statement. Xen, specifically, is a modification of the Linux kernel that provides hypervisor capabilities for the host OS (Linux) and integration with the guest OSes. Xen's host components can't run directly on the metal the way VMware ESX can; it needs the rest of the Linux kernel (I can't remember if it's been ported into other kernels) to provide hardware access. Also, Xen requires (until Pacifica et al.) that the guest OS kernels be modified to integrate with the host's hypervisor layer. Without that, Xen does nothing. (VMware ESX does not require a modified guest OS.)

            So, if you think of a Xen-enabled Linux kernel as Xen, you're right. But I see it as a separately developed, ported, and integrated extension that requires a kernel to operate. Again - I believe there are efforts to get it running inside other kernels, but I don't remember.

            On second thought - someone fill me in here - I'm guessing that VMware ESX probably runs as part of some other OS's HAL, but of course they don't say so in the sales pitch...

            *disclaimer* -- While I really like VMware's product for functionality and ease of use, for performance I'd go with Xen. I'm currently involved with a project at my company to virtualize as much of our datacenter as possible, and I've been pitching Xen to the group over VMware, provided XenSource's product lives up to its marketing specs.
            • by tinkertim ( 918832 ) * on Wednesday April 12, 2006 @10:09AM (#15113271)
              Correct. I work with Xen daily and most of my products and services are built around it. One of them is a replacement for Virtuozzo, for the purpose of maximizing and isolating resources for web hosting companies.

              Xen augments the kernel, it does not replace it. The Xen hypervisor then interacts with the host (dom-0) kernel.

              dom-u (guest) images can then boot using any kernel modified to interact with the Xen hypervisor. Currently we play with:

              Debian (Sarge)
              FC4
              CentOS 4
              NetBSD

              as dom-u (guest) OSes.

              We have also enjoyed some success, though not 100% stability, bringing Win2k3 up as a dom-u.

              I have deployed clusters that use Xen as a management layer and I can tell you, it *does* live up to its marketing specs. Xen's bridging is the fastest, most efficient layer available, bar none. It's also a wonderful tool for integrating a centralized storage area network into any size network while letting people keep all of the protocols they like.

              A *very* good source of information about Xen - what it does and how it does it - is the option-c wiki (here [option-c.com]); they also have some ready-to-go Debian installers that make installation quite easy (apt-get-able).

              Xen + OpenSSI is another fantastic combination if you take the time to really understand the networking possibilities and set it up appropriately. Good luck with the bean counters... the price is right :)
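
              For the curious: bringing up a dom-u is little more than a config file plus one command. A minimal sketch (Xen 3.x config syntax; the guest name, kernel path and image paths here are invented for illustration):

                  # /etc/xen/sarge.cfg -- hypothetical dom-u definition
                  kernel = "/boot/vmlinuz-2.6-xenU"             # dom-u-capable guest kernel
                  memory = 128                                  # MB of RAM for the guest
                  name   = "sarge"
                  vif    = [ '' ]                               # one bridged network interface
                  disk   = [ 'file:/srv/xen/sarge.img,sda1,w' ] # file-backed root disk
                  root   = "/dev/sda1 ro"

              Then 'xm create -c /etc/xen/sarge.cfg' boots the guest and attaches its console.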
            • Close... but here's the difference between Xen and VMWare ESX:

              Xen does 'para-virtualization', wherein it virtualizes MOST of the hardware but allows some passthrough to the bare metal. This requires virtualization-aware kernels, and perhaps modifications to some software. Since it's a 'lighter' application than ESX Server, it should run a bit faster.

              VMWare does a full virtualization of every hardware component, like most other virtualization products (Virtual Server 2k3, Virtual PC, VMWare Workst
            • And in my previous comment, I forgot to answer the 'second thought'.

              ESX Server is a modified Red Hat kernel (2.4.9) plus the ESX-specific apps, plus device modules to connect to whatever server-class hardware you have.
              Actually, I'm the one being a pedant. It's not a modification of the Linux kernel at all, though older versions did borrow heavily from the Linux device driver code - the Xen hypervisor is a totally separate chunk of code and would theoretically be capable of running without a Linux in sight at all, as long as some other guest was able to perform the tasks of domain 0.

              Xen's host components *do* run directly on the metal, they just don't talk to I/O devices. Xen controls the processor and memory, and provides
                I appreciate the correction, and the credentials. Xen is an exciting technology to me, so I have a tendency to jump into the discussion. My only defense is that there is *a lot* of inaccurate information out there, even from what would normally be considered reliable sources.

                I hope my employer will choose Xen as their virtualization platform, but I know that much of that decision will hinge on whether or not Windows can run stably. Since the decision won't be made until next quarter, I'm crossing my fingers
                • I was working on porting Windows XP to Xen (without the benefit of VT/Pacifica, which is now the 'right' way to do this - it was some time ago, before those techs were announced). It was kinda fun ;)

                  Got to stick my hands in the Windows source, and futz with it. It got quite far but it was never what you might call 'stable'. It also took about 15-20 minutes to do the first few seconds' worth of booting due to the vast, vast amount of debug information we had it dumping out of the serial port. 295MB bootlogs
    • by bigtallmofo ( 695287 ) on Wednesday April 12, 2006 @07:52AM (#15112574)
      I agree with you, and would even take it a step further. Even though the article was dated 4/7/2006, it plainly did not take the events of the last month into account, which makes it totally useless in my opinion.

      Three major announcements in the last month have radically changed server virtualization and made the article obsolete:

      1. VMware renamed GSX Server to VMware Server and made it free.
      2. Microsoft made their Virtual Server free.
      3. Microsoft announced support for certain Linux distributions in their Virtual Server product.

      The parts of the article that show it's obsolete in light of the above facts:

      An open source solution will win the cost battle almost every time
      If you want to use Linux as your host OS, you'll definitely have to go with VMware.

      Also, for my own personal review: I'm a pretty heavy Microsoft user and was excited about them making Virtual Server free. Evaluating VMware's free product against Microsoft's makes Microsoft look pretty unpolished, though. For instance, compare VMware's P2V application for converting physical servers to virtual ones against Microsoft's offering, which requires having a spare server lying around running Windows Server 2003 Enterprise with Automated Deployment Services. Give me a break - the cost becomes so prohibitive it's not even worth it. Microsoft may get there, but right now their product looks like what it is - a bunch of things hastily thrown together. VMware's products have coherence.

      • For instance, compare VMware's P2V application for converting physical servers to virtual ones against Microsoft's offering, which requires having a spare server lying around running Windows Server 2003 Enterprise with Automated Deployment Services.

        I hope you will indulge this spam-vertainment.

        For the record, there are third party vendors of P2V software including Platespin and Leostream [leostream.com], whom I work for.

        The VMware P2V Assistant is arguably easier to use than Microsoft's VSMT solution, which appears to

  • Psst. btw (Score:4, Informative)

    by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday April 12, 2006 @05:40AM (#15112219)
    Microsoft has made their server virtualization software available for free.

    http://www.microsoft.com/windowsserversystem/virtualserver/software/default.mspx [microsoft.com]
    • Re:Psst. btw (Score:5, Insightful)

      by Decker-Mage ( 782424 ) <brian.bartlett@gmail.com> on Wednesday April 12, 2006 @06:46AM (#15112374)
      With VMware Server (ex-GSX) switching to free status, frankly I don't think they had a choice. I've been working with, and beta-testing, both for years, and the VMware product still wins in my opinion. No-win situation for MS.
      • Re:Psst. btw (Score:3, Interesting)

        by dc29A ( 636871 )
        I virtualized a Windows NT4 IIS server running an ASP application with some VB COM components; VMware ESX is incapable of running it without insane CPU usage. A one-CPU physical server runs at about 30 tx/sec with 15% CPU usage, while a virtual server inside ESX gets 90% CPU usage at barely 5 tx/sec, and the VMware host itself sits at 65% CPU usage with 4 CPUs.

        VMware seems unable to deal with many object creations and many context switches; the application basically creates a COM object, deals with it and
        • Agreed, there are situations that are totally unsuitable for virtualization. For instance, don't even think about a DirectX game under a VM instance - games exhibit similar behaviors in many ways. I do try it every time a new beta hits, and am frequently disappointed.
          • You know there is no 3D-capable hardware emulated under VMware (nor under any other solution), which makes rendering 3D (or even 2D, since the emulated hardware isn't capable of accelerated 2D sprites and other stuff) graphics really slow? And why the hell would you play UT200x under a VM anyway? It was not meant to be used this way. Buy an old P4 at 300 bucks and you've paid for your VM license.
            • It's something that VMware has been experimenting (playing) with, and has marked as "Experimental" for a while now. As for playing games, well, that's one way to test it, but there are quite a few apps out there - serious (e.g. scientific) apps - that also make use of the Direct3D functions.

              When I do a beta for someone, I not only test the working stuff (which all gets certified first here) but also all the experimental stuff as well. For instance, Solaris 10 is also "Experimental" and yep, I test it, each drop.

    • Re:Psst. btw (Score:3, Insightful)

      Psst yourself.

      You have to pay for the OS to run the virtualisation server on, you have to register to download it, and then you have to follow the usual licences - i.e. (from MS's own Virtual Server 2005 Technical Overview white paper):
      * you may not transfer original OEM server licenses from one computer to another,
      * Each installed copy of Windows Server must be separately licensed. This means, for example, that if you are setting up four virtual machines within Virtual Server 2005 to run one instance of Windo
      • While you are correct about the cost of the physical server's software, I can't find anything in the licensing agreement that says I can't run some other OS. In fact, I am running 4 VMs with various Linux distros installed as we speak.

        And as for those licensing restrictions, they will apply to any VM software that runs your Windows Server OS. So what's the solution? Well, don't use it to run Windows, for one.
          ...because combining the (relative) stability and security problems of Windows as the host OS with the (relative) user-unfriendliness and reduced application market share of Linux as the guest system really is the best of both worlds.

          </sarcasm>

          Seriously. Maybe for development, and/or if you're really doing it on the cheap. Otherwise I fail to see the benefits of such a setup.
          • The big idea is that these are cheap, disposable machines that can be configured as you like. The obvious benefits like running multiple server operating systems are pretty limited, because as you say, you need to pony up the cash to really run something worthwhile in the first place.

            But for development, you get a little virtual network to write your distributed apps. Or you get a little basic machine for some embedded Linux programming. Or maybe an on-demand alternative OS environment. For the home use
      • Re:Psst. btw (Score:3, Interesting)

        by Decker-Mage ( 782424 )
        Your MS licensing information is out of date. They've changed the way they handle Server 2003. Furthermore, you don't have to use Server 2003 as your base OS for VS 2005. I was using XP Pro SP2, Win2K Server and Advanced Server, as well as Server 2003 Enterprise, during the betas for both VS 2005 and VS 2005 R2. All worked just fine. Actually, I got the best performance from Win2K AS after I really locked down the running services, although that may be somewhat biased as I really know AS best and I did
      • > you have to register to download it

        Direct links (copied from Digg):
        32-bit [microsoft.com]
        64-bit [microsoft.com].
      • I manage a fairly small (40 servers, all Windows) shop and, while I technically like VMware better, I can't make the cost justification for it (I've tried). Another factor to consider in the cost equation: if you use VS 2005 R2 and opt to use Server 2003 R2 Enterprise as the host, MS allows you to run four virtual instances of Windows Server R2 on that hardware at no additional charge.

        Yeah, ESX is better in some ways, and VMware's tools are better than MS's. But, VS 2005 R2 is actually a really good product.
    • Netscaping (Score:5, Insightful)

      by Savage-Rabbit ( 308260 ) on Wednesday April 12, 2006 @06:57AM (#15112405)
      Microsoft has made their server virtualization software available for free.

      Isn't this the opening phase of what Computer Business Review calls 'Netscaping' the competition? I wonder if that word will ever make its way into the Microsoft system spelling dictionaries?
  • Shameless plug (Score:3, Interesting)

    by digitalhermit ( 113459 ) on Wednesday April 12, 2006 @06:01AM (#15112275) Homepage
    In South Florida tomorrow (Thursday), a dorky looking guy will be presenting an introduction to Xen talk. Check http://www.flux.org/ [flux.org] for details.
  • by Domini ( 103836 ) on Wednesday April 12, 2006 @06:04AM (#15112279) Journal
    One thing the article does not speak about is licensing issues when using virtualization. For instance, MS has some twists and turns...

    For instance:

    One needs 2 different licenses if you run XP in XP.
    You can run 4 instances of Windows Server for free in Windows Virtual Server.
    You can run one copy of an older windows for free in Windows Vista.
    (You can read more about this on the MS site...)

    For Windows XP General Purpose license User Rights:

    http://www.microsoftvolumelicensing.com/userights/PUR.aspx [microsoftv...ensing.com]

    Download and read the document; the section "Microsoft Desktop Operating Systems" reads:

      I) Installation and Use Rights.
        a) You may install up to two copies of the software on one device.
        b) Except as provided in Section II.a and II.b below, only one user may use the software at a time.
        c) You may run a prior version in place of the licensed version for either or both of the copies.
        d) You may only use the copies on the device on which you first install them.
        e) You may use the software on up to two processors on that device at one time.

    Thus this means that I can install and use XP both as a Boot Camp native install and as a Parallels VM guest using only one license.

    yay!
    • by Anonymous Coward
      You do understand that unless you sign a contract with Microsoft or otherwise make a binding agreement, you retain your full "fair use" rights to do with purchased software what you please. Thus, you are not restricted by Microsoft's verbiage found on its website. Don't be fooled by EULAs. They are not enforceable (i.e., binding) unless you agree to them before you purchase the software.

      Put differently, if you are the only one using a book you purchased, you can make as many copies of that b
    • Frankly, that's a can of worms that I stopped worrying about years ago, as I have more licenses than I know what to do with. However, I would never run XP as my base system - too much consumer crap in the way tying up system resources. If you use Windows as your base OS, always select a server OS, even an older Windows 2000 Server or Advanced Server, and disable whatever services you don't need. You'll see far better performance with either VMware or Windows Virtual Server. I kn
    • You can run 4 instances of Windows Server for free in Windows Virtual Server.

      This is only true if you are running Windows Server 2003 Enterprise Edition. And the host operating system counts as an instance. So if you load Windows Server 2003 to act as a Virtual Server host, the license allows you to run 3 additional guest operating systems within that host.

      These are two important distinctions, because the Enterprise edition is far more expensive than the Standard edition. It wouldn't make se
    • > a) You may install up to two copies of the software on one device.

      I'm a web developer and would like to start testing my sites with Internet Explorer 7 (which is currently in beta but overwrites IE6). Unfortunately, MS requires Windows Genuine Advantage validation to download IE7. Can I validate both my host OS and my virtual guest OS with the same CD key? Has anyone else encountered this problem? Am I the only one who can't stand this CD-key validation crap?

      (just kidding about that last quest
      • From what I could understand from reading those licence agreement files, you should be allowed to install it twice... whether you are allowed to run both at the same time may be a different matter.

        I think I read someplace that the same licence key may not be present on a network on two hosts, but I'm not sure in what context I read that...

        Perhaps download the doc I linked to earlier and read up further?

        And good luck with that IE7 beast!
  • The article doesn't even touch on Intel's VT or AMD's Pacifica technologies. What gives?
    • Re:Strange (Score:5, Interesting)

      by Malor ( 3658 ) on Wednesday April 12, 2006 @06:54AM (#15112399) Journal
      Yeah, I was thinking the same thing. This article would have been interesting, say, 18 months ago... but with VT and Pacifica, things are different now. Without at least mentioning those, it's not very useful.

      Anyone have a pointer to a good writeup on the differences between VT, Pacifica, and regular old software virtualization?
      • What I'm most interested in:


        • Where and when we can buy chips with these features - no-one seems to know for sure if the new Intel Macs have VT enabled.
        • What sort of speed boost they give - if it's not too huge, it's not such a problem to buy chips which don't have the features enabled.

        Rik



    • >>
      The article doesn't even touch on Intel's VT or AMD's Pacifica technologies. What gives?
      >>>

      Or being able to have Win2k3 happily running as a dom-u under Xen 3 on such hardware. There's only so much you can fit in a one-page blurb, however, and the average reader wants it all compressed into 5 minutes or less of reading.

      With topics like this, you just can't do that unless you link to many external resources, allowing the reader to get more on whatever interests them... which is what TFA
  • Since Apple's move to Intel processors and the recent releases of Boot Camp and Parallels Workstation, running Windows within or beside Mac OS X is suddenly all the rage. My question is, has anyone thought to use Boot Camp to load Windows, and then use a Windows virtualization solution to run OS X inside of Windows-on-a-Mac? I'd try it myself, but I fear that I might rip a hole in the fabric of space-time.
    • OSX inside a VM?

      Alas, illegal (at least in the US). Mac OS X has some trusted-computing code that verifies it is running on Apple hardware. Bypassing this code is a definite DMCA violation (hence no company will try it). AND Apple assumes a very narrow set of underlying hardware, which isn't what VMs provide - so there is another pile of effort to emulate the chipset and so on.

      In short, it's not going to happen until Apple wants it to happen.

  • FTA: "If you're trying to solve one of the server-based issues like consolidation or application isolation, you'll want to go with a server solution"

    Hmm - I think there are a few vendors who'd disagree with that... Softricity, Altiris, Citrix, and Wise, to name a few.
    • I don't know. I work in this field, and see people dropping those technologies and moving to virtualisation. Now that hardware is powerful enough to support it, why not just roll out 32 desktop XP images with all apps installed, rather than dicking about trying to get an app installed on Citrix or Softricity?
      • That's a good approach, and I definitely see the value in it (and recommend it in some cases), but then you have the question of how to get the apps onto those virtualised machines. Yes, you can use some form of ESD, or even manual installations, but then what about apps which 'conflict', or require constant updates? And what about users who require off-line access to their apps, etc.?

        The point I was trying to make was that the article completely ignored the areas of app "virtualisation", not
      • With VMware's ACE product you can do exactly that for an organization, and lock 'em down.

        I run Linux as my host with VMware. Frankly, I love it. I have probably a dozen Windows images, each with a slated purpose. I even run Oracle in its own VM (and it runs quite well at that). The best part about this approach is that I am able to isolate programs that I really don't want running together, like keeping Oracle separate from my host. Also, I have a clean environment where I can play with malici
  • by jnf ( 846084 ) on Wednesday April 12, 2006 @07:35AM (#15112505)
    Seriously, how did this make it onto /.? The article is only a few paragraphs long and doesn't really even touch on hardware virtualization support or why it's necessary (because virtualization currently sucks on 'normal' Intel architecture). It even refers to QEMU as virtualization, which it's not - it's an emulator. It mentions the program once, then never touches on it again. It never explains why a person might want to use Bochs or QEMU even though they're much slower than VMware/Virtual PC. It doesn't touch on Parallels [parallels.com] or any of the other software out there.

    What's more, it doesn't even explain why the suggestions it makes are made. This article is basically a badly written advertisement for VMware or Virtual PC.
  • These claims (server proliferation, CPU/resource underutilisation) have been made before, with utility computing. A company called Ejasent (later purchased by Veritas) offered utility-computing software that very closely resembled Solaris 10's containers. And now, of course, Solaris 10 offers containers natively. So the technology to consolidate servers has already been around for quite a while.

    I realise that it's not quite the same -- virtualisation offers multiple operating systems on a single syste
    • Consolidation technology IS important. And it IS "taking off".

      Servers are more powerful now. If a company decides to consolidate physical resources (to save A/C, power, rack space, buildings), they can certainly "vertically stack" applications that used to run on multiple servers onto a single server.

      However, if this is done with old-hat technology, the system becomes very difficult to manage. For example, I just worked on a 4-way Opteron with 8GB of memory. The NORMAL process list was 1800 lines long!

      So, c
      • Agreed. We make extensive use of virtualization/partitioning technologies with IBM pSeries hardware, running various versions of AIX. We've also got some interest in virtualization for our Windows servers, but nothing's actually been done in that area so far: something for the future, perhaps.

        For us, although there is an additional administrative load, it's pretty minor and vastly outweighed by the increased flexibility: being able to create new virtual servers or change existing ones at very short noti
    • OK, let's say you have 5 Win2k03 applications that require 5 servers.

      Your capital outlay for the servers will be around $10k. Add in a support package for the hardware and you're talking another $500 or so per year. Each box uses power, requires a KVM interface, physical space and a network port, and puts a heat load onto the air conditioning. These marginal costs will add another $100 per box, so we're now looking at a 3-year cost of around $13k, not including any labor costs.
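
      (Roughly, assuming the $500 support contract covers the lot and the $100 marginal cost is per box per year: $10,000 capital + 3 yr x $500 support + 5 boxes x $100/yr x 3 yr = $10,000 + $1,500 + $1,500 = $13,000.)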

      A GSX rollout would have been about
    • > And talk about a patching nightmare! A virtualisation solution running on Windows with, say 5 instances of Windows. That's 6 copies of patches to apply, resulting in at least 11 reboots...

      --This is why serious virtualization servers don't get run on Windoze. It's fine if you need to run Windows GUESTS - use a caching proxy server like Squid to download the patches, and stagger the automatic-update times for when they automatically reboot.
    • by LurkerXXX ( 667952 ) on Wednesday April 12, 2006 @12:16PM (#15114329)
      A virtualisation solution running on Windows with, say 5 instances of Windows. That's 6 copies of patches to apply, resulting in at least 11 reboots (1 for each instance and 1+5 for the primary OS).

      6 copies of patches to apply? Um, no. Any admin working with that kind of setup SHOULD know about WSUS and be rolling out patches automatically (after evaluating them on a test rig to make sure they don't break any company-specific software).

      And no, it's not 11 reboots. That's a really, really dumb way to do it. You set a group policy to prevent the machines from automatically rebooting after patch installation. When it's time for the scheduled maintenance, you shut down all the VMs, reboot the host OS, then crank the VMs back up. That's a total of 6 reboots for 6 Windows machines.

      Virtualisation is a fun toy and may be a useful tool if you're a multi-platform developer. But it does not seem to be a serious enterprise solution for the datacenter.

      Virtualization IS a serious enterprise solution. Lots and lots of us have it in production. Then again, we know a bit about the field and don't patch every machine by hand or do unnecessary reboots.

      The cost savings are real if you hire someone competent to run the machines.
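
      For what it's worth, the maintenance window itself can be scripted. A rough sketch, assuming a Linux-hosted VMware Server where the bundled vmware-cmd utility is available (the .vmx paths are invented):

          #!/bin/sh
          # Hypothetical maintenance window: gracefully stop each guest, then reboot the host.
          for vmx in /var/lib/vmware/vms/*/*.vmx; do
              vmware-cmd "$vmx" stop trysoft   # soft shutdown via VMware Tools, hard stop if that fails
          done
          /sbin/shutdown -r now

      Guests configured to start with the host come back up on their own after the reboot.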

  • by mpcooke3 ( 306161 ) * on Wednesday April 12, 2006 @08:09AM (#15112630) Homepage
    I have been using a few Xen based virtual servers from a commercial company recently - I used to manage physical machines. Here are some of my thoughts:

    Advantages:
    * Low performance overhead with Xen compared to other virtualization solutions, and full OS-level access as if it were a normal server.
    * The cost of a hosted Xen solution is very low, given that the hardware is usually managed.
    * Reduced or no trips to the data center to replace hard disks, etc.
    * From the provider I use, you can also reinstall the OS, snapshot and restore snapshots over a web interface, and get access to the console. These are features you could set up in your own data center, but most people never get round to it.
    * Quicker turnaround if you need new servers: since they normally already have the spare hardware, it's 1 or 2 days to get a new server set up rather than 1 to 2 weeks to order, install and configure one.
    * You could do load balancing over several Xen virtual hosts on physically separate machines very cost-effectively. This would also mitigate the variable performance of different Xen hosts if you used a dynamic-weighting load balancer.

    Disadvantages:
    * Sometimes other users on the Xen system cause problems, or the server is restarted due to Xen-related problems. This hasn't happened that often, but you wouldn't currently run a system that needed 99.999% availability on a Xen virtual host if the system is vulnerable to a single server going down.
    * You never know quite what your worst-case performance is going to be like.
    * If your system doesn't scale laterally (more servers) but only by buying a more powerful single server (some databases, for example), then Xen virtual hosting is not cost-efficient.
  • CoLinux (Score:5, Interesting)

    by radarsat1 ( 786772 ) on Wednesday April 12, 2006 @08:18AM (#15112665) Homepage
    I noticed that whenever virtualization comes up, no one ever mentions CoLinux. I've tried it once and was quite impressed. It takes a different approach entirely: rather than running in a virtualized environment, it is actually a port of the Linux kernel to run as a Windows process. (Some hardware is virtualized by this method, however, such as the network interface.) Are there any advantages to this approach, in terms of reliability, speed, etc.?

    Just curious.
    • I noticed that whenever virtualization comes up, no one ever mentions CoLinux.

      That's because no one really takes it seriously. It's not that it's not novel or interesting, but the Linux guest runs as a *kernel thread*, not as a process. This means the guest has as much access to the hardware as the host. It relies on a very well-behaved Linux guest not to bring your system to a screaming halt.

      If CoLinux ever adapts to run the Linux guest in a lesser ring (perhaps 1 or 2) then it will be considerably more
  • QEMU is not just an x86 emulator.

    It is a system emulator. What it does very well is support Linux binary applications for other CPUs. Want to run an ARM binary on x86? QEMU will do it. Want to run an x86 binary on SPARC? QEMU will do it.

    QEMU also does system level emulation.

    As a special case, QEMU runs x86 on x86 as well.

    VMware and Xen don't do that.
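
    To make that concrete, a quick sketch (the binary and image names are invented; the commands themselves are stock QEMU):

        # User-mode emulation: run a (statically linked) ARM Linux binary directly.
        qemu-arm ./hello-arm

        # Full system emulation: boot an entire x86 OS from a disk image.
        qemu -hda disk.img -cdrom install.iso -boot d -m 256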

    Ratboy.
    • by Cytlid ( 95255 ) on Wednesday April 12, 2006 @09:48AM (#15113121)
      I've been working with VMware and virtual servers for a while now (Xen still won't run on my main workstation at home - some ACPI problem or whatnot), but I was really amazed by QEMU. I never really tried it until I read this month's issue of Linux Journal (all about virtualization!)... some of the Xen and VMware stuff I was already familiar with.

      QEMU's ability to emulate other CPUs is invaluable. You can emulate a MIPS architecture and test your favorite Linksys firmware (I believe the OpenWRT guys already do this). I would really like the m68k emulation to stabilize so I can run old Amiga stuff (or try Linux on m68k). Or emulate an ARM processor, drop a Pocket PC firmware on it, and test-drive Windows Mobile software (or port Linux to those devices). The possibilities are endless.
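
      Booting a test kernel that way is about a one-liner. A sketch (the kernel image name is invented; qemu-system-mips and its flags are standard):

          qemu-system-mips -kernel openwrt-vmlinux.elf -nographic -append "console=ttyS0"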
    • QEMU does sound interesting. But the last time I tried it (admittedly months ago), it crashed while running a Knoppix live CD that ran fine in VMware. Has the stability of QEMU improved? (Honestly curious; I'm a VMware fan but have just downloaded Parallels and intend to eval it.)
      • QEMU is not a virtualizer. VMware will do a better and faster job of that.

        QEMU *can* be used as a virtualizer -- if you have a problem, report it.

        QEMU can run x86 Linux on SPARC (and so can Bochs). Where they differ is that QEMU does so by translating the binary instructions; Bochs has this available as a limited experimental feature, but generally interprets each instruction. Which means that Bochs can run just about anything x86 - *slowly*.

        QEMU can run just about anything (x86, ARM, MIPS, etc.) on anythi
  • I think what makes VMware stand out from all the rest is VirtualCenter and what it brings to the table. Being able to manage ALL of your VMs hosted on ALL of your ESX servers is a huge plus. And while this version of VirtualCenter absolutely has its shortcomings, the next versions of both VC and ESX are really going to raise the bar, from what I've seen. The mainframe has come full circle.
    • And what really makes VirtualCenter stand out is that VMware has bundled their VMotion product with it.

      VMotion is the ability to migrate running virtual machines from one physical server to another. It's roughly the equivalent of Mosix for VMs.

      And it works across ESX versions, so there is no VM downtime when we need to patch our VMware farm. And it works across (some) hardware platforms, so we can upgrade our VMware farm from 2-way servers to 8-way servers, again with no downtime.

      --Joe
  • by pnuema ( 523776 ) on Wednesday April 12, 2006 @09:46AM (#15113100)
    I'm a performance tester who has had to completely reinvent how we do business thanks to virtualization. How do you give an application's owners assurances that it will perform adequately in a virtual environment when, by definition, performance will always be dynamic?

    The primary approach we have had to take is to stop looking at whether an app will perform on a virtual machine, and start looking at whether it will be cost-effective for the app to run virtually (in general, apps that perform in the physical world can be made to perform in the virtual world if you throw enough resources at them).

    It's an interesting problem. We found that our company's big push into virtualization had to be scaled back a bit - not every server is truly a good candidate for virtualization.

    • by kma ( 2898 ) on Wednesday April 12, 2006 @04:14PM (#15116100) Homepage Journal
      I'm a performance tester who has had to completely reinvent how we do business thanks to virtualization. How do you give an application's owners assurances that it will perform adequately in a virtual environment when, by definition, performance will always be dynamic?

      VMware ESX Server provides proportional-share guarantees for CPU, memory, network and storage performance. I.e., if you always want 50% of a CPU, or 200% of 2 CPUs, or 75% of the bandwidth of a gigE nic, etc., that can be arranged.

      HTH,
      Keith (vmware employee)
      • VMware ESX Server provides proportional-share guarantees for CPU, memory, network and storage performance. I.e., if you always want 50% of a CPU, or 200% of 2 CPUs, or 75% of the bandwidth of a gigE nic, etc., that can be arranged.

        We are aware. The problem is, if I have to guarantee 2 CPUs to make an app perform, it is more cost-effective to buy a physical box - those hosts aren't cheap. We determined our break-even point to be 35% of a CPU - any more than that, and we make it a physical server.

  • My dream is to run an installer on a single host on my LAN. Give it the root passwords of each of the various hosts on my LAN. Then watch as it halts and backs up each of those hosts, then installs the virtualization SW on each host, making a pool, then reinstalls each host as a virtual instance in the pool. Then I'd like to see the virtual pool balance load and failover among my hosts. When I add a new host to the LAN, I'd like the pool to just use the new capacity proportionately.

    Of course a live, good sy
    • I've got this *kinda* done. This was our goal:

      * Total centralized command
      * Dynamically provision / reprovision based on application demand, roles and rule sets
      * Paranoid sanity checks
      * Use, but don't force, LVM - make use of plain conventional images too
      * Integrate HPCs that can be dynamically reprovisioned

      This is done with Xen + OpenSSI. In a case like yours it would not be a conventional single-system image; you'd be using a couple of redundant Wasabi-style NASes, which can also be built with Xen for additional
      • So when you go the rest of the way to completing your project, I'd be able to take my 4-host LAN (4 hosts with different P3/P4 motherboard/CPU/RAM/HD configurations), add a 5th host, replace their internal IDE storage with iSCSI, and run your software to create a virtual pool with the extra host's capacity? Or allow a host to fail with immediate auto-recovery? Does it have to be iSCSI - what kind of NAS do you support?

        Are we really talking about a "compute RAID" (RAIH?) available sometime this year?
        • It doesn't have to be iSCSI; that's just what we're using because it's simple. As for the network filesystem, anything you can put into the 2.6 kernel will work. We figured on a local gig-e network over copper, which is why we're using iSCSI for development... but by no means is it built 'around' iSCSI.

          It could be made to work with just about anything. That's the other part of why it's taking so long to do. But yes, you could accomplish that.

          Bear in mind the definition of high availability is no single point of failure.
          • Well, that's all great news. I don't actually require "High Availability" any higher than a P4/3GHz/2GB/500GB (before it goes down :) in the pool. If I can pool the heterogeneous IDE disks in each host into a pooled RAID, that virtual server does what I want.

            Now, if I can install it as an autoupgrade to my current (nonvirtual) LAN install, like I described in my original post, then it sounds like my dream is actually coming true :).
  • There is NO WAY to write a *short* article on this topic without it being a total waste of everyone's time. The sentence that wrapped up Zones in the same space as VServers was misleading, and one of several points that didn't have the slightest chance of being explored. But as always on /., the comments will make up the difference. No wonder no one RTFA...
  • It only talks about virtualization as it exists on Intel hardware. What about POWER LPARs? What about Solaris Zones? What about mainframes? How can it be called an overview when it only touches on a third of what's out there?

    I'd guess because that's what the author could get for free in the 10 minutes it took to write the article.
  • If Paul Ferrill is collecting a paycheck for writing this, then he should give it back. The article is a montage of nothingness. Did Ferrill actually do any research on the subject? It seems to me that he must have read some Novell marketing material but little else.

    Ferrill mentions, "On the downside, the x86 architecture does not lend itself to efficient virtualization," yet he appears oblivious to Intel's VT and AMD's Pacifica chips, which are made specifically for virtualization.

    Let's not forget that Pa
  • "If you want to use Linux as your host OS, you'll definitely have to go with VMware."

    What is he talking about? From this one sentence it is obvious that this person should never have written this article. VMware ESX Server is not Linux. VMware ESX is VMware's own creation, kernel and all. Not to say they couldn't have borrowed parts from who knows where, but it is not a 2.4.x Linux kernel or anything of that nature.

    Now, to the inexperienced user installing and administering VMware ESX, you might get the impression
    • ESX isn't the only flavor of VMware. There's VMware Workstation, as well as the new (free) VMware Server, which do, in fact, use Linux (or Windows) as the host.
  • I'm using Xen on Gentoo Linux running on an OLD Celeron 400MHz box with 384 megs of RAM, and it runs three VMs at completely native speeds. There ARE no drawbacks. Period. Xen blows EVERYTHING else away in terms of ease of use, flexibility, and even the ability to keep a VM running while its original physical host is down, by migrating it to another physical host. When you combine it with Vanderpool or AMD's upcoming Pacifica hardware virtualization techniques, the sky will be the limit.
    • If you are so happy with Xen, I suggest you try OpenVZ (http://openvz.org/ [openvz.org]) - I bet you'll be even happier. Unlike Xen, OpenVZ does not have that big I/O overhead (our tests show Xen guests do I/O about 30% slower than the native system). The biggest thing, though, is that you can run not 3 but 30 virtual environments, and dynamically manage their resources (like adding/removing memory from an environment without any need to restart it).

      Finally, live migration for OpenVZ will be released Real Soon Now.
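
      The day-to-day workflow is a handful of vzctl commands. A sketch (the container ID, template name and address are invented):

          vzctl create 101 --ostemplate centos-4-i386-minimal   # build a VE from a template
          vzctl set 101 --ipadd 10.0.0.101 --hostname ve101 --save
          vzctl start 101
          vzctl set 101 --vmguarpages 98304 --save              # adjust memory guarantee on a live VE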

  • I'm surprised the article didn't mention software virtualization. The package from Altiris creates a virtual layer between the OS and the file system. It captures changes during setup and then creates a virtual application 'layer' that you can turn on and off at will, or reset back to a default state.

    The way they're pitching it to my department is that we can use it to deploy applications enterprise-wide with little testing. If a service pack or patch breaks the system, it can be turned off. If a user d
  • No, but it's invading this thread and my inbox. So help me, if I get one more e-mail with one more product that adds virtualization as if it's the new iPod, I'm going to get cranky...
