
An Overview of Virtualization Technologies

Posted by CowboyNeal
from the an-operating-system-by-any-other-name dept.
PCM2 writes "Virtualization is all the rage these days. All the major Linux players are getting into the game with support for Xen, while Sun has Solaris Containers, Microsoft has Virtual PC, and VMware arguably leads the whole market with its high-end tools. Even AMD and Intel are jumping onto the bandwagon. InfoWorld is running a special report on virtualization that gives an overview of all these options and more. Is it just a trend, or will server virtualization be the way to go in the near future?"
This discussion has been archived. No new comments can be posted.

  • Hmmm (Score:5, Funny)

    by Anonymous Coward on Friday July 07, 2006 @04:13AM (#15673803)
    Is it just a trend, or will server virtualization be the way to go in the near future?

    What happened to the CowboyNeal option?
  • Just a trend? NO WAY (Score:5, Interesting)

    by giorgiofr (887762) on Friday July 07, 2006 @04:17AM (#15673812)
    Virtualization is one of the best things since sliced bread and I believe it's here to stay. First of all, it spells an end to multi-booting. I have erased my secondary OSs and I run them in VMs under my main system. A performance hit does definitely occur, but I am willing to pay that price for the greater ease of use. Secondly, just think of the possibility of moving server images from one physical server to another, literally freezing a machine here and awakening it over there - InstaScaleOut(tm) must be a server admin's wet dream.
    Of course, as with all abstraction layers, it introduces complexity and takes a toll in the form of performance - but we all know abstraction layers have been piling up since the beginning of time.
    • by interiot (50685) on Friday July 07, 2006 @04:41AM (#15673883) Homepage
      Not a huge percentage of people dual-boot. But hopefully virtualization will increase the ease of use of Linux and ALL other alternative operating systems as well. There are hundreds of home-grown OS's out there, and it would be cool if virtualization were easy enough that people could just download one and run it, making OS's as easy to try out as applications.
      • by QuantumG (50515)
        I remember once someone, who will remain nameless cause he got shitty at me last time I mentioned his name, told me about this great idea they had for the Ubuntu Linux CD. As many people probably know, when you put the install CD in the drive under Windows it currently autoplays and gives you the opportunity to install a number of Open Source apps. He had this great idea (I think) to give people the opportunity to run the live image from the live CD under coLinux [colinux.org]. I believe he ran into problems because s
        • by wrecked (681366)
          Actually, Knoppix can be run under Windows using qemu [bellard.free.fr]: see the Slashdot article WinOS+QEMU+Knoppix 3.8 = WinKnoppix! [slashdot.org]

          All you need to do is insert the Knoppix LiveCD during a Windows session, let autoplay do its thing, and then you are given the option of running Knoppix right from Windows. I never tried networking with it, so I don't know how well it does that.
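
          (For the curious: outside of the autoplay wrapper, booting a live CD image under QEMU is basically a one-liner. The ISO filename and memory size below are placeholders, not taken from WinKnoppix itself.)

          ```shell
          # Boot a live CD ISO inside an emulated PC; the host system is untouched.
          # Filename and RAM size are examples -- adjust for your image.
          qemu-system-x86_64 -m 512 -cdrom knoppix.iso -boot d
          ```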
      • Hah dual-boot? With prices of computers these days, I just purchase another low-end computer for my "alternate" operating system so that I can run both at the same time. At least with virtualization, hopefully I won't even have to do that anymore!
      • by GWBasic (900357)
        Actually, it's more likely that the opposite will happen. The general public can run their operating system of choice, and virtualize Windows for games and misc. utilities. If we see someone write hooks into the Windows GUI so that its applications appear native in the host OS, we can be assured that Microsoft's dominance on the consumer desktop will be broken.
    • by jeswin (981808) on Friday July 07, 2006 @05:19AM (#15673953) Homepage
      Well, the fact is that virtualization is a work-around for poorly written and designed OSes and applications. Virtualization is succeeding because we cannot build OSes that:
      1. Prevent applications from littering and destroying public space
      2. Do a decent migration without re-installs
      3. Can scale without re-installing and re-configuring
      4. Do better throttling and pooling
      And we cannot build applications that:
      1. Know how to co-operate with other applications, or at least be aware that the system cannot be monopolized
      2. Install in a private space
      Some time back I wrote a blog post about this: Virtualization, isn't it a Diversion? [process64.com] Summary: Virtualization looks like a necessary evil, because we are incompetent to write better OSes and applications. Virtualization is the easier route. And just wait till it reaches critical mass, gets everywhere and brings its share of problems. I would have preferred a better, from-the-ground-up OS any day. Hurd, or even better, Singularity!
      • We could build an OS like that, but it would get poo-pooed by the technical users, because it would require people to build static binaries and place all the apps in their own directories, with the configs in their own directories. But people like DLLs, shared libraries, and spread-out config directories. Then there is the issue of getting people to agree on and follow these standards. Macs come close, but you still get apps that need to install stuff in other places due to lack of effort from the developers.
      • wrong (Score:5, Interesting)

        by m874t232 (973431) on Friday July 07, 2006 @08:32AM (#15674416)
        Virtualization looks like a necessary evil, because we are incompetent to write better OSes and applications. Virtualization is the easier route.

        It's not a question of "competence"; there simply is no such thing as a uniformly "better" operating system or application. DOS, for example, is an excellent operating system for some narrow set of applications, and you can hack Mach or Singularity until the cows come home and you're still not going to create something better.

        I would have preferred a better, from-the-ground-up OS any day. Hurd, or even better, Singularity!

        People like you are part of the reason why software sucks so badly: you simply don't understand real-world tradeoffs. People like you design systems like Mach or Windows, systems that try to be everything to everybody; people like you throw in MLOCs of useless features and generalizations and extensibility, and all you are doing is creating bigger and bigger headaches.

        Virtualization is doing the right thing: it lets people focus on creating operating systems and server configurations that focus on solving specific problems. Maybe with virtualization, we can finally kill the general purpose operating system.
    • by Anonymous Coward
      Secondly, just think of the possibility to move server images from a physical server to another one, literally freezing it here and awakening it over there - InstaScaleOut(tm) must be a server admin's wet dream.

      Well, you'll poo your pants when you see vMotion at work, then. The ability to move a running VM from one host server to another without a hitch is quite something. Combine that with Resource Pools, DRS and HA and suddenly the hardware doesn't matter so much anymore!
    • Virtualization is one of the best things since sliced bread and I believe it's here to stay. First of all, it spells an end to multi-booting. I have erased my secondary OSs and I run them in VMs under my main system. A performance hit does definitely occur by I am willing to pay such price for the greater ease of use. Secondly, just think of the possibility to move server images from a physical server to another one, literally freezing it here and awakening it over there - InstaScaleOut(tm) must be a server
      • Mostly I agree with that but there are a few pitfalls. What tends to happen is that people go wild setting up VMs and whenever an old machine needs to be retired whatever is running on that OS doesn't get migrated to a new machine with a new OS any more. Why bother when you can just turn the half a dozen old web/mail/file servers you need to get rid of into VM's complete with their OS and move them all to a single new computer and thus save loads of rackspace?

        I know this sounds like a bad thing but think of

      • by gkhan1 (886823) <oskarsigvardsson@@@gmail...com> on Friday July 07, 2006 @08:25AM (#15674390)
        I remember reading The Art of Computer Programming by Don Knuth, in the chapter where he gives an interpreter for his fictional MIX computer (and, just to mess with us, it's written for the MIX computer!). I remember him saying that he didn't really want to do this because he didn't like simulators and interpreters, and he describes a situation where there were three or four layers of simulators running on top of each other, just because everyone had been too lazy to redo the damn programming! When new hardware came in, instead of adapting the software they simply ran it through a simulator of the old machine.
      • Or with some of VMware's offerings, turn those half-dozen old web/mail/file servers into nodes and let VMware use the resources as necessary.
    • InstaScaleOut(tm) must be a server admin's wet dream

      I always thought that was meeting a Girl(tm).

    • by dc29A (636871)
      Virtualization is one of the best things since sliced bread and I believe it's here to stay.

      While I do like virtualization, I think it is still in its infancy. We migrated this ASP/VB/C++ legacy web application to virtual machines running on VMware ESX, and after 4 transactions/second the host server was almost dead. The same application, on the same server but running native in Windows 2003, can easily do 35 transactions per second. Granted the application is coded using some archaic COM/RDS model but t
      • Even VMware will tell you that virtualization is not a solution for everything, and not every server or application should be virtualized. For instance, heavy-duty database servers (like Oracle) are not really a good idea. Neither are servers that do an extreme number of disk writes (especially if you're using shared LUNs between multiple ESX servers), because you still have to deal with SCSI locks. Microsoft supposedly won't support you if you put your AD controllers on a VM (even their ow
        • I'm also interested in the corrupt VM statement. What exactly corrupted?

          I am just a code monkey, so bear with me, I don't know the exact details. We had ESX at a certain patch level, and one of our VM guys moved a database server from one host to another. The server wouldn't boot back up (Win2k3 + SQL 2005) after the move was complete. We contacted VMware and they told us to apply a set of patches. The SQL servers were dead though and had to be rebuilt.
      • Virtualization has been great for dealing with pesky license servers. Some very expensive software packages require a license server that talks to a hardware dongle. In a university setting, we sometimes run dozens of these license servers. Even worse, most license managers expect the dongle to always be on parallel port one. So with vmware server, we can set up a bunch of dongles on an expansion card, then map each port into the vmware image. Furthermore, each vmware image can have a particular mac ad
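
        As a sketch of what that mapping looks like (the option names are from VMware's .vmx format as I remember it; the device path and MAC below are illustrative, not from the parent post):

        ```
        # Per-VM .vmx fragment: pass the host parallel port (with the dongle)
        # through to the guest, and pin a static MAC for the license manager.
        parallel0.present       = "TRUE"
        parallel0.fileName      = "/dev/parport0"
        parallel0.bidirectional = "TRUE"
        ethernet0.addressType   = "static"
        ethernet0.address       = "00:50:56:00:00:01"
        ```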
    • by digidave (259925)
      I've noticed quite a serious network performance issue using VMware Server on a dev system. I was ready to go live with this on the new production server, but now I'm not sure. The benefits of virtualization are huge, but sometimes performance is too important. In my case, the web server takes noticeably longer to serve requests, and an especially long time if it's the first request in a long while, as if there is a delay while VMware wakes the system from sleep mode or something like that. VMware tools is ins
      • by GoRK (10018) <johnlNO@SPAMblurbco.com> on Friday July 07, 2006 @09:31AM (#15674741) Homepage Journal
        I am not an expert with Server (GSX) -- I mainly stick to ESX. I do, however, run some VMware Server machines in the lab and know what you are talking about -- this symptom sounds like a memory management issue. I'd bet dollars to donuts that your guest is getting partially swapped out, either because you have given the guest more memory than it really needs (a very common problem), you have not configured the host to prevent swapping ("Fit all virtual machine memory into reserved host RAM" under "Host Settings" in the server console), you do not have enough RAM in the machine to allocate enough to the guest (and the guest is swapping itself out), or you are running services on the host machine that are dragging down the guests. You have to remember that even though VMware Server lets you oversubscribe your system RAM, it is up to you NOT to do it. Unlike ESX, VMware Server does not have the ability to share identical memory pages among VMs, so oversubscribing memory in Server, although possible, is never a good idea. In ESX, however, memory oversubscription is probably the biggest advantage VMware has over any other solution at this point.

        If you are using VMware Server, please keep in mind that best practices say you should generally NOT RUN SERVICES ON THE HOST! It is far better to minimize the footprint of the host and create another VM to handle the services instead. There are of course exceptions to this, such as when an application needs physical access to hardware that VMware cannot supply or emulate, but they are not common.

        If this doesn't help you, please check the VMTN forums for help; they have a points system for questions/answers and are generally one of the better free support forums for any commercial product I have ever seen.
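
        For what it's worth, the console checkbox mentioned above maps (if memory serves -- treat these keys as a sketch, not gospel) to a couple of host-level and per-VM settings:

        ```
        # Host-wide config (e.g. /etc/vmware/config on a Linux host):
        prefvmx.minVmMemPct = "100"                     # fit all VM memory in host RAM
        prefvmx.useRecommendedLockedMemSize = "TRUE"

        # Per-VM .vmx tweaks that reduce host-side paging of guest memory:
        MemTrimRate = "0"
        mainMem.useNamedFile = "FALSE"
        ```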
    • by jcr (53032)
      Moving a running instance of an OS from one host to another is a great thing for servers, of course, but consider what it can mean for client workstations as well.

      Work on your laptop during your commute. When you get to the office, just close it, and have your desktop system mount the laptop as an external drive. Wake up your virtual machine on the desktop system. All your apps, your work in progress, etc, are all just as they were when you closed that lid, and the apps just get an event to tell them tha
    • Trendy!!!!! (Score:3, Insightful)

      by fm6 (162816)

      First of all, it spells an end to multi-booting. I have erased my secondary OSs and I run them in VMs under my main system.

      Well, yes, if you're a geek who likes to play with a dozen OSs, you'd much rather open a new VM than reboot your machine. But as usual, we're confusing geekworld with the real world. The use of desktop VMs is pretty limited outside geekworld — mostly Mac folks who have one or two Windows apps they can't live without. That doesn't do a lot to explain why so many heavy hitters ar

  • by phase_9 (909592) on Friday July 07, 2006 @04:18AM (#15673816) Homepage
    We run 2x VMware ESX Servers on Sun x4200 servers (w. 8 gig o' ram :) - the web GUI for ESX is second to none, incredibly easy to configure virtual machines. It's got us seriously considering moving more than just our dev environments over to virtualised hardware.
    • I second that. We've got 4x VMware ESX running (w/ 16Gb ram .. sorry :P) and are running both dev and production machines on it. Mixed Linux and Windows guest OS's. The best feature has to be VMotion, with which you can move a virtual machine across physical vmware hosts without the guest OS having to shut down, or even being interrupted in its activities.
      • Are we having an argument about what the best feature of ESX is?

        In my book it's one of two things:

        1) Virtual networks (including the much improved vlan support in 3.0)

        2) Memory page sharing. People argue that solutions like VMware have X% performance penalty over something like Xen, yet when you are building up a cluster for any type of redundancy -- are you going to double the amount of RAM in your hosts just so you can take on extra VMs in a failure?
  • by Flying pig (925874) on Friday July 07, 2006 @04:24AM (#15673833)
    Yesterday's mainframe, today's rackmount server, tomorrow's desktop. As computers get faster, software functions at ever higher levels of abstraction. The holy grail is when you have the array of blade servers which you can grow or shrink on the fly, the sea of running operating systems, and the application that spreads itself across the lowest loaded operating systems as needed. Fault tolerance, load balancing, all out of the box.

    With the growing evidence of the human brain's ability to rewire itself and route around failures on the fly, and the effective virtualisation of perception (why do I appear to see a three dimensional picture of the world when I have only 2 curved arrays of photosensors?) we are probably just following a well trodden evolutionary path.

    • by Monoman (8745)
      I was with you until the "array of blade servers" part. Blade servers are acceptable IF you are short on space. They are less than ideal and more expensive than traditional servers for most other situations.

      I have been lurking in some VM forums and the consensus seems to be to avoid blades whenever possible.
      • It's not as bad as you think with the blades.

        By the time you spec individual machines with all the same redundancies you get when you just plug a blade into a chassis, it about balances out if you are doing more than 5-6 machines or so, at least from what I saw when I spec'd our new equipment.

        You also have to recognize how easy they are to install and manage. All your network and storage switches are just modules in the chassis; no cables! Particularly with Fibre Channel this is a godsend and a huge time, mon
        • We have not experienced the cost savings you mention even after filling 2 chassis. Blades are just too darn expensive.

          Yes they are nice when you consider how easy it is to just slide a blade in and go but it doesn't end there. Those modules have limits and as long as you can live within those limits then you will be fine.

          * Some blade implementations are limited to 2 NICs. This is less than ideal for VMware.
          * The same goes for FibreChannel connections. Less than ideal for some applications.
          * Some chassis th
    • The brain as a virtual machine argument is somewhat off base. It has processing modules that are highly dedicated to particular tasks and, when destroyed, those capabilities are wiped out. Partial recovery can occur but it takes years of retraining and is always a sloppy imitation of the original function.

      Evolution has focused very tightly on specialization within a given organism.

      Although you could make an argument that evolution virtualizes over time. Body parts and brain areas originally evolved f
  • Virtual technology is great, you know. I'm using a virtual PC running on a virtual PC which is simulated on the first virtual PC. This is really a nice solution:
    1) Upgrade: simply change a few values in the config and presto! 50THz processor!
    2) No power consumption what-so-ever! I even get a net gain as I run a virtual powerplant.
    3) No clumsy hardware on my desk. Just type at the virtual keyboard in mid-air! The virtual monitor can project from anywhere. Heck, they even follow you to the bathroom.
    4) No virus, malware or spyware threat! All thanks to the virtual virus scanner.
    5) Store up to infinite TB of data on the UberDVD drive.
    6) Comes with free pron, MP3, warez and Movie server. Complete with anti-MPAA and anti-RIAA card.


    Soon to be released: The Virtual Car(tm). Just hold up your hands like you're holding a steering wheel and make motor sounds to get anywhere in the world in just minutes!

    Virtual technology. It's everything you ever dreamed of, and more!
  • The business plan? (Score:3, Insightful)

    by rangeva (471089) on Friday July 07, 2006 @04:31AM (#15673851) Homepage Journal
    I remember the MS vision of making light operating systems which are basically terminal computers, and virtualizing the OS on remote powerful servers. This way the user will pay monthly/yearly fees to use his computer. Upsides: the OS is automatically upgraded with security patches and new features; data is backed up and can be accessed from any computer. Downsides: well, basically the monthly payment and the fact that MS got your base ;)
    I think many companies are looking for a way to monetize software by monthly or yearly fees - this can be their way...
  • No Mention of UML (Score:5, Informative)

    by Zane Hopkins (894230) on Friday July 07, 2006 @04:38AM (#15673871) Homepage
    They completely forgot to mention User Mode Linux, which is a well-established and stable Linux-only offering; many of the VPS (virtual private server) hosts you see advertised are running on UML.

    It seems that as Xen makes progress, UML is getting ignored.
    • Re:No Mention of UML (Score:5, Informative)

      by arivanov (12034) on Friday July 07, 2006 @05:22AM (#15673958) Homepage
      I host my website and mailservers at Memset [memset.com] which was one of the first to offer large scale UML hosting. They have now switched almost completely to Xen. I have seen the same happening elsewhere as well. UML is being forgotten despite being a better overall idea which is quite sad.
      • Why do you think UML is better? Does it make much of the difference to a VPS for hosting a website?
      • How would you suggest that UML is a better idea than Xen? In an academic sense, maybe it's more interesting, but Xen provides strict resource allocation and control for the supervised OS's, meaning that you get much better control over resources in each session. After all, one of the good reasons to segment applications or customers into their own virtual environment anyway is to prevent problems with that customer or application from affecting others. This type of protection is nearly impossible in UML.

        A lo
  • by ptitvert (183440) on Friday July 07, 2006 @04:39AM (#15673877)
    What kind of article is that?

    They talk about VMWare, Intel/AMD, the future Solaris on E10000, other things... but where is IBM?
    They have been doing virtualization for at least 3 years with their Regatta technology (P670, P690 (Power 4 technology), P530, P550, P560, P570, P575, P590, P595 (Power 5 technology)) and their OS AIX 5L.

    They are able to give a few percent of a CPU to a virtual server; with their Virtual I/O Server they are also able to virtualize network and disks. They can do workload management between virtual servers, and add/remove disks/cpu/memory in real time.

    etc...

    So for a complete overview of virtualization in the industry, IBM is now a big player, and they are now surpassing Sun & HP in the "closed" Unix world.

    So for me this overview is not complete and should not have passed the "draft" stage until someone had looked at the actual running alternatives.

    L.G.

    • by joe90 (48497) on Friday July 07, 2006 @04:53AM (#15673908) Homepage
      They talk about VMWare, Intel/AMD, the future Solaris on E10000, other things... but where is IBM?


      Since IBM practically invented virtualisation in the '60s for their mainframes (or possibly earlier; I'm not quite that old), I was quite surprised to see it missing from the InfoWorld articles too.

      IIRC, VMWare modelled their solution on IBM's implementation. They may have also licensed some of the technology to do it.
      • by Anonymous Coward on Friday July 07, 2006 @07:04AM (#15674121)
        IBM is so far advanced it's not even funny.

        Intel and Xen even based their virtualization stuff on old papers from IBM documentation and whitepapers.

        You want to know how hardcore IBM is?
        THEY INVENTED VIRTUAL MEMORY. And no, I am not talking about a swap file on your hard drive, you Windows weenie. I am talking about the ability every PC has to abstract memory. It's IBM's gift to the PC that made modern computing possible.

        You aren't convinced of IBM's monstrous power?
        They have it set up so that when you buy an OpenPOWER machine for running Linux you can get an optional firmware hypervisor to manage multiple operating systems. And it's pretty cheap too. For the same price as a low-end Sun Opteron box you can get a low-end IBM POWER5 box.

        But it's not just that... Get this:
        IF you buy a Xeon CPU on an add-on card you can set up the machine to RUN WINDOWS.

        That's right. Run Windows with a fucking x86 CPU on a PCI CARD. Sharing the same memory and hard drives as Linux running on POWER5. On the same machine. At the same time. With NO slowdown.

        Still not convinced?
        How about this, for a show of IBM's utter superiority in this field:
        We are running a 2000 era IBM Mainframe with a late 1970's operating system on a 1990's operating system with 1980's era tape drives for legacy reasons.

        IT'S A THIRTY-ONE BIT (no, NOT 32 bits. 31 bits.) OPERATING SYSTEM ON A #$%#$% 64 BIT MACHINE. It's not even like going from x86 to x86-64. They are entirely different computer architectures. AND it runs at near bare-hardware speeds. It's incredible. AND we can run Linux next to it. At the same time. And not just one Linux install, but very literally hundreds of them if we felt like it.

        It's completely nuts. They've got shit that makes VMware look like DOSBox. Microsoft's 'Virtual Server' isn't even on the radar; it's completely laughable in comparison.

        That said, it has the worst possible user interface imaginable. Think about the worst thing you've ever seen. Some DOS 2.x nightmare. Now add an OS/2 GUI and make it WORSE. Now imagine it worse than that. Now you're getting close. And we pay out the ass for the pleasure of using it. OK, now make it slightly worse. That's about right.
        • IF you buy a Xeon cpu on a add-on card you can set up the machine to RUN WINDOWS. That's right. Run windows with a fucking x86 cpu on a PCI CARD.. Sharing the same memory and harddrives as Linux running on POWER5. On the same machine. At the same time. With NO slowdown.
          Wow. IBM re-invented the bridgeboard? I remember the Amiga 2000 having those in the 80's. Not really "virtualization", though.
        • by Anonymous Coward
          You've got to lay off the crack there, buddy... yeah, IBM is the pioneer in virtualization, but they are lagging behind the newcomers.

          For example, IBM cannot currently migrate a running LPAR. In the next iteration of their technology they say they will be able to do that, but not now.

          For the same price as a low end Sun Opteron box you can get a low end IBM POWER5 box.

          The lowest priced POWER5 is the p505, which lists for $3,399. The lowest end Sun Opteron is priced at $745. At that baseline price of $3,399 you

  • by asifyoucare (302582) on Friday July 07, 2006 @04:41AM (#15673884)
    I use VMWare for testing and it is great, but virtualisation tends to be used in production to compensate for software that doesn't cooperate well with other software.

    Do I misunderstand, or is there a real advantage to running product X in one VM and product Y in another (or even a second instance of product X)? What is the advantage of that scenario over simply running X and Y (or two Xs) on the same box? If the answer is that the software doesn't properly handle binding to particular IPs, or that it requires exclusive access to a single file, then the software is crap.

    To me it seems like Citrix - a good product to support bad (network inefficient) ones.

    • Dependencies. Package A is tested and certified to run with Foo 1.5 and Bar 2.0. Package B is tested and certified to run with Foo 2.0 and Bar 2.1.
    • Do I misunderstand, or is there a real advantage to running product X in one VM and product Y in another (or even a second instance of product X). What is the advantage of that scenario over simply running X and Y (or two X) on the same box.

      Well for one, it makes separating X and Y onto different boxes a year down the road pretty well effortless (whether it's for load balancing, hardware upgrades, or whatever).

      For another it makes upgrading X possible without having to worry about an impact on Y. Doubly hand
    • Do I misunderstand, or is there a real advantage to running product X in one VM and product Y in another (or even a second instance of product X)

      Well, yes, it is a nice bandaid for some of the problems of bad software.

      However, based on a previous attempt at a physical server consolidation, I can see a big advantage in that you can upgrade/reboot individual applications that live on the same physical machine as other applications without disrupting everything.

      Also, it is easier to deploy an OS with spec

    • by KiloByte (825081) on Friday July 07, 2006 @08:20AM (#15674364)
      Do I misunderstand, or is there a real advantage to running product X in one VM and product Y in another (or even a second instance of product X). What is the advantage of that scenario over simply running X and Y (or two X) on the same box. If the answer is that the software doesn't properly handle binding to particular IPs or that it requires exclusive access to a single file, then the software is crap.

      Security. Modularization. Having one part fall down without taking down everything else.

      For example, in my setup there are two servers:

      * the old one: mysql, postgres, apache
      * the new one: Xen
          * pound (reverse http proxy)
          * postgres
          * mysql, apache
          * subversion+backups
              + viewvc running as a different user with read-only access to the repositories
          * a VM hosted for someone else

      When I break the dev apache, the production one stays up. When apache goes down, subversion stays up. When any of my VMs go down, the one hosted for someone else stays; and the other way around.

      And when someone pwns anything other than the dom0 (which runs just Xen and ntpd), they took over just that single part.

      Sure, I could run everything without virtualisation. But I don't think I have to say why I prefer the way I've chosen.

      And you can't claim that Citrix is a good product. Slapping a GUI on a server and "network efficiency" don't belong in the same sentence.
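
      For readers who haven't set this up: each of those guests is just a small domU config file on the dom0. A rough sketch of what one might look like (all names, paths and sizes here are invented for illustration):

      ```
      # /etc/xen/svn.cfg -- hypothetical config for the subversion guest
      kernel = "/boot/vmlinuz-2.6-xenU"
      memory = 128
      name   = "svn"
      vif    = [ 'mac=00:16:3e:00:00:04, bridge=xenbr0' ]
      disk   = [ 'phy:vg0/svn-disk,sda1,w', 'phy:vg0/svn-swap,sda2,w' ]
      root   = "/dev/sda1 ro"
      ```

      Starting it from the dom0 is then just "xm create svn.cfg".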
    • The thing is, you are on a gentle path to thinking differently about your network topology: instead of servers doing x, y and z, you have services x, y and z.
      That will sneak into your machine naming as well: instead of big-frikking-server-doing-it-all, you've got fs-01, fs-02, fs-03, distributed by, say, DFS.
      Other examples are database clustering, DNS, firewalls and webservers.

      Now this is not a revolutionary thing, bigger networks always worked this way, but with virtualization, the bigger network can better scale to th
    • Sometimes it's just easier to pinpoint problems when you have an isolated environment running one service, or only related services.

      For example, I had lots of headaches tuning a mail server running Postfix+Cyrus+LDAP, plus Apache+PHP+MySQL+IMP webmail. We started with 3000+ users, and everything was OK until we reached 8000... then all sorts of performance issues appeared, and we could only understand what was going wrong when we isolated the services on separate machines.

      A virtual machine is a nice way to do t
    • Let me guess; you are one of those people who have everything from DNS and DHCP to a database, web, mail, and file server all running on one big SMP box with RAID and all kinds of other redundant goodness, right? I've been there.

      1) What do you do when you have to take it down or have a hardware problem? All that stuff stops at once. With VMs (depending on your solution) you can move services to other machines either live (while they are still running) or at least schedule the move during normal downtime.
    • I use VMWare for testing and it is great, but virtualisation tends to be used in production to compensate for software that doesn't cooperate well with other software.

      I remember this kind of argument from Mac devotees in the pre-OS X days, when the Mac didn't have real protected memory and still used cooperative multitasking. People would say that pre-emptive multitasking was just a crutch, that cooperative multitasking was cleaner and potentially more efficient, and that "good" programs would consistently yield the CPU.

    • Let's say you have a farm of VMware servers. You have application A in one VM, and application B in another VM on the same physical server. For whatever reason, the load on application A takes a sharp spike upwards. In your scenario of A and B installed on the same physical hardware, you are pretty limited in your options: call some poor engineer in the middle of the night, pray you have a spare server, get app B installed on the new server, and hope everything works. In the VM world, you just grab capacity from another host in the farm and move the VM there.
  • Haven't gotten to use it firsthand, but if it means fewer reasons for someone to go into a server room (such as recovering after an OS crash), it's here to stay.
  • by jkrise (535370) on Friday July 07, 2006 @04:59AM (#15673912) Journal
    Virtualisation is a disruptive technology... in that it requires a lot of intellectual investment on the part of the sysadmin. The reason Unix and Windows servers have gotten by without adding many features, yet retained market share, is simple... admin lethargy and apathy.

    Microsoft does not seem to like virtualisation.. hell, they didn't like Terminal Services.. so they crippled it in NT4, made extra licensing restrictions with Win2K, and made the WinXP/MetaFrame XP combination a non-starter. In Microsoft's world, users must only license MS's servers, and everything needs a separate server/client.

    Now that the virtualisation market has grown IN SPITE OF the apathy of these s/w vendors... and the tremendous mindshare of Open Source technologies, these old chaps are trying to make money without doing anything themselves.. witness the recent MS licensing options in virtual segments, acquisitions of IP, Intel's hypervisor efforts, AMD's efforts, etc.

    If virtualisation succeeds, it could spell the end for DRM and Treacherous Computing initiatives... since these need collective collusion by all parties involved. Looks like the firms mentioned will try their damnedest to sidetrack virtualisation.. just like terminal services and thin clients never reached their full potential. Open Source firms and nerdy sysadmins might well have the last laugh...
    • If virtualisation succeeds, it could spell the end for DRM and Treacherous Computing initiatives... since these need collective collusion by all parties involved.

      Isn't that the case with python, perl/parrot, java, ksh, tcl, etc? Any kind of virtual machine will have to have its own DRM, if DRM is to work at all.

    • Well, the reality is that MS, at least, is opening up to virtualization to some extent. Their license revision to Windows Server 2003 Enterprise grants you explicit permission to run up to 5 (I think) copies of the software on the same machine -- and I believe this is independent of the virtualization layer. I don't use Windows for much, but this is a step in the right direction. I don't see many other commercial OS vendors stepping up to do the same.
  • by caluml (551744) <slashdot&spamgoeshere,calum,org> on Friday July 07, 2006 @05:04AM (#15673917) Homepage
    Don't forget Linux-vserver [linux-vserver.org] - it's very good, and very fast - as root in a vserver is root on the actual host - processes just can't "see" or kill any outside their own context. Props to Bertl.
  • a quote (Score:5, Interesting)

    by haupz (970545) <mhaupt@gm a i l . com> on Friday July 07, 2006 @05:07AM (#15673924)
    "Virtual machines have finally arrived. Dismissed for a number of years as merely academic curiosities, they are now seen as cost-effective techniques for organizing computer systems resources to provide extraordinary system flexibility and support for certain unique applications."

    Now guess who said that, and when. :-)

    Robert P. Goldberg said that, in 1974.

    The fun thing about this is, it's still a very accurate statement. Unlike in 1974, though, it doesn't apply solely to mainframes but, as someone wrote in an earlier post, to everyday computers: desktop systems. I think that's great, and the above quote is truer than ever. Working on Mac OS X and having a Parallels session up and running where some Java application (for example) is tested in a Windows or whatever environment... lovely.

    Yes, I'm a virtualisation enthusiast, if you haven't guessed so already. ;-)

  • by tinkertim (918832) * on Friday July 07, 2006 @05:12AM (#15673937) Homepage
    I read the article about Xen, because Xen is what interests me; I'll go back and read the others later. It looks like more of a slashvertisement than anything useful, especially the Xen writeup.

    From TFA:

    >> Use the "dd" command to copy the boot drive from another server to a local file, point Xen at that file, and boot
    >> the VM (virtual machine). Who needs consultants?

    Apparently, the author does, and they haven't been reading the Xen devel or users mailing lists.

    File-backed virtual block devices can be very problematic for high-volume services and applications such as MySQL, Apache, and others. Most of us really using Xen on deployments that 'matter' have switched to SANs, using either LVM or real partitions.

    Think about how long it takes to create a 3 GB loop device, then copy its contents over a 10 or 100 Mbit switch (as you'd find on a hobbyist's desktop).

    Migration only takes a few seconds once that's done .. but I am asking the author: please don't let something as amazing as Xen disappoint people because you're publishing inaccurate information you really haven't researched.

    If you want to write about hot topics to draw readers and slashvertise, great - go for it. Just be sure it's accurate.

    They also barely touched on what is so magical about running 32-bit guest kernels inside a 64-bit host, the new Xen credit scheduler, and other really cool things going on with Xen.

    If you're going to present yourself as an authority, please present the facts, and all of them. Please don't set up something like Xen (which many people are working very, very hard on - HP, IBM, Novell, Red Hat to name a few) to just disappoint new users. Nobody will say "Wow, that article must have been wrong"; they'll say "Wow, Xen is too hard to get working like that article said". Be careful what you capitalize on to sell a few ad clicks ;)
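    To make the file-vs-block tradeoff concrete, here's a rough sketch; the paths, image size, and volume group name are invented for illustration, not taken from the article:

```shell
# Sketch: two ways to back a Xen guest disk (illustrative names only).

# File-backed: a sparse 3 GB image is created instantly, but guest I/O
# then funnels through a loop device and the host's file system.
truncate -s 3G /tmp/guest-disk.img

# The corresponding (hypothetical) domU config line would be:
#   disk = ['file:/tmp/guest-disk.img,xvda1,w']

# Block-backed: carve out an LVM logical volume instead. This needs
# root and an existing volume group, so it's shown as comments only:
#   lvcreate -L 3G -n guest-disk vg0
#   disk = ['phy:/dev/vg0/guest-disk,xvda1,w']
```

    The sparse image costs nothing to create; the pain described above comes when you have to copy its full contents across a slow switch, which is what a SAN with LVM or real partitions avoids.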

    • by Znork (31774) on Friday July 07, 2006 @05:37AM (#15673989)
      "Migration only takes a few seconds once that's done .."

      An interesting way to accomplish file-based fast migration is to NFS-mount an area on the target server, then use md (in the virtual machine) to place a mirror there. Then you have no need for the lengthy copy; you already have a synced-up online copy there.

      Not saying it's good, just saying it works (and a useful alternative if you don't have better shared storage) :).
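      Spelled out as a dry run, the trick might look something like this. The hostnames, devices, and paths are all hypothetical, and echo stands in for real execution since the commands need root, an NFS export, and a guest block device:

```shell
# Dry-run sketch of the md-over-NFS migration trick. All names are
# hypothetical; swap 'echo' for real execution once the preconditions
# (root, an NFS export, a guest block device) actually hold.
run() { echo "+ $*"; }

# 1. Mount a scratch area exported by the migration target:
run mount -t nfs target:/export/mirror /mnt/mirror

# 2. Back a loop device with a file on that NFS mount:
run truncate -s 3G /mnt/mirror/disk.img
run losetup /dev/loop0 /mnt/mirror/disk.img

# 3. Inside the guest, mirror the local disk onto the remote file.
#    Once md finishes syncing, the target side already holds an
#    online copy, so the final cutover is nearly instant:
run mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/xvda1 /dev/loop0
```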
    • Sorry for the double post - I forgot to comment on the author mentioning migrating NetBSD domUs using the loop-n-go method.

      You can't mount bsd slices as a loop device. You need a utility like lomount. Here's a copy [netkinetics.net] if you read the article and want to play with Xen/NetBSD. Compiles easily with gcc.

      Just another example of how you can frustrate people with misinformation and give your article's topic a bad rep.. when it was really a lack of research on your part.

      Cheers :)
  • One of the dominant rumors [macosxrumors.com] for OS X v10.5 (Leopard) is that it will come with virtualization to run Windows programs. If it does that well (and there are many big IFs here), Apple may have a breakthrough. Then again, these same rumours suggest MS helped (with Intel), so there must be a poison pill.
    • If that were true, it would more likely be the Mac's downfall. Why would developers (that is, developers who aren't already developing ON a Mac) port or support their applications for MacOS if Macs can run Windows software?

      The application requirements will simply say "Requires MacOS 10.5 with Virtualization to Run"

      Development won't stop; obviously there are already people programming for Macs. But what about potential Mac software? Say you like Program X and would like it to integrate seamlessly with your…
      • If that were true, it would more likely be the Mac's downfall. Why would developers (That is, developers who aren't already developing ON a Mac) port or support their applications for MacOS if Macs can run Windows software.

        This depends upon a number of factors. Is the VM environment installed by default? Does it have the same look and feel as OS X or Windows? Is it fast? Does it run graphics at nearly full speed or greatly slowed? What is the market share of the Mac after a couple of years of this technology?

  • Consolidate Costs . (Score:5, Informative)

    by straybullets (646076) on Friday July 07, 2006 @05:32AM (#15673981)
    If you look at average CPU consumption across the servers of any big datacenter, chances are you will be very surprised by the results.

    I did this for a company with over 2000 Unix servers, and the averages were telling: only 20% of the hosts used more than 30% of their CPU...

    It's a known fact that for most projects the hardware is oversized relative to what's really needed, and this is one of the main advantages of virtualization: it is seen as a cost-reduction process.
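    The survey itself is simple arithmetic; here's a sketch, with invented hosts and numbers standing in for real monitoring data:

```shell
# Back-of-the-envelope version of the survey described above; the
# per-host average CPU figures are invented for illustration.
cat > /tmp/cpu-averages.txt <<'EOF'
web-01 12.0
web-02 8.5
db-01 71.0
db-02 55.0
mail-01 4.2
dns-01 1.1
app-01 22.0
app-02 17.5
batch-01 64.0
fs-01 9.0
EOF

# What fraction of the fleet averages above 30% CPU?
awk '$2 > 30 { busy++ }
     END { printf "%d%% of hosts average above 30%% CPU\n", 100 * busy / NR }' \
    /tmp/cpu-averages.txt
# prints: 30% of hosts average above 30% CPU
```

    Run the same thing against real sar or vmstat history and the result is usually just as lopsided, which is the whole consolidation pitch.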

    • by baadger (764884)
      In theory yes, but just like the shared hosting deals offering 20GB of storage, a terabyte of bandwidth, and a plethora of features all for $7 a month (yes, Dreamhost), you would have to convince the customer they're better off spending their pittance on a 'smaller' package (a virtual server instead of a dedicated box). How do you convince somebody that going with a virtual server is worthwhile when more generous shared hosting and quite low dedicated-server prices are pushing from both sides?

      At the…
  • by IDontLinkMondays (923350) on Friday July 07, 2006 @05:36AM (#15673988)
    Well, first of all, I'd like to point out that I've run on virtualized systems for my entire career. Not specifically in the sense in which we run now, but in the sense that back in the old days, we ran IBM mainframe operating systems on IBM systems that actually were virtual machines. They included features such as segmentation and all the good stuff that is just coming around again now.

    Thanks to other technologies, I've run similar systems for ages. It is entirely common for me to develop a file system driver while keeping Mac OS X, Windows, Linux, and DOS running on the same system; I've done this for a long time as well. The difference is that the operating systems would be virtualized by running system emulators instead of using CPU technologies for system segmentation. I did this in the old days under DOS using Quarterdeck DESQview and a CPU emulator.

    The first thing people really need to understand is that virtualization as we use it today is little more than a method for launching operating systems as "processes" under another operating system. This is not magic; for the most part it's something any operating-system developer should be capable of. The issue is more one of grinding: it takes the right kind of people to sit and grind through each of the problems that come up with running this way. It's the same idea as writing a Windows-compatible API stack. You start off with simple programs you have the source for and work your way up through more complex applications that require direct hardware access. It's a matter of intercepting the calls and handling them as if you were the real thing.

    So here's the deal. As a system-level developer, I am more interested in what these guys are actually doing to make it happen. Let's face it: although Intel and AMD are adding virtualization technologies to their processors, the actual task of switching between CPU contexts is hardly an issue. The real issue is how they handle hardware emulation.

    Me, I focus on high-performance workstation tasks. Servers are cool and great, but in reality it's desktop performance that is truly important to me. What I want to see is a vendor grinding a little more on this issue.

    VMWare has classically written device drivers to handle hardware interfacing with better performance than others. So instead of simply emulating the VESA BIOS extensions and providing access to an SDL style frame buffer, instead they have written drivers to allow graphics acceleration. So what I really want to see is that they take it a step further....

    I want more than just accelerated BitBlt functions. Of course in the 2D desktop world, high performance frame buffer moves are not optional but required since the bus bandwidth required to copy large frame buffers all around is outrageous. But in the days where OS X uses OpenGL and Windows Vista uses DirectX, I want drivers that interpret 3D contexts as well.

    So here's what I'm thinking: write a 3D driver for Windows, Mac OS X, and X11. The driver should of course offer frame-buffer handling, but this shouldn't be the focus, since it isn't used for much more than boot and text-mode processing. When an OpenGL context is created, instead of creating the context natively in the virtual machine, the context should be created on the host operating system and managed there. The only interpretation should occur when the graphics driver informs the guest operating system of the top-level context.

    As for DirectX, well, I've seen at least one virtual driver in the past that implemented DirectX on OpenGL. For professional graphics, DirectX is typically seen as a toy, although in reality it's in many ways more powerful than OpenGL (don't argue; it has to do with what's more important to hardware vendors, so their drivers are optimized for game-based testing). So, since most professional graphics packages are OpenGL-based, the virtualization software vendor should simply implement a translation layer over OpenGL.
  • by Colin Smith (2679) on Friday July 07, 2006 @05:40AM (#15673999)
    Mainframes have been using virtualisation for decades. It's not going away, it's simply too useful.

     
  • System architecture is changing in a profound way that will somewhat limit the commoditization on which virtualization depends. It's not just a matter anymore of CPUs doing calculation and ordering up random disk accesses. RAM speeds, memory bus speeds, interprocessor pipeline speeds -- that stuff all matters a lot now. This is most evident in data warehousing/analytics, where data warehouse appliances (Netezza, DATallegro) and even memory-centric technologies (SAP, Applix) are becoming more important.

  • - CPUs that can run several microcode architectures, either in parallel or by timesharing between them. Just imagine a CPU with Intel, SPARC, and POWER instruction sets. Yes, yes, there's a lot more to it than just swapping between different instruction sets, but it can be done, and since there has long been a trend toward making peripherals that can be used across architectures, it shouldn't be too difficult.
  • by doublem (118724) on Friday July 07, 2006 @07:51AM (#15674269) Homepage Journal
    One of my company's clients used VMWare to virtualize the server software we provide them. A few months back they had a massive power outage that caused them to lose large portions of their primary data center.

    They weren't running one of our replicated setups, so we were expecting to spend the next week rebuilding the server and configuring our software.

    Instead, they grabbed the most recent backup of the VMWare image and booted it up on a completely different server over 100 miles away.

    End result?

    About a day's lost data and an hour of down time. (The backup was already at the remote site)

    I've been pushing for VMWare usage in our test environment to reduce our hardware needs and time spent restoring Ghost images, but a few managers are still dubious, and are afraid we might "miss some hardware issues" if we go that route.

  • by AmunRa (166367) on Friday July 07, 2006 @07:51AM (#15674271) Homepage
    I see no mention of virtualisation techniques that virtualise a different architecture - such as Transitive's [transitive.com] QuickTransit software, of Rosetta fame. They announced a version of their software the other day that virtualises a SPARC Solaris machine on x86-64 Linux, which sounds more interesting than simply pretending to be yourself :)
  • by m874t232 (973431) on Friday July 07, 2006 @08:21AM (#15674370)
    Various people in this thread have claimed that virtualization is a workaround for not being able to write a decent operating system. I think that's wrong. Different operating systems are legitimately different in the way in which they present high level interfaces and abstractions of low-level hardware features.

    What virtualization really is, is a long-overdue standardization of a set of APIs that exist in many operating systems but remain hidden. By finally exposing them, we gain functionality that didn't exist previously.
  • Are we talking exclusively hardware virtualization? Because leaps and bounds are being made in OS virtualization as well. Solaris/Nexenta zones spring immediately to mind, as does Virtuozzo [swsoft.com]
    • I don't know about Solaris virtualization (i.e., containers). A couple of Sun reps visited my company and talked about the technology. All partitions use the same OS image. The base OS is a full Solaris installation; if it goes down, it takes everything down with it (it's not a stripped base OS as in VMware or your Xen dom0). It's almost like running chroot environments with symlinks to the base software. Even libraries apparently need to be in sync. This seems to prohibit one of the biggest advantages of virtualization.
      • Hence the term 'OS Virtualization' as opposed to hardware.
        • Hence the term 'OS Virtualization' as opposed to hardware.

          Which has nothing at all to do with the Solaris limitation. Even commodity virtualization technology such as VMWare Workstation allows you to have different versions of the OS (and mix different OSes) in the same host. The Solaris containers cannot do this. Even the nascent Xen technology has the ability to run different OSes under one host.
  • The small school I work for is investigating server virtualization because we'll gain exactly what the vendors advertise: better hardware utilization and lower TCO. We can take care of all our needs with a single system and a spare for backups that together cost less than multiple dedicated systems performing the various things we need. Server virtualization is not the best solution for everybody, everywhere by any means, but it certainly fits my organization's needs like a glove.
  • by Doug Neal (195160) on Friday July 07, 2006 @09:04AM (#15674564)
    Virtualisation is an inevitable step in the evolution of computers. It follows the trend that we've already seen - when computers got powerful enough to usefully run more than one process/program at a time, we introduced multitasking operating systems that "virtualised" memory, IO, and peripherals to multiple processes. Now computers are getting powerful enough to usefully run more than one OS at a time, we are seeing software which arbitrates this in much the same way, and extra hardware features to support it better.

    Don't forget that this is just the first or second generation of this technology; in future we are likely to see multiple operating systems on one machine become much more commonplace, and as operating systems start to be built with this in mind, increased inter-OS communication in the same way that we have inter-process communication now.

    Also worth noting is that we're moving away from the model of ramping up the clock speed on CPUs and toward increasing the number of processing cores (dual-core CPUs and SMP) and smart high-speed switched buses (e.g. PCI Express, 1/10/100 Gbit switched Ethernet) - I believe that the computers of 10 to 20 years from now will be highly parallel, modular, hot-pluggable sets of processors and buses that will be able to intelligently allocate and partition resources between OSes and apps, and we will see a break away from the strict two-tier OS/program model toward a much more flexible model with multiple levels of abstraction.
  • Hard to believe that they wrote an article that even mentioned virtualization on mainframes and didn't think to mention IBM's pSeries solution, which runs both AIX & Linux. I ended up going that route over blade servers because it was simply cheaper to implement without sacrificing hardware robustness and redundancy. Not to mention the flexibility of a SAN-backed server....
    • Not to mention that the pSeries blades are a nightmare to administer. The SoL (Serial over LAN) is always SOL.

      The pSeries virtualization on the 590s and 570s is pretty amazing: you can dynamically add/remove processors and memory, and it has (most importantly) some very good monitoring tools. But it's expensive for the power you get.

      In smaller shops needing Linux, Xen could be a VMWare killer. I'm impressed with the level of functionality in the current releases. Building out a new machine takes a few minutes.
  • cheaper, too (Score:3, Informative)

    by sethg (15187) on Friday July 07, 2006 @10:30AM (#15675185) Homepage
    I recently switched my mail/Web server from a G4 running in my basement to a virtual machine at OpenHosting [openhosting.com]. Previously, I was paying $70/month for DSL with a static IP address; now I pay $20/month for OpenHosting and $15/month for DSL without static IP. And I have someplace off-site to back things up to, and I don't have to worry about the UPS battery running out or the disk drive going kablooey.

    The only downside is that my basement server runs Debian and OpenHosting runs Fedora. But nobody's perfect. :-)
  • User-mode Linux [sf.net]? I've never used Xen in my life – never had any reason for it, and honestly it looks like too much effort for what I'd need it for – but I use User-mode Linux literally every day. Not only is it hosting my Web site (which is actually how I got addicted to it), but I've also been using it for software development right on my own machine – since the only machine I have that's suitable for intensive dev work is my AMD64, I've set the thing up to run the '64 version…
  • by MobyDisk (75490) on Friday July 07, 2006 @12:33PM (#15676361) Homepage
    Most of the time when I see people using virtualization, it's to get around software conflicts, or to install things side by side that weren't designed to run in the same environment. In such cases, virtualization is overkill: they didn't mean to virtualize the entire processor and memory; they only needed to virtualize the system configuration and limited parts of the file-system hierarchy.

    For these purposes, chroot is a better fit.

    I've often wanted an equivalent for Windows, where I could run an application with a virtual registry so that it didn't muck things up, or so that it thought it had full access to the C:\WINDOWS folder. Instead, I have to use virtualization software, which requires 2 gigs of space, causes a 2:1 slowdown, and cuts my available memory in half.

    Better yet would be decent installers and applications that follow the rules.
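    In the meantime, a minimal jail along the lines suggested above can be sketched like this; the jail path and choice of binary are arbitrary, and a real jail would also need the app's config files and device nodes:

```shell
# Sketch: a minimal chroot jail for a single binary (paths arbitrary).
JAIL=/tmp/jail
mkdir -p "$JAIL/bin"
cp /bin/sh "$JAIL/bin/"

# Copy in the shared libraries the binary links against:
for lib in $(ldd /bin/sh | grep -o '/[^ ]*'); do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$lib"
done

# Actually entering the jail needs root, so only try when we have it:
if [ "$(id -u)" -eq 0 ]; then
    chroot "$JAIL" /bin/sh -c 'echo hello from the jail'
fi
```

    Same isolation idea as a VM for the conflict-avoidance case, minus the gigs of disk, the speed hit, and the duplicated kernel.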

"A mind is a terrible thing to have leaking out your ears." -- The League of Sadistic Telepaths

Working...