
eno2001's Journal: VIRTUALIZATION: VirtualIron is WAYYYY Cool

Alert: If you're not into computers, don't manage servers at work or at home, or don't know what TFTP, DHCP, or the Xen hypervisor mean, then skip this entry.

Disclaimer: I'm not affiliated with VirtualIron other than being a very happy customer so far.

I've been into virtualization on x86 hardware since 1997/98, when I got my first copy of VMWare Workstation. I'd tried PC emulators before, but I could tell this was different, as the performance was infinitely better than any emulator I'd ever used. Since that time, I've also looked at other virtualization programs like Virtual PC on the Mac (and later Windows) and QEMU on Linux and Windows hosts, and within the past three years I've pretty much focused on Xen virtualization because it is truly "the way of the future".

The thing that led me to Xen was that renewing the VMWare Workstation license (for a home user) I had a few years back was becoming prohibitively expensive, but I still wanted something to virtualize Windows XP with, since I no longer see the point in running it on bare metal (I'm not a gamer, and audio and video production tools for Linux are much better than they were three years ago). So while looking around on the net for other free virtualization systems (I was using QEMU with its accelerator at that point, but wanted something better), I found the Xen project. I decided to install it on my Fedora Core 2 box and see what it would do. It didn't start off that well: I got the Xen hypervisor (microkernel) to boot, but the console would then stop giving me any output or any way to interact. I assumed it wasn't working until I pinged the IP of my Fedora Core 2 box. It responded! I could ssh in! Weird. It was running, but with no output on the screen.

It took me a little extra work, but I eventually figured my way around Xen and dedicated this box to it. I then created paravirtualized Linux images and saw that the performance of my virtual machines was nearly 98% of running on the bare metal! It was really a sight to behold, especially on such old hardware. The only limitation was RAM: with more RAM I could run many VMs on this old box. I got around to that later, and currently I have an old P II-era Celeron 400 with 384 MB of RAM running three VMs:

Domain0 (the management domain): Doing DHCP and NTP for my network. You typically shouldn't have this domain doing much other than managing your VMs.
Domain1 (the "External" home server): Offering up three of my lame web sites, an SMTP smarhost, VPN services, external DNS for my domains
Domain2 (the "Internal" home server): Offering up MySQL for my dbmail installation, dbmail itself, postfix SMTP for internal use, internal DNS

The box holds up great for being so old, and the isolation between the VMs really adds an extra layer of security.

However, I wanted to virtualize Windows, and this was not possible until the first CPUs from Intel and AMD were released with hardware virtualization support. So I bought a cheap Athlon 64 HP box at Best Buy and packed it with 4 gigs of RAM. I upgraded to Xen 3.0 and set up my VMs on this box. There really isn't much of a limit other than RAM as far as I can tell. Right now, this box is running a Gentoo build VM, an Asterisk PBX VM, a CentOS/Zimbra mail VM, and, until recently, a Windows XP VM (which I've actually moved back to QEMU for different reasons).
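
If you're wondering whether a CPU you already own can do this, the quickest check on Linux is the flags line in /proc/cpuinfo: "vmx" means Intel VT and "svm" means AMD-V. A throwaway check along those lines (plain Python, nothing Xen-specific; note the BIOS can still have the feature disabled even when the flag is present):

    # Look for the Intel VT (vmx) or AMD-V (svm) flag in /proc/cpuinfo.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    if "vmx" in flags or "svm" in flags:
        print("hardware virtualization present: Xen HVM (full virtualization) should work")
    else:
        print("no vmx/svm flag: paravirtualized guests only on this box")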

So basically, I know the "love" that Xen offers over things like QEMU, VMWare, or Virtual PC/Virtual Server. In fact, the Xen project's technology was so impressive that MS itself is using it for their upcoming hypervisor in Longhorn. That is, assuming it doesn't get dropped at the last minute...

At work, I've been planning a huge mail migration away from a system I wasn't happy with to the Zimbra system (which looks and works great). However, I really wanted nearly unstoppable uptime, even in the event of hardware failure. I knew that Xen's live migration capability would offer me that (you can move a VM, while it's running, from one physical host to another without your end users ever noticing). I ran into several issues over the past month, and the VirtualIron product is what finally came in to solve them.
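
With plain Xen and paravirtualized guests, a live migration really is close to a one-liner, provided both hosts can see the guest's storage and xend on the destination accepts relocation. A rough sketch of the moving parts, wrapped in Python only to make the intent explicit; the domain and host names are hypothetical:

    # Sketch: live-migrate a running paravirtualized Xen guest to another host.
    # Assumes shared storage for the guest's disks and xend relocation enabled
    # on the destination; "zimbra" and "node2" are made-up names.
    import subprocess

    def live_migrate(domain, destination):
        # "xm migrate --live" moves the VM while it keeps running; users see
        # at most a brief pause at the final switchover.
        subprocess.check_call(["xm", "migrate", "--live", domain, destination])

    live_migrate("zimbra", "node2")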

When I first set out to virtualize Zimbra, I tried installing it on a RedHat Enterprise Linux (RHEL) paravirtualized machine running on top of Gentoo Linux with a Xen kernel. As soon as I tried to install it, Zimbra complained that I needed NPTL support in the kernel. This was not possible with Xen in paravirtualized mode. The only options I had were to run RHEL on bare metal, which would not afford me the unstoppable uptime, or to run it in a Xen HVM (full virtualization) environment. I chose the second route.
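
As an aside, you can see which threading library a box actually has before wading into an install like this: glibc reports it, and Python exposes it through os.confstr. A generic check, nothing Zimbra- or Xen-specific:

    # Report which pthreads implementation glibc is using.
    # An NPTL system prints something like "NPTL 2.3.4";
    # an old LinuxThreads setup prints "linuxthreads-0.10".
    import os
    print(os.confstr("CS_GNU_LIBPTHREAD_VERSION"))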

So I got a system that I could test with and set up a TEST Zimbra box on CentOS 5, with RHEL 5 as the fully virtualized guest. But then I discovered another set of problems. The first big one was that fully virtualized Xen guests CANNOT be live migrated or paused. The second issue was that, because of the way disk and network I/O is virtualized, you have a bottleneck in RAM utilization on the management "host". If your disk and network I/O is very high, you'll likely wipe out all the RAM in the management domain, and performance will suffer as your disk and network I/O attempt to work via swapping! Ugh. The third point, which isn't really an issue, is that I discovered that Xen's fully virtualized environment is really a specialized QEMU process! My worries about QEMU's performance grew quickly.
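
Since all of an HVM guest's disk and network traffic funnels through that QEMU device-model process in the management domain, it's worth keeping an eye on dom0's memory headroom under load. A trivial sketch of the kind of check I mean (it just parses /proc/meminfo; the 256 MB threshold is an arbitrary number for illustration):

    # Trivial check of the management domain's memory headroom.
    def meminfo_kb():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.split()[0])   # values are reported in kB
        return info

    m = meminfo_kb()
    headroom_kb = m["MemFree"] + m.get("Cached", 0)
    print("dom0 headroom: %d MB" % (headroom_kb // 1024))
    if headroom_kb < 256 * 1024:   # arbitrary 256 MB threshold
        print("warning: dom0 is getting tight; HVM disk/net I/O may start swapping")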

So I did more research and more digging around for other possible approaches. I briefly considered the OpenVZ project (which doesn't really virtualize; it's more akin to chroot). Then I found someone's blog entry on a bunch of virtualization techniques and noted a reference to Virtual Iron. We also almost went with the commercial version of Xen from Xensource, but they got bought by Citrix, who we had some issues with in the past. I'm hoping that the Xensource folks won't get screwed by Citrix in this deal. So we bought VirtualIron, priced at the time at $600 per CPU socket (cores don't matter, only physical sockets).

I was expecting your grandfather's virtualization techniques, but as I would find out later, I was completely mistaken. One of their big selling points is that they don't use paravirtualization at all. This isn't really a good or bad thing; it's just their way of approaching virtualization. Instead, they chose to focus on the special version of QEMU included with Xen and bring it up to speed for their product. (They have also been contributing back to the Xen project, so good on them!) So they made sure it could do live migration! (It still can't pause/suspend/restore as far as I can tell.) They also worked around the disk and net I/O issues by creating custom drivers and management software (VS Tools) to be installed in the guest after you have the OS running. This limits your choice of guests to OSes that they have built their VS Tools software for. They currently support Windows guests up to Windows 2003 Server, and many of the most common "big name" Linux distros.

So we got two big nasty servers for hosting our VMs: HP servers with 16 gigs of RAM each and two dual-core 64-bit Xeon CPUs each. They also have fiber channel interfaces that connect to an HP SAN back-end. My original assumption was that I would install VirtualIron on each of these boxes just as I did with Xen kernel installations or any other typical virtualization technology. I did just that and was lost for a bit. All it seemed to do was install a DHCP server, a TFTP server, and a web server (Jetty, if you're curious). My confusion was partially due to the fact that their web site doesn't give you much info on the architecture. I've written to them about that, since I think this product is "the bee's knees" where virtualization is concerned. The Java-based management interface for VirtualIron contains a "walk through" setup document in a pane on the right-hand side of the interface. THAT is where I finally understood the actual architecture and layout.

To use VirtualIron Enterprise (we didn't go with Single Server, which DOES work like VMWare and the others), you need at least one "management server" and one "managed node". The management server can be one of a few supported Linux distros, or Windows. The fact that it could be Windows really confused me at first, because I couldn't understand how they would get a Xen kernel installed under an already existing Windows installation. Again, I was completely wrong in that line of thinking. Once I understood the architecture, I was both in awe and very eager to see this thing work. So I proceeded...

In my case, I have two managed nodes (those monster servers with 16 gigs each) and one manager (a dual-CPU 32-bit Xeon system with 2 gigs of RAM and dual NICs). The manager is running CentOS 4.5, which is supported by VirtualIron as a host for the Enterprise manager. Once I had that installed and had the management network up (you basically need a separate LAN, dedicated to the manager and each node, that you can consider "out of band"), I set one of my managed nodes to PXE boot off the manager. That's right! You DON'T need to install a damn thing on the managed node! It's diskless! The TFTP server and the DHCP server give this box an IP address and point it to a preconfigured Xen boot image. Their preconfigured boot image is a Xen hypervisor with a very stripped-down Suse Linux Enterprise 10 (SLES10) on it. So stripped down that the managed nodes can run headless. There is ZERO interaction on those boxes other than the power button!

Once the managed node loads its boot image from the network, it shows up in the Java management interface and you're ready to create VMs and assign them RAM, CPU, network, and storage. In our case, the SLES10 image has drivers for our Emulex LightPulse fiber channel HBAs, so LUNs presented by the SAN are fully accessible from within the VirtualIron manager. Once VirtualIron was up, I was off and running installing RHEL 4.5 for my Zimbra installation. It's a beautiful thing! The managed nodes don't have a damn thing on them. In fact, not only do they run headless, but you don't need ANY storage in them at all if you don't want it! All VM configuration resides on the managing server. So that's the guy you want backed up reliably.

I can't say enough good things about VirtualIron. It can bring the power of Xen virtualization to anyone who wants it, even if they've never touched Linux. It really is an amazing thing.



Comments:
  • What happens if the management server goes down? Yes I know you can restore it from a backup, but in the interest of zero downtime, there needs to be a way to fail over the manager I would think. Does VirtualIron have a solution for that?
    • by eno2001 ( 527078 )
      Well... I asked their support people a question that was sort of about that. My question was, "If the manager goes down or is down for a while, what happens to the VMs?" The answer was... "Nothing. They keep running. The only issue you have is that you can't manage the VMs (monitor, shutdown, reboot, boot, etc...). To bring the manager back up (assuming hardware failure), you install your OS, VirtualIron, and then restore your VirtualIron environment. You'll also need to make some changes to five fi

"Here's something to think about: How come you never see a headline like `Psychic Wins Lottery.'" -- Comedian Jay Leno

Working...