
 




OS Virtualization Interview 184

VirtualizationBuff writes "KernelTrap has a fascinating interview with Andrey Savochkin, the lead developer of the OpenVZ server virtualization project. In the interview Savochkin goes into great detail about how virtualization works, and why OpenVZ outshines the competition, comparing it to VServer, Xen and User Mode Linux. Regarding virtualization, Savochkin describes it as the next big step, 'comparable with the step between single-user and multi-user systems.' Savochkin is now focused on getting OpenVZ merged into the mainline Linux kernel."
This discussion has been archived. No new comments can be posted.

  • I'm not convinced... (Score:2, Interesting)

    by SGrunt ( 742980 ) on Tuesday April 18, 2006 @09:46PM (#15154330)
    ...that virtualisation is going to be that much of a Big Thing(tm). Those who will get the most use out of it are the would-be dual/tri/mega-booters, and, let's face it, compared to the number of computer users in the world - heck, to the number of people who know roughly what virtualisation is - that number is going to be quite small.
  • Virt is big (Score:1, Interesting)

    by Anonymous Coward on Tuesday April 18, 2006 @09:52PM (#15154357)
    I disagree; I think this is going to be big, and it's already starting. In the corporate world, we have been moving many legacy systems onto VMs. Win2k3 also runs on VMs very nicely - a great way to utilize that server you use for print/virus/IIS, each having a separate OS on the same hardware. I think the VM buzz is really hitting much more mass right now. We are looking at mass roll-outs for desktops to get away from dual-booting Win/Linux and would prefer to see this virtualised, as would clients.
  • by Anonymous Coward on Tuesday April 18, 2006 @10:25PM (#15154502)
    Parent wrote: "as measured in dollars of revenue, perhaps the biggest segment for Linux is servers, since that's what vendors sell --- but in units and in amount of work done, workstations are a far bigger market."

    Indeed, dollars of revenue is a uniquely poor way of measuring the success of software. The best analogy I've heard is market research of breathable gases. Any market research company would happily conclude that tobacco smoke is a far more desirable breathable substance than air. Just look at the revenue numbers:

       Cigarettes - $48.7 billion in 1997
       Cigars     - $ 0.9 billion in 1997
       Fresh Air  - $ 0.0 billion in 1997

    So the obvious conclusion is that if you're a business, the revenue figures show that best practice in the industry is to breathe smoke.

    Absurd, yes; but it seems that's how most corporations pick their databases and operating systems.
  • FreeBSD Jails (Score:3, Interesting)

    by Ragica ( 552891 ) on Wednesday April 19, 2006 @12:00AM (#15154825) Homepage
    Sounds, once again, a lot like FreeBSD's jail [wikipedia.org] support (which has existed for many years now, and is very stable).

    In what ways is OpenVZ different? I also wonder what their "commercial offering" adds... but I'm too lazy to look.

    I run FreeBSD jails on my box for testing purposes. They're extremely easy to set up and administer, especially with the many helper scripts available these days.

    I am loving the simplicity of ezjail [erdgeist.org]. The coolest thing about it (besides the utter simplicity) is that it creates a "base jail" containing an entire FreeBSD install. From there it uses tricks with nullfs to mount parts of that base into jail 'instances'... this means each new jail takes only 2 megs of additional space, and about 1 second to create. It also adds security, in that the base system remains absolutely read-only while still permitting customisation and additional software to be installed in the jail.

    I need a new virtual server to test my software:

    ezjail-admin create new-jail-name 192.168.5.123

    Then run the ezjail startup script and SSH in to my new virtual server. (Note: I set up the default server template to enable SSH and a few default logins... very easy to do. One does not need to use SSH; one can get into the jail environment a few different ways.)
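
    The workflow above, as a sketch (command names are from ezjail's documented interface of the era; exact paths and flags may differ by version, so treat this as illustrative rather than definitive):

```shell
# One-time setup: populate the read-only "base jail" with a full
# FreeBSD userland.
ezjail-admin install

# Create a new jail instance. The base jail is nullfs-mounted read-only
# into the instance, so each new jail costs only a couple of megabytes.
ezjail-admin create new-jail-name 192.168.5.123

# Start all configured jails, then log in to the new virtual server
# (assuming the jail template enables sshd, as the poster describes).
/usr/local/etc/rc.d/ezjail.sh start
ssh admin@192.168.5.123
```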

  • by kesuki ( 321456 ) on Wednesday April 19, 2006 @12:06AM (#15154849) Journal
    Actually, the virtualization software or the 'host OS' itself handles the scheduling. In server farms the virtualization software quite often runs 'bare metal' (i.e. the system boots straight into the virtualization software, which loads any images etc.), but most geeks run it on top of a full-fledged OS where the software can rely on any built-in schedulers. I have noticed that certain devices (sound cards, for example) don't always play nicely with being shared, but others (LAN cards) handle being shared very transparently. There is room for improvement in sound cards; sadly, there seems to be little motivation to innovate. Style over substance seems to be the name of the game, although in this case that means 'sounds clearer' over actually being able to process multiple simultaneous audio effects.

    Well, there is the Audigy 2 X-Fi series, which on paper is a dramatic improvement, but is 8 simultaneous real-time sound events fast enough? I just kinda wonder, because in the games I play (online), most people use hot keys to toggle sound effects anyway.

    Besides which, I'm not even sure the Audigy X-Fi cards would even work properly with virtualization software. But I can't think of another card with as much technical capability for generating sound effects, although I'm not that familiar with the $1000+ range of products on the market.
  • by karl.auerbach ( 157250 ) on Wednesday April 19, 2006 @12:58AM (#15154985) Homepage
    It sounds like the *nix VM world is moving along the track established by Multics and IBM's CP/67 (later VM/370) projects.

    It seems to me that the differences in the *nix approaches are mainly whether the abstract machine seen by user written code resembles a hardware machine or some nicer abstract machine.

    In all VM approaches the idea that one can freeze an entire system and look at it, or isolate it, or migrate it, is a very valuable one. It's done well for IBM on their mainframes.

    As for adding resources on the fly - way, way back (mid 1980s) Robin O'Neil and I did a System V based kernel for the Crays out at Livermore. We had to run on top of the real OS, so we gave each user his/her own copy of Unix and created a file system that could grow or contract, adding or removing inodes on the fly. Some of those inodes could reference files held by the underlying OS, which led to strange things, like "df" showing less space on the file system than was shown by a "du" summation of the file sizes in the file system. We published a paper on this at one of the various Unix gatherings of the time.

    So if we could expand file systems on the fly 20 years ago I don't see why it should be so hard to do today.
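
    Indeed, on a modern Linux box growing a mounted filesystem on the fly is routine with LVM (the volume group and logical volume names below are hypothetical; resizing online assumes a filesystem with grow support, such as ext3):

```shell
# Add 5 GB to the logical volume, then grow the filesystem on it
# while it stays mounted -- no downtime for users of /srv.
lvextend -L +5G /dev/vg0/srv
resize2fs /dev/vg0/srv
```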

    Now if we'd just get serious about capability architectures... (Much of the secure OS work of the '70's was done with capability architectures with hardware support such as the old Plessy machines.)
  • by ovz_kir ( 946783 ) on Wednesday April 19, 2006 @04:28AM (#15155490) Homepage

    Very short answer -- Solaris Containers is the same technology as OpenVZ or VServer. Their isolation is OK as well, but their resource management is worse than that in OpenVZ. There are some system-wide resources that you cannot limit for a container -- which can create a problem if an application inside a container goes crazy (or a container is owned by a c00l haxor).

    Remember, Solaris Containers are a recent feature, while Virtuozzo has been available as a product since 2001. So Solaris is doing the right things, and great things, but it still has a way to go.

  • by ovz_kir ( 946783 ) on Wednesday April 19, 2006 @06:15AM (#15155738) Homepage

    Speaking of complexity, it is indeed complex. Any OS is complex. VMWare itself is very complex. Anything that is not trivial is complex.

    The questions are: does it work, and is it maintainable?

    Does it work? OpenVZ and Virtuozzo work just fine -- ask anybody who's using them, get a cheap Virtuozzo VPS from one of the HSPs, or just install it on your Linux box and see for yourself.

    Is it maintainable? The OpenVZ stable kernel is based on Linux kernel 2.6.8 (with tons of backported fixes and driver updates). We have recently ported it to 2.6.15 and 2.6.16, and also to the kernels from Fedora Core 5 (here [openvz.org]) and SUSE 10 (here [openvz.org]). So I think it is maintainable.

    "[VMWare] has some performance issues, and Xen's paravirtualization gets a fine balance, that is to have a minimal set of modification of the guest OS."

    Hmm, isn't it Xen that requires a modified Linux kernel? Is that "a minimal set of modifications"? Are you kidding? In contrast, in an OpenVZ VE you run an unmodified Linux distribution; the only missing piece is the kernel, which is provided by the host OS. There are modifications (like removing getty from /etc/inittab), but they are not strictly required.

    What's the point then? OpenVZ also runs a modified Linux kernel. Well, the point is you cannot have hundreds of VMs with Xen (or VMWare), but you can with OpenVZ. OpenVZ is also more stable -- but Xen will cure this, I believe, so this is not the point in the long term.

    Basically, VMWare is at one end of the scale -- it can run anything, but with bad performance, scalability and density. OpenVZ is at the other end -- it can run Linux 2.6 only, but with native performance, the best possible scalability and density, and easier management. Xen is somewhere in the middle of all this.

  • by Target Practice ( 79470 ) on Wednesday April 19, 2006 @09:39AM (#15156421)
    "Well, the question is, why virtualization?"
    "Virtualization is very useful in a corporate context, e.g. you want to separate environments, ease backups, increase security, or have 10 different OSes installed on one server for testing purposes."

    You really answered your own question, which is something to respect in the slashdot halls, where an empty question is more common...

    To add my own thoughts, though, I'd say that's exactly why I want virtualization, and why I'd rather have it at the hardware level than anywhere else. If I could test what the latest patch from my software vendor (whose patches have a tendency to crap out the system) will do in an entirely simulated environment, I would love it.

    While I'm preparing to implement a new and improved way of doing things, such as authenticating against LDAP instead of locally on each of my ten servers, it's reassuring to my higher-ups to see the process actually implemented in a test environment, with ten servers, and working. Something tangible for them to try out always sells better than "I think we can do this, I read about it, but I haven't tried it out yet."

    Running in a production environment may be something of a different beast. I'll probably wait a year for others to test the waters before I jump on board, but I AM anxious to do so.

    It was great to see the latest (I think) AMD hardware running SUSE 10 with its Xen installation (so, a Linux base) with an unmodified Windows XP OS on top. Sweet stuff. I'll never use it, but it indicates I'll be able to install any version of Linux, without kernel modification, and use it for my daily test needs. As soon as I can remember what the underlying hardware was, that's going on my list of 'toys to buy'.

    Sorry to jump on your bandwagon, but I had to say it somehow...
  • by ovz_kir ( 946783 ) on Wednesday April 19, 2006 @10:53AM (#15157129) Homepage

    I know some people who use Virtuozzo, OpenVZ or Linux-VServer to host a single VPS. That does not make sense at first sight, does it? But what about a second look?

    The idea is that virtualization (OS-level virtualization, in this case) provides some benefits without sacrificing much of anything. So what does it provide?

    A Virtual Environment (VE) does not depend on the hardware, so you can move a VE to another box without changing anything -- no need to edit /etc/fstab or /etc/modprobe.conf. Every sysadmin will love that.

    A VE can be cloned. If you want to change something but are afraid it will not work, you clone your VE and change the clone.

    A VE can be migrated live to another physical server (with no service outage -- to networked users it will appear as a delay in response, not as downtime). We are releasing this feature for OpenVZ this week.
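
    A rough sketch of that lifecycle with the vzctl/vzmigrate tools shipped with OpenVZ (the VE ID, template name and paths below are illustrative and may differ on your install):

```shell
# Create and start a VE from an OS template cache.
vzctl create 101 --ostemplate fedora-core-5
vzctl start 101

# A VE is just a directory tree plus a config file on the host, so a
# simple clone is a copy made while the VE is stopped, plus a new
# config (edit the copy's IP and hostname afterwards).
vzctl stop 101
cp -a /vz/private/101 /vz/private/102
cp /etc/vz/conf/101.conf /etc/vz/conf/102.conf

# Live-migrate VE 101 to another OpenVZ host with no service outage.
vzmigrate --online otherhost 101
```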

  • Re:Yep... (Score:2, Interesting)

    by Cus ( 700562 ) on Wednesday April 19, 2006 @03:43PM (#15159817)
    I fully concur with the parent - I'm helping with an ESX environment at the moment that's running on 8 ProLiant blades. Each of these will end up with on average 8 virtual machines, which leaves us with a lot of overhead 'just in case'. As well as the redundancy, it's physically taking up a lot less space and power. Regarding redundancy, we're running with storage on a SAN: if the error detection system uncovers an imminent hardware failure (or if we just decide to), transferring a virtual machine onto another server doesn't take long at all - after all, you're only shifting the memory, not the drive contents. It *is* weird seeing a fully functional copy of W2k3 running SQL Server while taking up less than 100 MB of RAM, though :)
