OS Virtualization Interview

VirtualizationBuff writes "KernelTrap has a fascinating interview with Andrey Savochkin, the lead developer of the OpenVZ server virtualization project. In the interview Savochkin goes into great detail about how virtualization works, and why OpenVZ outshines the competition, comparing it to VServer, Xen and User Mode Linux. Regarding virtualization, Savochkin describes it as the next big step, 'comparable with the step between single-user and multi-user systems.' Savochkin is now focused on getting OpenVZ merged into the mainline Linux kernel."
  • I'm not convinced... (Score:2, Interesting)

    by SGrunt ( 742980 )
    ...that virtualisation is going to be that much of a Big Thing(tm). Those that will get the most use out of it will be the would-be dual/tri/mega-booters, and, let's face it, compared to the number of computer users in the world - heck, to the number of people that know roughly what virtualisation is - that number is going to be quite small.
    • Well, isn't Linux used mostly for server operations? Virtualization also adds a layer of safety and security between guest OSes and the underlying hardware.
      • I don't see why such a layer is necessary, or what it will ultimately provide. The OS is supposed to protect users and apps from each other! If virtualization becomes widespread, it will have to take on more and more of the roles of an OS until it *is* an OS. For instance, an OS has a bunch of logic (a scheduler) to grant processes "fair" access to the CPU. With virtualization, you need another scheduler to schedule among the schedulers!!
        • Exactly. something to sit above the kernel, or "supervisor"... something like a "hypervisor", which is exactly what xen's marketing department wants us to call the xen kernel
          • Well, actually that term was coined well before Xen came around. I'm pretty sure the VMM that's part of z/VM on IBM's machines was referred to as a "hypervisor" as well - and it far predates Xen's use of the word.
        • by kesuki ( 321456 )
          Actually, the virtualization software or the 'host OS' itself handles the scheduling. In server farms the virtualization software quite often runs 'bare metal' (e.g. the system boots straight into the virtualization software and loads any images etc.), but most geeks run it on top of a full-fledged OS where the software can rely on any built-in schedulers etc. I have noticed that certain devices (soundcards, for example) don't always play nicely with being shared, but others (LAN cards) handle being shared v
          • All good sound DSPs (arguably better than anything from Creative quality-wise), and they all support 8+ hardware mixed sound streams.
            As a bonus the envy24 has very flexible hardware mixing and routing too... so you could actually have 4 different OSs running with 4 different stereo output pairs on the same card (check the Midiman 1010 for an example of the requisite hardware incantation for 8 mono outputs).
    • by Abcd1234 ( 188840 ) on Tuesday April 18, 2006 @09:59PM (#15154396) Homepage
      Uhh... these products aren't aimed at your desktop box. They're for use in server farms, where virtualization provides an additional measure of security, along with providing the server operator more flexibility in how their hardware is utilized.
      • Indeed! (Score:2, Offtopic)

        by babbling ( 952366 )
        That's true, but come on, it's going to be pretty fun to play with on desktop machines, too, isn't it? Imagine all the tricks you can play on computer-illiterate friends/family. One second it's Windows, the next it's MacOSX, then 10 seconds later it's Linux! Heads may explode.
      • Uhh... these products aren't aimed at your desktop box. They're for use in server farms, where virtualization provides an additional measure of security

        If windows apps (or group of apps) were virtualized, we could use activex webpages without having to worry about spyware. Just close the virtualization window and it's gone.

        The same for e-mail, if you restrict write access only to the mail files, and all spawned process from the e-mail were virtualized. If it screws up, the most you lose is your e-mail, but
        • That's brilliant: instead of actually expecting secure software, let's just use a 40-pound sledge to drive a nail. Virtualization means running a nested kernel; I don't feel like booting a sub-OS every time I want to check mail or open a browser. It's far more efficient to just write the app properly.

          I guess the true question is: Which solution is more likely to get attention ? Whiz-bang virtualization will probably win, since it seems very few people in this world have the patience and discipline to writ
          • Virtualization means running a nested kernel

            No, it isn't. Didn't you RTF... oh, right, this is slashdot. Nevermind. :P
          • "It's far more efficient to just write the app properly."

            That's right. But how many do? I mean, it's not as if most application developers deliberately set out to write buggy insecure software.

            Being able to, say, bring up a preferences dialog and totally sandbox an application would be cool. After all, do you really trust that utility you just downloaded? Should that browser have full access to your system?

            And as to "booting a sub-OS", you obviously haven't used an OS 9 "Classic" app on OS X. You just doubl
        • If windows apps (or group of apps) were virtualized, we could use activex webpages without having to worry about spyware. Just close the virtualization window and it's gone.

          On more than one occasion, I've trolled the warez sites for a "key generator". These are programs that you run that give you a workable key for a particular software product - but they are almost ALWAYS loaded with spyware and other easter eggs.

          But, with VMWare, it's no big deal. Take a snapshot, download the generator & run, write d
        • Vista is getting out the door late, and I'll bet that most of the reason is that they have to get backward compatibility with all of the software that came before. It seems to me that an OS X-like operating system (a clean kernel and network stack with a lovely, deeply integrated GUI) could run XP virtual machines whenever you needed to run a 'legacy' application. This would allow Microsoft (or Apple, or the OS community) to code an efficient OS and still be able to have all of the arcane hooks (e.g., dupl
    • by NitsujTPU ( 19263 ) on Tuesday April 18, 2006 @10:02PM (#15154408)
      Nah nah nah. It's going to be great. Picture this: you manage a university computer lab. The computers all have identical software, and all of the students' files are stored on a network share. When computers are not in use, you'd like to dedicate the cycles to a long-standing distributed computation for experiments carried out by one of the departments.

      The student logs in and a disk image runs their OS of choice. They don't have to reboot or know much; they just click an icon saying which OS, which is instantly presented to them. A batch process manager removes the load from the distributed experiment from their machine.

      Or, perhaps something that's already fielded: you're a graduate student and want to emulate 1000 compute nodes for a distributed computing experiment. You log into Emulab and tell the 50 nodes you've signed up for to boot 20 OSes apiece, emulating a 1000-node network.

      Or, perhaps you're studying viruses (this has also been done), and want to build an Internet scale honeynet.

      Or, perhaps you're running a large server farm. You want an easy way to load balance a multitude of services, so you can run something that looks like 100 servers on perhaps 50. By dynamically balancing across nodes, services can automatically adjust themselves, independently of mechanisms built into their software (to some degree). When you want to add new hardware to the network, you just plug in the machine, and tasks start being farmed to it. When you want to retire some, you just tell the manager to stop moving tasks onto that machine, and wait for the tasks on that machine to move off.

      Briefly put, VMMs rock. You have to think outside of "geeks playing with VMWare" to really see the interesting applications though.
      • Thanks for the post, it gives me some insight into what virtualization is. But I'm still confused about what it actually does. I read this entry over on wikipedia:

        http://en.wikipedia.org/wiki/Virtualization [wikipedia.org]

        Does virtualization basically run multiple OSes on one box? Make one computer appear to be 2, or 3, or n?

        Steve
        • Yep (Score:3, Informative)

          by XanC ( 644172 )
          That's basically the idea. A single machine can be running several different systems at once, and each one can have its own kernel, network settings, tuning for a particular task, whatever. You can set up the network however you want; you can even simulate subnets and routers and who knows what to try stuff out.

          Another big advantage is that the virtualization provides a common "hardware" layer. For example, every VMWare "machine" sees standard VMWare "hardware", no matter what kind of metal it's actual

          • The big server would still need to be x86 in that scenario.
            • Yes, you are correct. Think about how it would streamline development/testing. You want to try out a new feature or patch to an application. You bring a snap of your VM to your testing server. You launch your VM of the production server, make the change, and validate. When done, you snap that over to the real box and you are live. You don't have to implement the change twice. If it is trivial, not a big deal, but if it is a major issue like a multi hour recompile, this could save tons of time.
        • In the very simplest case, there is a program called a virtual machine monitor that multiplexes the underlying hardware. Operating systems that run atop this see the hardware as if they have exclusive access to it.

          The cool part comes in what one chooses to do with this. See, now the operating system sits on something that in its simplest sense does this... but one can build more interesting things into the VMM that allow it to do things like snapshot the entire running operating system and move it across
          • So basically, what has happened, I gather, is that computers have gotten so powerful that now we can split up one "hardware unit" (a PC) into several virtual units, with different levels of connection (or none at all) between these virtual units.

            I think this is an awesome way to run a web browser - just destroy the virtual machine every time you are done browsing and you greatly minimize infection possibilities.

            Steve
      • Or, perhaps you're running a large server farm. You want an easy way to load balance a multitude of services, so you can run something that looks like 100 servers on perhaps 50. By dynamically balancing across nodes, services can automatically adjust themselves, independently of mechanisms built into their software (to some degree). When you want to add new hardware to the network, you just plug in the machine, and tasks start being farmed to it. When you want to retire some, you just tell the manager to st
    • I'm not convinced that virtualisation is going to be that much of a Big Thing(tm).

      Allow me to introduce you to the world of Big Business: upper management want the Big Business paycheck but, post dot-bomb bubble, they want none of the penalties associated with taking risks. So you have the "one application per box" mentality. All of a sudden, you've got 20 boxes running at 5 percent utilization.

      Can you see where virtualization would provide "virtually" the same thing with better cost efficiency?

      Make no
      • Please forgive me for copying my own post, but I'm lazy.

        I'm a performance tester who has had to completely reinvent how we do business thanks to virtualization. How do you give assurances to an application that they will perform adequately in a virtual environment when by definition performance will always be dynamic?

        The primary approach we have had to take was to stop looking at whether an app will perform on a virtual machine, and start looking at whether or not it will be cost effective for the app t

        • I know some people who use Virtuozzo, OpenVZ or Linux-VServer to host a single VPS. This does not make sense at first sight, does it? But what about a second look?

          The idea is that virtualization (OS-level virtualization) provides some benefits without sacrificing much of anything. So what does it provide?

          Virtual Environments (VEs) do not depend on the hardware, so you can move a VE to another box without changing anything. Every sysadmin will love that. No need to edit /etc/fstab or /etc/modprobe.conf.

          VE can b

    • Virtualization is HUGE. It helps solve a major problem. With few exceptions, most data centers are running out of power, not space. Servers consume 70-90% of their power draw when the CPU(s) is(are) at idle - and most servers in corporate America run below 15% utilization. If I can combine 4-8 servers into 1, I can save a tremendous amount of power. Here's some simple math.
      A server consumes 400 W at idle and 500 W when all 4 processors are pegged at 100% utilization. If I take 4 servers that normally
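
      The arithmetic above, sketched out (the wattage figures are the ones assumed in the comment, not measurements):

```shell
# Figures assumed from the comment above: each box draws ~400 W at idle
# and ~500 W with all four CPUs pegged.
idle_w=400
loaded_w=500
n=4

before=$(( n * idle_w ))          # four mostly-idle servers
after=$loaded_w                   # one consolidated server, fully loaded
saved=$(( before - after ))
echo "before: ${before} W, after: ${after} W, saved: ${saved} W"
```

      On those numbers, consolidating four idle boxes onto one fully loaded box saves roughly 1100 W, before even counting the cooling load that each watt of draw adds.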
      • No one is going to want to run their servers at a high utilization rate as it leaves no headroom. Let one of those combined "virtual" servers get Slashdotted or mentioned in a blog or Time and you bring down the whole shooting match.

        A better way to do what you suggest would be figuring out some way to run all of those "virtual" machines/applications in a cluster so that if one gets /.'ed the load spreads out and is handled across multiple boxes. In a sense, you need to make the cluster look like a single se
      • The only caveat to virtualization the way you are describing it is that if a system spends most of its time at 10% utilization, but peaks for a few hours pegging the CPU and using all of the memory... you could be in a bind.

        It takes a much different set of administration skills to manage systems like these than it does lots of distributed boxen. I can't admit that I know all of the problems and issues, but there are many. I know that at my last job, we had a lot of systems that were performing very poorly (s
    • I disagree. In the last couple of months, we've virtualized almost half our network. About 20 servers that used to run on physical hardware now run on one of our four VMware ESX servers. Everything runs more than fine, without paying a lot for the hardware. Granted, the machines that ESX runs on are quite bloated, but hey, if you can run about 8 servers on 1 machine you'll find that the costs aren't that big anymore. No, virtualization has come a long way since the first time I used it to boot linux inside
    • We make use of virtualization at my company all the time. When we need to prototype something or even need to deploy a production server application quickly we just take one of our pre-rolled skeleton installs of either Debian or Windows Server 2003, copy it and start it up. We can then just install whatever needs to be installed and we have a new "server" up within a few minutes with no need to purchase new hardware. When a particular physical server gets too busy we can buy a new one and easily migrate a

    • I disagree. I think virtualization is going to be an incredibly useful tool - all the more so if Microsoft would allow Windows to be virtualized easily (perhaps on top of a nice fast exokernel). And I think it has the potential to hit home use big time. For example (there are lots of others): if you have such a machine you could give everyone in your family a new virtual copy of Windows (or Linux or MacOS) to run on the same hardware, which might be a multi-core processor, and use remoting (and remoting
  • OT question (Score:2, Insightful)

    by tomstdenis ( 446163 )
    What's with "open" in the name of all these projects. Is anyone really impressed by that anymore?

    Tom
    • Re:OT question (Score:4, Informative)

      by subreality ( 157447 ) on Tuesday April 18, 2006 @10:10PM (#15154436)
      What's with "open" in the name of all these projects.


      In this case it's an OSS version of a closed-source product called Virtuozzo, commonly abbreviated VZ. I think it's a perfectly descriptive name.
      • Well, if it's the closed project that's been opened up, fair enough.

        If it's a clean-house implementation then it's not strictly based on it.

        Call it something else like Vzeeforefree!

        Dunno just annoyed at people abusing the OSS blanket for publicity.

        Tom
        • It's a closed commercial product, and they forked and GPLed a subset of the source.

          Dunno just annoyed at people abusing the OSS blanket for publicity.

          Where do you think Firefox came from? Do you think releasing Mozilla was abusive?

          I don't think everything needs to be done for wholly untainted altruistic reasons. It's not like they're throwing out some old bones to chew on. This is an actual useful bit of software.
          • They don't call it OpenNetscape now do they?

            Tom
            • OK, I missed your point before, because I'd never even considered picking THAT nit. :)

              You consider it abuse when they call it Open even when it's a real product being released under a real OSS license. Under what circumstances would you consider the word "Open" to be NOT abusive?
  • A bit of bias... (Score:5, Informative)

    by subreality ( 157447 ) on Tuesday April 18, 2006 @10:05PM (#15154418)
    "why OpenVZ outshines the competition, comparing it to VServer, Xen and User Mode Linux."

    Of course, Andrey works for the software company that wrote this thing, and their closed full-featured flavor, Virtuozzo. The VZ method is a good one, and has excellent performance, but it has its drawbacks, too. Personally, I don't like that my VPSes need to use my VPS provider's kernel, which lacks features I desperately want (like stateful iptables matching), and which forces me to reboot whenever they upgrade their kernel (my VPS can't be migrated to a host running a different kernel), and I can't upgrade until my provider does.

    VServer, Xen, and UML all make different tradeoffs. VZ goes for performance. Saying one outshines the others is just trolling. That's mostly on the part of the /. submitter, but Andrey slants it a little too.

    I don't want to crap on the OpenVZ project. They're working on very cool stuff, and I applaud SWSoft for opening the thing up. I just want people to keep the comparisons in context.
    • by XanC ( 644172 )
      You need to move to Linode.com, seriously. They don't have any of the problems you mention. It's all UML for now, although they have some Xen boxes in beta that you can get on.
      • And without knowing anything about what I'm doing, you make a recommendation for a service provider? My requirements are a bit more complex than that. :)
      • I used to use UML fairly heavily, but the real-world I/O performance was awful, even with the skas patches applied on the host. Xen's a dog too right now (as far as I/O operations are concerned -- particularly video) when doing VMX domains [which use hardware-supported virtualization rather than a paravirtualization-aware guest], but on native domains the performance hit isn't nearly as bad as it is with UML. Do some I/O-heavy (rather than CPU-heavy) benchmarking, and the difference becomes fairly visible.
  • OS virtualization (Score:5, Insightful)

    by Cthefuture ( 665326 ) on Tuesday April 18, 2006 @10:06PM (#15154424)
    Unlike Xen or VMware this OpenVZ doesn't run a separate kernel for each virtual machine. This seems like a security risk to me. A kernel bug will affect all the running virtual machines. In other words, you only need to break one kernel and you have them all.

    Plus you can't run different operating systems on each virtual machine.

    It does have some positive benefits; it all really depends on what you are doing. I like the security of Xen and VMware better though.
  • by cduffy ( 652 ) <charles+slashdot@dyfis.net> on Tuesday April 18, 2006 @10:18PM (#15154473)
    The interviewee keeps talking about Xen 3 like it's not out yet, but that's untrue.

    Indeed, Xen 3 has been stable long enough that they're presently at 3.0.2. It's not prerelease anymore, and support for x86_64 and hardware-supported virtualization has been out and about for a while. I have semi-production (used by in-house staff only, but there are folks who can't work if it's down) systems running on Xen3 x86_64 DomUs, and the host they're on has been up (and running unattended) for 117 days now.

    Sun has an OpenSolaris port to Xen (though I think it may be in-house-only still), and I have some good friends working on a microkernel OS targeted at embedded operation with a Xen DomU port pending (such that they -- and people working on it -- will be able to run it in parallel with the OS they use as their development platform). Being able to run more than one kernel -- indeed, more than one operating system -- is a big plus on the Xen side of things.
  • Imagine ... (Score:3, Funny)

    by 3dr ( 169908 ) on Tuesday April 18, 2006 @10:43PM (#15154559)
    ... a beowulf cluster of virtualization servers running beowulf clusters of VPSes!

    • Imagine a beowulf cluster of virtualization servers running beowulf clusters of VPSes!
      .... with each server having an 8-way mainboard containing dual-core chips pretending to be single-core chips ....

      Imagine playing solitaire on that !

  • It's amazing how low utilization of servers is. Developers love lots of servers, but don't use them nearly as much as they say... see the article "Virtualization is the COOLEST thing" at http://blog.tallsails.com/ [tallsails.com]
  • Xen misconceptions (Score:3, Informative)

    by jforest1 ( 966315 ) on Tuesday April 18, 2006 @10:57PM (#15154612)
    Just to clarify:

    "Using Xen, you need to specify in advance the amount of memory for each virtual machine and create disk device and filesystem for it, and your abilities to change settings later on the fly are very limited."

    Xen supports a balloon driver that allows one to add to or take away from the memory allocated to guest operating systems (DomUs). It is highly advisable to use LVM2 to allocate disk space for DomUs, since it allows for easy changes to the partition. This makes filesystem management easier.

    "But most importantly, OpenVZ has the ability to access files and start from the host system programs inside VPS. It means that a damaged VPS (having lost network access or unbootable) can be easily repaired from the host system, and that a lot of operations related to management, configuring or software upgrade inside VPSs can be easily scripted and executed from the host system. In short, managing Xen virtual machines is like managing separate servers, but managing a group of VPSs on one computer is more like managing a single multi-user server."

    Using LVM2 as the disk manager as mentioned above, the host operating system (Dom0) can access the DomU's filesystem for troubleshooting and run programs (though they would not run in the scope of the DomU; I'm not sure he's actually implying that is the case with OpenVZ).

    --josh
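
    The two Xen-side points above look roughly like this from Dom0 (a sketch only, assuming Xen 3's xm tool and an LVM2-backed, ext3-formatted guest disk; the domain name "mydomu" and the volume path are made up for illustration):

```shell
# Balloon driver: change a running DomU's memory allocation (in MiB)
# without rebooting it.
xm mem-set mydomu 512

# LVM2-backed disk: grow the guest's block device, then the filesystem
# on it (run with the volume unmounted / the guest shut down).
lvextend -L +2G /dev/vg0/mydomu-root
e2fsck -f /dev/vg0/mydomu-root
resize2fs /dev/vg0/mydomu-root
```

    The same LVM2 layout is what lets Dom0 mount a stopped DomU's filesystem for repair, which is the closest Xen equivalent to the host-side access OpenVZ gets for free.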
    • by jlittle ( 122165 )
      Regarding running applications within the scope of a VE (DomU equivalent), yes he is. I extensively use both Virtuozzo and Xen. Each has their strengths. VZ allows efficient use of memory (shared memory across all VMs) as well as disk space, as binaries _can_ be shared with a copy on write file system. You can do a lot of this in Xen, but you can't mount a Xen domU filesystem in Dom0 when a DomU is using it. In OpenVZ, the filesystem is only mounted in the hardware node and exposed through an FS layer (copy
      • but you can't mount a Xen domU filesystem in Dom0 when a DomU is using it

        Can and do :). Use OCFS2 - it's a piece of cake to set up, and because Xen 3.0.2 is based on 2.6.16, it's already in the kernel tree.

        Haven't used it as the root filesystem yet (just as a shared filesystem between domains), but when I do I will (in theory) be able to have 1 filesystem with 'per node symlinks' (ocfs2 calls them something else but that's what they are) so each node/domain can have a separate /etc, /var/run, /var/spool, and so
        • You can also use GFS, or one of the other clustered filesystems to do this, though what I'm really waiting to see is XenFS - as far as I know it's still in the works, and it'll definitely put an interesting new spin on Xen virtual machines.
  • virtualisation (Score:2, Informative)

    by Tinkster ( 831703 )
    ... and then there's the outstanding IBM p-Series machines with their Hypervisor in hardware that benefits from the aforementioned age-old mainframe technology :}

  • I don't doubt that OS-level virtualization is more efficient, but have you ever tried upgrading the OS for hundreds of applications at the same time? It's darned near impossible.

    The great benefit of hardware level virtualization is that you can upgrade one app and one environment at a time. If app-"A" needs Linux 2.4 because that is what Oracle supports - fine, no problem. But if app-"B" needs to upgrade to Linux 2.6 because its reporting suite must have that version, that is ok too.

    It seems to me that OS-l
    • It seems to me that OS-level virtualization is a cool sounding idea that is pretty hopeless in the real world.

      It depends on the application. If you're talking about a web host running lots of web servers it might make sense to use this approach, since the guest systems are likely to be very similar if not the same.
      • I guess if you think hard enough you'll think of a good application for it... but in the case of web server farms, what's the point of having multiple virtual environments unless you are going to open them up to your clients to install their own PHP or postgresql or mysql or whatever darned bit of web technology they want? If all you want is a bunch of web sites on virtual hosts, you can just use the apache virtual hosts function. But if you want to give clients a free for all, you basically have a massive
  • by ratboy666 ( 104074 ) <<moc.liamtoh> <ta> <legiew_derf>> on Tuesday April 18, 2006 @11:31PM (#15154734) Journal
    These are not virtual machines. The idea seems to be the same idea behind Solaris 10 Containers, and I wish that had been discussed (pros and cons) in the interview.

    Easier management for vertical stacking of applications on a machine.

    And, yes, it is VERY useful.

    Not for typical home use though. At home, I use VMWare for virtualization, QEMU to run foreign code, and BOCHS to test x86 assembly sequences, all of which I do frequently. Stacking? Not so much, because my main server is a dual PPRO with 128MB -- httpd, imapd, file services, time services, etc. Not a heavy load (104 processes, easy enough to manage manually).

    Ratboy.
  • FreeBSD Jails (Score:3, Interesting)

    by Ragica ( 552891 ) on Wednesday April 19, 2006 @12:00AM (#15154825) Homepage
    Sounds, once again, a lot like FreeBSD's jail [wikipedia.org] support (which has existed for many years now, and is very stable).

    In what ways is OpenVZ different? I also wonder what their "commercial offering" adds... but i'm too lazy to look.

    I run FreeBSD jails on my box for testing purposes. It's extremely easy to setup and administer, especially with many helper scripts available these days.

    I am loving the simplicity of ezjail [erdgeist.org]. The coolest thing about it (besides the utter simplicity) is that it creates a "base jail" containing an entire FreeBSD install. From there it uses tricks with nullfs to mount parts of that base into jail 'instances'... this means each new jail takes only 2 megs of additional space, and about 1 second to create. It also adds security in that the base system remains absolutely read-only, while still permitting customisation and additional software to be installed in the jail.

    I need a new virtual server to test my software:

    ezjail-admin create new-jail-name 192.168.5.123

    Then run the ezjail startup script, and SSH in to my new virtual server. (Note: I set up the default server template to enable SSH and a few default logins... very easy to do. One does not need to use SSH; one can get into the jail environment a few different ways.)
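
    The whole workflow is roughly this (a sketch based on ezjail's admin tool; the jail name, IP address, and rc script path are examples and may differ per install):

```shell
ezjail-admin install                  # one-time: populate the read-only base jail
ezjail-admin create new-jail-name 192.168.5.123
/usr/local/etc/rc.d/ezjail.sh start new-jail-name
ssh admin@192.168.5.123               # assumes sshd and a login were enabled in the template
```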

  • by Anonymous Coward on Wednesday April 19, 2006 @12:18AM (#15154887)
    In the mid 60's IBM created CP-67, which virtualized the IBM S/360. In the following years the system became VM/370, and has evolved to z/VM today http://www.vm.ibm.com/ [ibm.com]. VM (the general term for z/VM) is made up of two primary components, VM/CP (control program) and VM/CMS (a mini single-user operating system). VM/CMS provided the groundwork for administering the system, and provided a nice programming environment in which each VM/CMS user had their own "system" where they could edit, compile and run their programs interactively (think of an MS-DOS type of model -- then remember that this was in the late 60's).

    CMS itself provided some limited simulation of IBM's two other mainframe operating systems OS/360 and DOS. Enough that one could write simple OS or DOS programs and do at least some unit testing. The simulation by CMS was by providing a limited set of the OS and DOS API.

    Unlike MVS or DOS (or even the CP/M, Windows, or *nix families), VM/CP itself does not provide many services directly. VM/CP does not provide any filesystems, any application APIs, etc. All VM/CP really did was provide a barebones virtual machine offering only those services one would find on the bare hardware. It was the responsibility of the operating system running within the virtual machine to provide the application API, filesystems, application memory management, etc. Communication between VMs was originally only via the raw hardware model (channel-to-channel adapters, shared disk volumes, and a method of "punching" virtual cards and sending the virtual cards to another VM's virtual card reader). As time progressed, VM/CP did provide some APIs that allowed very simple messaging between two VMs (first VMCF - Virtual Machine Communication Facility, and then IUCV - Inter User Communication Vehicle).

    Early on it was "discovered" that the virtual machine model made a lot of sense as a method to implement VM services. For example, if one were to look at a modern VM system, you would see that the entire native VM TCP/IP stack is managed within a small collection of VMs. (Under VM/CP, a VM is called a "userid".) The native VM TCP/IP stack consists of a TCPIP userid that manages the network interface devices and the TELNET server. The FTP userid implements the FTP protocol, etc. Each userid is totally separate from the rest of the system and from each other (the TCP/IP socket facility "rides" on top of IUCV in a transparent fashion so that a TCP/IP server is coded the same as on *nix).

    Because of the facilities provided by CMS, it is fairly easy to write little servers. For example the original LISTSERV server http://www.lsoft.com/products/listserv-history.asp [lsoft.com] was written as a CMS application, as were several native VM webservers.

    If one wants to see what is and has been possible in a virtual machine environment, one should at least look at the history of IBM's VM.

    For an excellent history of VM, see http://www.princeton.edu/~melinda/ [princeton.edu],
    and the VMSHARE archive, an early BBS used by VM system admins: http://vm.marist.edu/~vmshare/ [marist.edu]

  • And it's coming. But I think VMWare and Xen got it right. OpenVZ tries to do it inside the OS, which makes the OS much more complicated. It's not going to scale.
    • Well, CoLinux works pretty well under Windows, better than Cygwin for sure. The only hitch is getting networking set up; the CoLinux wiki is a bad mashup of WinXP information. At one point I got it to work fine on a work computer under Windows 2000, but when I tried the same at home (again, Win2K), the CoLinux side would not connect to the net... *:(

    • I don't get your comment at all.

      VMWare and Xen virtualize an entire machine, creating multiple virtual machines, with virtual hardware and all that mess. OpenVZ just virtualizes an instance of ONE machine, mainly just doing privilege / resource separation.

      Considering that it is MUCH less complicated from a total-lines-of-code POV and uses much fewer resources to operate, OpenVZ seems like it would scale MUCH MUCH better. Don't get me wrong, I like VMWare a lot - been using it since 1.0... But the two product
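      To make the "privilege / resource separation" point concrete: OpenVZ's actual accounting (its user beancounters) lives inside the kernel, but the flavor of per-container resource caps can be sketched with ordinary POSIX rlimits. This is a hypothetical illustration (the helper name run_limited is invented), not how OpenVZ itself is implemented:

```python
import os
import resource

def run_limited(max_mem_bytes, fn):
    # Fork a child, cap its address space, run fn, and return the exit code.
    # Exit code 0 means fn succeeded; 1 means an allocation was refused
    # by the limit, leaving the parent completely unaffected.
    pid = os.fork()
    if pid == 0:
        resource.setrlimit(resource.RLIMIT_AS,
                           (max_mem_bytes, max_mem_bytes))
        try:
            fn()
            os._exit(0)
        except MemoryError:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

      A child that tries to allocate past its cap fails inside its own sandbox while the parent keeps running, which is the essence of resource separation within a single kernel, and far cheaper than booting a whole virtual machine to get the same effect.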
  • by karl.auerbach ( 157250 ) on Wednesday April 19, 2006 @12:58AM (#15154985) Homepage
    It sounds like the *nix VM world is moving along the track established by Multics and IBM's CP/67 (later VM/370) projects.

    It seems to me that the differences in the *nix approaches are mainly whether the abstract machine seen by user written code resembles a hardware machine or some nicer abstract machine.

    In all VM approaches the idea that one can freeze an entire system and look at it, or isolate it, or migrate it, is a very valuable one. It's done well for IBM on their mainframes.

    As for adding resources on the fly: way, way back (mid-1980s) Robin O'Neil and I did a System V based kernel for the Crays out at Livermore. We had to run on top of the real OS, so we gave each user his/her own copy of Unix and created a file system that could grow or contract, adding or removing inodes on the fly. Some of those inodes could reference files held by the underlying OS, producing strange effects, like "df" showing less space on the file system than a "du" summation of the file sizes in the file system. We published a paper on this at one of the various Unix gatherings of the time.

    So if we could expand file systems on the fly 20 years ago I don't see why it should be so hard to do today.
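    The df/du anomaly described above comes from the two tools measuring different things: du walks the tree and adds up file sizes, while df asks the filesystem for its own capacity figures. The contrast can be sketched like this (hypothetical helper names, and modern Python rather than anything from the original System V work):

```python
import os

def du_bytes(path):
    # Like `du`: walk the tree and sum the sizes of the files in it.
    total = 0
    for dirpath, _, filenames in os.walk(path):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished mid-walk; skip it
    return total

def df_capacity_bytes(path):
    # Like `df`: ask the filesystem itself how big it claims to be.
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks
```

    On the Livermore file system, du_bytes-style accounting could exceed the df_capacity_bytes-style figure because some inodes pointed at files actually held by the underlying OS, so their bytes were never part of the file system's own capacity.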

    Now if we'd just get serious about capability architectures... (Much of the secure-OS work of the '70s was done with capability architectures with hardware support, such as the old Plessey machines.)
    • It sounds like the *nix VM world is moving along the track established by . . . IBM's CP/67 (later VM/370) projects.

      Of course, Linux on zSeries is already out, stable, and effective for S/390 and later zSeries hardware, and plays very nicely with z/VM. The tricky part is doing the same thing on x86 boxes (given the instruction set's noncompliance with the Popek-Goldberg requirements), which is why there are so many projects going at it from so many different angles.
  • Perhaps I misunderstand virtualization, but this is what came to my mind after reading about it:

    Imagine that in the future nearly every application will run inside its own private virtual system. This will be done to improve security, scalability, etc. For very complex applications, this will improve the stability of the system as a whole!

  • Running a single kernel instance, I am still running a single OS. These tools can mimic all the benefits of virtualization at that level, but the basic security improvement I gain is nothing more than a fancy variation of process privilege separation, achieved at the cost of immense additional complexity and wasted resources.

    Basically, I would never jump into separating everything just to make things safe, unless I were looking for a fancy way to mess things up.

    But for sure, this tool can be very useful for some cases.
