Building Your First Cluster?

An anonymous reader asks: "I'm interested in building a DIY cluster using Linux and will be using conventional Linux software. However, the number of possible ways to do this is huge. Aside from Beowulf, there's Mosix, OpenMosix, Kerrighed, Score, OpenSSI and countless others. Therein lies the problem. There are so many ways of clustering, development seems to be in fits and starts, most won't work on recent Linux kernels and there's no obvious way to mix-and-match. What have other people used? How good are the solutions out there?"
  • by tomhudson ( 43916 ) <barbara,hudson&barbara-hudson,com> on Wednesday July 26, 2006 @08:57PM (#15788001) Journal

    Try them all. After all, you just KNOW your first one's going to be a clusterf*ck.

    Seriously, if you're going to take that route, you really should be prepared to invest the time in test-driving several different solutions.

  • It kinda depends upon what you want to do. What are the requirements for your cluster? Have you considered going massively SMP instead? Or are you just after a cluster for the sake of having one?
    • I've found that most people I know that wanted to build a cluster, when asked why, replied something to the effect of "for the coolness factor... and to compile things faster."

      I tried to set up a 5-node (7-processor) distributed compile farm for a while, which let me build Gentoo packages with blazing speed. Unfortunately, I couldn't get cross-compiling to work, nor could I get Xcode integration working, so in the end I had a 400MHz G3 and an 800MHz G4 doing distributed builds with distcc [samba.org] and a 1GHz and a dual
    • Re:Well, uh... (Score:5, Insightful)

      by kimvette ( 919543 ) on Wednesday July 26, 2006 @09:25PM (#15788091) Homepage Journal
      "massively SMP" does not provide fault tolerance and does not eliminate certain bottlenecks such as disk I/O and network throughput, so if it's for an extremely high volume/high availability fileserver, mail server, or web server, massive SMP isn't going to cut it.

      Also, I'd go with a render farm (if that's the task) if I had to choose between clustering and SMP, because if one node dies, the job just continues (depending on the managing application), whereas on a single monster machine with no fault tolerance, if the job dies you often have to start rendering again from the beginning. Not fun.

      So let's back up and ask:

        1. What problem are you trying to solve?

        2. If it's a learning experience, try them all and take notes on which suit you best for tasks a, b, and c.

        3. What are your priorities?
  • Rocks (Score:5, Informative)

    by corvair2k1 ( 658439 ) on Wednesday July 26, 2006 @09:00PM (#15788013)
    Rocks [rocksclusters.org] is a great tool to build a cluster. It includes lots of monitoring tools and such, so you can see the status of nodes, etc. However, I'm not quite sure how large you're planning on going... May be overkill for a 4-noder. =)
  • Sounds like you're trying to solve a problem that you don't have.

    Cluster need special software to take advantage of the disturbed computing. They are built with a specific task in mind. Or do you already have a need and just failed to tell us?

    For me, I run my network with distcc (http://distcc.samba.org/), so all of my Gentoo boxes can compile using shared computing power. It cut a typical 33-minute app build down to less than 2 minutes, and it works wonders for my slower laptop.

    With distcc, all you need is the same toolchain on each box (glibc, gcc, coreutils, etc.). You can even specify how many threads per box you want running, to fine-tune your network.
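
    For reference, a minimal setup looks something like this (the host names and subnet are examples, not a prescription):

      # on each volunteer box: run the distcc daemon and allow the LAN
      distccd --daemon --allow 192.168.0.0/24

      # on the box driving the build: list the helpers, then parallelize
      export DISTCC_HOSTS="localhost node001 node002"
      make -j6 CC=distcc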

    On the other hand, if you just want to learn, then you should try them all. They all suit different needs.
    • Cluster need special software to take advantage of the disturbed computing.

      Well, indeed, clusterf*cks might turn your distributed computing project into a disturbed one...
      For me, I run my network with distcc (http://distcc.samba.org/), so all of my Gentoo boxes can compile using shared computing power. It cut a typical 33-minute app build down to less than 2 minutes, and it works wonders for my slower laptop.

      And don't forget, ccache [samba.org] can work with distcc [samba.org], for an even bigger speedup...

    • Cluster need special software to take advantage of the disturbed computing. They are built with a specific task in mind. Or do you already have a need and just failed to tell us?

      And specifically, is this a processing cluster or a failsafe cluster? I kind of assume a processing cluster, since that's what most people on Slashdot refer to as a cluster, but in my experience most of the clusters out there are failsafe clusters ("5 9's" of service versus raw horsepower). Two rather different applications of clu
    • He's apparently not asking for a compile farm...

      The author didn't say what the purpose was, but here are some good starting points.

      Apache cluster [howtoforge.com]
      MySQL cluster [howtoforge.com] (should also refer to mysql.com resources)
      Ultra Monkey [ultramonkey.org], heartbeat, and the like [linux-ha.org] can make a cluster as well.
    • Well, honestly, the problem could be that he wants to learn about clusters. Not a bad ambition, but also very broad.
      If learning is his goal, he could use QEMU, Xen, or VMware and create a virtual cluster on one really fast Linux box with a good amount of RAM (rough sketch below). He could also try out different clusters, all without hooking up five or six old boxes.
      Just to add to the confusion, I'll mention he could use Plan 9 for his cluster, since it is distributed by nature.
      I would love to set up an OpenMosix cluster.
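
      A rough sketch of the virtual-cluster idea with QEMU (the disk images and MAC scheme here are made up for illustration, and it assumes a tap/bridge is already configured on the host):

        # boot three virtual "nodes" from prebuilt disk images
        for n in 1 2 3; do
          qemu-system-x86_64 -m 256 -hda node$n.img \
            -net nic,macaddr=52:54:00:00:00:0$n -net tap &
        done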
    • Cluster need special software to take advantage of the disturbed computing.

      Nicely put; sometimes I feel like my computing is disturbed, too.
      • How to build a disturbed cluster:

          1. Cluster unpatched Windows 2000
          2. Install spyware
          3. Install SQL Server (unpatched)

        There's your disturbed cluster.

        Or, for another form of disturbed:

          1. Move to Sv^Hweden
          2. Start legal torrent site based in Sweden
          3. Wait for our government (Bush Administration) to coerce the Swedish government into breaking the law by illegally seizing your servers. That'll be a disturbed cluster!
  • Xboxen (Score:1, Funny)

    by Anonymous Coward
    hella X-Boxes...
  • by Anonymous Coward
    You may want to look at this online book (free):

    http://linuxclusters.com/compute_clusters.html [linuxclusters.com]

    At least get to know various approaches at a high level before proceeding...
  • by techno-vampire ( 666512 ) on Wednesday July 26, 2006 @09:33PM (#15788116) Homepage
    I remember in the '80s, back at JPL, they were talking about connecting a number of computers into a hypercube. One node would be for I/O, and it would communicate with the others along the edges of a hypercube.


    Two computers make a 1 dimensional "cube." Four, in a round-robin, make a square. Eight, properly connected, make a regular cube, and so on. Does anybody out there know if they still connect clusters this way, and if not, why?

    • by sophanes ( 837600 ) on Wednesday July 26, 2006 @10:07PM (#15788312)
      While a few early parallel computers used hypercube-based interconnects (e.g., the CM-1), there hasn't been a lot of interest in hypercubes since then. The advantage of hypercubes is that their diameter only increases logarithmically with the number of nodes in the network: a 10-dimensional hypercube connects 2^10 = 1024 nodes with a worst-case distance of only 10 hops. Their disadvantage is that the node degree increases with the dimension (that same 1024-node network needs 10 links per node), meaning that each node must be configured with sufficient ports to support the maximum dimensionality of the network, making hypercube-based networks either expensive or non-growable. (This can be solved to an extent by using cube-connected cycles.)
    • No, cluster network links are not nearly as elaborate as that, for the most part. The advent of high-speed crossbar links means that only the largest clusters need to even consider the network structure. You can get a 288-port InfiniBand switch that provides full crossbar connectivity across all data paths. Only if you outgrow the largest available switch do you need to consider how to best link the multiple switches together. Generally this would involve minimizing the amount of IO that would need to b
    • Two computers make a 1 dimensional "cube."
      This is more commonly known as a line.
    • Some high-speed interconnects like SCI and Dolphin are designed to be deployed in ring-based structures (a hypercube is based on a bus). The multidimensional analog to the ring is the hypertorus, and many clusters based on SCI and Dolphin use a hypertorus topology.

      For instance:
      http://krone.physik.unizh.ch/~stadel/zbox/start [unizh.ch]
      • Hmm? Hypercubes are generally constructed using point-to-point links, not buses. Unless by bus you mean bidirectional links, but toroids often use bidirectional links as well.

        But you're right, toroids constitute the most popular k-ary n-cube architecture today. Folded 3D toroids (e.g., by Avici) are especially sweet (uniform, short interconnects, low diameter, huge path diversity).
        • Unless by bus you mean bidirectional links, but toroids often use bidirectional links as well.


          Poor choice of word. I mean a bi-directional, linear arrangement of nodes. :)

          It doesn't have to be p2p, I believe. One could implement a hypercube using Ethernet switches.
    • It all came to a halt when they had 27 boxes [wikipedia.org] nicely stacked up... Some joker came along and painted primary colors on all the outside faces, and this was irresistible to some other jokers, who began moving the boxes around, making quite a tangle of all the cords.
  • Beowulf: http://www.rocksclusters.org/wordpress/ [rocksclusters.org] or http://www.warewulf-cluster.org/cgi-bin/trac.cgi [warewulf-cluster.org]

    I'd recommend looking at the ease of building diskless clusters with Warewulf.
  • My Experience (Score:4, Informative)

    by PAPPP ( 546666 ) on Wednesday July 26, 2006 @09:46PM (#15788183) Homepage
    I have a stack of five original Pentium boxes with 32MB of RAM and 2GB hard drives (except for one, with a larger drive for a software repository). I originally built it to experiment with AFAPI [aggregate.org] based clustering, but since AFAPI is a reasonably non-invasive setup, it works well for trying other techniques too, everything from simply running distcc [samba.org] on the nodes to speed up i586 software builds, to briefly fiddling about with some of the other clustering options mentioned. Fiddling around with options on a real cluster (running cluster software on a single node really doesn't give a good impression of it), one that can be reinstalled from scratch in a few hours and whose machines aren't worth enough to matter if one is physically damaged, is a great way to learn.
  • I have 5 boxes of somewhat random configuration, and I'm looking to do the same thing as the OP. My goal for this cluster would be high-speed video processing (using tovid).
  • sweating it out.. (Score:5, Interesting)

    by tempest69 ( 572798 ) on Wednesday July 26, 2006 @09:56PM (#15788245) Journal
    OK, a few caveats (I can't spell):

    1. I do Beowulf; other clusters aren't my thing.
    2. I can handle C and C++, but I'm not a guru.
    3. I can fumble my way through Unix/Linux, but I get cranky with new versions (command/flag changes in utilities).
    4. I have 6 lazyish years as a Unix sysadmin.

    Getting prepped:

    You want to make sure that the boxes that are talking to each other are very secure from the rest of the world. Most of the concepts on a cluster are about trashing the security of the machines in question. There are ways to make a secure-ish cluster, but a good firewall is a better way to go: let the firewall talk to your "head node", and preferably lock the "body nodes" from seeing the outside world. There are a ton of ways to get this done. On the cheap, have all the body nodes point at a non-existent gateway, e.g. 192.168.0.1; set the firewall as 192.168.0.129 (forget DHCP); let the head node point to 192.168.0.129; and have the firewall route services (ssh, ftp, telnet (OK, not ftp or telnet)) to the head node.
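
    Here's a sketch of the firewall half of that, assuming iptables and assuming the head node sits at 192.168.0.2 (pick your own addresses):

      # forward inbound ssh to the head node only
      iptables -t nat -A PREROUTING -p tcp --dport 22 \
          -j DNAT --to-destination 192.168.0.2:22
      # let the head node reach out; the body nodes stay dark
      iptables -t nat -A POSTROUTING -s 192.168.0.2 -o eth0 -j MASQUERADE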

    Getting started:

    1. Load all the boxes with the same OS, the same way. (DON'T select SELinux or you will cry.)
    2. Build a hosts file (names for machines) in /etc/hosts.
    3. Build a hosts.allow and hosts.equiv (still in /etc).
    4. Add some entries into securetty for your rlogin, rcp, rsh...
    5. You'll probably have the kerberized (weakly secured) rlogin, rcp, and rsh; you want to rename those and replace them with the non-secure versions. There are other ways, but this saves a bunch of hassle.
    6. Pop into /etc/pam.d and adjust the rlogin, rcp, and rsh entries (this may not be needed in some cases).
    7. Add a "+ +" to the .rhosts file of each cluster user.
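
    To make steps 2, 3, and 7 concrete, the files look roughly like this (the names and addresses are examples):

      # /etc/hosts -- identical on every box
      192.168.0.2   node001
      192.168.0.3   node002

      # /etc/hosts.equiv -- trust every node by name
      node001
      node002

      # ~/.rhosts for each cluster user -- trust everything (step 7)
      + +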

    After you have pulled your hair out deciphering my glossed-over instructions, you should be able to type "rsh node002" and be at the prompt for node002, with no password asked and no silly "kerberos failed: trying /usr/bin/rsh" message given.

    At this point you can configure LAM (you may need to download it and get it installed on your boxes).

    Basically, it needs an arbitrary file, Mynodes.txt, that contains the list of nodes you wish to launch. You type in "lamboot Mynodes.txt", and it will kick back some silly error 99% of the time because something small was forgotten. You fight through those errors until it finally gives you a success.
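
    In other words, something like:

      # Mynodes.txt -- one hostname per line, head node first
      node001
      node002
      node003

      lamboot -v Mynodes.txt    # -v helps show which node trips the error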

    Now you're golden; it's just a matter of figuring out how to compile and run MPI programs with mpiCC and mpirun. But if you got through the first gloss-over, the rest is a snap.
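
    The compile-and-run cycle, once LAM is booted, looks roughly like this (the program name is a placeholder):

      mpiCC hello.cc -o hello    # LAM's C++ compiler wrapper
      mpirun -np 4 ./hello       # launch 4 copies across the booted nodes
      lamhalt                    # shut the LAM universe down when done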

    Remember: if these machines see the outside world, they are naked, defenseless, and totally exploitable...

    Be aware that these instructions can cause all sorts of havoc, and any reasonable person would just hire an expert.

    Honestly I hope that this gives you a starting point. You'll still need a bunch of time with google.

    GOOD LUCK!!

    Storm

    • Some other silly parts that might be important if you're doing a pure DIY Beowulf.

      Set up an NFS share on the head node; it will simplify a whole bunch of data collection. Oh, and don't overwrite the home directory as your share, as it will cause weird issues. Make the directory something like "clustershare"; it will make sure that you launch the exact same executable on all the boxes.

      The LAMRSH variable needs to be set to "rsh", otherwise you get an error message.
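
      Roughly (the path and subnet are examples):

        # /etc/exports on the head node
        /clustershare  192.168.0.0/24(rw,sync)

        # on each body node, mount it at the same path
        mount node001:/clustershare /clustershare

        # and in each cluster user's shell profile
        export LAMRSH=rsh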

      ________________

      Most of this guide is if

  • Rocks Rocks! (Score:5, Informative)

    by Frumious Wombat ( 845680 ) on Wednesday July 26, 2006 @10:11PM (#15788339)
    No, seriously, if you're setting up a cluster where your work can be batch-queued, or intend to run MPI, then Rocks http://www.rocksclusters.org/ [rocksclusters.org] is the way to go. It also comes with tools such as SGE (Sun Grid Engine) or OpenPBS pre-configured, Intel compilers and libraries ready for you to drop a license onto (but of course the entire GNU suite is there as well, including Ada), has more monitoring tools (plus some nice web-based interfaces) than you can shake a stick at, and runs on IA-32/AMD-64/IA-64 (Itanium). It also has a Roll to help build a tiled display wall, which would be a really cool use of a small cluster.

    They're also really great guys.

    On the other hand, OSCAR is supposed to be good, and if you're not into the whole batch-mode thing, you can get OpenMosix up and running using ClusterKnoppix [clusterknoppix.sw.be], and just fire jobs off into space and let them find their own unburdened node.

    But still, Rocks is really an elegant and clean way to go, plus it will scale up in case you're going to deploy a huge one of these for real after you get your feet wet.
  • I am pretty much summarizing what has already been mentioned. "Cluster" is somewhat of an ambiguous term that means tons of different things. You really need to specify what you want to achieve before any meaningful suggestion can be provided.

    This is a good reference:

    http://linuxclusters.com/compute_clusters.html [linuxclusters.com]

  • java (Score:1, Informative)

    by Anonymous Coward
    Do a Google search on "Java heterogeneous cluster". I have been playing around with some of them. Nice things: you can use any machine, and there's no lengthy build-out process. Downsides: slower than a dedicated cluster, and not as flexible.
  • by mrsbrisby ( 60242 ) on Wednesday July 26, 2006 @10:59PM (#15788563) Homepage
    Clustering means too many different things these days. I operate several clusters, but they're all so different that I can't say any two of them are the same.

    I run ClamAV and SpamAssassin (both very slow programs) with cexec [internetconnection.net], which simply lets me farm regular Unix tools across multiple (lots of) CPU servers. This lets me replace the clamscan and spamc programs with "wrappers" that use my farm. I like cexec because it doesn't make me create lists of clients and servers, but automatically load-balances and fails out very nicely.
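
    The wrapper idea is just a shell stub in front of the real tool; the exact cexec invocation below is a guess at its syntax, but the shape is:

      #!/bin/sh
      # hypothetical /usr/local/bin/clamscan -- the real binary has been
      # moved aside; hand the scan to the farm instead of running locally
      exec cexec /usr/bin/clamscan.real "$@"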

    For my frontend web servers, I use fake/heartbeat and some custom proxy software for routing frontend requests to backend farms.

    I haven't found a really reliable replicated directory; with one, I could use cexec as a filesystem... Maybe some day...
    • The article submitter seems to have decided two things:
      1. They want to use Linux.
      2. They want a cluster.

      In general, these are the two things that should be decided last. Other posters have addressed the 'why do you actually think you need a cluster' issue, so I will take a look at the 'why do you want to run Linux' bit.

      If what you want is reliability, then nothing beats OpenVMS. You have to pay a premium for hardware that can run it (VAX, Alpha or Itanium only), but if you really need that much reliabili

  • Diskless OpenMosix (Score:4, Informative)

    by rwa2 ( 4391 ) * on Wednesday July 26, 2006 @11:02PM (#15788582) Homepage Journal
    I worked for a Linux supercomputing startup way back when. The easiest time I had was by separating the components: one big machine for storage, and lots of little diskless machines for computation.

    I'm a Debian fan, so for me that involves just creating one large computer (or two, with redundancy using linux-ha) with a good RAID as a shared home directory. Then just install the "diskless" package. This will allow you to spawn off as many hosts that mount root off of NFS as you like. All you have to do is get the compute nodes to boot a kernel that supports NFS root (I used a single floppy, but you can do boot CDs or net-boot if you're more sophisticated).
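
    The magic is mostly in the kernel command line you hand the compute nodes, something like this (the server address and export path are examples):

      # kernel parameters for a diskless node: mount root over NFS
      root=/dev/nfs nfsroot=192.168.0.1:/diskless/node001 ip=dhcp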

    I used a Mosix kernel at the time, but I suppose OpenMosix is a better bet today. Mosix pretty much makes the entire system look like a massive SMP, so you just launch a whole lot of batch scripts on one computer, and it automatically distributes the load out to idle machines, and ships the results back to the one you started on. You just choose a node to become the master diskless-image, and then use the diskless scripts to clone it as many times as you like.

    The compute nodes could have a local drive, but I just used it for swap and maybe a local /tmp. It's damn convenient to be able to configure all the nodes from one place, whether they are online or not.

    The other nice thing about OpenMosix is that it's architecture-agnostic, so you could conceivably join and remove nodes of all different speeds, AMDs or Intels or maybe even 64-bit platforms, in any combination. The faster processors would get the heaviest loads first, etc.

    After you have this system up and running, you might start playing with more sophisticated stuff, like parallel distributed global filesystems and the like. But before that, you could certainly make your NFS root server scale up by splitting it across multiple machines (for /home, /usr, etc.). You'd be amazed at the performance you can get with a well-tuned NFS share... since one server can cache most of the disk access, it can even dish out files from one big high-performance RAID faster than if you had bothered to give all the nodes their own drive or two.

    Anyway, it's the systems management that will get you... so I recommend using Debian, getting real cozy with aptitude, and searching the apt repository for all of the little command and monitoring thingies that will help you, like clusterssh and cfengine and nagios2 and stuff.

    Burning a bunch of ClusterKnoppix CDs will pretty much get you on track with most of this, I'd imagine. Also check out the "KNOPPIX Remastering" HOWTO so you can customize your own live CDs, should you choose that path instead of diskless NFS root.

    So that's a software approach; the hard part is really selecting, testing, and optimizing all of the hardware. The slowest component is always going to be storage (invest in lots of separate SATA cards using Linux software RAID5 or RAID10; reconfigure and test lots with hdparm -t and bonnie++, and format reiserfs), followed by network (gigabit NICs are cheap, so you could afford separate ones for the NFS and the "real" network, though gigabit switches are still up there; Linksys and D-Link make some good 16-port ones for ~$300).
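
    For the storage-tuning loop, that boils down to something like this (the device names are examples):

      mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sd[abcd]1            # build the array
      mkfs.reiserfs /dev/md0        # format it
      hdparm -t /dev/md0            # raw sequential read throughput
      bonnie++ -d /mnt/raid         # filesystem-level benchmark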

    Um, if you're looking for parallel applications, povray is fun. And for a time you could sort of measure how many nodes I had up and running by watching my stats at distributed.net. But with OpenMosix, just about anything with lots of CPU-intensive parallel batch processing is fair game and works out of the box.

    Have fun!
  • OSCAR (Score:3, Informative)

    by Odocoileus ( 802272 ) on Wednesday July 26, 2006 @11:18PM (#15788667)
    Last year I built a cluster using OSCAR http://oscar.openclustergroup.org/ [openclustergroup.org]
    I haven't tried any others, but OSCAR installs pretty easily. Just run the installer on the head node, and when it is done, it feeds an image to each of the other computers that are part of the cluster. It includes the Ganglia monitoring tools and an Apache server so you can view them.
  • better reason (Score:2, Insightful)

    by tezbobobo ( 879983 )
    Come on, people. It really saddens me that the only reasons people can think of for doing this are rendering, compiling, and coolness. Maybe, and I'm wishing more than expecting, the guy is compiling a new breed of kernel for super gaming. I think the most fun thing to do now is to assume he is doing it from a gaming point of view and move into fun, speculative hypothesising. If it doesn't help the poor guy, then at least it may give him some much cooler ideas.
  • by Anonymous Coward
    oh, wait, never mind.
  • warewulf-cluster (Score:2, Informative)

    by Imp- ( 40963 )
    I have used http://www.warewulf-cluster.org/ [warewulf-cluster.org] for my Opteron cluster. It works with new kernels and with many different distros, and seems to be under good development.
  • OpenMosix LiveCD (Score:4, Interesting)

    by dargaud ( 518470 ) <[ten.duagradg] [ta] [2todhsals]> on Thursday July 27, 2006 @02:35AM (#15789278) Homepage
    First, here are some notes about the first cluster I built [gdargaud.net] 3 years ago; most of it is still relevant.

    This being said, for an instant trial, there are some OpenMosix LiveCDs, such as Quantian or other variants of Knoppix. Put the Quantian DVD in the first PC, boot, enable the remote boot option, and boot the other computers over the network. There: you have an operational cluster.

    I think there may also be Rocks LiveCDs, but I haven't tried them. And remember your electricity bill when playing with clusters!

  • Beowulf! (Score:2, Insightful)

    by sp1nm0nkey ( 869235 )
    What do you want to DO with it? To get one thing straight: Beowulf is a distribution, and bproc/Mosix/LAM/MPICH are ways of getting apps to communicate over a cluster. Which technology you use is going to depend on the app. If the app is written for MPICH, you have to use MPICH. If it's written for bproc, you need to use bproc. If you're writing it, look around at the various technologies and see which one you like the most. MPI is a smallish layer above sockets that allows you to explicitly pass messages fr
  • I tried it on a cluster with 22 nodes. After 2 years I threw it out. One node had one weak bit and it was impossible to find with OpenMosix, as the errors would just seemingly happen on all nodes.

    • I tried it on a cluster with 22 nodes. After 2 years I threw it out. One node had one weak bit and it was impossible to find with OpenMosix, as the errors would just seemingly happen on all nodes.

      I would say that if you're not smart enough to do either of the following:

      a) Run Memtest.

      b) Remove one node at a time and test.

      Then you don't need a cluster anyway. So it sounds like it turned out alright in the end.
  • For learning purposes, you can save a lot of headaches by setting up a VMware Linux setup and taking snapshots, so you can roll back when you do something that screws up the system (I had a guy try to set up an NFS server and end up formatting the /usr directory; how that happened, I don't know). It's also quite nice as a testing platform, since VMware Player is free. What that means is that if you have a room full of computers that never get used (as I do in my undergraduate students' computer lab), I can run the v
  • Warewulf (Score:1, Insightful)

    by Anonymous Coward

    So far, no one has mentioned Warewulf [warewulf-cluster.org].

    I have built three Warewulf clusters in the past year. I like how lightweight and customizable WW is. It consists of a bunch of scripts that netboot/Etherboot/PXE-boot a custom RAM disk as your root file system from a TFTP server (in my case, the head node). (The smallest RAM disk we have built is around 10MB. Everything else can be NFS-mounted, so each of the nodes has the capabilities of a standalone workstation.) From there you can configure it to do whatever you

  • by Wornstrom ( 920197 ) * on Thursday July 27, 2006 @01:04PM (#15792206)
    For my first diskless cluster, I used the Warewulf cluster solution [warewulf-cluster.org] to get one up and running. Then I wiped the master node's disk and built one with the OpenMosix kernel patch etc., and used the Linux Terminal Server Project [ltsp.org], which was really cool. The LTSP stuff made the node filesystem easy to build onto. I am waiting for the OpenMosix team to finish up work and release the userland tools for the 2.6 kernel for my next build. Here is a good how-to on LTSP+OpenMosix: http://ltsp.org/contrib/ltsp-om5r3c.html [ltsp.org]

    Once you get that going, you might look at PVFS2 [pvfs.org] Parallel Virtual File System. "PVFS2 stripes file data across multiple disks in different nodes in a cluster. By spreading out file data in this manner, larger files can be created, potential bandwidth is increased, and network bottlenecks are minimized."

    Good Luck!!
  • Do yourself a favour and contact the team at Bullet Proof Linux http://www.bplinux.com/ [bplinux.com].

    We needed a cluster for load balancing a typical web application, with OpenSSI http://www.openssi.org/ [openssi.org] being the chosen SSI system. Sadly, OpenSSI is far from a "working" solution, and needs quite a bit of massaging - especially if you have newer equipment.

    The guys at Bullet Proof Linux have been professional, helpful and incredibly patient in their efforts to get us going. Worth every cent.

    And they did it all
  • If you really want grid computing, use an out-of-the-box solution like Xgrid or a Sun-based solution. Why Linux? If not for some special OS-specific reason, Sun or OS X will provide better success in the cluster realm. If you want HA, look into everything fiber, with FC subsystems, Brocade, and heartbeat. Please, this is not flamebait; I'm just tired of the "Linux everywhere" mentality. Using the right tool for the job is more important...
