FreeBSD Cluster At Purdue

luddite writes: "Two guys at Purdue University have assembled a FreeBSD-based cluster built cheap - very cheap. With under $2500 spent on the cluster, it's one sweet set-up. Just shows that if you take the time and put some effort into something, money doesn't have to limit your resources! The site also goes into some detail about what the cluster is made of, where they found the parts, how it's been configured, and what they plan to use it for."
  • by Anonymous Coward
    How did you get dual motherboards for K6-2?
  • Guess you didn't notice that a majority of stuff was donated. Only the stuff they paid for was listed.

    :wq!

  • But it takes so long to install the operating system.

    I also like to run sendmail, apache, name server, nfs, samba, etc.

When I use any of this while I'm playing MP3s through its sound card, my stereo sounds awful.

You know, this 75MHz Pentium probably does more than my 650MHz Athlon.

Yes, I know why Bind 8 isn't included. Red Hat changed Bind to run under the non-privileged "named" user precisely because of this problem in 6.2.

    I guess I probably feel the same about BSD and SunOS 4.1.3 - classic operating systems, but these days a little behind the times.

  • >Nononononono. The Ethernet is used only on the fighters. Anyone can
    >clearly see that the fighter umbilical is FDDI. (And of course the
    >mother ship runs Windows -- don't you know that the reason Jeff
    >Goldblum was using a Mac (apart from the fact that they own him

Yeah. That's why they were able to infect the mothership so easily with the virus. They just uploaded a .VBS file created on the Mac to the mothership, and the rest, as they say, was history.
  • It looks like the boxes have served most of what they are good for: recognition.
    With ~16 single-proc 450s with 32MB of RAM... what sort of problems can you solve? Sure, sure, they offer parallel tasking. But 32MB? How many large tasks that need parallel tasking can be done on several single-proc boxes with that little memory... and with PVM?
    I have used PVM before on a similar setup of systems (ok... they were 300s and I only had like 4 of them) but I was able to sort numbers and compute digits of pi... but that was about it.

    Now... on the other hand, buying real systems... like 42 dual 600s with 1GB of RAM. Those can rip through problems... well... rendering. And... Beowulf isn't always the solution. Sometimes shared resources aren't good... can you imagine a network supporting a shared memory of over 500MB for a process?

    Beowulf has its place, PVM has its place... but in a lot of places, it is good for research. I hope that they were able to accomplish this.
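
    For the curious, the master/worker shape of a PVM 3 job looks roughly like this. A minimal, untested sketch: the "pvmsum" binary name, the worker count, and the stand-in work loop are all invented, not anyone's actual code.

        #include <stdio.h>
        #include "pvm3.h"

        #define NWORK    4   /* number of workers to spawn (made up) */
        #define TAG_WORK 1
        #define TAG_DONE 2

        int main(void)
        {
            pvm_mytid();                               /* enroll in PVM */
            if (pvm_parent() == PvmNoParent) {         /* we are the master */
                int tids[NWORK], i, part, total = 0;
                pvm_spawn("pvmsum", NULL, PvmTaskDefault, "", NWORK, tids);
                for (i = 0; i < NWORK; i++) {          /* hand out chunk ids */
                    pvm_initsend(PvmDataDefault);
                    pvm_pkint(&i, 1, 1);
                    pvm_send(tids[i], TAG_WORK);
                }
                for (i = 0; i < NWORK; i++) {          /* gather partial results */
                    pvm_recv(-1, TAG_DONE);
                    pvm_upkint(&part, 1, 1);
                    total += part;
                }
                printf("total = %d\n", total);
            } else {                                   /* we are a worker */
                int chunk, j, sum = 0;
                pvm_recv(pvm_parent(), TAG_WORK);
                pvm_upkint(&chunk, 1, 1);
                for (j = 0; j < 1000; j++)             /* stand-in for real work */
                    sum += chunk * 1000 + j;
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&sum, 1, 1);
                pvm_send(pvm_parent(), TAG_DONE);
            }
            pvm_exit();
            return 0;
        }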
  • I made an 8-node cluster of Linux boxes for $250. I spent more money on the liquid cooling system for it. I was drunk; there's not much to do at Purdue.
  • Actually they also have the 126-node Intel Paragon with the nifty little lighted display that shows the interprocess communication and memory links.
  • I'd love to see you do any hard work with 8 486s with an average of 3MB RAM, no video cards, and 40-80MB hard drives; try getting a decent install of any OS in under 40 minutes. It was a great setup to fool around with Beowulf, but it's all going to the garbage dump now.
  • What?

    You really mean to tell me you didn't see Independence Day?

Everybody knows they have Mac workstations with their built-in Ethernet... *duh*
  • Would it kill Nik to use the BSD icon for these stories? Isn't that what it's there for?

    Just something I've noticed which I find irritating. Sorry to be so anal...
  • The NeXT kernel (aka Mach+BSD) supported auto SMP and distributed processing way back when, AFAIK...

There's no reason these technologies shouldn't migrate to OS X as well at some point. There are many reports of full SMP being demo'd at Apple's WWDC under the OS X beta on Apple hardware. Distributed processing may come along as well, but it may or may not make the first release, as it's MUCH less sexy than SMP.

Apple's plan would probably center on "farming" your Mac network's extra cycles at night more than the kind of cluster featured in the story. But plain distributed machines - like the headless ones in the story - shouldn't need a full OS X install, unless they need CarbonLib, OGL or something. Maybe a lite Darwin-ish install would be possible for those machines - who needs display code without a display?

    Of course you're probably talking about an extra 32Megs of RAM to make up the difference so why worry about it?

    =tkk

  • "I'm sure a watered down version of Aqua could be created"

    LOL

    Reminds me of that guy who wanted to sell dehydrated water...

Sorry, couldn't keep it to myself... :)
  • I just wrote an essay on this very subject for a friend of mine who is trying to decide which non-MS operating system is best for her. The essay I wrote simply explains why FreeBSD turned out to be the right choice for me, since I can't say what would be right for someone else in a different situation. If you're interested, my essay is here [pyro.net].

    -Joe

  • Dude, prior to reading your post, I was a Democrat. I am now switching to Republican.
    Thanks! :)

  • You wouldn't want a cluster of clusters, but you could set up a unified queue system. This is what they do at NCSA in IL. You log onto the interactive machine, modi4, and submit jobs to the queue. It then sends them off to any one of a slew of other machines. These are all Origin 2000 systems with either 64 or 128 nodes each, and either a quarter or half a gig of RAM per node.

    It is quite a nice little system when you have enough users to keep it busy, and I imagine is quite easily scalable. The only trick is that if there are differences between the various machines you can have really annoying and hard to track down bugs. Fortunately consult is very responsive. Once they even contacted me about a bug in my code, because they saw the error in the system logs and thought it might be their fault!

  • Take a look at the picture---it's a FreeBSD cluster with a sound card. Kick ass! ;-)

    ----
  • The picture doesn't say all of those machines on the rack were for ACME. The picture also doesn't say that all of those boxes were up and running and not just sitting there empty.
  • We cheat. If you look closely at the picture, ACME is sitting on a raised floor. It lives in a little-used computer room left over from holding two dual VAXes and a couple of Goulds.

    As a result the room has a raised floor with forced air from below, Liebert cooling, power conditioning, better security, etc etc etc.
  • I think not.

There is no reason at all that you can't do similar things. Your argument is much like that of someone who says "oh, you've committed fraud because you didn't pay for that FREE operating system." FreeBSD and Linux happened because thousands of people gave their time (in huge amounts) to these projects. What would you have us do on the budget to account for the free OS that we were given?

Unless you've lived under a rock, *lots* of things get given to places (non-profit and for-profit alike). Educational institutions get zillions of dollars worth of things every year (a person from MIT especially should know this!). Just this past week, a group I consult for gave another group (us) $10k worth of equipment in a different division of the same company. It isn't _fraud_ that they can now claim they didn't have to spend the money to buy that stuff. Their budget wasn't decremented a dime.

    The ACME budget stands as it is as that is what it cost in dollars to build it. If you're going to pick at us, then pick at the time that we've got invested in getting it working. But since this is /., I'll guess that everyone presumes that their time is free.

  • 17 at the moment.

    We found cases last summer that all matched, so that's why we have all the cases. Since that picture was taken, the pile of computers on the far end has gone away and another rack replaced it.

    The end you see closest to you is Column 'E' so all the nodes in C, D and E are up plus the top two in column B. The bottom machines in D and E are the two connected to the outside world. The bottom one in D now really is in the new rack, but we've not got new pictures since that was just done last week.

We add roughly a motherboard a month. Hard disks are not a problem (I see four within reach right now that are all 880MB). CPUs we purchase. Memory we find here and there, mostly for free. The motherboards are the sticking point. PU Salvage gets probably 100 machines a month, and we try to get every one of them looked at before someone else buys them.
  • Disk drives smaller than 1GB often go into the dumpster, even out here in the Midwest. Most of ours came from junk yards or from piles of stuff that was headed for the junkyard.

    We have yet to have to pay for any disk space.

    Watch ACME's news page... in a couple of days we should have eclipsed our 'free' disk space with even cooler stuff.
  • So far, with over 70,000 hits since it started 9 hours ago, ACME.ecn.purdue.edu has done very, very well. Our guess is that FreeBSD 3.3, tuned as it currently is with plenty of Apache processes and enough memory to prevent swapping, could have dealt with at least 10 times the load we've seen so far with no changes.

    On the other hand, we'll also bet that the campus connection to the Internet probably couldn't have taken 10 times the load.

    So far so good. I figured we'd get maybe 10,000 hits over the entire life of ACME and we crossed that in the first hour of /.

    We've seen some security trouble, but nothing that reasonably well configured systems couldn't handle, though I'd be running a bit more security stuff if I'd known this was coming.

Being /.ed is an interesting thing. We'll write more about it after the hits are below 4 a second.

    David Moffett
  • The nifty Paragon was just decommissioned a couple of weeks ago. I'll guess they needed the space for the SP/2.

    Perhaps it will show up someplace where we can, ummm, 'acquire' it. :-)

  • You can solve lots of problems. It comes down to how the problem is described and then coded.

    ACME's first problem is CPU bound and not memory bound. If the nodes had 16mb it would be fine, since the code is small and the runtimes are long.

    On the other hand, we have friends at Purdue who are adding their second gb of ram to each compute node. It all depends on the problem and the resources available.

Please don't misunderstand: it's not that we *like* having nodes with only 32MB, but we don't have huge money to buy huge (and very, very cool) grown-up machines. If you've got a free (or near-free) source of 42 dual 600s with 1GB of RAM per CPU in rack-mounted cases, we've got racks that would be delighted to hold them instead of what they are holding. FreeBSD runs *real* well on duals.

    $3000 doesn't go very far in 'new' computing. In another project we've got going, $3k didn't even buy the disk drives for one machine.

Finally, to your swipe about 'recognition': I never had any intention of making any kind of BIG deal about ACME. I figured we'd get 10,000 hits over the entire life of the project. We got that in less than the first hour after /. posted about us. ACME is doing real work; that's why it was built. If you don't like that, well, that's your problem.

  • K6-2 because, at the time we selected it, the K6-3 was still in its first revision and the motherboards that we had couldn't meet the power requirements (3.x-volt current in particular) that the K6-3 needed. Also there was (and I'll guess still is) a pretty steep price premium on the K6-3. This was one of those topics we discussed for a while before we decided. Original K6-2s were just at the current limits of what the P55T2P4 Rev 3 boards could do.

Our rule of thumb is that each node is roughly a Pentium II/330. Most of the code we're running (at least at the moment) is in Fortran (80% - a 4800-line legacy) and C, and is mostly integer.

    I'm reminded of the expression, "when all you have is a hammer, everything looks like a nail." We have a hammer and some of the problems will certainly be screws, but at less than $3k we'll live with it. It isn't a perfect world.

  • The original post was a Troll, but in case people are interested...

    It's safe to say we can bring up a new node in well under 30 minutes. That includes flashing it to a new BIOS, setting up the NV RAM in the network cards and then loading FreeBSD (via the network of course).

    Sometimes it takes longer, but that is usually because we've got some piece of bad hardware (memory or disk usually) or we've screwed up along the way from raw tiredness. Most of the hardware work we do on this thing is done on Friday nights.

  • FreeBSD is an operating system, not just a kernel.

Of course FreeBSD allows for symbolic links. They've been around for a long time - 4.2BSD or so - and are fully documented in the Design and Implementation of 4.4BSD.

If you are _sure_ about symlinks not working on SunOS, then that was probably because they were disabled for a reason.
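
    If you want to convince yourself, the symlink(2)/readlink(2) pair is a two-minute test. A minimal sketch (the target path is arbitrary):

        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            char buf[256];
            ssize_t n;
            unlink("motd-link");               /* ignore failure; just a demo */
            if (symlink("/etc/motd", "motd-link") == -1) {
                perror("symlink");
                return 1;
            }
            n = readlink("motd-link", buf, sizeof(buf) - 1);
            if (n == -1) {
                perror("readlink");
                return 1;
            }
            buf[n] = '\0';
            printf("motd-link -> %s\n", buf);
            return 0;
        }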

  • Speaking as a Purdue alumnus, I thought it was to hire a few more vice presidents. ;)

    I would use my real name, but the Alumni Association would be after me for money.
  • I don't know a whole lot about this topic, but I wonder what the performance and usability issues are between PVM and Mosix, other than issues dealing with kernel mods.
  • Hmmm..

    $2,390 for a 4/5 diskless cluster of Celery 500/533 done right.

    Go with scrap cases, a crossover cable, used Boomerang cards in a hypercube, cheapo DFI mobos and a smaller SCSI drive and you can get out the door for $1,800. Go for offbrand memory and O/C Celery and you can push it down another $150.
  • At no cost? It's not like they're buying new RAM... there's a cost. Beyond the high failure potential, it was hardware which was just recycled... nothing more.
  • Yet another reason to eliminate anonymous coward postings.
  • Purdue Salvage is where all the stuff the University no longer wants goes. You can find all kinds of things there, from beds and desks to computer parts. It was a great place to buy old hardware!
  • I've got a similar cluster with a somewhat different focus - I need to be able to generate enough HTTP client traffic to saturate (well, nearly saturate) a single gigabit ethernet server. The clients in my system are eight eTower 266s that I got on clearance at buy.com for $229 each. These have 200MHz Cyrix M-II CPUs, and running FreeBSD that's enough horsepower to saturate a 100baseT ethernet, so eight can pretty much saturate the gig ether.

    However, if the client machines were running Linux, they would not be able to saturate, since Linux still has about half the networking performance of FreeBSD. That's the main reason I run FreeBSD and not Linux. Other reasons include Linux's tremendous supply of bugs that were fixed in BSD years ago, and the general obnoxiousness of the Linux and Gnu user community.

    If not for those things, sure, I'd run Linux.
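
    The per-client load generator doesn't have to be fancy, either. Something along these lines is enough to keep a link busy (a stripped-down, untested sketch; the server address is a placeholder, and a real harness would add timing and error counting):

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(void)
        {
            const char *req = "GET / HTTP/1.0\r\n\r\n";
            char buf[8192];

            for (;;) {                       /* hammer the server until killed */
                struct sockaddr_in sa;
                int s = socket(AF_INET, SOCK_STREAM, 0);
                if (s == -1)
                    continue;
                memset(&sa, 0, sizeof(sa));
                sa.sin_family = AF_INET;
                sa.sin_port = htons(80);
                sa.sin_addr.s_addr = inet_addr("10.0.0.1");  /* placeholder server */
                if (connect(s, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                    write(s, req, strlen(req));
                    while (read(s, buf, sizeof(buf)) > 0)
                        ;                    /* drain and discard the response */
                }
                close(s);
            }
        }

    Run one (or several) per client box and the kernel's TCP stack becomes the limiting factor.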

  • The current problem the cluster is solving is a very complex transportation engineering equation. My best understanding is that the computer knows the result of the problem and the general form of the equation, but does not know the equation's coefficients and exponents. It is just iterating the problem until it finds an answer. Solving the problem requires very little in the way of memory (and hence disk/swap space); just a lot of horsepower. I would guess the equation will fit entirely into the 64KB of L1 cache of a K6-2. The problem has been cranking for over a year on various personal computers. Once the equation is solved, it will be used for a structural engineering problem; most likely matrix algebra.
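    In other words, the search is shaped roughly like this brute-force fit. An illustrative, untested sketch: the model form, data points, and search ranges are made up, not taken from the actual ACME problem:

        #include <stdio.h>
        #include <math.h>

        /* hypothetical observations x -> y, assumed model y = a * x^b */
        static const double xs[] = { 1.0, 2.0, 3.0, 4.0 };
        static const double ys[] = { 2.0, 5.6, 10.4, 16.0 };
        #define N 4

        int main(void)
        {
            double a, b, best_a = 0.0, best_b = 0.0, best_err = 1e300;
            for (a = 0.1; a <= 10.0; a += 0.01) {
                for (b = 0.1; b <= 3.0; b += 0.01) {
                    double err = 0.0;
                    int i;
                    for (i = 0; i < N; i++) {
                        double d = ys[i] - a * pow(xs[i], b);
                        err += d * d;      /* sum of squared residuals */
                    }
                    if (err < best_err) {
                        best_err = err;
                        best_a = a;
                        best_b = b;
                    }
                }
            }
            printf("best fit: y = %.2f * x^%.2f (err %.4f)\n",
                   best_a, best_b, best_err);
            return 0;
        }

    Carving the (a, b) grid into per-node slices is what makes this embarrassingly parallel: each node searches its slice and reports its local best.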
  • Did the site say anywhere what the ultimate purpose of the cluster is? Is it just to test speed and cost-effectiveness? Also, it sounds like the machines are lacking in some hardware departments. Do they have high-end graphics cards? What kind of drives are they currently sporting? My research group is also based at Purdue, and it sounds like the cluster we have set up is superior. We currently have ten nodes (one dual), each running Red Hat. The processors are AMDs, six K6-2s and three Athlons, at 550MHz and 600-700MHz respectively. All have 256MB of RAM and 44x CD drives. We also have a 100Mbit intranet. The current setup ran 10 grand, with monitor and monitor switches. I doubt the ACME cluster could run the simulations we run. Our simulations have a run-time of 12 hours. I wonder what it would be on theirs...

    Okay, now I'm done with the "Mine is better than yours" rant.

    Ciao

    nahtanoj

  • First, thanks to thundrcast for answering my question, and I apologize for the boasting I did. Different problems require different set-ups, and from thundrcast's reply it sounds as if what they have is adequate for their research. Our own is analyzing atmospheric Cherenkov radiation, both electron and proton showers. And now for all the other questions. We have 44x CD-ROMs because the supplier threw them in for free. ;) While they are handy for loading and re-analyzing data, the only really useful bay drive is the DVD-RW. The dual machine is not a K6-2, but two of the Athlons. And yes, we could rip a lot of Britney Spears, but here we would all rather rip Christina Aguilera. ;)

    Ciao

    nahtanoj

  • >Guess you didn't notice that a majority of stuff was donated. Only the stuff they paid for was listed.

    But that's my point! Saying you can build X for under some amount of money when a lot of X is donated is fraudulent.
  • As a couple others have pointed out, what's with their budget?...it seems totally misleading to me.

    First, there's a BUNCH of things missing, like motherboards, memory, etc.

    Second, the numbers don't make any sense, 40 network cards for a cluster of 16 machines? But wait, only 15 CPUs??

Third, many of the prices they got were sheer luck (and perhaps a little bit of work, which I would applaud them on)... I mean, come on, they managed to get network cards from their salvage dept. (what's that, BTW?) for ONE DOLLAR apiece and tape drives for $2.50 apiece (their keyboard adapter cost $2.32, only 18 cents less than a tape drive). Those prices are not good examples of real prices others would be able to consistently find -- or most likely find at all!

All in all, almost none of their budget makes any logical sense, and it all just strikes me as stretched truth + luck = a cheap cluster. I mean, come on, should I have my friend buy a $5000 computer, sell it to me for one cent, and then post to the world on my webpage that I bought the cheapest Pentium III-whatever around?

  • Ya, I noticed that too. I also didn't notice any hard drives on the list. Do all these babies boot off CD-ROMs? I wouldn't think so, because I don't remember any of those on the list either.

    Damm dis shit be scandalous.

    -Jon
  • The interest in this is that it was done so cheaply.

    The cluephone is ringing...
  • Why in the hell did they buy 107 network cards for a 16 node cluster?

    Look at the budget! They bought 107 network cards! Now that's a lot of bandwidth.

    --
  • From the cluster's own pages:

    "As a result, the design of the cluster was done such that the minimum amount of traffic would appear on the network that connects all the nodes to the main systems, as that clearly is the critical resource in this cluster. The compute nodes each had three network cards placed in them and were organized into a matrix of 5x5. The network cards then were used for rows, columns and everyone. The machines are named: A1, A2... E4, E5. The letter being the column and the number being the row. Because of the way the mechanical part of ACME was constructed, the machines are being built by column starting with E and working backwards toward A."

    Dope! Answered my own question. Should have read ALL the pages first.
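
    The nice thing about the row/column naming is that a node's three networks fall straight out of its name. A toy sketch (the subnet numbering here is invented for illustration, not ACME's actual addressing):

        #include <stdio.h>

        int main(void)
        {
            const char *name = "C3";   /* column 'A'-'E', row '1'-'5' */
            int col = name[0] - 'A';   /* 0..4 */
            int row = name[1] - '1';   /* 0..4 */
            printf("%s: row net 10.1.%d.0, column net 10.2.%d.0, everyone net 10.3.0.0\n",
                   name, row, col);
            return 0;
        }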

    --
  • Actually, the FreeBSD installation does support CD-ROM drives, ever since 2.0.5 or so (that is the earliest version I have used). The CD-ROM support in the low 2.x kernels was sometimes temperamental but nonetheless functional.
  • >The BSD kernel was originally developed for the 8086 instruction set on the old IBM PCs. Larry Ellison once said that upgrading the kernel to anything much better than a 386 was (as I remember) "akin to putting a 4 cylinder engine into a corvette, sure it'll get you where you want but you won't get there very fast."

    With all due respect, this is utter hogwash. Both the Ellison quote and the 8086 reference apply to DOS/Windows (Win 98 still contains quite some 8086 code).

BSD originated from a set of modifications to various versions of the 'original' AT&T Unix, which mostly (and especially at Berkeley) ran on DEC PDP-11s and VAXen.

    f.
  • I must say this is the silliest thing I've read today. Being a FreeBSD enthusiast myself, I still don't agree with what you say, simply because it is not true.

Every enlightened person knows that Linux is optimized for doing one thing at a time. Therefore, Linux will easily saturate a 100Mb/s connection. Unless, of course, it does something else at the same time.

    -T

  • You went about installing FreeBSD the long and hard way! All it takes is two floppies with images put onto them (kern.flp and mfsroot.flp). The floppies will boot the kernel, and then you have the option of installing off CD, FTP, or the network.
    RTFM:
    FreeBSD Handbook Installation Guide [freebsd.org] and the FreeBSD Newbie install [freebsd.org] for screenshots with play-by-play instructions (screenshots are for 2.2.5, but they look the same for 4.0).
  • Could this mean, at least theoretically, we could see an open source base for clustered OS-X?

The idea could be awesome if it happened. Designers using OS 9 are screaming for multiprocessing.

    If some OSS team came up and gave them "BeoMac" or whatever, Apple had better watch out, or pull its hardware-rollout socks up.

  • Erm, if you read my subject line "-with added Mach and Aqua", I figured (at least I thought this was clear) that any microkernel integration for clustering, with a GUI on top, would be the icing on the cake.

The OS X kernel is quite extensible and pretty well documented. Since this FreeBSD cluster relies on network cards between nodes (prolly not great for memory latency on large datasets, tho') I expect you could make some use of reading the Kernel Network Extensions [apple.com] reference for a start.

But what I was thinking about was to replicate Darwin across a cheap cluster and modify the Standard Apple Numerics Environment, or its OS X equivalent, to pass real heavy-duty FP to the cluster. Or at least do something similar in a set of system calls / an API in Cocoa or another environment.

    I'm sorry if my post was a quickie, but I don't want to open-source all of OS X, just maybe open up the hardware hegemony which Apple has at the moment. I mean, can you imagine if OS X became a serious compute platform / renderfarm candidate? Surely even Jobs would drool :)
  • Is it just me, or does anyone else see 22 computers in that rack?

    They probably just don't quite have enough for a 5x5 (25), and so are dropping down to a 4x4 until they get a couple more machines going.

    -k
  • What about a Beowulf cluster of these? Oh, wait ....

    But seriously. Anybody got any info on multilayer clustering technology?
  • Maybe the rest are going to Area 51? Sorta like those $50 government hammers. BTW, the added costs are usually the product of the complex bureaucracy, not some government plot. If they were plotting, they could do a much better job... A lot has to do with pork-barrel politics and paying everybody down the line who touches that hammer... and some people who don't, in order to make sure that we don't discriminate against them (i.e. the wealthy).
  • And just what use would you get from clustered Aqua? That's got nothing to do with the issue.

    In some sense, this could be done pretty easily.

    1) Port PVM and MPI to Darwin (if not already done).
    2) Use bitchin' G4 with OSX as your head system.
    3) Send your tasks to the cluster for computation.

    Now, there are a few problems, like the fact that PVM and MPI aren't transparent, but require specialized programming in each app, and that you can't get PowerPC nodes as cheaply as recycled x86s, but these are just details, right? Right?

    Rock on, Mac Beowulf Darwinthing!
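
    For what it's worth, step 1's smoke test is tiny. This is the canonical MPI starter program - plain C against the standard MPI API, nothing Darwin-specific (an untested sketch, assuming an MPI port exists):

        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which node am I? */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many of us? */
            printf("node %d of %d reporting\n", rank, size);
            MPI_Finalize();
            return 0;
        }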
  • It's not a Beowulf cluster; it uses PVM. PVM != Beowulf.
  • You know, I'm tempted to go out and spend a few thousand on a BSD Beowulf cluster. But I'll be damned if I can think of a single productive thing to do with it.
  • Ok, it's cheap, but K6-2s are really *bad* at floating point (except when using 3DNow!, but gcc/pgcc does not generate those instructions). My 300MHz K6-2 is equivalent to a Pentium MMX 166 when doing floating-point computations without 3DNow!.
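    Easy enough to check on your own boxes with a crude multiply-add loop like this (an illustrative sketch only; a real comparison would use something like LINPACK):

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            volatile float x = 1.0001f, acc = 0.0f;
            long i, n = 50000000L;   /* 50M iterations, 2 FLOPs each */
            double secs;
            clock_t t0 = clock();
            for (i = 0; i < n; i++)
                acc += x * x;        /* one multiply, one add */
            secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("%.2fs, ~%.1f MFLOPS\n", secs, 2.0 * n / secs / 1e6);
            return 0;
        }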
  • Notice that Luddite's links are to the very system he's describing. This system seems to simply shrug off the /. Effect. If I were going to start up a Web Server Farm, I would definitely talk to these guys!
  • HAHAHA... this is a kinda funny story! I honestly wouldn't have bothered, or I would have at least compiled a custom boot/installation kernel on another BSD box that supported the CD-ROM so that I could have used the CD. ... Would you have shit a brick if CD #417 was bad or something, and you had to start over?

Someday, we'll be installing OSes with stacks of CD-ROMs... that'll be when I need to install FreeBSD on this machine:

    500 GHz Quad Intel Octium (80x886) Processors
    30 Terabyte HD
    256 GB RAM
    1.44MB floppy

    --Cr@ckwhore

  • Calling BSD a Conservative Republican incarnation is like saying that RFK was a centrist. It's ludicrous.

BSD came out of the most liberal atmosphere one could think of. Anybody remember those yellow t-shirts with the daemon dressed up as a flower child, and the slogan "Peace, Love and Rdist" across the bottom?

    The truth of the matter is that the BSD development psychology may seem outwardly conservative when compared to Linux, but is really democratic, and quite liberal. There is simply more control over the final product than there is in Linux.

    Linux is just weird. I cannot understand it. It seems to be a mix of part anarchy and part autocracy.
  • I'm not arguing with the PC architecture concept, the choice of operating system, or anything like that - but in the real world, companies will be forced (by cost) to go for 3 x 4-processor Enterprise servers rather than 16 FreeBSD boxes (the comparison used on their web page).

    Why?

    Ever seen how much machine room space costs?

It'd be interesting to see a price comparison where individual nodes were as powerful as individual Sun units (though add on the cost of FreeBSD OS support, since you do get that from Sun).

    james
  • Erm. Could you be more specific? "FreeBSD is superior to Linux in every way" just sounds like plain ol' FUD to me.

    And stop pretending our camps are at odds; both camps share. FreeBSD & Linux are both kernels; both kick ass.

    BTW, does FreeBSD allow for symlinks? I remember my days of using SunOS (pre-Solaris days) and symlinking was verboten then.
  • So the idea of posting this story was to see whether the cluster could handle the /. effect?
  • There's a more recent, less in-depth comparison of FreeBSD vs. Linux vs. Windows NT here [cdrom.com]. Why Yahoo! Uses FreeBSD [geocities.com] is also interesting.

  • Our Athlon-based KLAT2 [aggregate.org] Beowulf cluster at the University of Kentucky achieved over 64 GFLOPS [aggregate.org] on LINPACK for only $41K, using 3DNow! instructions. The FreeBSD Cluster at Purdue doesn't even mention ANY performance benchmarks. I'm a Purdue alum, so I think it's great that they are getting Slashdot coverage for an inexpensive cluster. However, when we submitted KLAT2's (Kentucky Linux Athlon Testbed 2) story to Slashdot last month, which in many respects is much more "news for nerds", it got passed over. Ah well, that's the way of Slashdot.
    --
  • The technology used in KLAT2 [aggregate.org] scales up and down in size. The Flat Neighborhood Network [aggregate.org] architecture can be scaled down to use several eight-port 100Mb/s Ethernet switches (about $80 each) to make a very formidable network for a small cluster on the cheap. Check out our new CGI [aggregate.org] for designing your own FNN. However, for their full-up cluster with 27 nodes, it is more practical to use 16/24-port switches...

As for using the 3DNow! stuff, their K6-2s can have some real punch if they are willing to code for it... Check out our SWAR (SIMD Within A Register) [aggregate.org] compiler technology for doing just that. Actually, the Ph.D. student doing most of the work on SWAR is AT Purdue.
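
    The basic SWAR idea fits in a few lines of plain C: do several narrow adds inside one ordinary register while masking off the carries so they can't cross lane boundaries. A minimal hand-rolled sketch (my own example, assuming 32-bit unsigned int; not output from the aggregate.org compiler):

        #include <stdio.h>

        /* add four packed 8-bit lanes in one 32-bit word, no cross-lane carry */
        static unsigned int add8x4(unsigned int x, unsigned int y)
        {
            unsigned int low = (x & 0x7f7f7f7fU) + (y & 0x7f7f7f7fU); /* low 7 bits */
            return low ^ ((x ^ y) & 0x80808080U);                     /* fix top bits */
        }

        int main(void)
        {
            printf("%08x\n", add8x4(0x01020304U, 0x10203040U)); /* prints 11223344 */
            return 0;
        }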
    --

  • We are releasing the technology into the Public Domain as soon as we can (i.e. when the hacked code has at least some clarity/documentation). So, yes, you can apply the concepts from KLAT2 [aggregate.org] to most any size cluster. See my other comment [slashdot.org] for a little more info.
    --
  • by Anonymous Coward
Every single motherboard is an old ASUS P55T2P4 modified to accept a K6-2 400. Most boards came out of machines being upgraded and therefore had zero cost. Some were also purchased at swap meets for little cost. The memory, likewise, was left over from upgrades and available at no cost.
  • Or why not Linux? A couple of reasons each way:

    • There is still a lot of hardware that BSD doesn't support. In December, I had to migrate my BSD email server to a Compaq system with the Compaq SMART RAID controller. BSD didn't support the controller at the time, so the decision was a no-brainer. The Linux "hype" does get real vendor participation, which translates into a larger suite of higher-quality device drivers.
    • There is the issue of how you feel on the whole GPL versus BSD license thing.
    • OpenBSD is incredibly slow in implementing the latest standards. They are still on Bind 4 (and yes, I know there was a root vulnerability, but everybody else is using Bind 8).

    The BSD people are great, and Linux owes a lot to them. BSD continues to make great contributions to the world of Linux. It would be the best of all possible worlds if each had the same capabilities. But, because of the hype factor and the real development it brings, BSD has no hope of being as flexible as Linux in the near future.

    I guess it is a question of what you grew into, the level of risk you are willing to tolerate, and the hardware that you need to support. The decision of BSD or Linux starts there.

  • These guys are buying AMD K6-2 3D 450 processors, which they say work in a variety of motherboards. Do these work in non-MMX (single voltage) motherboards?

    I'm using an old Gateway P75 as a masquerade box for my cable modem. It would sure be nice to upgrade it to 450 on the cheap.

    I am looking for the best way to squeeze a little more life out of this box.

  • Actually, Beowulf clusters commonly use either MPI or PVM. Neither is required to classify a cluster as a Beowulf. The first demonstrated Beowulfs were by NASA and ran on the PVM libraries. Refer to the Beowulf intro at http://www.beowulf.org/intro.html
  • OpenBSD is slow to include new features/software on purpose. Some OpenBSD developers have remarked that if you want the latest feature "X", then OpenBSD may not be for you. They like to audit and examine things and make sure everything works and is secure before including it. Some people may not like it, but that's their basic design philosophy.

But it's really not a big deal (in my opinion). If you want Bind 8 on your OpenBSD box, you can just install it.

    My Slashdot Observation...
What's really funny, though, is how the same general questions show up on every BSD-related article on Slashdot. "What's the difference between Linux and BSD, which is better...." Someone should just make a Slashdot FAQ for this and be done with it.

  • let me see. free/netbsd runs on 68k machines.

    macbsd (netbsd/mac68k) runs on my LCII.

    LCIIs can be had at the local surplus auction for $5 apiece. Most of these are formerly lab machines, and have ethernet already.

i think a cluster of 25 of these low-profile 16MHz monsters could fit on a desk, _maybe_ put out the MIPS of a PII (and only about twice the heat :) and all for about the price (including cabling & hubs) of a cheap celeron machine.

    hmm.. for $10 i can get powermac 6100s by the truckload, and freebsd/ppc...

  • So THAT'S where all the money we pay for Ethernet in the dorms goes...
  • There is a date in there from 1997. Sounds like a lot of that stuff is old and outdated. Linux has come a long way since 1997.
  • Just to clarify, FreeBSD does not currently support 68k. NetBSD and OpenBSD do, though.

  • And what did you do with your $250 cluster? You could have joined SETI or gone after a few prime numbers or any of a zillion other cool things.

The point of ACME is to solve a few very hard (yes, NP-hard) problems. We don't particularly care about the form of ACME in the end, as long as we can solve the problems at hand.

    One of my gripes about the typical /.er is that they are in love with the technology and not with *doing* things with the technology. ACME is about doing, not screwing around. Our intention is to produce papers that use the results from ACME, not papers about ACME. Being /.ed was a pleasant surprise.
  • 44x CD-ROM drives in a cluster system? What are you smoking? Plus, why do cluster systems need high-end graphics cards? I don't know of any GL implementations that can be clustered. (Get a WildCat 4200 for the main machine and be done with it.)
  • It's cool that they were able to make a cluster so cheaply, but what advantages does this have over a Beowulf cluster of Linux boxes? Also, in what areas is FreeBSD superior to Linux outside of clustering?

  • Hang on there, dude.

    Read more of the article first...
    From the news page:
    6/2/00 - PUCC and the University announce the purchase of a $10,000,000 IBM SP/2 (272 processors) for general purpose scientific computing. It will complement (nicely) the smaller (?32? node) SP/2 they've already got. This comes after a long period where PUCC had little to offer in BIG computing. "Hey PUCC! Wanna race? Our MIPS/$ versus yours?" We look forward to running some jobs over there. It should be in production before the start of the school year in August 2000.
    I think that the tide still says that the power is in the almighty dollar$$$$

    --
  • There's a problem with that...
Apple uses proprietary licensed stuff in many of the features of Aqua, so it couldn't be open-sourced. I'm sure a watered-down version of Aqua could be created, but lacking the PDF windows and OpenGL programming, it'd be no more than the Aqua theme which I use with GNOME.

    --
  • Page says this is what they've paid so far. From the looks of that list, they've got a ways to go.

Interesting start, but they've still got memory, motherboards, and some other stuff to go. That's going to crank the price tag up by at least a couple thousand dollars...

  • Look through the budget, and there's no entry for motherboards. That is, unless you can get an AMD K6-2 450 + mobo for 64 dollars.
  • From what I can tell the price (~$2500) doesn't include the motherboards or memory which they obtained from various sources. This is probably one of the most significant outlays they had to make, next to the processors.

    It is still really a great price, and I can't believe what they paid for the racks.

    -k
  • Ok, look at the budget they laid out. Yes, a university with a significant excess of computers can do this cheaply. That's the argument that many people are using to try to get MY university to build one. LOOK AT WHAT THEY ARE PAYING FOR EQUIPMENT. If I got CASES at $1 apiece, NICs for the same price, and RACKS for $10, then I could build just about whatever you want for under $3 grand myself...
  • Nononononono. The Ethernet is used only on the fighters. Anyone can clearly see that the fighter umbilical is FDDI. (And of course the mother ship runs Windows -- don't you know that the reason Jeff Goldblum was using a Mac (apart from the fact that they own him anyway) was that the aliens couldn't backcrack him?) Go ahead, moderate me down; my karma is a bit swollen anyway, and this bit wasn't half as funny as I'd hoped... /Brian
  • ACME currently consists of sixteen Pentium-class computers, [...]

Is it just me, or does anyone else see 22 computers in that rack?

    --

  • Boy.. I bet that thing gets hot..

    ACME currently consists of sixteen Pentium class computers, each with 450 MHz AMD K6/2 processors and at least 32 MB of memory.

I have two 450MHz AMD boxes in a small room, and they sure pump the heat up in there...
    -
  • You see 22 cases. 16 in the cluster, the two controlling machines, and EMPTY cases. Misleading, I know. They'll be filling them in later. The top 5 machines in the nearest 3 columns, and the top one in the 4th column, are all currently in the cluster. The bottom machines in the two nearest columns are the controlling machines. The bottom four in the 4th column are not connected to the cluster yet. They also plan on adding another rack with 5 more machines for a total of 25 machines. (The cases can be seen sitting in a stack at the left side of the racks)
  • by BigD42 ( 2965 ) on Tuesday June 06, 2000 @11:46AM (#1021405)
Despite the cynicism of the subject, it's nice to see a Beowulf cluster on Slashdot which doesn't use Linux. It helps remind people that Beowulf != cluster of Linux boxes. After all, the definition of a Beowulf is:
    It's a kind of high-performance massively parallel computer built primarily out of commodity hardware components, running a free-software operating system like Linux or FreeBSD, interconnected by a private high-speed network.

    http://www.dnaco.net/~kragen/beowulf-faq.txt

  • by bko ( 73379 ) on Tuesday June 06, 2000 @12:55PM (#1021406) Journal
    First, realize that the machines they are talking about are Pentium (not PII/III) class machines. Thus, they require SIMMs, not DIMM memory. This is why they're using K6/2-500s and the like--they run in Socket 7 mainboards (albeit only at 66MHz memory bus). Memory like this is available quite cheaply.

Second, good-quality motherboards are basically there for the salvage -- on the news page they mention an ASUS P55T2P4, which is, I believe, a 430HX board. But there's a big integer-compute difference when it's outfitted with a 500MHz K6/2 vs. the (likely) P133 that used to be sitting there.

Thirdly, they mention that the machines are outfitted with at least 32MB of memory. This is not 128MB. You don't need 128MB to do a lot of tasks on either *BSD or Linux -- as long as you're doing things that have a <32MB resident set, you're going to be fine on either. FreeBSD is particularly good in low-memory situations (its swap performance is better than Linux's in my experience), but I'm pretty sure that this isn't important, because they are looking for big integer performance first and foremost. Otherwise, they'd probably be better served with fewer, faster nodes with K7s or PII/IIIs.

    Note that the machines have local HDs, so they can do local swap -- they don't need to keep shells, etc. swapped in over a network drive, either.

So this sounds good, for the right task. There are obviously a bunch of tasks that would be better served by other styles of clusters, or other resource allocations, but for doing fast integer calculation on the super-cheap, this is a great way to go.

  • by QBasic_Dude ( 196998 ) on Tuesday June 06, 2000 @11:49AM (#1021407) Homepage
Some excerpts from the FreeBSD vs. Linux page at http://www.futuresouth.com/~fullermd/freebsd/bsdvlin.html:

    Subject: Re: Why FreeBSD?

    Any response to a question like this is bound to upset someone. I'll
    answer with the caveat that this is my opinion that developed over the
    past three years following them both as well as other commercial OSs.
    Those of you offended in any way by this, please cat flames > /dev/null.

    That said -- the differences between FreeBSD and Linux can best be
    understood in the context of American politics. There are essentially two
    philosophies: Republican (FreeBSD) and Democrat (Linux).

    The FreeBSD organization is a republican structure -- we have our say as
    users, but the final decisions devolve to the core team who take the final
    responsibility for their decisions. FreeBSD takes a conservative approach.
    In other words, better that things work correctly at the expense of a
    minority's desires than to please all of the people all of the time and
    have unexpected components of the OS breaking on a regular basis. We are
    free to vote our approval or disapproval by changing our OS.

    Linux is a democratic group. There is no single authority to accept final
    responsibility except for Linus as it relates to the kernel. Linux adopted
    early on a consensus approach (POSIX, etc.). In a sense, Linux is much
    like current Democratic politics -- the mob pretty much rules. The end
    result is that there is really no such thing as Linux -- there are
    distributions that use the Linux kernel and from then on you have
    essentially different operating systems. Slackware, for example, doesn't
    look at all like Red Hat. Describing Linux is much like describing Mach.
    (There isn't much - both are just microkernels. _Anything_ can be
    implemented over them.)

    So as I see it, it comes down to this: vote for the philosophy that
    appeals to you. I use FreeBSD because I rely on my machine for many other
    uses besides tinkering with operating systems. FreeBSD doesn't change the
    world on me every 6 months. Linux is in constant change. New things are
    showing up all the time. If you like tinkering with operating systems and
    having things that used to work break, Linux may be your answer. If you
    don't know Unix -- pick one and get started. You'll learn how to pick the
    best choice. No matter which one you pick, it will be infinitely better
    than Micros**t anything.

    Enjoy.

    -- Jay

    ----------
    Subject: Re: Why FreeBSD?

    And the clouds parted on 21 Mar 97, and Jeff Roberts
    said:

    >On Fri, 21 Mar 1997, Bob Dole wrote:
    >
    >> Hi, I plan on changing to UNIX and I wonder whether I should take Linux or
    >> FreeBSD...
    >> Both seem to be an excellent choice, so you can't say one is better than
    >> the other. But in what are they different, in what is each specialized?
    >

    Then try them both: they're both "free", but you'll have to pay something
    for your Internet connection or CD-ROM distribution, depending on your
    circumstances. The following is not impartial, as I don't play with Linux
    much, but when I did I wasn't as happy as I am now 8).

    [opinions on]

    Linux is SysV-flavored (barely); FreeBSD is BSD-flavored (definitely).

    Linux's kernel is authored by one person (Linus Torvalds); FreeBSD is
    authored by (essentially) the core team.

    Linux addons come from pretty much everywhere; FreeBSD's get submitted from
    a lot of places also, but have to pass review to be included as part of the
    release.

    Linux has multiple releases (based on who's packaging), all somewhat
    different from each other, and somewhat inoperative as well. There's only
    one release of FreeBSD (per major version).

    Linux tends to be more cutting-edge and trendy, and tends to work with more
    hardware (to some degree), partly because of the "arrangements" made with
    vendors. FreeBSD requires that source code be freely obtainable for
    (nearly?) all its parts, which scares some vendors into not cooperating,
    or at least not as well. The hardware that _is_ supported tends to be done
    pretty robustly.

    Linux is snappier for low-user-count systems, depending on what you're
    trying to do. FreeBSD tends to shine under real load (like WWW/FTP
    servers), and I don't really know if any major sites base such Internet
    services on Linux; quite a few seem to be using FreeBSD, particularly
    Walnut Creek CDROM, which carries quite a load on a consistent basis.

    There are far more books on Linux than FreeBSD per se, something I draw no
    conclusions on.

    The support on the Linux list(s) is something I haven't any personal
    experience with; the support on the FreeBSD lists is exemplary.

    [opinions off]

    Please correct any sins of commission and omission you find above; I don't
    do this often enough to be any good at it.

    your mileage may vary, and best wishes,
    larry

    --------
    Subject: linux vs freebsd testimonial

    A few weeks ago, my single Linux box fried. I replaced both the hard
    drive (with an identical one) and Linux with FreeBSD 2.1.5.

    The machine runs majordomo, ftp, apache, and an irc server.

    The performance is way up there! Under Linux it would frequently slow
    down to a crawl. Under FreeBSD it just keeps zipping along.

    There is a very definite, noticeable difference in response and load
    handling.
    -------
    Subject: Re: linux vs freebsd testimonial

    Since we've gotten along fairly well in our migration from Linux, I
    thought I'd share my experiences as well...

    We currently run on FBSD:

    1 Shell/user www (was NetBSD 1.0)
    1 DNS/mail/dialup auth/syslog (these two are sharing mail spool over NFS)
    1 utility/backup/freebies
    1 DNS/mail for separate, wacky project
    2 virt www servers (3 are still Linux)
    2 co-locate www servers (www.firstview.com 3.6G/day, www.villagevoice.com)

    These have been the most trouble-free machines we've worked with. Some of
    the recent security problems were a bit tough (lots of cvsup-ing), but
    nothing compared to the nasty Slackware Linux Bug-o-the-month. The only
    reboots *any* of these machines have seen were intentional, which is
    something I just can't say about Linux. Performance is much better, and
    the "out of box" configuration is a lot more sensible than Slackware. A
    few of these machines really get beat on hard, and they just ask for more.

    We have to keep one Linux web server for compatibility with some odd
    sourceless C CGIs, but the other two will be history soon. Our news
    server is running Linux, but it's being replaced with a machine to be
    named "fridge" which will have 3 SCSI busses and 15 drives, and of course
    be running FBSD.

    I must say, this has made my job much easier. Linux is just too
    unpredictable when you don't have the time to play the
    "kernel-of-the-week" game. One of the Linux boxes still does the routine
    of freezing with no log entries or other hints, which is extremely
    frustrating. FBSD just seems like it was meant to be in a production
    environment...

    Thanks to all involved,
    -------
