Hardware

Linux Desktop Clustering - Pick Your Pricerange 199

crashlight writes: "A Linux cluster on the desktop--Rocket Calc just announced their 8-processor "personal" cluster in a mid-tower-sized box. Starting at $4500, you get 8 Celeron 800MHz processors, each with 256MB RAM and a 100Mbps ethernet connection. The box also has an integrated 100Mbps switch. Plus it's sexy." Perhaps less sexy, but for a lot less money, you can also run a cluster of Linux (virtual) machines on your desktop on middle-of-the-road hardware. See this followup on Grant Gross's recent piece on Virtual Machines over at Newsforge.
This discussion has been archived. No new comments can be posted.


  • but sounds interesting nonetheless.
  • I heard that the new thing will be putting a hundred procs on a board instead of designing a better architecture for the processor itself. This is the new Intel M.O. Everyone hop on board.
  • Why would you want 8 virtual machines when you can have 8 physical machines? Isn't the whole point faster processing in a cluster?
    • I don't know much about building a cluster, but I thought one of the limiting factors of clusters is network latency. If they have it all on the same motherboard, or designed their own bridge to connect multiple CPU/MB, it would be faster right?

      By the fact that they are all encased in the same box, rather than connected via a switch, it has less distance to travel. I don't know that 5 feet of CAT5 could make that big of a difference. On the other hand, they could have designed a different way of bridging the systems that dramatically reduces latency. In either case, it is intriguing.

  • Virtual machines??? (Score:5, Insightful)

    by TheAwfulTruth ( 325623 ) on Tuesday January 22, 2002 @02:33PM (#2883487) Homepage
    The purpose of running clusters is to increase processing power and/or fail-safety. How is running 8 virtual machines in any way a "less sexy" version of an 8 CPU cluster?
    • Ah, but you are missing the true beauty of this. Just wait until I setup my beowolf cluster of these boxes, and you will realize the fail-safety and processing power increases.

      Yes, I just said BEWOLF CLUSTER of these boxes.
    • It lets you point at your P100 laptop and say: that's my cluster, I do some SETI on it :)
    • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Tuesday January 22, 2002 @02:43PM (#2883552) Homepage Journal
      Maybe I want to develop software for a Beowulf cluster, but either I don't have a cluster of my own, or I just want to write the software, not run it in production. Either way, a set of 8 virtual processors would be good enough for the job.
    • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Tuesday January 22, 2002 @02:48PM (#2883582) Homepage Journal
      I just thought of something else. I have never used a Beowulf cluster, so maybe I'm completely wrong, but virtual machines could make a Beowulf more easily upgradeable. The idea is that you'd make a cluster with a whole bunch of virtual machines, say 1024. The cluster is fixed at that size for all the software that runs. But in reality, you've got 32 processors actually running. When you upgrade the cluster to 64, you don't need to reconfigure any of the software that runs on the cluster, because they all assume that you've got 1024 processors. But, you get a performance increase because there's now more physical processors. As I said before, I don't know much about clusters. I imagine that somebody who really does know will quickly either confirm what I said or reduce my idea to a pile of stinking rubble.
      • by cweber ( 34166 )
        You're mostly off the mark, I'm afraid. Most software that uses a cluster runs through MPI or simply through scripts. Both mechanisms allow for easy adjustment in the number of nodes/CPUs you use.

        Many large compute problems are embarrassingly parallel, i.e. the same calculation needs to be repeated with slightly different input parameters. There's basically no interprocess communication, just a little forethought about file-naming conventions, total disk and memory usage, etc.
        Execution of such tasks reduces essentially to a simple loop:
        foreach parameter_set
            rsh nodeN myprog outfileM
        end

        For those programs that actually run a single instance of the code on several CPUs, you have to be acutely aware of how many nodes you use. Your code has its own limits on how well it scales to multiple CPUs, and your cluster imposes limits on how well (in terms of latency and bandwidth) nodes can communicate. Very few codes in this world scale well beyond 64 CPUs, especially not on run-of-the-mill clusters with plain ethernet interconnects. Fortunately, it is trivial to readjust the number of nodes used for each invocation of the code.

        Lastly, virtual nodes cannot easily simulate the behavior of real nodes. Again, it's the interconnect latency and bandwidth. When it comes to supercomputing, only trust what you have run and measured on a real life setup with your own code and input data.
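The point above about readjusting the number of nodes per invocation is easy to see in miniature. A sketch in Python, using `multiprocessing` as a stand-in for MPI ranks; the comment names no toolkit, and the `simulate` task and worker counts are invented for illustration:

```python
# Toy version of "readjust the number of nodes per invocation":
# code written against "however many workers there are" needs no
# changes when the cluster grows. multiprocessing stands in for MPI.
from multiprocessing import Pool

def simulate(params):
    # stand-in for one embarrassingly parallel work unit
    a, b = params
    return a * b

def run(parameter_sets, n_workers):
    # n_workers plays the role of the node count; nothing else changes
    with Pool(processes=n_workers) as pool:
        return pool.map(simulate, parameter_sets)

if __name__ == "__main__":
    jobs = [(i, i + 1) for i in range(16)]
    # same answers whether the "cluster" has 2 workers or 8
    assert run(jobs, 2) == run(jobs, 8)
```

Whether the pool has 2 workers or 8, the calling code and the results are identical; only the one parameter changes, which is the sense in which node count is "trivial to readjust."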
      • A Beowulf cluster operating system could expose a standardized set of APIs to an application if, no matter how many processors it had, it told the application it had 1024; but the application could not then be optimized for the actual hardware it is running on. KLAT2's applications were optimized to take advantage of the 3DNow! extensions, which tripled the effective processing power of their cluster. Just as DirectX provides APIs to applications for standardization, it does not come close to what could be achieved hand-coding machine language to the x86 hardware. Look at the applications being run on clusters now and you will see that most are cash-poor but manpower- and enthusiasm-heavy. If they could afford 3 times as much hardware to achieve the same performance, they would have ordered expensive big iron from IBM. Also note that 1024 processors limits the scalability of the cluster. The real idea is to force hundreds of commodity machines to act like a single processor.
      • The idea is that you'd make a cluster with a whole bunch of virtual machines, say 1024. The cluster is fixed at that size for all the software that runs. But in reality, you've got 32 processors actually running. When you upgrade the cluster to 64, you don't need to reconfigure any of the software that runs on the cluster,

        Or better yet, cut out all the overhead of running "virtual" processors and install MOSIX. Then just run your 1024 processes simultaneously. The processes will transparently migrate to the least busy nodes to load balance. Add more machines to the MOSIX cluster (while the cluster is up) and the load will be further distributed. Need to take your workstation out of the cluster temporarily? Just force a removal and the processes migrate off to other machines...
      • The idea is that you'd make a cluster with a whole bunch of virtual machines, say 1024.

        Good plan. There's a large BUT however....

        There are two sources for N processors not being N times as fast as one.

        First there is the communications overhead. If you have 1024 virtual processors, running on 32 real processors, there are going to be about 32 programs running and communicating on one processor, while they could've gotten by without communicating at all...

        Also, the problem may not be divisible into 1024 equal chunks.

        If you write your software to scale to 32, there is already going to be a parameter that says how many processors there are. So adjusting it to 64 is not going to be hard.

        But test-running on a small (but virtually large!) cluster before running it on the "real cluster" could allow you to guess the magnitude of both "performance impacting issues" before buying time on the expensive "real cluster". So there certainly is a point in doing it that way....

        Roger.
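The two effects described above can be put to rough numbers. A small Python sketch; the 1024-virtual/32-real figures come from the thread, and the model is a toy:

```python
# Rough numbers for the two effects named above: oversubscription
# (1024 virtual nodes crowded onto 32 real CPUs) and uneven division
# of the work. Figures from the thread; the model is a toy.
from math import ceil

def programs_per_cpu(virtual_nodes, real_cpus):
    # ~32 communicating programs per CPU that a straight 32-way run
    # would not have needed at all
    return ceil(virtual_nodes / real_cpus)

def slowest_share(work_units, n_procs):
    # with uneven division the straggler sets the pace
    return ceil(work_units / n_procs)

assert programs_per_cpu(1024, 32) == 32
assert slowest_share(1024, 32) == 32   # divides evenly
assert slowest_share(1024, 33) == 32   # 33 procs, but no faster than 32
```

The last line is the indivisibility point: adding a 33rd processor buys nothing, because someone still has to do 32 chunks.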
  • Hey (Score:1, Offtopic)

    by Guitarzan ( 57028 )
    Can I get a Beowulf clu......um, nevermind.
    • Re:Hey (Score:2, Funny)

      by ch-chuck ( 9622 )
      Sure, why not. If an 'internet' is a network of networks, we should be able to build a cluster of clusters. One cluster calculates the reality I'm flying thru while another one calculates the effects of the nuclear device I just heaved, both feed into the headmounted 3D stereo graphics processor with surround sound audio helmet on the hydraulically actuated platform, while an input/output cluster handles sim data from the other players over the fibre channel...
      • If an 'internet' is a network of networks, we should be able to build a cluster of clusters

        They do. It's called Grid [ibm.com] computing. Now that Qwest can shoot 400Gbit/s over 1,000km or something insane like that, supercomputing centers are connecting clusters with faster links than your 64-bit PCI bus.

  • So, it's about equal to my dual Athlon 2000+?
    • Uh, no.

      2*2000 < 8*800
      4000 < 6400

      Of course, the dual Athlon kicks butt for many applications (I know, I use one!), but it all comes down to the needs. What would be cool is if low-cost dual Athlon boards (a la the Tyan Tiger) were substituted (8*2*2000 = 32000, or 5 times faster)... but then the whole thing would just melt from the heat. I do like the embedded flash/diskless aspect of the system, though. It's a simple cluster in a nice box.
      • Although I'm joking, let's just take a look at some numbers, hypothetically speaking.

        *borrowed from Tom's Hardware*

        Linux Compiling Test

        3.35 minutes for an Athlon XP 2000+
        14.2 minutes for an Intel Celeron 800MHz

        (now, here's where we stretch it)

        Figure 1.7 minutes for a dual Athlon XP 2000+, 50% of the single-processor time.

        1.7 x 8 = 13.6 minutes for eight compiles back-to-back, still under the 14.2 minutes the cluster needs to run all eight at once.


        But, who really compiles with a cluster, really?

        It'd still be faster....At least on a few benchmarks, and at least in theory
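The arithmetic above checks out under the stated assumptions (the 50% dual-CPU guess is the poster's, and the times are the quoted benchmark figures). A quick Python verification:

```python
# Checking the back-of-the-envelope above. Times in minutes from the
# quoted Tom's Hardware figures; the 50% dual-CPU guess is the poster's.
athlon_xp_2000 = 3.35
celeron_800 = 14.2

dual_athlon = athlon_xp_2000 / 2          # ~1.7 min per compile
eight_serial = 8 * 1.7                    # eight compiles back-to-back
cluster_parallel = celeron_800            # eight nodes, one compile each

assert abs(dual_athlon - 1.675) < 1e-9
assert round(eight_serial, 1) == 13.6
assert eight_serial < cluster_parallel    # the dual Athlon wins, barely
```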
  • by eaglej ( 552473 ) on Tuesday January 22, 2002 @02:39PM (#2883523)
    Yeah, it's a nice compact little box... But they're pulling in a phat few g's on each box. I'll build my own, thank you very much.
  • ...the articles he posts? Tim, baby, if you don't like or agree with the article, either don't post it or keep your opinion to yourself.

    I know I'll get modded down for this, but here's an example:

    Posted by timothy on Tuesday January 22, @02:45PM
    JackBlack tells us about the "unbelievable deal you can get at KMart [bluelight.com] on all their overstocked computers and peripherals! You won't believe the kind of prices on these things!" I don't know about you people, but I'd rather swallow Draino than buy boxen from Big K! But, shit, whatever, I guess.

    Timothy is his own conflict of interest.

    What is happening to Slashdot these days?

    --SC

    • Conflict of interest only happens, I believe, when the difference of interests can or does affect the outcome. Since Timmy is posting the article, it means that even though he doesn't agree with it, he still thinks that it falls under the "news for nerds" part of /. (Something tells me this isn't "stuff that matters.") If he, say, didn't post this story because the cluster wasn't sexy enough, then it would be a conflict of interest.
      • I think this article was interesting and don't mind seeing this stuff. You get an idea of what exists out on the market. Sometimes, even if you don't need it right now, in six months you may go "wait, I saw something that would do that..."

        Now as far as bias, what I find interesting is that /. made absolutely no mention last week of Microsoft making .Net available for download. The free SDK compilers, the free redistributable, or the MSDN subscription downloads of VS.Net.

        It made all the other news sites, along with articles complaining about server capacity. (VS.Net is like a 3 Gig download)

        I was at least expecting an article titled "Microsoft servers don't hold up to massive load!" :)
    • Knowing I will probably burn some Karma... It is interesting that these guys always have some smarmy comment to make after every article. Couldn't they just post the damn thing?
    • Wow! Timothy refused to advertise KMart on Slashdot. As a result, KMart filed for Chapter 11.
  • by azephrahel ( 193559 ) on Tuesday January 22, 2002 @02:43PM (#2883550)

    I'm sorry, but for that price this is way under-engineered. The original Beowulf cluster was made with components on par, for the day, with the Celeron model of the Redstone, for far less. If you're going to spend the time and money building and marketing systems like this, you could do a better job. They stuck mobos in a big case and ethernet-linked them together. Call me crazy, but I think for that much money you could get a small backplane and 8 industrial PCs (PowerPC/Coppermine/whatever on a PCI card, each w/ its own memory), toss 'em in, and spend the rest of your "engineering" budget making a patch to the kernel for reliable communication over the bus instead of slow ethernet connections.

    besides with the speed advantages shared memory brings to multiprocessing a quad xenon would probably outpreform this.. deffinately a quad proc ultrasparc but those are pricey even used...
    • by benedict ( 9959 )
      Sure, you could build it yourself, and your
      hardware cost would be lower.

      But what about the cost of your time, in terms of
      dealing with vendors, putting machines together,
      testing them, integrating them, and testing again?

      I'm guessing these machines come with support, too,
      though I can't tell because their web site is
      Slashdotted.
      • Ya know, one argument to stuff like this that doesn't hold water for me is the 'What do you value your time at?' argument. Honestly, if I'm building a system like the one described, I'm doing it for me and my own education/geek factor. I'm not going to sell the damn thing, I'm going to use it for me, therefore my time costs nothing. And here's why: It's MY time...I already own it, therefore it costs nothing!!

        People seem to be so obsessed with placing a value on things that they forget that some people do stuff like this for a hobby, and therefore it is more likely to cost money than pay for itself. By definition a hobby shouldn't really pay for itself, otherwise it's a business, aka work.

        If I were to build one, it wouldn't be work. But, then again, maybe I'm one of the few people who spend my day working on computers, then go home and work on computers for fun.
    • If your going to spend the time and money building and marketing systems like this, they could have done a better job. They suck mobos in a big case and eth linked them togeather.

      But for some reason chose a midi tower, rather than a 4U rackmount.
    • $4500: 8xCeleron 800, 8x256MB=2GB.


      My new workstation (yes, it's finally coming, for those who have been wondering; the purchase order goes to main campus today):
      $4800: 2xAthlon 1900, 2gb ddr.


      My memory and bus are significantly faster. I believe my total processing power is equal or close under most applications.


      Then I get a few things that aren't in that bundle:
      2x18gb cheetah 15000rpm u160
      4x 9gb cheetah 15000rpm u160
      21" sony monitor & video card
      scsi cdrw


      would I really get any more from this unit???

    • Really, I agree this is way over-priced for the specs. I built myself a dual-node quad-Xeon cluster (8 processors) for almost half that price.

      Besides, why in the hell is timothy spreading this? The guy is promoting this "personal cluster" by spamming several newsgroups. Last week he hit a number of groups including comp.os.linux.hardware, comp.parallel, and comp.graphics.rendering.misc. That by itself ought to be more than enough to convince you not to buy from this guy!
    • besides with the speed advantages shared memory brings to multiprocessing a quad xenon would probably outpreform this.. deffinately a quad proc ultrasparc but those are pricey even used...


      First, I'm guessing you meant quad-Xeon and not quad-Xenon, cause why would you want to have 4 instances of Xenon text editor [proximity.com.au] running on your desktop, or 4 atoms of the element Xenon [maricopa.edu].

      Second, there is the misconception that UltraSPARC boxen are ridiculously expensive. It's just not true. The place where I work recently purchased a new shared Solaris server: a quad-proc Ultra2, 4x300MHz w/ 1GB of RAM and 2x18GB SCSI drives. It only set us back about $1500. That is NOT MUCH in the world of shared website hosting. Plus, it compiles things a lot faster than, say, our P4 1.5GHz, despite its CPUs being only 300MHz.
      High-end hardware is doable, and most people don't need much, especially considering that a Sparc IPC (25-ish MHz w/ 24 megs of RAM) running Solaris 7 can host 200 web pages easily and handle 3 million hits a month. Web serving is *not* difficult, and it doesn't take a lot of power, just a proc that can switch between processes quickly and efficiently. It's poorly-written CGI that takes power.

      ~z
  • Rack Density (Score:2, Interesting)

    by Genady ( 27988 )
    So... how many processors can you fit into a standard 44U enclosure now? If they've got an integral Ethernet switch do you get a gigabit uplink out? This would actually be really cool for Universities/Government Agencies to build insanely great clusters with small floor space. Still if you want insanely great maybe you should cluster a few briQ's [terrasoftsolutions.com] together.
  • by tcyun ( 80828 ) on Tuesday January 22, 2002 @02:47PM (#2883571) Journal
    I saw a quick demo of a multi-noded briQ [terrasoftsolutions.com] (by Terrasoft Solutions) at SC2001 [sc2001.org] a few months ago and was very impressed. The ability to leverage the power of the PPC in vast numbers (and in a very small form factor) was incredible. I wonder how these would do in a head to head competition?

    They offer a 4-8 node tower running 500 MHz G3 or G4 CPUs and drawing "roughly 240 watts per 8 nodes (less than a dual-processor Pentium-based system)." Quite impressive.
    • try $1500 for a g3/500/512mb node
      and $1500 for the case

      so for $4500 you get 2 count 'em 2 g3 nodes (no altivec)

      $2k for the g4 node (same speed & mem)

      so for an 8 node cluster, you are looking at $17,500

      makes the rocketcalc machine look like a bargain, no?

      (ok, ok, the briq is better engineered, but still...)
  • I'd like to see chips that incorporate the CPU, RAM and something equivalent to the north+south bridge. Motherboards should be designed to take 1-32 of these plugged into some godawful-fast bus. CPU and RAM should be one and the same and scale together. RAM co-located w/ the CPU would be much, much faster. Most systems and applications can scale with more threads or CPUs. CPUs by themselves are just about as fast as they need to be for any task that cannot be divided into multiple threads (I'm not talking about poorly written programs). This whole getup would be significantly more elegant, reduce parts and complexity, and probably be cheaper to produce in the long run.

    I don't see this as the same as a system-on-a-chip. With those, you're integrating video and audio. I'd either rather NOT see that integrated at all or have a portion of this new CPU combo thingy incorporate a DSP or FPGA region(s).

    Whoa, time to put down the crack pipe.
    • I'd like to see chips that incorporate the CPU, RAM and something equivalent to the north+south bridge. Motherboards should be designed to take 1-32 of these plugged into some godawful-fast bus. CPU and RAM should be one and the same and scale together. RAM co-located w/ the CPU would be much, much faster.

      Most of this (apart from the RAM with the CPU) sounds like a Sequent Symmetry. There's also the VAX cluster, where processors (with RAM) and storage controllers connect to a 16-way (IIRC) star interconnect.
      Both of these are over-a-decade-old technology, so something similar which fits in a box one person can pick up is probably overdue.
  • only 100mbps? (Score:4, Interesting)

    by Restil ( 31903 ) on Tuesday January 22, 2002 @02:47PM (#2883575) Homepage
    The primary disadvantage of clustering is the network bottleneck. You lose out because even 100Mbps is only a small fraction of what the PCI bus of even low-end Pentium systems is able to handle. At LEAST go with gigabit ethernet so you can push over 100 megabytes per second between processors. This will greatly increase the usefulness of an integrated cluster by reducing its one primary disadvantage.

    Also a bit pricey, but there would be some cost advantage in reduced footprint for some environments.

    -Restil
    • GigE won't help much because you're still stuck with ethernet's awful latency. Last time we shopped for supercomputers, cluster solutions lost out because of this, even with pricey Myrinet, Via or other high-end interconnects.
      • You are correct, but it all depends on one's application. Latency may or may not be a big factor in one's application. If it isn't, then you can easily save thousands of dollars with a cluster.
    • Being in the same box, I wonder why they didn't use U160 SCSI for comms. Hell, you can have 16 nodes on a SCSI bus nowadays, and transferring data between 8 machines and the array of shared drives would make it scream like a raped ape.

      hell we use that kind of setup for spitting 24 mpeg2 streams out for commercial insertion from pentium 166's
      • Do you have any information on setting up a shared SCSI bus like this? I'd be very interested in doing it, but I have not been able to find any information about it.

        gigs at vt dot edu
        • Re:only 100mbps? (Score:3, Informative)

          by Lumpy ( 12016 )

          Yes start here

          http://linux-ha.org/comm/

          this is used in the Linux HA system extensively. Xenix and Unix have it also. No version of NT has this ability, nor any Microsoft product. Maybe someone has written a driver for NT 4.0, but I doubt it.
          • also look here

            http://www.ibiblio.org/pub/Linux/ALPHA/linux-ha/High-Availability-HOWTO-7.html

            This will get you started. Search for "linux SCSI communication" and start digging.
    • Re:only 100mbps? (Score:2, Insightful)

      by mpe ( 36238 )
      The primary disadvantage of clustering is the network bottleneck. You lose out because even 100mbps is only a small fraction of what the pci bus of even low end pentium systems are able to handle. At LEAST go with gigabit ethernet so you can push over 100 megs per second between processors. This will greatly increase the usefulness of an integrated cluster by decreasing the one primary disadvantage

      Depends on the application in question. There are many parallel processing tasks which do not need massive communication between processors. Effectively, each processor simply gets on with its task on its own.
      • As pointed out above, the communication bottleneck is no problem at all for many of the most CPU-intensive parallel computations. Monte Carlo simulations, for example, can often be done with just about no communication at all. The same is true for many of the really hard computational tasks where we just don't have any algorithms that can make use of intensive communication among the processors.
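A tiny illustration of such a zero-communication Monte Carlo job, sketched in Python; the pi-estimation task and every name here are invented for the sketch, and in a real cluster each chunk would run on its own node:

```python
# Toy zero-communication Monte Carlo: estimate pi by dart-throwing.
# Each "node" works from its own seed and never talks to the others;
# the only communication is summing the tallies at the end.
import random

def node_hits(n_samples, seed):
    # one node's share of the work
    rng = random.Random(seed)
    return sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def pi_estimate(n_nodes, samples_per_node):
    # gather step: a single sum over the per-node tallies
    total = sum(node_hits(samples_per_node, seed=node)
                for node in range(n_nodes))
    return 4.0 * total / (n_nodes * samples_per_node)

est = pi_estimate(n_nodes=8, samples_per_node=20_000)
assert abs(est - 3.14159) < 0.05
```

Interconnect latency is irrelevant here, which is why plain ethernet clusters do so well on this class of problem.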
    • Gave a bit of thought to what you said. Wouldn't you be communicating smaller pieces of information on the norm vs. huge amounts? Or are you thinking of doing large data translations? Perhaps they were building for one and not the other?

      Heh, there'll always be a bottleneck.
    • 100mbps ethernet consumes 75% of a 32 bit/33 mhz pci bus. Jumping up to 64bit/66 mhz it's still consuming nearly 20% of available bandwidth. The current generation of PCI busses (even "high end" ones) cannot pump out enough information to saturate a gigabit line.
      • That's totally bogus. PCI is 32 bits * 33MHz = 1056 Mbps.
          • Uh huh... have you ever used a reasonably new 32-bit PCI computer when it's transferring data at close to 100Mbps? Let alone when it's actually supposed to do something with that data, like write it to a hard drive or process it. Want to multiply that effect by 2... or 3... or 8... or 10?
            • Yes I have. Copying files from one drive to another over the PCI bus often transfers at 15-25MB per second, which works out to 120-200Mbit/sec. The CPU shows some usage and things are a little sluggish, but the OS is still totally usable, especially if it's on a 3rd drive that is not involved in the copying.

              Besides, due to inefficiency in TCP/IP you can't really get the full 100Mbit/sec; it's mostly limited to around 80Mbit/sec for the better NICs.
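For reference, the theoretical peak numbers being argued over in this subthread, in Python; sustained PCI throughput is far lower than these peaks, which may be what the 75% figure was reaching for:

```python
# Theoretical peaks for the buses being argued over. Sustained PCI
# throughput sits well below these peaks, so both sides have a point.
pci_32_33 = 32 * 33        # 32-bit @ 33 MHz = 1056 Mbit/s peak
pci_64_66 = 64 * 66        # 64-bit @ 66 MHz = 4224 Mbit/s peak
fast_ethernet = 100        # Mbit/s
gigabit = 1000             # Mbit/s

assert pci_32_33 == 1056
# 100 Mbit ethernet is under 10% of a 32/33 bus's theoretical peak
assert fast_ethernet / pci_32_33 < 0.10
# gigabit, though, really would crowd a 32-bit/33 MHz bus
assert gigabit / pci_32_33 > 0.90
```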
    • Nice in theory, but in practice gigabit ethernet does not deliver a Gb/sec; in fact, ethernet's contention access mechanism (CSMA/CD) guarantees that you won't. In practice you're better off with channel-bonded 100Mb/sec ethernet and a smart switch topology such as KLAT2's Flat Neighborhood Networks to minimize latency and maximize inter-node bandwidth.

      KLAT2 FNN [aggregate.org]

      Note to moderators: above link refers to Linux, AMD processors and Beowulf clusters! Do the right thing! ;-)
  • I believe this was already posted a few weeks ago.
  • How is 8 800MHz Celerons sexy?
  • ...or maybe it's something else (bandwidth?), but it looks like they're slashdotted.
  • you can also run a cluster of Linux (virtual) machines on your desktop

    So you're suggesting that I divide my machine into 8 virtual machines and then cluster them for uber fun? Figuring the extra latency, wouldn't it be faster to just leave it alone?
    • The point isn't gaining speed, it's gaining reliability and maintainability. Instead of one PC being your email, database, proxy, etc. server, you create a virtual machine for each one.

      If you need to upgrade the kernel on your email server, you can just reboot that virtual machine and leave your database, proxy, etc. servers untouched. If a process goes wild on your database server it'll only screw with that service and not your email, proxy, etc.
  • by CDWert ( 450988 ) on Tuesday January 22, 2002 @02:56PM (#2883642) Homepage
    Clustering under the vserver scheme is pretty dumb, if it was in fact meant to be serious.

    vservers assume that a machine has resources available and that no one instance is consuming 100% of the system's resources. Applications built for utilization under a cluster would most likely CONSUME 100% quickly and easily; otherwise why run them under a cluster in the first place?

    I do like their monitor applet; it's pretty cool for basic cluster management/monitoring.

    At the same time, all you people complaining about price... let's see:

    800MHz Celeron w/ 256MB each, MB, NIC and whatever storage, let's say on the cheap: $300 each

    4x 350W PS: $100

    OK, a really cool case and PS

    Time to load software; let's say we've done it before and it takes 8 hours

    Physical assembly and testing: 5 hours

    My bill rate and personal time is worth $120/hr:
    $1560
    Hardware: $2400
    Power supply: $100
    Mildly cool case: $100

    THAT COMES TO $4160. Hell, add to that I don't have to build the SOB AND it comes with a warranty; you bet your A** I'd buy one of these.

    PLEASE think before you gripe about prices...
    Looks like a deal to me....

    PS: Kernel version is 2.4.12 from what I saw on their link to products showing a screenshot. Ayeee.....
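The parts list above can be tallied mechanically. A Python sketch using the comment's own figures (all prices and hours are the poster's estimates, not quotes):

```python
# Tallying the DIY estimate above, using the comment's own figures.
nodes = 8 * 300            # Celeron 800 boards w/ 256MB, NIC, storage
power_supplies = 100       # 4x 350W
case = 100
labor_hours = 8 + 5        # software load + assembly and testing
rate = 120                 # $/hr, the poster's bill rate
labor = labor_hours * rate

total = nodes + power_supplies + case + labor
assert labor == 1560
assert total == 4160       # vs. $4500 assembled, with a warranty
```

The comparison obviously swings on the $120/hr labor figure, which is exactly what the reply below objects to.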
    • So you make $250,000 a year?

      For those of us whose time is worth a little less than $120/hr, it's a pretty good deal to just build our own.

      I'm building my own 6 node, Duron 950 for a total hardware cost of $1300. That could have just as easily been 8 nodes for $1700, had I the money to spend on two more nodes.
  • I may not have the $4.5K to buy the box, but that software looks spiff. The Java demo of the cluster monitoring was really cool. How about I get a stack of these [tigerdirect.com] and use the free software.

    Yeah yeah network speed blah blah. Let a man dream!
    • Please never order from TigerDirect. Check the BBB record on them and resellerratings.com. They are the target of many consumer lawsuits, and have a terrible record.

  • OK, the price for these kinds of things is really nice and low. Low enough to make anyone in the numerically-intensive computing arena really want something like this.

    (However, I probably wouldn't want one of these as my desktop machine if the power supplies took more current than a typical wall outlet, if it made as much noise as a helicopter taking off, and if it heated up my cubicle to 92 F.)

    But the key ingredient in my mind is making these distributed boxes more conveniently usable, much like those 64-way big boxes from Sun and SGI.

    How far along are MOSIX, Scyld's products, others(?) that make these distributed clusters have a nice Single System Image?


  • #!/usr/bin/perl

    how_about_a_beowolf_of_these();

    sub how_about_a_beowolf_of_these {
        fork();
        how_about_a_beowolf_of_these();
    }
  • by ChaoticCoyote ( 195677 ) on Tuesday January 22, 2002 @03:07PM (#2883701) Homepage

    I've been building my cluster from various remaindered/cast-off/refurbished machines I find. Computer Geeks [compgeeks.com] is a good source.

    Load balancing is frelling difficult, but I've been doing some solid parallel programming work that translates nicely to "real" clusters. I'd love to buy one of the Rocket Calc boxes -- but I can make a darned nice box for a lot less money with more processing power, if I'm willing to have cables and such all over the place.

    The only real cluster-related problem I have is my lovely wife. She's one of those people who want things to "match" (so why in frell did she marry me?), and my "heterogeneous" cluster just isn't very aesthetic. She just doesn't understand that the cluster's job is to compute, not to look pretty!

    Then again, the Rocket Calc machines are attractive, and the color would go with the living room furniture...

    • frell? frelling???

      You mean we weren't supposed to all understand that you really meant "FUCK" when you said that? Try it. Repeat after me. FUCK. F-U-C-K. FUUUUUUCK! Feels good, doesn't it? Whew, now aren't you glad you've got that out of your system?

      I knew you would be.

      >:-/

      C//
      • ...I suggest a laxative. ;)

        "Frell" is a word used on the TV series Farscape; it has the nice ability to replace many different cuss words with on catch-all phrase. For example: "To frell with it!" or "Frell you." And using "frell" avoid those nasty negative moderations that can so bruise my tender ego.

        As for "fuck", "hell", and other cursatives: I make sailors blush, youngster; I've been coding so long, I've had to invent or borrow new cuss words because I wore the old ones out. I'm bored with "fuck" the word "although not the act, mind you), and am looking for new, fresh alternatives.

  • Here is the Google cache, since the site seems to be slashdotted....

    http://www.google.com/search?q=cache:q2egeLKmoVQC:www.rocketcalc.com/+rocketcalc&hl=en [google.com]
  • What I want to know is how the software manages to leverage the cluster. They mention that programs won't need to be changed to take advantage of the clustering. How can this be? It doesn't mention much about this Houston software that seems to handle it. Usually applications have to be specially written for clustering. The website also doesn't seem to mention what tasks this system is really good for.
  • (1) Since this is all about high-performance computing, why use Celerons?

    (2) Why is it taking so long for someone to make the obligatory "Imagine..." post?

  • To run multiple linux instances on one "middle of the road" server, you need VMWare GSX [vmware.com]. It ain't free. In fact, it's $3,550.00. (there goes your "a lot less money" idea, T.)

    As for the value of this product, I see it clearly. Not all computational problems need high data throughput between nodes. And their Redstone-A product gives you an 8-node PIII 1GHz cluster with 4GB of RAM for $6000. And all the networking is set up and ready to go. Give it to your scientists and they don't need to know jack about networking or configuration; they just treat it like another Unix workstation.

    When I think of the ~ $20k each we spend on Sun and SGI workstations for our scientists, I cringe. This I wouldn't (won't?) think twice about buying.
  • You know how Spielberg said that the next big smash won't be from him or some big studio, but from a woman named Buffy from Idaho? This brings that possibility one step closer. I can see this as a RenderMan render farm, or even just an effects render farm for some of the effects the AVID generates.

    Gimme 2 of these, an AVID, and a $200,000.00 camera and lens, and you've got the next ET on your hands... (Nooo, not Cheech and Chong's Extra Testicle; get your mind out of the gutter.)

    It's getting to the point where the general schmuck has the power that Hollywood does.
    • Talent != equipment. You can shoot a film in craptastic 8mm with butcher paper soaked in Crisco as a lens and still make a rad movie if you've got a smidgen of talent behind the camera, and it helps to have some in front of the camera too. I'd rather have a real 8-way system than a cluster-in-a-box. The 8-way with the right software is going to be a lot more productive than an 8-node cluster limited to batch-rendering stuff I throw at it. Shit, I'd rather have four dual-processor systems with fast storage systems and 4 talented guys on them than 8 processors and one talented guy.

  • To me, this almost looks like a combination of various old SGI systems.

    So THAT makes it COOL.
  • If you're looking to do the same thing, cheaply and with a little DIY hardware tech time, simply buy a big ATX case with room for a couple of extra power supplies, install a 386 or 286 mobo (all ISA slots!!!), and buy a number of these computers [powerleap.com] and plug them in. They don't draw power from the motherboard, so you don't even need to connect the 286 to power. Instead, tie each power supply to one or two of these cards directly (requires a little soldering), and there you go: a cluster in a box.

    If I had the money, I'd be doing this myself. Instead, I've got a rack full of 4U AT cases with dual PPro 200MHz machines. The one advantage to having full-sized motherboards (with PCI slots) is that I'm installing triple-channel-bonded ethernet, so I get gigabit ethernet bandwidth without paying gigabit prices.
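    For reference, the channel-bonding trick described above looks roughly like this on Linux with the in-kernel bonding driver. This is a sketch only: the interface names, addresses, and round-robin mode are assumptions, not the poster's exact setup.

```shell
# Bond three NICs into one logical interface, striping traffic
# round-robin across them for roughly 3x the single-link bandwidth.
# eth0-eth2 and the IP address below are placeholders.
modprobe bonding mode=balance-rr miimon=100   # load driver; miimon polls link state
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2                # enslave the three NICs to bond0
```

    Note that balance-rr only helps if the switch (or a crossover peer) can accept the striped traffic; the aggregate throughput also depends on bus bandwidth, which is why the poster mentions PCI slots.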
  • The fastest processor out there (in terms of sheer power, not mere MHz) is the Athlon XP 2000+. I've checked with AMD, and they don't say you -can't- use it with the MP chipset. (They don't say you can, either. What they DO say is that they don't list or recommend that combination.)


    That'd give you 2 processors per chunk. By stapling four chunks together, using MOSIX, you'd get the same as an 8-proc SMP box, without the hyper-expensive motherboard.


    Now, this is where it gets fun. (Yes, I've been reading up a bit on this. :) The processor uses a clock multiplier, to translate the clock rate of the system into the clock rate for the processor. You -can- fiddle with this multiplier, but it occurred to me that you don't actually need to. You can simply replace the system's clock, instead.


    I don't know what the "default" multiplier is, but I'd guess it's probably x16, or thereabouts, given the speed of the rest of the system. In which case, if you threw in a 1GHz clock (I suggest oven-baked, as they taste better), you'd get 16 GHz on the processor.
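    A quick back-of-the-envelope check of that multiplier arithmetic. Note the x16 multiplier and the 1 GHz system clock are the poster's guesses, not datasheet values:

```python
# Core clock = system (front-side-bus) clock x multiplier.
# All figures here are the poster's assumptions, not specs.
multiplier = 16           # guessed clock multiplier
stock_clock_hz = 133e6    # a typical Athlon-era system clock
hot_clock_hz = 1e9        # hypothetical replacement 1 GHz clock

print(multiplier * stock_clock_hz / 1e9)  # ~2.13 GHz core clock at stock
print(multiplier * hot_clock_hz / 1e9)    # 16.0 GHz core clock, "fried egg" territory
```

    The second figure is the 16 GHz the post arrives at; the ratio between the two lines (about 7.5x) is the overclocking factor the cooling discussion below is worried about.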


    Now, THAT kind of ramp-up is not going to be easy. Chips run hot as it is, and if you plan on overclocking an Athlon by a factor of 8, you can expect to fry eggs on the surface. As for the poor RAM chips.... those are REALLY going to suffer!


    My guess is that if you strapped Peltiers onto all the chips, immersed the entire system in some synthetic, non-conducting fluid that you can super-cool to, say, -135°C (there's some stuff 3M makes that'll handle it), and you devised some way to keep it that cold, the chips might survive the experience.


    Might. I'm not even going to say "probably". To the best of my knowledge, no overclocking or supercooling experiment on conventional PCs has gone to that extreme. The only ones that came close (a NZ project involving pouring liquid nitrogen into the case) trashed the disk drives and BIOS.


    On the other hand, I've been checking up on the tolerances of components, and what you COULD build, if money wasn't an issue. The technology for a 4x2 MOSIX/SMP cluster, overclocked to 16 GHz and still running, -does- seem to exist. (The keyword is "seem" - most chip specs are calculated, not actually measured, according to the data sheets.)


    Now, I suspect that it'd cost a damn sight more than $4000, just for the parts, even if you could mass-produce such a monster (assuming you had the engineering skill to build it in the first place), or even find anyone crazy enough to buy one, given that it'd probably be more cost-effective (and certainly safer) to go with an IBM mainframe at that point.


    On the other hand, kernel compiles would be quick.

    • The fastest processor out there (in terms of sheer power, not mere MHz) is the Athlon XP 2000+.

      I thought the IBM POWER4 processors were still out front..

    • One catch:

      Anyone spending this kind of money on a cluster is likely to be doing serious number crunching. Now, some types of number crunching don't mind possibly letting a wrong answer slide through somewhere (i.e. in a render farm, you might get a bad pixel). In other environments, where calculations are all interdependent, a single bad number could make all the computation worthless (the old "A butterfly flaps its wings in Tokyo and we get another John Katz article" story).

      Besides, no point in spending the time designing & testing this monster OCed box, when you can just buy a few 8-way Xeons, or call Sun/SGI.
  • Oh well.

    The other day, I saw an auction on eBay for an SGI Octane. The price was over 100,000 bucks. Looking at the pictures, I could tell why: it included a rack with a bunch of really fancy stuff on it, and an SGI Octane. That is what I consider sexy.

    Actually, here is what I really want. I want to have several of every brand of computer, running all the operating systems available for each brand. That way, I'll be able to access software and information for any of them. It'll cost a ton of money, and that's money I don't have, but hey, who said you can't imagine stuff?!

    Oh well.

  • I would love to use one of these (especially the higher-end model) for a build server. I seem to remember a parallel make utility for Solaris (one that allows you to use multiple machines to do a build). Anyone know of something similar for Linux?
    --JRZ
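    For what it's worth, GNU make's -j flag parallelizes a build locally, and distcc can farm compile jobs out to other machines. A hypothetical invocation (the host names below are made up, and distccd must already be running on each node):

```shell
# Parallel build on one box: up to 8 compile jobs at once.
make -j8

# Distributed build: distcc hands compile jobs to the listed machines.
# node1-node4 are hypothetical hosts running the distccd daemon.
export DISTCC_HOSTS='localhost node1 node2 node3 node4'
make -j16 CC=distcc
```

    The -j count is usually set higher than the local CPU count when distributing, since most of the jobs are waiting on remote compiles rather than local cycles.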
  • Claims about VMs (Score:2, Interesting)

    Ok.. so VMs make sense because they allow separate virtual Linux OSes for each major server function (web, database, ftp..). This is good for management and accounting. However, "they" are still making claims that it offers some kind of performance advantage over running them all on one OS. I mean... call me crazy, but I don't see much of a performance advantage here.. if anything, just wasted memory. Someone please throw me a clue.
  • I can play solitaire and code so much faster with this baby. Come bring it on.
  • I would have been ecstatic if someone had made personal clusters consisting of dual 1.5GHz Athlon nodes on gigabit ethernet a long time ago, yet only now has someone decided to market personal clusters of single Celerons. Time to give up on the technically possible and base wish lists only on the technically marketable.
