Hardware

Cringely Wants A Supercomputer in Every Garage 277

Nate LaCourse writes: "Real good one from Cringely this month. It's on building his own supercomputer, but with some twists." You'll probably also want to check out the KLAT2 homepage to learn more about their Flat Neighborhood Network. And since KLAT2 has been around for nearly a year (check out the poster on this page!), perhaps a 3rd generation is in the works?
This discussion has been archived. No new comments can be posted.

  • Am I the only one to spot the "The Day The Earth Stood Still" reference here?
    • Well, not to be one of those stick-in-the-mud 'Read the %($#ING article' type people, but KLAT2 is a reference to The Day The Earth Stood Still. Had you looked at the articles in question (particularly the KLAT2 page), you would have discovered that they were indeed intending the reference. Heck, go check it out - the poster they made up for it is worth the look! :-)

    • probably not....

      Our poster is based on one of the posters for the classic 1951 science fiction movie The Day The Earth Stood Still. Yes, KLAT2 is an obscure reference to Klaatu, the fellow from outer space who came, with the robot Gort, to explain to all the people of the earth that if humans cannot work together in peace, the earth will be destroyed for the good of all planets. Of course, in the movie, Gort didn't have an AMD Athlon logo on his chest and Tux, the Linux penguin, wasn't the actor inside the suit... it's a very good movie anyway. ;-)


      :)
  • Great (Score:3, Funny)

    by evacuate_the_bull ( 517290 ) on Thursday December 27, 2001 @12:13AM (#2753948)
    Make sure it has a red dot and says things like "Dave, what are you doing Dave?" Can't wait for mine!
  • It seems from the article that an extensive amount of calculation was necessary to design the network. Ironically, they needed a supercomputer to design a supercomputer.
    It is really cool considering that $6,000 is now enough to take on massive projects. How many months would this machine take to render Final Fantasy: The Spirits Within?
  • I just love the idea of having a little supercomputer (and not just buying a Mac cube which claims to be a supercomputer, or a Dreamcast which can't be sold to Iraq because it qualifies as a supercomputer). Making a supercomputer must qualify as one of the ultimate hacks, a combination of technical skill, imagination, and pure unadulterated tech balls. This seems like one of those projects that I would do if I had the spare 6k needed. And oh yeah, imagine a Beowulf cluster of these!
  • Having a supercomputer would be great; imagine having the possibility of being able to do your own DNA research at home. Or you could just get a 1-gig vid card and play an awesome game of UT!
  • A story that beowulf cluster posts will be relevant to!
  • by BlueJay465 ( 216717 ) on Thursday December 27, 2001 @12:25AM (#2753975)
    This is a very interesting concept that he is putting forth, but at the same time, how many geeks out there are going to really make use of such a clustering farm? Not everyone I know does video compression projects, and it would seem kinda prohibitive for a black-hat to set one up to break encryption codes. Could someone please tell this naive soul what useful everyday application all these CPU cycles could be used for? (if you say SETI@Home, I am going to bitch-slap you)

    Secondly, UWB seems to be the holy grail of wireless networking, yes. However, is this something that the agencies of the world are going to let out of the bag as easily as he says? I can think of the CIA and the NSA having a few choice words about such "undetectable signals" being used by commonfolk after September 11th...

    Just my two cents
    • Keep an eye out for Gigawire. It could be an inexpensive wireless broadband solution.

      We won't know until Mr Jobs wants us to know.
    • by An Onerous Coward ( 222037 ) on Thursday December 27, 2001 @01:10AM (#2754069) Homepage
      I am now telling the computer exactly what it can do with a lifetime supply of chocolate.

      Okay, we need to burn some spare cycles. Lots of them, in fact. I have some ideas. There may even be a couple in here that can be taken semi-seriously.

      * SETI@H. . . Why are you looking at me like that? Admittedly, it's cliched, and I'm the impatient type who figured I'd find my first LGM within a week. Or by the end of the year at the very latest. But I still think that it's a pretty cool thing to be doing. Or load up one of the alternatives like Folding@Home.

      * Find a buddy with a similar supercomputer, and have them play chess. Or tic-tac-toe billions of times every second (sorry, War Games flashback).

      * There are lots of mathematical problems out there just begging to have a few supercomputers thrown at them. I'm not aware of what they are, so consult your local Mathematics department and offer your services.

      * If you're not interested in doing video compression or complex scene rendering, you might be able to find someone who is. Some indie filmmaker who wants to play with the big kids is going to become your new best friend. Be sure to ask for a walk-on.

      * Some sort of AI project could be interesting, providing you have some specialized training. Or you just give someone at MIT telnet access.
      • This sort of thing would be an absolute godsend for those involved in AI, particularly for any comparison of techniques requiring runtime analysis.

        That'd be much preferable to running some particular piece of code for a week or whatever on a workstation that some bunch of 1st-year undergrads are using night and day. (All for one result - then realising you'd made a mistake in said code.)

        It would speed up research in so many diverse fields.
    • I've got one ..... (Score:3, Interesting)

      by taniwha ( 70410 )
      25 dual P650s in my home office... when I crank it all the way up I come in somewhere in the 50-100 range on the dnet RC5-64 dailies. Sadly, the original reason I built it has evaporated, and with the current cost of CA power I just have a fraction of it running.
    • "I can think of the CIA and the NSA having a few choice words about such "undetectable signals" being used by commonfolk after September 11th..."

      On the contrary: UWB cannot be used for long-range communication, so it's not going to replace your cell phone anytime soon. However, it's probably the best thing we've got for screening people at airports. This technology can literally see through walls, and it can do it without hurting anyone.

      Stephan

    • I can't think of any "useful everyday" uses, but surely a lot of different people have a lot of different ideas about what to do with a supercomputer.

      When I was a kid I played with particle systems. I'd set up a cloud of particles with mass and/or electrical charge and see how their simple interactions created large-scale behaviour. It was a simple system that didn't scale well (I made some attempt to break the space up into cubes and treat the contents of far away cubes as one particle, but it wasn't a seamless transition). Even with the limited number of particles I could play with (a few hundred), I still saw a lot of interesting things happen, like the material breaking up into 2 or 3 separate clusters. If I had oodles of CPUs, I'd enjoy figuring out good ways to split the load between them.

      In today's society (those societies whose members waste time on Slashdot, anyway), life isn't just about making a living. So in essence, these machines can be used for having fun, which is a good enough reason to make them.

      P.S. It's no reason to build a cluster, but if SETI@home doesn't turn you on, perhaps Folding@home [stanford.edu] will.
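A minimal sketch of the particle-system experiment described above - naive O(n²) pairwise forces with a simple Euler integrator. The function name, units, and softening constant are illustrative choices, not from the original program:

```python
def step(pos, vel, mass, charge, dt=0.01, G=1.0, K=1.0):
    """Advance a 2D n-body system one Euler step.

    Uses naive O(n^2) pairwise forces: gravity attracts, like charges
    repel. A small softening term keeps close encounters from blowing up.
    """
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + 1e-6   # softened squared distance
            r = r2 ** 0.5
            # net scalar force along the i -> j direction
            f = (G * mass[i] * mass[j] - K * charge[i] * charge[j]) / r2
            acc[i][0] += f * dx / (r * mass[i])
            acc[i][1] += f * dy / (r * mass[i])
    for i in range(n):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt
```

The doubly-nested force loop is exactly the part a cluster would parallelize: give each node a slice of the particles, have every node compute forces for its slice, then exchange updated positions each step.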

  • HP Did This Too (Score:5, Interesting)

    by MathJMendl ( 144298 ) on Thursday December 27, 2001 @12:29AM (#2753985) Homepage
    ZDNet has an article [zdnet.com] about HP building a supercomputer like this as well, called the "I-Cluster." It has 225 networked computers running Linux Mandrake (so changes could easily be made) on 733 MHz out-of-the-box PCs. The only catch is that it is slightly more expensive: $210,000 (minus network cabling). On the other hand, they plan to release the open source tools they made as well, so that people can repeat this.
  • Sure (Score:2, Offtopic)

    by jsse ( 254124 )
    I always thought it wasn't worth wasting valuable garage space on my second-hand Japanese car, which is worth no more than $1,000.

    Now it is used to place a $41,000 supercomputer! Ph43r m3!!

    but then, I wouldn't allow anyone to drive a car into my garage (WATCH THOSE NETWORK CABLES ON THE GROUND!), so should I build another garage for my real cars?....
    • I just think of the crappy switching layout and think to myself: Think: set think! [ I am glad I'm drunk because ordinarily a bunch of clustered dual processor equipped machines equivocating on the nearest VC makes me ill, But chemically now, I THINK IT IS PURE ZEN. ]
  • easy cowboy (Score:3, Funny)

    by ProfKyne ( 149971 ) on Thursday December 27, 2001 @12:37AM (#2753995)

    Those are some interesting ideas.

    Now how about organizing them before publishing them? Call me pre-postmodern (and I'm still in my twenties), but I tend to learn more from a coherently-organized message than from a random jumble of statistics and facts. Cringely jumps from a detailed description of the KLAT2 and its innovative networking technology to a brief description of UWB. And then it's over.

    Maybe I'm missing something.

    • Yep, it was still a fun article, though. Now, I'm off to read what I can about UWB, and why one would need a super-computer to use it...
    • If you followed Bob's 802.11b adventures (which ran off-and-on for about 2-3 months), you know that this is the beginning of what will be a whole series of articles about this supercomputer.

      Yes, he is actually going to try to build this thing, and he is going to document and post his progress as well as every single technical snag and kludgey solution.

      I can hardly wait!
  • Now what... (Score:1, Redundant)

    by jsse ( 254124 )
    Now what am I going to do with the extra computational power that I created?

    Running Super-SETI at home, claiming to be the greatest contributor when they really find ET?

    Running Super-Quake with all the transparent cheat-codes on, without the slightest jitter?

    Rendering MSN frontpage in less than a second, with Mozilla?

    Any better idea?
    • I'd really like a cheap farm for video compression. I tied up a G4 433 for six hours last week compressing a 20-minute movie using Sorenson 3. Fortunately I was using OS X, so the machine remained relatively responsive, but still: six hours pinned just for 20 minutes. (Of course, it did take a 4GB movie down to 150MB.)

      So now that we have a cheap supercomputer, all we need is cheap software. :-) I imagine Apple won't be porting iDVD to Linux anytime soon, and the stuff the studios use is either custom or very expensive.
      • If you upped the priority of the process you might have gotten it to work a bit faster. The renice command sets the priority of a process so that it will take more processor time. Try using a value of -16 (lower values are higher priorities, go figure), but be warned that your computer might not be quite so responsive. It's a small price to pay if it cuts the time from 6 hours to 3, however.

        Just do the following:

        1) start the program going
        2) run terminal
        3) type top and hit return
        4) look for the pid number for your process
        (this is the pidnumber in step 6)
        5) hit control-c
        6) type sudo renice -16 pidnumber
        7) enter the administration password
        8) watch the time needed drop
        • I don't know what commands are on Mac OS X, but this should work too...

          renice -16 `ps | grep <insert app name here> | grep -v grep | awk '{ print $1 }'`

          if you have a few apps, stick it in a loop like:

          for i in `ps | grep <app name> | grep -v grep | awk '{ print $1 }'`; do renice -16 $i; done

          (change 'renice' to 'kill', and you have my favorite alias)
          • Thanks! I'll try it out when I get back to work.

            This movie project was my first experience with OSX, and the first real time I've spent with a Mac since I gave away my 7100 a few years ago... With so much control over the OS and a system that didn't crash on me once I think I'll be spending more time with it.
  • What good is a supercomputer in your garage if you do not use it to maximize garage-holder value? If you provide supercomputer habitat for the progeny and supercomputer embodiment of the JavaScript AI Mind, [sourceforge.net] which has also been coded in Forth as Mind.Forth Robot AI, [scn.org] then your home-sweet-home garage will be a major waystation on the road to the Technological Singularity. [caltech.edu]

    Just as the Schroedinger equations for atomic bombs and such were developed seventy-five years ago, when Erwin Schroedinger spent his 1926 Christmas vacation holed up in the Swiss Alps working out a few mathematical formulas that shook the world, nowadays over the 2001 Yuletide there have been the first stirrings of True AI in the JavaScript AI Mind, [sourceforge.net] which any garage tinkerer may adapt for either 'pert near all-powerful supercomputer AI or a killer-app if not killer robot. [mit.edu]

    Following in the footsteps of the giants who created Visual Basic Mind.VB [virtualentity.com] and Java-based Mind.JAVA, [angelfire.com] be the first on your block to create the supercomputer-based Garage-Mind.

  • I thought it was 100base-T? Am I missing something?

    Here's the quote: "And fast Ethernet (10base-100) costs about three percent of gigabit Ethernet on a per-card basis, so using four cards per PC still saves 88 percent."

    Google results for 10base-100
    - Results 1 - 10 of about 81.

    Google results for 100base-T
    - Results 1 - 10 of about 64,800
  • Where do we get them? Where do they go? It seems you have enough, then you run out! Does anyone know of a site that sells them?
  • I've decided to call her Wendy.

    I wonder if he's referring to Stahn Mooney's wife from Rudy Rucker's *ware novels...

  • I think a fake Stanford degree would do nicely.

    Maybe they could set things up so that ALL his articles hit the main page as soon as he posts them.

    If this were the case he could put a "discuss this article" link on his page and simply link to /.

  • Old news (Score:4, Offtopic)

    by SumDeusExMachina ( 318037 ) on Thursday December 27, 2001 @01:28AM (#2754095) Homepage
    Sorry to say it guys, but this is a repeat of an old Slashdot post that linked to an ArsTechnica article [arstechnica.com] more than a year old.

    Still, though, after having to wade through Cringely's painful lack of comprehension of basic technical knowledge, reading the ArsTechnica piece again was quite refreshing.

    • Thanks for the link to the ArsTechnica article, you are right it is a much better read on KLAT2. I was particularly interested by the network design, I'd never thought about how to solve that problem and thought the KLAT2 solution was great.
    • I think you missed the point. Cringely isn't a gearhead and doesn't claim to be. If he can make this work, then anyone with sufficient interest and a willingness to learn can build their own scalable computer cluster, for whatever goofy project turns them on. Does this loss of technical-priesthood privilege bother you? :)

      Look on the bright side: at some point in the future when your relatives bother you for help with computer problems, the problems might actually be interesting. Instead of wondering why Windows has eaten Uncle Bob's resume, they'll wonder why there's an anomalous 6ms latency on node 4 and want you to help them figure out whether the problem is related to cable shielding degradation or whether there's a subtle error in the routing algorithm...

  • Through Google I found the UWBWG [uwb.org], and there's lots of detailed papers at Aetherwire [aetherwire.com]. Interesting reading.
  • by markj02 ( 544487 ) on Thursday December 27, 2001 @01:37AM (#2754114)
    Cringely is completely missing the point. KLAT2 uses multiple routes and switches, not channel bonding. And what the project contributes is not the basic idea of using multiple network interfaces (which is decades old), but a specific approach: using genetic algorithms to optimize the network topology. More traditionally, such clusters have used manually designed topologies with known performance bounds.
    • by funnyguy ( 28876 ) on Thursday December 27, 2001 @03:32AM (#2754248)
      The FNN, which was created for KLAT2, is not a speed increase of Ethernet by using multiple network cards. It basically allows full speed (100Mb full-duplex) without a 64+ port, full-wire-speed switch - if such a thing even existed. Cringely's network is just 4 channel-bonded network layers, and channel bonding actually has slightly more overhead than an FNN. With KLAT2's FNN, each machine is on 4 separate networks. No matter what other machine a single machine needs to communicate with, they each share one common network. Each network is held together with one switch, so there is always a full-speed route to every other computer in the cluster. The OS handles this directly by using /etc/ethers to hard-code the hardware addresses of every computer. Different networks are different subnets, and the network routes are laid out accordingly... blah blah... I could go on and on, but aggregate.org has more info.

      As for the algorithm everyone is talking about: there are some versions which can return a pattern in a second or two on a slow Celeron, and there are some versions, optimized for certain datasets, which take time to run. But generally, you don't need a supercomputer to design an FNN, even with 64+ nodes.
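The defining FNN property described above - every pair of machines shares at least one single-switch network - is cheap to verify once you have a candidate wiring. A toy sketch (the topology below is made up for illustration, not KLAT2's actual wiring):

```python
from itertools import combinations

def is_flat_neighborhood(nics):
    """nics maps each machine to the set of switches its NICs plug into.

    Returns True if every pair of machines shares at least one switch,
    i.e. every pair can talk over a single-hop, full-speed path.
    """
    return all(nics[a] & nics[b] for a, b in combinations(nics, 2))

# Toy wiring: 4 machines, 2 NICs each, 3 small switches.
topology = {
    "node0": {"sw0", "sw1"},
    "node1": {"sw0", "sw2"},
    "node2": {"sw1", "sw2"},
    "node3": {"sw0", "sw1"},
}
```

KLAT2's genetic algorithm is essentially searching over assignments like this, subject to per-switch port counts, to guarantee shared-network coverage for every pair of nodes.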
    • by Zeinfeld ( 263942 ) on Thursday December 27, 2001 @08:45AM (#2754491) Homepage
      Quite. The problem with measuring supercomputer performance is that every single machine in the class is highly optimised to a particular niche. That is the main reason they are so expensive compared to the components - large machines sell in the tens rather than the tens of thousands.

      Anyone can build a machine with really high processing performance: just buy a few thousand Xboxes and plug them into the same Ethernet cable. The real issue is how much communications bandwidth you have between the CPUs. Some problems require almost none - the 'trivial parallelism' problems like DEScrack and the Mandelbrot set. In the 1980s we had a machine with 1000 20MHz processors that could bang out Mandelbrot sets like anything (using the goofy algorithm, not the modern optimizations). But it wasn't much use for anything else.

      The problem with competitions for supercomputers is that they rarely measure the communication bandwidth, because (a) it's hard to do and (b) the effect on performance is highly algorithm-dependent.

      As for KLAT2's ingenious topology, I once did some research in the area myself when it was the fashion. I tried using minimum-diameter graphs, which in theory should have been better than a plain torus. However, like Bill Dally at Caltech, I concluded that the additional cost of an exotic topology (more than double the price) was not really justified by the performance advantage (about 10-30% on a good day).

      Certainly the many companies that set up to build transputer based processing clusters with high performance switches inside did not seem to go anywhere much.

      Using a high performance router at the core of a processing cluster might be interesting. They are pretty cheap these days and are headed cheaper.

  • by Bowie J. Poag ( 16898 ) on Thursday December 27, 2001 @01:46AM (#2754126) Homepage


    Speaking as someone who, yes, has actually worked with the big iron...

    Why bother? Remember, Moore's Law is still in effect. We've recently hit the point in the curve where supercomputers are no longer needed, nor cost-effective. That is, the time it takes for the industry to deliver a far superior product has eclipsed the average lifespan of your typical supercomputer.

    We're living in an age where a single graphing calculator you can buy at Walgreens has more horsepower under the hood than what got us to the moon 30 years ago. Your $2700 PC will be worth $150 within 3 years.

    Having a supercomputer in every garage makes about as much sense as taking a rocket fuel-powered dragster to the supermarket for a gallon of milk.

    Cheers,
    • "taking a rocket fuel-powered dragster to the supermarket for a gallon of milk"

      Sorry to be a pisser, but wouldn't that analogy fall flat, considering that advances in land speed vs. computation speed are highly different? The question I'm wondering about is this: "What the fuck are we supposed to do with all this processing power?" In other words, what is the killer app that would use this? I can only come up with artificial intelligence used for slave-like tasks around the home, plus the generic latest whiz-bang entertainment. Does anybody else know of anything?

    • Sadly, you are partially correct [arxiv.org].
    • <disclaimer>I know little about 'big iron'</disclaimer>

      But isn't the point of these kinds of projects to derive more computing power in a generic form, something useful in many situations?

      Sure, my Athlon isn't too slow at the piddly little hobbyist 3d rendering stuff I play with, but what if I suddenly get grandiose dreams of 3D worlds, wouldn't it be nice if I could divert the down payment for a house and move myself a year or two farther along Moore's timeline?

      I can think of some small business applications where a nice quick video compression would be nice, especially if the hardware and software were all generic enough to buy off the shelf without a serious outlay of cash. Granted, there are very nice and very fast hardware codecs but then what if that same small business wanted to render some 3D along with that video stream? Or I'm working for them and get permission to render my VR opus overnight?

      What about applications that could be enabled by cheap and standardized GFLOPs? If you can't think of any you're not thinking hard enough.
    • Two Reasons (Score:3, Interesting)

      by nuintari ( 47926 )
      One: Because we can.

      Two: Ever seen the stuff they run on supercomputers today? Simulating a supernova for 1 nanosecond can take a month of CPU time on some of the world's fastest supercomputers. Oh, it's still very necessary. If the past is any indication of the future, we will always need blazing-fast machines to push the limits in the scientific world.

      I assume you mean big iron as in mainframe, which is NOT a supercomputer by any means. Mainframes do the work that runs this world; supercomputers help us discover what we'll do in tomorrow's world. They are very different worlds.
    • by Rasta Prefect ( 250915 ) on Thursday December 27, 2001 @06:21AM (#2754398)
      I don't know about every garage, but as someone who is currently working on a research project at a university, I can say we'd find something like this very interesting, as would a number of other departments on the campus. We've got a couple of Crays sitting around, but can't afford the cost of maintaining the things. Something like this would be way more affordable to buy and maintain for educational/research purposes where traditional supercomputers aren't even vaguely an option.
    • Why bother. Remember, Moore's Law is still in effect

      What's Moore's Law got to do with this? This is more the area of Murphy's Law, I think. As for why bother, heck, I don't know: because I can. When I had a 286 PC, it did everything I wanted it to do at the time, why did I need a 386? My 386 was dandy, what was the benefit of having a 486? My trusty 486 was quite fast at the time: was the premium price of a Pentium worth it?

      Stuff happened! People thought up new applications for newer and faster machines, and then we couldn't do without them. Remember when your average machine could push out 5 frames per second of 160x120 video, tops? I remember when encrypting a 26k text file took almost a minute, each. Back in the day I didn't think I'd be watching DVD videos on my desktop or laptop PC: who'd want to, that's what TVs were for!

      Years and years ago I had a program that simulated stellar interaction in small globular clusters. A few hundred stars pushed an 8086 as far as it would go, and it was still an overnight crunch to simulate much interaction. I kinda gave up on it after a while: other interests, etc. I think about it occasionally, wondering when that sort of stuff will get commoditized to the point where I can take a look at it again without having to pull away from current projects for six months. Not quite there yet, I think, but gettin' close, gettin' mighty close... :)

    • by Zeinfeld ( 263942 ) on Thursday December 27, 2001 @09:32AM (#2754558) Homepage
      Speaking as someone who, yes, has actually worked with the big iron...

      The machine I worked on in the early 90s is still in the top 100 of the supercomputer charts (or would be if the compilers knew about it).

      While Cray-1 performance can now be had at desktop commodity prices, that machine is two decades old. The obsolescence rate is nowhere near as giddy as some would claim.

      The really big iron tends to have a lifespan of about five years and is typically retired because the power consumption and maintenance costs favor a move to newer hardware. True supercomputers rarely fall victim to Moore's Law. Even the KLAT2 machine discussed here only barely qualifies as a supercomputer; 64 processors is at the low end of the scale. People have Web servers with that number of CPUs. True big iron starts with a few hundred processors and goes up to the tens of thousands.

      If by working on the big iron you merely mean you used to use IBM 3090-class machines, then the joke is on you: those machines were often obsolete before they were manufactured. When I worked at one lab I had a desktop machine (a first-production-run Alpha) that was considerably more powerful than the CPUs of the just-installed campus mainframe.

      Fact is, many of the people buying 'big iron' in the 1980s and 1990s were incompetent. They bought machines that ran the O/S they knew, which often meant they bought obsolete IBM mainframes for applications where a network of IBM PCs would have served far better. I spent quite a bit of time in institutions where wresting control of the computing budget from an incompetent IT dept was a major issue. In fact, the World Wide Web began at CERN in part as a result of such a struggle. Tim, bless him, wanted the physicists to switch from the IBM mainframe CERN VM to NeXTStep machines. One of the schemes that the CERN CN division had cooked up to force people to use the mainframe was to make information such as the address book available only on the IBM mainframe. Attempts to make it more widely available were treated much the same way that Napster was treated by the RIAA. The Web took off at CERN initially because you could access the address book from a workstation or from the VAX.

      Very few mainframes were actually designed to provide fast processing. The IBM 3090 series was actually designed to perform transaction processing for banks. As a scientific CPU it offered tepid performance at a price around 100 to 500 times that of a high-powered workstation.

      There are certain applications in which CPU cycles are still the limiting factor. Admittedly they are much smaller as a proportion of the whole than they were 10 years ago.

        • Even the KLAT2 machine discussed here only barely qualifies as a supercomputer; 64 processors is at the low end of the scale. People have Web servers with that number of CPUs.

        I know this is completely off topic, but Travelocity (the travel web site, you know) has lots and lots and lots of SGI Origin systems for running their front-end app-- it does session management and HTML generation, and passes data back and forth from the user to the database, so it's basically just a web server.

        I've lost count, but I know they've been buying at least one 32-processor system per quarter for several years now. And, if I remember right, they recently bought something like four 32-proc Origin 3000 systems, too.

        So, yeah, they've got a hell of a big web server. ;-)
    • I am reminded of the US Patent Office commissioner who supposedly reported that everything that could be invented had been invented. NOT.

      As someone with their own supercomputer (ACME [purdue.edu] and /. of 6/6/2000 [slashdot.org]) I can say that you'll come up with a bunch of things you would like to do but haven't found the CPU time to do. This of course presumes that you have half a brain.

      We run NP-complete problems to completion. Our idle loop is prime-number factoring of one of the RSA challenge numbers. If we were to hit one of those numbers (even the $10k one) we'd more than pay for the machine (but not the A/C or power).

      I do ponder what a typical PBS.org [pbs.org] reader would do with their own supercomputer. Most lack the sophistication to get a return on investment on even just the air conditioning and electricity, let alone the cost of the hardware and the setup. But what do you expect from someone who practices identity theft [wired.com]?

      All that said, it is having this class of power out in the hands of the masses that could well bring the next BIG NEW IDEA. It is neat that it can be done and I hope a bunch of /.ers write the code they want to run on such a thing then build one to run it.

      -- Multics
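The idle-loop factoring search mentioned above is a textbook embarrassingly parallel job: each node takes a disjoint range of candidate divisors and needs no communication until someone finds a hit. A toy sketch (trial division is of course hopeless at real RSA sizes; this only shows the shape of the per-node task, and the function name is my own):

```python
def search_range(n, lo, hi):
    """Scan odd candidates in [lo, hi) for a factor of n.

    Each cluster node would take its own disjoint [lo, hi) range; no
    inter-node communication is needed until a factor turns up.
    """
    d = lo | 1                  # start at an odd candidate
    while d < hi:
        if n % d == 0:
            return d
        d += 2
    return None
```

For example, a node scanning the range [3, 50) for factors of 91 finds the factor 7, while the same scan over a prime like 97 comes back empty.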

      • >But what do you expect from someone who practices identity theft [wired.com]?

        How exactly do you steal the identity of someone who never existed? The man we know as Robert X. Cringely was the InfoWorld Cringely for 8 years! I'd say he pretty much defined who that Cringely was (or is today; I don't read InfoWorld). Saying he practices identity theft would be a valid argument if the InfoWorld Cringely were someone else and he had just appropriated the name for use on PBS, but he didn't. He built up the InfoWorld Cringely, and so I believe he has a right to go on with the persona he's used all this time.
  • nah... (Score:3, Offtopic)

    by elmegil ( 12001 ) on Thursday December 27, 2001 @01:48AM (#2754131) Homepage Journal
    I'd rather have a superMODEL in every garage.
  • Of course, heat *is* an issue... but imagine a half inch between each layer: you would rack-mount them at a slight angle and use heat convection to pull air up - a chimney effect...

    $1,499 for a 600MHz iBook, 20 of these would cost ~$30k, but you couldn't use the channel bonding concept, unfortunately. You'd be stuck with 100bT, which would probably get swamped with any real work in a 4 iBook per switch, 6 switch topology... without even trying to minimize latency.

    20 iBooks would also take up about 8 x 9.1 x 11.2 inches per stack, so all 5 stacks would take up about 40 inches of space... You could stick these next to a desk or bed and use it as an end table! Okay, that'd be a tall end table...

    $2,999 for a 667MHz TiBook; 20 of these would cost ~$60k, but these *are* Gigabit-capable! You could use a similar topology, or perhaps, given the prices of Gigabit switches, you might as well use one switch. Who knows?

    Of course heat is even more of an issue, but given the same spacing as the iBooks, there's a whole extra half inch of space available to the TiBook!

    40x9.4x13.5 inches! It would even make a good space heater!

    Okay, okay, I know, it's damn expensive. But... consider, how much is a 20 CPU machine from HP or IBM? I know, I know, they tackle different uses, like reliability, uptime, IO throughput, etc. A 4way 680 pServer from IBM is $220k, from their own website :)

    Damn... I wonder when Apple is going to release a thin rackmount slab server?
    • Damn... I wonder when Apple is going to release a thin rackmount slab server?
      When they can figure out how to make it cute.
    • I have a TiBook. Apple *could* do away with the screen, keyboard, and speakers, and replace the CD-ROM slot with a ram-bay.

      Not only could you hook them together using gigabit ethernet, you could take advantage of the FireWire port as well, perhaps chaining them together into some sort of SAN. You are still limited to ~50MB/s, but perhaps that's not useless, I don't know.

      Still, with the ram bay you could up the memory from 1GB to something crazy, like 16GB. The battery is useful as a backup-emergency device, allowing the slab to run for about 4 hours in case of emergency (woo!).

      You could even conceivably netboot the thing, since OS X allows for that, right? Minimize the hard drive or get rid of it altogether... you could seriously make a slab about the size of 1/2" by 8" by 8", I suspect :)
  • Talk about power! (Score:2, Informative)

    by alouts ( 446764 )
    Literally.

    The costs of a clustering setup go well beyond the initial hardware. At the level that Cringely is building (with only 6 machines), it may not be a huge problem, but running KLAT2 will cost you some dough just for the power.

    A couple years ago I made a dumb mistake and bought a saltwater reef tank without realizing that it would end up costing me $150/mo. in electricity bills (it ain't cheap running 4000+ watts in lights and pumps 18 hours a day). I'm sure running 66 machines 24 hours a day ain't cheap either.

    • OK, let's see how much this would cost. Let us assume these computers use 300W power supplies and, as a worst-case scenario, that all 300W the PS is capable of supplying is being used in each machine.

      I live in Quebec, where electricity is among the cheapest in the world, costing about 6 to 7 cents (CAD) per kWh. I don't know how much it is in the US.

      so I have 0.300 kW/machine x 66 machines x 24 h/day x $0.06/kWh = $28.51/day.

      $28.51 a day in the worst case might be a bit pricey for a household, but it is cheap for a university. Of course, electricity is much more expensive in the US; I have seen prices of $0.14/kWh in New York many years ago. Then again, the 300W power supply is probably not being fully used, making it cheaper.
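That back-of-the-envelope calculation is easy to check in a few lines of Python (the full-load 300W draw and the $0.06/kWh rate are the worst-case assumptions from the post, not measured figures):

```python
# Worst-case daily power cost for a 66-node cluster, using the
# assumptions above: every machine draws its full 300W rating.
watts_per_machine = 300
machines = 66
hours_per_day = 24
dollars_per_kwh = 0.06  # cheap Quebec hydro rate

kwh_per_day = watts_per_machine / 1000 * machines * hours_per_day
cost_per_day = kwh_per_day * dollars_per_kwh
print(f"{kwh_per_day:.1f} kWh/day at ${dollars_per_kwh}/kWh -> ${cost_per_day:.2f}/day")
# At New York's ~$0.14/kWh the same load would run about $66.53/day.
```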
  • by Binary Tree ( 73189 ) on Thursday December 27, 2001 @02:07AM (#2754165)
    Technically we almost all have a "supercomputer", depending on what era's standards you're referring to.

    Also, if everybody had a supercomputer in their garage, they would no longer be so "super."

  • ...you'll be lookin' at a whole lotta Kentucky Fried Penguin!

  • by IvyMike ( 178408 ) on Thursday December 27, 2001 @02:33AM (#2754188)

    Dr. Dietz used to teach at Purdue, and I had the good fortune to take a compiler course taught by him. On the first day, when introducing himself, he came to the part where he described how to get in contact with him. When giving out his phone number (at Purdue, on-campus numbers were 5 digits long), he mentioned that it spelled "GEEKS". He added, "No, I didn't ask for GEEKS, but when I figured it out, I thought it was pretty cool."

    Needless to say, it was a pretty cool course.

  • Wow! (Score:1, Flamebait)

    by Squeeze Truck ( 2971 )
    Can you imagine a beowulf cluster of these supercomputers??!?!

    ...I'll get me hat...
  • by fgodfrey ( 116175 ) <fgodfrey@bigw.org> on Thursday December 27, 2001 @02:49AM (#2754206) Homepage
    The article would have people believe that a supercomputer is nothing more than a collection of not-quite-modern processors, memory, and an interconnect of some sort. This is simply not the case. If it were, why would many people (granted, a smaller number than before) still buy real big iron? The answer is that Cringely's collection of processors is not a real supercomputer for the kinds of applications associated with traditional machines. Traditional vector supercomputers still have processors that are faster than Pentium 4-class systems. Traditional massively parallel supercomputers (which are the most similar to a cluster) have a number of features not found in your average garage-built cluster, like a truly low-latency interconnect, gang scheduling of entire jobs, and a single system image for users, administrators, and processes.

    Clusters are great for embarrassingly parallel applications (i.e., ones whose threads don't communicate with each other much). This includes things like SETI@home and batch rendering of images. Where they don't compare is on applications that communicate a lot, like nuclear physics simulations. This is not to say that will never change in the future, but for the time being it's still true.


    Last, and certainly not least, real supercomputers have memory bandwidth that can match the speed of the processor. A Cray or an SGI Origin has an absolutely massive amount of bandwidth from the processor to local memory compared to a PC. That allows a traditional supercomputer to actually *achieve* its fantastic peak performance numbers. On many applications, the working sets are huge and don't fit in cache, so you end up relying on memory being fast. On a PC, it's not, and I've heard from sources I consider reliable (though I have no actual numbers to back this up, so it may be rumor only) that one large cluster site sees around 10% or less of peak on a cluster for a nuclear physics simulation, whereas on a vector Cray you can hit ~80% of peak. This means the cluster has to be 8 times more powerful, and when you start multiplying the costs by 8, they start looking like the same price as a real supercomputer.


    So my point is that building a real supercomputer does not mean grabbing a bunch of off-the-shelf components, slapping them together with a decent network and running Beowulf (or a similar product).
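The gap between peak and achieved performance described above can be sketched with a simple roofline-style bound: a kernel whose working set misses cache can sustain at most (memory bandwidth) x (flops per byte fetched). The numbers below are illustrative, not measurements of any particular machine:

```python
# Roofline-style upper bound on sustained performance for a
# memory-bound kernel (working set too large for cache).
def achieved_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    # Sustained rate is capped both by the ALUs and by how fast
    # operands can be streamed from main memory.
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Illustrative figures: same 1 GFLOPS peak, very different memory systems.
intensity = 0.125  # 1 flop per 8-byte double fetched
pc = achieved_gflops(1.0, bandwidth_gbs=0.8, flops_per_byte=intensity)
vector = achieved_gflops(1.0, bandwidth_gbs=6.4, flops_per_byte=intensity)

print(f"PC: {pc:.0%} of peak, vector machine: {vector:.0%} of peak")
# Matches the rough 10% vs. ~80% figures quoted above.
```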

    • by Multics ( 45254 ) on Thursday December 27, 2001 @10:51AM (#2754722) Journal
      Your comments are true for a 486. They are not true for anything much newer. An IBM SP machine, which owns half of the top 10 on the top500 [top500.org] list, is basically a system built from commodity parts.

      Yes, these systems are sometimes not the best for handling vectorizable jobs, but they are so inexpensive compared to the old specialized hardware that it is easier to waste cycles than to build special hardware.

      As to memory bandwidth. Modern CPU caches make the question nearly moot.

      If all of this were not true, then people wouldn't be building clusters and the majority of the top500 list wouldn't be dominated by them. Instead there are only 3 traditional-architecture machines in the top 20. This is why Cray (et al.) no longer dominates the marketplace... commodity systems have overtaken nearly all of the specialized-hardware world.

      -- Multics

      • As to memory bandwidth. Modern CPU caches make the question nearly moot.

        This is simply not true. Your other points are pretty wacked, too, but I'll take this one because I have personal experience.

        I have some image processing code that runs on IRIX, and I recently did a shoot-out between an Origin 2000 and an Origin 3000. Both machines had eight 400 MHz R12000 processors with 8 MB of secondary cache and 4 GB of RAM, and both were equivalently equipped for disk.

        The Origin 3000 was almost twice as fast as the 2000 was, with identical CPUs, memory, and disk. (The actual numbers are on a spreadsheet at the office, unfortunately.) The difference? Memory and interprocessor bandwidth. The Origin 3000 platform has a specified memory bandwidth of about 2.5 times the bandwidth of the Origin 2000.

        The test involved taking a big multispectral image, splitting it up into tiles, handing each tile off to a thread, and doing some processing on the tiles. The data set was pretty huge, but not so big that it couldn't be cached entirely in RAM, so the first step was to load the whole thing into memory. But for the actual test run, there was a lot of fetch-operate-fetch, which really exercised the memory bandwidth of the system.

        So your comment about memory bandwidth being moot is completely off base.
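The tile-per-thread pattern described above looks roughly like this in modern Python (NumPy and a thread pool standing in for the actual IRIX image-processing code; the tile size and per-tile kernel are arbitrary placeholders):

```python
# Sketch of the test described above: split a big image into tiles
# and hand each tile off to a worker thread for processing.
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def process_tile(tile):
    # Stand-in for the real per-tile image-processing kernel.
    return float(tile.mean())

def tiled_process(image, tile_size=256, workers=8):
    # Split the image into square tiles, then process them in parallel.
    tiles = [image[r:r + tile_size, c:c + tile_size]
             for r in range(0, image.shape[0], tile_size)
             for c in range(0, image.shape[1], tile_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tiles))

image = np.random.rand(1024, 1024)  # toy single-band stand-in image
results = tiled_process(image)
print(f"processed {len(results)} tiles")  # 4x4 = 16 tiles of 256x256
```

Because each fetch-operate-fetch pass streams the whole working set through the CPU, this kind of job is exactly where memory bandwidth, not clock speed, sets the pace.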
      • No longer dominating the Top 500 and no longer dominating the marketplace are two different things. The Top500 benchmark (LINPACK) doesn't do a lot of interprocessor communication and hence is exactly the type of job well suited to a cluster.


        As for cycles wasted/cost, that is going to depend on the applications involved. At some point, the sheer cost of the power wasted is going to be a factor. Obviously not on a garage built six node cluster, but if you start talking about 2048p the power *will* be an issue.


        The IBM SP, while being *mostly* commodity uses some non-commodity parts and has a lot of proprietary software to make it work.


        CPU caches, modern or otherwise, are not an issue with an application that has, say, a 1 gigabyte working set. It simply doesn't fit in the cache no matter what you do. You can restructure loops to make things better, but you're still going to be banging on memory.


        You're right, commodity systems have overtaken a lot of areas that used to require traditional supercomputers, but then, the market for traditional-architecture supercomputers has *never* been big.

  • This is just spread spectrum, but with even more spread. See TimeDomain [timedomain.com] for the hype. Even they admit "UWB's best applications are for indoor use in high-clutter environments." We already have wireless LANs, and they work quite well. UWB may or may not play in that market, but it's not a big deal.

    The FCC is being very cautious about mass-market UWB products. Since these things blither over a gigahertz or so of spectrum, they overlap with other services. At low power, a few of these things are probably OK, but in bulk, there could be trouble. The concern is that mass deployment could wipe out other services in congested areas.

  • by Chazmati ( 214538 ) on Thursday December 27, 2001 @08:25AM (#2754479)
    "(the operating system) will be QNX, a real time OS that supports massive parallelism and has very low overhead. QNX is fast! QNX is also Posix compliant, so there is lots of software that almost works under it."

    If you're looking for software that almost works, I know of an OS that might fit your needs. You're not going to hook this thing up to the Internet, though, are you?
  • by peter303 ( 12292 ) on Thursday December 27, 2001 @10:43AM (#2754698)
    By definition, only the fastest devices are supercomputers. These days that is about a teraflop. That includes the US DOE ASCI series and the announced installations of the Blue Storm and Blue Gene IBM computers. Ten-gigaflop computers are a dime a dozen, and a hundred gigaflops is not so rare.
  • 1) heat in garage in winter
    2) Top 10 in Seti@home
    3) Porno-ize your favorite anime (Final Fantasy anyone?)
    4) Why are you reading this? I thought you were doing #3
