Transmeta Meets Blades

The Griller writes "Gordon Bell, one of the creators of the VAX, and Linus Torvalds were at the launch of a new supercomputing platform at Los Alamos National Laboratory. Based on Crusoe processors from Transmeta and running a version of Linux, it aims to be cheaper than conventional supercomputers by requiring no cooling and less maintenance." Basically, it's blade clustering, using Beowulf.
  • by Anonymous Coward on Saturday May 18, 2002 @06:13PM (#3543976)
    Oh well, here's a list of mirrors... [ebay.com]
    • That's got to be the worst joke ever! You could have at least pointed to a never-ending link to this page, or a picture of the Funhouse hall of mirrors.
    • LOL, nice link....
      Is that original? Well, even if not, I thought it was funny...
  • I've got to wonder (Score:4, Insightful)

    by Arker ( 91948 ) on Saturday May 18, 2002 @06:14PM (#3543979) Homepage

    I've got to wonder why they are using Crusoes. It's a good chip for the application, don't get me wrong... but the last I heard the main advantage it has over StrongARM is x86 compatibility, which shouldn't be an issue here.

    • I've got to wonder why they are using Crusoes
      Because if they used Intel chips, Transmeta wouldn't make very much money off it. </joke>

      But this does explain why it's been very important for Linus to push MP in the kernel.

      • by gmack ( 197796 )
        Uhh, no... this has nothing at all to do with MP, since it's a Beowulf cluster, and last I checked you can't do MP with Transmeta.

    • According to the story, they are using the Crusoe chips because they don't require active cooling, unlike Intel or AMD chips.
      • by Arker ( 91948 )

        According to the story, they are using the Crusoe chips because they don't require active cooling, unlike Intel or AMD chips.

        Obviously you are not familiar with the ARM family of processors - they are very similar to the Crusoes, and in particular they don't require any active cooling either.

    • I am not sure how they are making an advantage out of the code morphing, but the article states that it is a primary consideration.
  • Dude, imagine a beowulf cluster of...*KRONK* [hercynium is clubbed with a shotgun and dragged away by the moderators...]

  • by guttentag ( 313541 ) on Saturday May 18, 2002 @06:17PM (#3543995) Journal
    Imagine if these weren't clustered...
    • Touche! Ignoring the obvious joke, you went for something subtler and much funnier.

      Now, just how do we make sure that no one ever posts this joke again?

  • Cube of Crusoes (Score:4, Interesting)

    by geoffsmith ( 161376 ) on Saturday May 18, 2002 @06:23PM (#3544012) Homepage
    Given that you don't need to actively cool these chips, I think what would be even cooler (N.P.I.) is a cube of chips stuck together and interwoven with some sort of vascularized heat sink. A meaty cluster of 100 chips you could hold in your hand and plug into a big cube-shaped socket on your supercomputing motherboard. Now *that* would be News for Nerds.

    Websurfing done right! StumbleUpon [stumbleupon.com]
    • Is that something similar to what is in the movie Pi?
      • Exactly. Except it wouldn't have that melted goo all over it.

        I was also thinking of something like a mini Borg ship you could hold in your hand. I think it would be really satisfying to have a big mass of processors in your hand, not like these wimpy, delicate little things we have now in their static-proof baggies. Also, once we've conquered the 2nd dimension (i.e. we've hit fundamental size limits, like 1-molecule-thick wires), the 3rd dimension is the next logical step. And vascularization like that found in the brain is a pretty good way to cool things off.

        Interestingly, what separates us from the Neanderthals is an extensive system of veins in the back of our head designed to cool the brain. It was an important evolutionary step that allowed us to evolve a lot more cerebral processing power.

        Websurfing done right! StumbleUpon [stumbleupon.com]
    • by Anonymous Coward
      Cool idea, but where do you put all the memory?

      Somehow you need to fit a wide memory bus in there... having 100 ~500MHz processors usually means hundreds of gigs of RAM. Where would it fit? (Remember, you've got to keep memory as close to the CPU as possible, or you take a very big performance hit... that's why we have multi-level memory caches on the processor die these days. See the sketch below.)
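      A minimal sketch of that locality point, in plain C (the array size and stride are illustrative assumptions, not figures from this thread): both loops below do the same arithmetic, but the strided one uses only one int per cache line fetched, so on typical hardware it runs several times slower.

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (1 << 24) /* 16M ints -- far larger than any cache */

      int main(void) {
          int *a = calloc(N, sizeof *a);
          if (!a) return 1;
          long sum = 0;

          /* Sequential pass: every byte of each fetched cache line gets used. */
          clock_t t0 = clock();
          for (int i = 0; i < N; i++) sum += a[i];
          clock_t t1 = clock();

          /* Strided pass: same number of adds, but one int per cache line. */
          for (int s = 0; s < 16; s++)
              for (int i = s; i < N; i += 16) sum += a[i];
          clock_t t2 = clock();

          printf("sequential: %.2fs, strided: %.2fs (sum=%ld)\n",
                 (double)(t1 - t0) / CLOCKS_PER_SEC,
                 (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
          free(a);
          return 0;
      }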

      • You'd put it in the middle, of course. Or, you could design your own chips with, say, 64 MB of memory per chip, so that both the processor and the memory are on one chip! Then all you'd have to do is stick them all together and watch your SETI@home [berkeley.edu] units crunch!
        Speaking of SETI@home, you could create fast Fourier transform chips out of those Transmeta chips, right? Just something to ponder, but a dedicated fast Fourier transform chip would be übercool to have in a cluster. Also, you'd have to rewrite/recompile SETI@home to use the chip, but if they really wanted those work units, they'd write one.
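        For anyone wondering what such an FFT chip would actually be accelerating, here is a minimal scalar sketch of the transform itself -- a naive O(n^2) DFT in plain C, purely illustrative (a real FFT, and whatever SETI@home actually uses, is far more optimized):

        #include <stdio.h>
        #include <math.h>
        #include <complex.h>

        /* Naive DFT: out[k] = sum over j of in[j] * e^(-2*pi*i*j*k/n) */
        static void dft(const double complex *in, double complex *out, int n) {
            const double pi = 3.14159265358979323846;
            for (int k = 0; k < n; k++) {
                out[k] = 0;
                for (int j = 0; j < n; j++)
                    out[k] += in[j] * cexp(-2.0 * pi * I * j * k / n);
            }
        }

        int main(void) {
            double complex in[8], out[8];
            for (int j = 0; j < 8; j++)
                in[j] = cos(2.0 * 3.14159265358979323846 * j / 8); /* one cosine cycle */
            dft(in, out, 8);
            for (int k = 0; k < 8; k++)
                printf("bin %d: magnitude %.3f\n", k, cabs(out[k])); /* peaks at k=1 and k=7 */
            return 0;
        }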
        • Sounds like a transputer from Inmos, which was popular in the early '90s. Dunno if they can still be bought, though. I used them at university. The easiest way to program such a cluster was with the occam programming language.

          Hmm, I think I'm getting old
    • ...will be based on Legos.

  • transmeta.com (Score:5, Informative)

    by jbrw ( 520 ) on Saturday May 18, 2002 @06:31PM (#3544035) Homepage
    transmeta.com has more information [transmeta.com] on why a Crusoe based solution was selected.

    It all comes down to "power consumption, size, reliability and ease of administration", apparently.

    And the marketing people at RLX Technologies [rlxtechnologies.com] should be shot for not having a press release up for this, as it's all based on their product...

    • and ease of administration

      could someone explain how a microprocessor is administered?
      • could someone explain how a microprocessor is administered?

        I imagine that with supercomputing, or any significant concentration of complicated hardware, hardware administration is a significant cost.

      • by yerricde ( 125198 ) on Saturday May 18, 2002 @08:57PM (#3544391) Homepage Journal

        could someone explain how a microprocessor is administered?

        In a large cluster, the question is not whether a processor has failed, but how many have failed. Such clusters generally make it possible to swap out a failed processor while the program is running. Chips that last longer will reduce the dependency on expensive technicians to keep coming in and swapping in new boards.
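        To put a rough number on "how many": a back-of-the-envelope sketch in C, where the per-node MTBF is a made-up illustrative figure (and the 240-blade count comes from elsewhere in this discussion, not this post).

        #include <stdio.h>

        int main(void) {
            double nodes = 240.0;    /* blade count reported for Green Destiny */
            double hours = 8760.0;   /* one year of round-the-clock uptime */
            double mtbf  = 100000.0; /* assumed per-node MTBF in hours (hypothetical) */
            /* Treating failures as independent: expected failures ~ nodes * hours / mtbf */
            printf("expected node failures per year: %.1f\n", nodes * hours / mtbf);
            return 0;
        }

        Under those assumptions you expect about 21 dead nodes a year, which is exactly why hot-swapping and longer-lived chips matter.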

    • Re:transmeta.com (Score:2, Insightful)

      by Anonymous Coward
      I can't imagine how a choice between Chip A and Chip B influences "ease of administration". Linux is Linux. I didn't know one had to have people madly flipping jumpers on lesser chips to do a calculation.

      Also, silicon is silicon. Pick your chip, and reliability is all pretty much the same. Failures are almost 99.9% power supplies and support items like caps, resistors, and edge connectors. When a chip fries, the root cause is almost always static or support electronics. (Well, there is overclocking.)

      Low power/small size is a good thing. I guess the right choice boils down to balancing watts and bucks for FLOPS per node.

      Anyway, I like the "stop using more transistors to make it go faster" bit. What a hoot. That's exactly the point of building a cluster. More chips, more transistors, more FLOPS.

      • Anyway, I like the "stop using more transistors to make it go faster" bit. What a hoot. That's exactly the point of building a cluster. More chips, more transistors, more FLOPS.

        That's exactly why he likes designs that don't use more transistors per CPU. The heat and power consumption of a P3/P4-class chip may not seem all that bad when you have one in your PC, but when you have hundreds of them racked up, it can become a very serious problem.

  • by guttentag ( 313541 ) on Saturday May 18, 2002 @06:34PM (#3544041) Journal
    ...the unveiling of the supercomputer, a Beowulf cluster called Green Destiny...
    Computing legends Bell and Torvalds looked on in envy as the Green Destiny blade cluster was unveiled, knowing only the great Li Mu Bai was worthy of wielding the blade cluster's power. Upon plugging the Green Destiny in, they were appalled to find it had been "r00t3d" by some Chinese hacker calling himself Yu Jen...
    • ... the Jade Fox never really learned the hacking techniques but her disciple did; and did very well.

      Li Mu Bai died one day, but yet his spirit lived on and he still fights today as a Giang Hu soldier - destroying script kiddies everywhere.

      Now only Lo, Jen and Shu Lien have the root passes and the universe is safe.
  • by baka_boy ( 171146 ) <lennon@@@day-reynolds...com> on Saturday May 18, 2002 @06:42PM (#3544057) Homepage
    Personally, I'd much rather have a rack of XServe [apple.com] 1U boxes than Transmeta chips -- G4 processors may not be quite as power-efficient as Transmetas, but they also run at higher clock speeds, have two processors per mobo, give you a fast 128-bit vector processing unit (very nice for scientific calculation), and still beat the pants off of P3/P4 and Athlon chips in the power/heat/size arena.


    The only trick would be getting the things to work properly in a headless configuration -- Apple won't ship them without a graphics card, but I'm relatively certain that you could get a LinuxPPC installation to work even without the card installed.

    • Actually, look a little more closely at the tech specs on Apple's site... it says that OS X Server was specially tweaked to run headless on these. (It also mentions the DB9 serial port set up for the old-skool Unix geeks! Yay!)
    • Even at 2 processors per 1U, they aren't as dense as a blade.

    • Actually, the Transmeta chips aren't really x86 chips at all. In theory, the Transmeta chips could be made to utilize the same instructions as G4 chips. For that matter, they could be made to use Java virtual machine bytecode. They're really quite dynamic chips.
    • by Anonymous Coward
      fast 128-bit vector processing unit (very nice for scientific calculation)

      Actually, vector processing is essentially useless for most scientists as long as the compiler doesn't autovectorize the code.

      First, most algorithms are NOT trivially vectorizable.

      Second, most scientific code is Fortran-77 that has been developed over decades. If there are trivial function calls where you can use an Altivec library, that's fine, but there is no way people are going to rewrite all their code in Altivec, since it would destroy portability (and Altivec primitives only exist for C/C++ anyway).

      Third, almost all scientific software uses double precision.

      There are a handful of cases where vector processing is wonderful, but it's a very limited subset (and although that subset might be important to you, it doesn't suffice for most users). Just look at x86; you can argue that SSE/SSE2 isn't as capable as Altivec, but it definitely accelerates performance significantly. Still, very few programs are handcoded with those instructions, even though the x86 market is 20 times larger and SSE2 supports double precision -- it simply isn't worth the effort.

      The G4 Altivec might be wonderful, but I want my code to run fast on all platforms and have a lifetime of at least 10-20 years. If we are to invest any time in handcoding vector instructions it will be SSE and not Altivec, since that user base is 20 times larger...
      • Actually, vector processing is essentially useless for most scientists as long as the compiler doesn't autovectorize the code.

        That's wrong, and so is the rest of your post.

        double x[veclen]; // init it somehow
        double y[veclen]; // init it somehow

        double scalar_product = 0;

        for (int i = 0; i < veclen; i++) {
            scalar_product += x[i] * y[i];
        }

        The above is scalar code. Any compiler aware of a vector processor compiles that to a single vector processor instruction. At least that was the case 14 years ago, when I worked on vector processors.

        I'm not sure if Altivec is a true vector processor; I think, like MMX, it supports only very limited SIMD processing, but I'm not sure, as I say.

        Operations on "arrays", hence vector processors, are very easy to map on vector processing units.

        Regardless if it is as easy as above or if you have offsets or gaps like i+=3 in the loop above.

        Same is true if the result is a vector again of course.

        Manual vector processing instructions get interesting if the loop above calculated a vector and that vector was input for a further stage.

        Like this:

        Vector a, b, c, d, e;
        Scalar i, j, k;

        a = i*b + j*c; // result is a vector
        e = a + k*d;

        Usually you would have loops calculating that; the second loop would run after a is completely calculated.

        If there is a second vector processor (or just a second unit on the processor), you can feed a directly into it to calculate e.

        AND THIS is hard for a compiler to figure out. Probably that is what you meant. As all vector units are different in that respect, there exist Fortran libraries with standard subroutines for that.

        angel'o'sphere
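        For comparison with the SSE route mentioned above, here is a minimal sketch of a handcoded double-precision dot product using SSE2 intrinsics. It assumes a C compiler and an SSE2-capable CPU, and is an illustration of the technique rather than anyone's production code; it does the same job as the scalar loop above, two doubles per instruction:

        #include <emmintrin.h>
        #include <stdio.h>

        /* Dot product with SSE2: two doubles per multiply/add step. */
        static double dot_sse2(const double *x, const double *y, int n) {
            __m128d acc = _mm_setzero_pd();
            int i;
            for (i = 0; i + 1 < n; i += 2) {
                __m128d a = _mm_loadu_pd(x + i);
                __m128d b = _mm_loadu_pd(y + i);
                acc = _mm_add_pd(acc, _mm_mul_pd(a, b));
            }
            double lanes[2];
            _mm_storeu_pd(lanes, acc);
            double sum = lanes[0] + lanes[1];
            for (; i < n; i++) /* scalar tail for odd n */
                sum += x[i] * y[i];
            return sum;
        }

        int main(void) {
            double x[5] = {1, 2, 3, 4, 5}, y[5] = {5, 4, 3, 2, 1};
            printf("%f\n", dot_sse2(x, y, 5)); /* 5+8+9+8+5 = 35 */
            return 0;
        }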
    • Despite all of the replies to your post indicating otherwise, I'm pretty sure you're wrong.

      No Mac has ever been able to boot without some kind of graphics hardware. Not while running MacOS, LinuxPPC, or anything else. This is, of course, completely OK. They will still run headless. That is, don't connect a monitor, and then they're headless. Just imagine that the graphics card isn't installed. If you ever see the window manager using more than 1% CPU, I'll eat my hat.

      I could be mistaken, but I was also pretty sure that there is no standard PC hardware that will boot without a graphics system either. The operating system has nothing to do with it.

      Anyway, as someone else pointed out, three blades stack in 1U together. Your CPU density is still better with Transmeta.

      Someone else pointed out that Transmeta chips could run code-morphing software that supports G4 instructions. This is the dumbest thing people keep saying about Crusoe. Of course it *could* run different code-morphing software, but it never will. It cost Transmeta as much to develop that software as it did to develop the hardware. There is *no* *way* that anyone will ever write the software to let Crusoe emulate different types of chips. Too expensive.
  • Gordon Bell created Linus Torvalds? Or is he just a clone of the real Linus Torvalds?
    • Try reading the cover of "Just For Fun", it says

      "Linus Torvalds creator of Linux and David Diamond"

      So not only did Linus create Linux, he also found the time to make David Diamond.

      Impressive.
  • Wouldn't it be great to have a Beowulf cluster of these? Oh, wait. Nevermind :)
  • Mainframe? (Score:3, Interesting)

    by GigsVT ( 208848 ) on Saturday May 18, 2002 @07:07PM (#3544112) Journal
    How can you even compare this to a mainframe?

    Clustering is a very good, very cheap, and in some cases superior alternative.

    In the cases where you really need a mainframe, no cluster is going to help you. Mainframes aren't even really that fast. What they are good at is having tons of I/O bandwidth, even between nodes.

    If we quit comparing clusters to mainframes, then people might take clustering more seriously. They are not intended for the same classes of problems.

    I have an OpenMosix cluster at home, and I work with an Origin 2000 at work. (If anyone else uses IRIX, you know that you work *with* IRIX, not on it; it has a mind of its own. :) They are vastly different concepts, apples and oranges.
    • Excellent point! (Score:2, Informative)

      I was always amused at how a unibus PDP-11 with 512K of main memory could beat the snot out of a 386 at real-world tasks. I/O is critical for so many applications...

      However, don't write off clusters yet; have you looked at The AGGREGATE? [aggregate.org] The link points to Klat2 (Gort, klaatu barada nikto! Sorry) which is a very photogenic aggregate-based machine. The techniques these guys are developing may bring high I/O throughput into clustering at mainframe levels eventually.
      • I've heard that aggregate.org is supposed to have info on a cluster, but over the past year, whenever I go there, it just says "Our site is under construction, please come back later"

        with a banner for http://www.aggregate.org/images/anetlogo_100x50.gif
        and
        http://www.aggregate.org/images/names4ever_banner1_logo.gif

        Does the URL only work for people with certain ip addresses or something? Because people keep referring to that site when it seems to have zero content.
        • I don't think there is anything special about my IP address... just tried it from two different netblocks and I can get these links just fine:

          Genetic Algorithm CGI [aggregate.org]
          Main Site [aggregate.org]

          It's at a university - maybe they blackballed your subnet at their firewall because some loser tried cracking their systems from your site? I dunno. Maybe your browser is just busted, I'm using Mozilla 1.0 and it works fine for me.
          • Thanks for the reply. This has been bugging me for a while - I just thought most people were getting the 404s that I'm getting. The main page has a link for aplus.net, which is the same ISP I use. I think there must be some internal routing problems. Time to fire off an email to tech support.
      • I like the high-tech office fans cooling in between the individual units. Very attractive and cool.
    • Where did you see anything about Mainframes?

      These clusters are NOT designed to take over from Mainframes, but from Supercomputers. Totally different animals.

      • Maybe I used the wrong word.

        I've seen the two used almost interchangeably when referring to modern large systems.

        What would you call a cluster of Origin 2000s with a single system image? A supercomputer? Then my point still stands: as long as we are talking Ethernet as the system interconnect for this type of clustering, it's not in the same ballpark as far as classes of problems go.
          I've seen the two used almost interchangeably when referring to modern large systems.

          I have as well; it's a common mistake for nontechnical types (particularly reporters) to make. But they are very different systems. Mainframes have massive redundancy and I/O bandwidth. Supercomputers also have lots of redundancy, but they are typically not built for I/O bandwidth at all, but for sheer number-crunching power. Mainframes are designed to run large databases, supercomputers to do complex mathematics, so you get very different designs for different problems.

          The Origin 2000 is, if I'm not mistaken, the latest iteration of some of the old Cray designs, and those are definitely supercomputers, not mainframes. That said, you are of course absolutely correct that Ethernet is a major limitation of the sort of cluster we are talking about, and the Crays are still a much better bet for a subset of traditional supercomputer jobs. This is changing, though, as more and more effort and thought goes into improving them.

  • by Anonymous Coward
    From the Transmeta page...

    > Specifically, the RLX/Transmeta solution results in a 5x to 10x savings in power, i.e., 15 watts versus 75 watts under load, and seven watts versus 75 watts at idle.

    So "lesser chips" must run at 75 watts, flat. I know Intel chips cool remarkably at idle. Remember all those Toshiba laptops frying when they're actually asked to compute? Watts is BTUs, and 75 of 'em emit a contatant amount of heat.

    I hate it when they try to spin even the obvious and well know facts. If they're doing that with the black and white, what are they doing behind the Grey?
  • by evilviper ( 135110 ) on Saturday May 18, 2002 @07:27PM (#3544155) Journal
    "Gordon Bell, one of the creators of VAX, and Linus Torvalds"

    Wow, so it is true... Linus is a robot.
  • by eagl ( 86459 ) on Saturday May 18, 2002 @07:54PM (#3544207) Journal
    Why limit yourself to the x86 instruction set when the transmeta processor just needs a new instruction set decoder to emulate pretty much ANY processor? It seems like while they'll be able to use lots of existing software out there, they could get even more performance, efficiency, or maybe just easier programming by using whatever instruction set makes sense for the project.

    It's all in the pre-processing with the Crusoe; x86 is just there for sideways compatibility and doesn't need to be a limiting factor. When you're using a custom computer, whether it's one or a thousand Crusoe processors, wouldn't it make sense to try for some compiler efficiency based on the actual hardware instead of the 8086 legacy?
  • by chill ( 34294 ) on Saturday May 18, 2002 @08:23PM (#3544299) Journal
    Using this site [projecta.com] as an example to estimate power usage, we get:
    240 computer blades in Green Destiny x 6,480 hours uptime (9 months) = 1,555,200 computer hours of uptime

    Assuming the only thing changed on the blade is the CPU -- and the North Bridge chipset, since the Crusoe includes a North Bridge on die and the P-III does not -- at full blast the Crusoe consumes about 1.75W of power, and the P-III + NB consumes between 4.5 and 8W, depending on chip model. However, the 4.5W number is an approximation for the 0.13-micron ULV P-IIIM chip running in "Battery Saving" mode, i.e. SpeedStepped down to 300 MHz. Running at full 700 MHz tilt, with NB, we are still talking 5.75W of power consumed.

    1,555,200 * 0.0175Kw * 0.10 (dollar per KwH power cost) = $2,721.60 electricity cost/year (Crusoe)
    1,555,200 * 0.0575Kw * 0.10 (dollar per KwH power cost) = $8,942.40 electricity cost/year (Intel)

    A saving of approx. $6,200/year in direct electric costs.

    However, the big savings comes from the heat dissipation of the units. While the newer LV/ULV P-IIIs do not require active cooling, they still run quite a bit warmer than the Crusoe units. As a result, you don't stick a rack full of them in a room that isn't temperature controlled. The difference in the air conditioning bill can easily reach tens of thousands of dollars.

    In business, there are two types of money/budgets. One-time grants and acquisition budgets are large chunks of cash. Recurring expense and operations budgets are smaller. Being able to get a large chunk of cash to BUY a cluster/supercomputer is one thing. Being able to go back year after year and get the funds to keep it running is another project altogether. $15,000-$20,000/year for electricity used in running/cooling computers is a LOT of money to some people. This doesn't include construction or maintenance costs on a custom facility/room.

    As far as reduced administration costs go, many conventional supercomputers require chilled water and other special considerations for operation. People with experience managing things like Sun E15000s and Cray T3Es are few and far between. They are the last of the "high priesthood" of computer administrators and cost a LOT of money to employ.

    A blade server, on the other hand, is a bunch of x86 computers running Linux -- nothing a couple of grad students can't learn the ins and outs of over a term. Maintenance contracts, spare parts, etc. are also TONS cheaper for the blade/cluster solution as opposed to high-end SGI, Sun, Fujitsu, and Cray supercomputers.

    Another site with a bit of good supporting information is PC Stats [pcstats.com].
    • Your post is interesting, because this [intel.com] claims that the P3M burns < 0.5W in the 300MHz "battery optimized" mode. Even after throwing in the 1W your PC Stats page claims for the north bridge chip, the P3M comes in under the Crusoe. This [intel.com] claims about 1W under full load (audio/modem/etc). Interestingly enough, though, the 440MX is a single-component chipset, so now we have to figure in consumption for a south bridge for the Crusoe.

      Also, the first paper claims < 1.0W for the P3M at 533MHz, which is about equivalent to a 677MHz Crusoe.

      Anyway, it's hard to judge what total wattage each system would drain on average, and thus how much heat it would emit, but Intel is much more competitive than you would lead us to believe.
      • Hmmm. It was my understanding that the 0.5W figure for the ULV P3M was in "Deep Sleep" mode. I was also assuming that when running a task, the CPU would be at full tilt for any of the types of applications a "supercomputer" would be needed for. I see where Intel is reporting the AVERAGE power of the unit running TYPICAL OFFICE APPLICATIONS. The problem with these measurements is that the CPU is 99% idle when people are typing in Word -- it doesn't matter if the CPU is running at 700 MHz or 7 MHz, you aren't going to out-type it.

        The ULV P3M runs a 100 MHz bus, like the 633 Crusoe, but the 677 Crusoe runs a 133 MHz bus, like some of the LV P3Ms.

        The final problem with the P3M is the thermal diode. To control heat, once the core CPU temp reaches a certain number (100 deg C, I think -- the "maximum junction temperature"), it clocks down to reduce heat. Again, that's fine for someone typing in Word or Excel. It can clock up for the 3 seconds needed to run that macro, but for sustained high-performance computing, it will be a problem.

        I'll agree that Intel is very competitive in the laptop CPU market and their LV and ULV, SpeedStep enabled chips are great in that market -- hell, I'm typing this on an IBM laptop with a SpeedStep enabled 1 GHz P3M, and it blows the doors off the Dell P3-450 I just got rid of.

        However, for sustained computing, where you aren't relying on user input to clock down in between, I think the fewer transistors on the Crusoe generate a hell of a lot less heat and use a lot less electricity. Transmeta has some nice thermal photos on their website, but I believe they are comparing with the "old", non-SpeedStep P3M and not any of the LV/ULV stuff.
    • Your math is wrong. It should be:

      1,555,200 * 0.00175Kw * 0.10 (dollar per KwH power cost) = $272.16 electricity cost/year (Crusoe)
      1,555,200 * 0.00575Kw * 0.10 (dollar per KwH power cost) = $894.24 electricity cost/year (Intel)

      This savings in absolute dollars is much less significant when you divide by 10.
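      Checking the corrected arithmetic with a quick throwaway program (the wattage and rate figures come straight from the two posts above):

      #include <stdio.h>

      int main(void) {
          double cpu_hours = 1555200.0; /* 240 blades * 6,480 hours */
          double rate = 0.10;           /* dollars per kWh */
          double crusoe_kw = 0.00175;   /* 1.75 W per Crusoe */
          double intel_kw = 0.00575;    /* 5.75 W per P-III + north bridge */
          printf("Crusoe: $%.2f\n", cpu_hours * crusoe_kw * rate); /* $272.16 */
          printf("Intel:  $%.2f\n", cpu_hours * intel_kw * rate);  /* $894.24 */
          return 0;
      }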
  • by Billly Gates ( 198444 ) on Saturday May 18, 2002 @09:18PM (#3544436) Journal
    I had a Beowulf supercomputer designer speak at my Linux users group, and he mentioned that Alphas were the cheapest per operation to run, over any other platform. This was two years ago, so it might be a little outdated and cheaper today to implement, but he was processor agnostic and he did the math. The processor is only a small fraction of the total cost of the system. In this guy's example of weather forecasting modeling, he had 1 to 2 gigs of RAM in each node and some expensive fiber-based networking cards and switches. If you do not have at least a 1Gb/sec transfer rate, you have a major bottleneck. Anyway, an Intel-based solution for his 35-node cluster, with the networking, RAM, and switches added in, averaged $2,000 a node. An Alpha would average close to $3,000 a node, but he would receive close to a 50% performance gain for using Alphas. So that's a 50% gain for a 30% price increase. Sure, cooling might cost more, but that's tiny compared to the amount saved by the cluster finishing faster.

  • by morcheeba ( 260908 ) on Saturday May 18, 2002 @09:21PM (#3544443) Journal
    > Feng also proposed that a new technique is needed for measuring the performance of supercomputers. Instead of looking primarily at how many calculations a system can run in a given amount of time, researchers should also consider factors such as downtime, size, price and maintenance requirements, he said.

    Following Feng's lead, the whole supercomputing industry has reacted to this new paradigm shift. Industry leader Cray [cray.com] has ceased development of its upcoming SV2 [cray.com] and has designed a system based on the reliable Commodore 64. Explained lead scientist Joel Grey, "We managed to get a C64 computer out of the dump, and bought 1,000 surplus 'Barney' solar calculators off of eBay for $30."

    The new system, dubbed the SV64, [pattosoft.com.au] is not quite as fast as the SV2, but excels at the new metrics: converted to run on solar power, and having spent the last 15 years in an uncooled closet continuously generating the "experiencing technical difficulties" logo for a local community access TV station, the new computer shatters existing power and reliability records. "With an expected retail price of less than $1M USD, we expect this computer to eclipse [Japanese rival] NEC's lead and become the platform that will be used to perform most of the world's weather, biological, and nuclear simulations well into the next decade," said Grey.

    Wall Street analysts pointed out that the system has never needed maintenance, nor suffered downtime, nor needed the services of a UNIX system administrator, and as a result the total cost of ownership should remain low. Shares of component manufacturer Commodore rose 10 points to 10 1/64 in heavy trading today.
    • In other news...

      Olympic Speedskating will be judged, not on speed, but on fashion, sweating the least, and the contestant who books the best airfare to the event.

      North Korea became the second country to land a person on the moon and return them safely to Earth. Although technically the rocket blew up on the launch pad, it was still considered a successful mission given the impoverished country's lack of funds, the technical embargoes placed on the country by space-faring nations, and the total lack of a Korean space program.

      Life insurance companies will now pay benefits for near-death experiences, close calls, and "getting really scared".

      sorry... I just prefer the normal metrics of FLOPS, MIPS, bandwidth and topology for my supercomputers...
  • 280 blades fit in a standard 42U rack. Each blade is a P3 700 based upon the Tualatin line.

    HP is continuing Compaq's blade line along with their own, which will be geared toward the telco market. Also, Beowulf is not really a good idea with these blades (Compaq or others) due to the need for a high-speed interconnect like Myrinet. Blades of these types are really only good for infrastructure and perhaps web farms. Anything more is too much.
    • For the application/project Dr. Feng is running, the Transmeta blade suited best. But if you had an application for which you wanted to introduce the HPQ blade, I would suggest that the RLX 800i blade be carefully considered as well (336 blades in a std. 42U rack, P-III 800). Of course there is some bias in my view. :-)
  • It's sort of hard to imagine Gordon Bell sharing a stage with Linus, at the unveiling of a Linux cluster. Isn't he the guy who absolutely loathes Unix in all its incarnations, and has been steadily trying to kill it as part of his job at Microsoft? I imagine he (and his superiors) are foaming at the mouth over the fact that Windows isn't running this cluster.
    • Gordon Bell was a hardware architect and was responsible for the PDP-11, amongst others. Dave Cutler left Digital and joined Microsoft, where he was involved with architecting the NT 3.5 kernel.

      I know Cutler's designs from the RSX-11M and VAX/VMS days. He likes clean code, but he is probably less than satisfied with what happened later with NT: the amount of code that ended up running in the same space as the kernel. The original NT design was quite clean and based a lot of its ideas on Mach. Unfortunately, MS is relatively undisciplined as a company (just look at their version control problems), and eventually lots of compromises had to be built in.

  • I went and read their TCO estimation in their whitepaper [lanl.gov] and came across something that really made me question their conclusions.

    They compare TCO for 24-node clusters of different Beowulf architectures against the bladed cluster. The biggest expense by far for the traditional systems is sysadmin time -- over half -- this after they spend most of the article talking about power. They estimate sysadmin costs for each of the traditional Beowulfs at $60k over a 4-year period, while for the bladed cluster it's $800. Where does the $800 come from? They say that they haven't had to do any maintenance on their system in the 9 months it's been running! That doesn't sound like a very scientific data sample to me.

    There are other bladed designs, non-Transmeta based; presumably the sysadmin costs would be the same. The last chart demonstrates that sysadmin costs are what's important, and that power, space, and downtime are not nearly so.
    • The most curious thing is that if sysadmin time is priced at $60k for the Transmeta as well, then the Athlon platform will be less expensive than the Transmeta blade solution. Somehow, I would feel more comfortable with the putatively more powerful Athlons...
  • a version of Linux that can run on my Sony VAIO C1MV?
  • So, would that make it a.... Beowulf cluster?
