Silicon Graphics

SGI Negotiating Cray Research Sale

Aviast writes "SGI is in talks with the Gores Technology Group to sell the Cray Research unit of SGI. Read the [Yahoo News] story here. SGI bought Cray three-and-a-half years ago for $700 million. According to this story, Gores originally offered $100 million for Cray, but has since lowered its offer." Rumors about this have been floating around for weeks. Looks like they *may* become reality, but the deal is apparently still a long way from done.
  • I'm not an econ expert, but if they slid from $700 million on their last buyout to under $100 million this time around, wouldn't that indicate their market value has fallen sharply?

    Perhaps Cray computers are not as valuable now that we have Beowulf and other clustering technology, which can give you the same amount of raw processing (with a little extra latency) as the big iron?


  • It sounds to me like SGI bought it, cut off a chunk of it for money, and will promptly sell it away. Granted, they were competitors and everything, but this doesn't sound like good sportsmanship to me. Then again, I've stopped being impressed since they changed their logo :).
  • They got the technology they needed out of Cray... Pass on the carcass! :)
  • Now why on earth would SGI sell a company like Cray for less than what it costs to buy one of Cray's boxes? :)

  • by Lonesmurf ( 88531 ) on Monday November 22, 1999 @03:51AM (#1513285) Homepage
    I read an interesting article this morning on SGI and its future. Heck, it may even be this article that, in my rush to beat the floods of ACs out, I've decided to temporarily postpone reading to jot this down. Anyways, this article talked about how, for the longest time, Cray held this niche in the market that no one could penetrate. 'Cray' was synonymous with 'Supercomputer'. Then something terrible happened: BEOWULF. (Yes, yes, I know... make all the cracks that you like, but this story just SCREAMS Beowulf threads.) They said all kinds of neat-o things like how it has become easy for companies that want huge processing power to get it at a fraction of the price through massively powerful parallel computers. A thousand P3s are most likely still cheaper than a Cray.

    One interesting thing that stuck in my mind was how the CEO, you know, what's-his-name, said that the advantage of Cray Machines was in the architecture.

    This would keep the mean old Beowulf at bay.

    He was talking about how in the PCs under (or above, doesn't really matter) our desks, the processor is powerful and the pipe is small, which limits the amount of data that can be pushed through. Crays are fast with huge pipes, making them perfect for big data-crunching applications (like simulating wind shear in a cumulonimbus thunderstorm in real time).

    Now, how long will it be before the architecture in our PCs (finally) changes to something more along the lines of this system? 10 years? More? Less? Who knows.

    Man, I rambled a lot. I know that I had a point somewhere in there.

  • C'mon, really, who expected this not to happen when SGI picked up Cray a while back? Long gone are Cray's glory days, when its name was whispered on the lips of computer geeks who swore they'd own one when they grew up.
  • I find it kinda sad that Cray has gone down this far. I remember, way back when wheels were square, Crays were the fastest computers on the planet. On the other hand, it's pretty cool for the Open Source movement that Beowulf clusters are taking off and actually being ~very~ competitive in the supercomputer market. This might even help me sell Linux to my boss. I for one would love to set up a cluster to run our intranet on...
  • Rumor that I've heard is that the sale is pretty imminent. SGI wants to get rid of Cray, as long as the offer is half-reasonable.
  • Actually, if anything, it's Linux that will kill them. I doubt they'll make any money off this new-age PC market. But then again, who needs SGI? Linux only brings another segment of the "daddy can I have a new computer" market to power. They'll never buy SGIs anyway.
    --------------------------------------
    slashdot: you still have the wrong logo.
  • Am I mistaken, or did SGI not sell part of Cray Research (the high-speed crossbar part) to Sun some time ago?

    AFAIK, a bit later Sun came out with the E10000, which incidentally had a high-performance crossbar architecture...

    But SGI had a much better idea: Windows NT...
    (we all know how that disaster turned out for SGI... ;-)

  • Then something terrible happened: BEOWULF

    The decline of SGI and Cray began long before Beowulf. Cray's rivals, such as Hitachi, Fujitsu, and Thinking Machines, took huge chunks of its markets; even the IBM machines were a threat. There simply wasn't enough momentum behind UNICOS for it to compete with Cellular IRIX even within SGI.

    SGI are struggling as much as they can, but really they are only delaying the inevitable. They are being attacked on all sides: HP and IBM are crushing them in the VLDB market, Sun are making ever larger inroads into rendering, Compaq's professional workstations are forcing SGI out of CAD shops, and the latest kit from Apple is competitive in the DV space... SGI need a miracle to save them.

    This would keep the mean old Beowulf at bay.

    Beowulf doesn't really compare to a Cray, since it lacks common memory across all of its nodes. Beowulf is more analogous to PVM.

  • This will no doubt result in more layoffs from SGI's engineering team. We remember how they laid off 3000 engineers in April. Now is a bad time to be in anything but e-commerce. SGI's e-commerce strategy is definitely to get rid of everything else. That's the method of operation in Silicon Valley: if it doesn't work, don't fix it. Get rid of it.
  • When you look at memory bandwidth, as measured with benchmarks like stream [virginia.edu], Beowulf systems and PCs look pathetic.

    My limited experience has been that large datasets will blow out the cache on PCs and send the performance into the crapper.

    High-performance memory systems cost money. PC vendors don't seem to be interested in building balanced systems.
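
    (For the curious, here is a rough sketch in C of the kind of kernel STREAM times -- the "triad" a[i] = b[i] + q*c[i]. The array size, scale factor, and clock()-based timing are illustrative choices of mine, not the official benchmark code. Shrink N until the arrays fit in cache and the reported number jumps; grow it and you see the "large datasets blow out the cache" effect directly.)

    /* Rough sketch of STREAM's "triad" kernel: a[i] = b[i] + q*c[i].
     * Array size, scale factor, and timing are illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 4000000L           /* big enough to blow out typical caches */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        clock_t t0 = clock();
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];    /* 24 bytes of traffic per element */
        clock_t t1 = clock();

        double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
        printf("triad: %.1f MB/s\n", 24.0 * N / secs / 1e6);
        free(a); free(b); free(c);
        return 0;
    }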

  • by substrate ( 2628 ) on Monday November 22, 1999 @04:51AM (#1513301)
    Cray is a sliver of its former self. When SGI purchased them, there were over 4000 employees, and they had their own semiconductor fab, PCB house, and manufacturing operations. Now they're under 1000 employees, with no fab, no PCB house, and no manufacturing operations.

    Beowulf clusters still cannot touch the Cray supercomputer market, except where vector processors were sold into markets that weren't a good match in the first place. The price/performance of a Beowulf cluster is much better than that of any Cray machine ever built, but a vector processor running the right codes can make the difference between a problem being feasible and unfeasible.
  • Show me what drugs you were smoking?
    As a dyed-in-the-wool Linux user/developer, I haven't seen too many machines which can outperform the 108-node R10K Cray we have in one of our buildings here (yes, there are some Beowulf clusters out there, but they can't keep up in the memory bandwidth area). And remember that Linux-on-Intel is terribly limited by the architecture of the bus, etc.

    This kind of FUD is not helping...

  • > When cubicle SGI purchased office Cray they went around and RIPPED the doors off every
    > Cray office.

    My door is happily right where it has always been. Actually, I got the latch fixed after the assimilation. Thinking about it, I don't know of a single door removed in the whole Eagan complex previously known as Cray Park.
  • by jd ( 1658 ) <`imipak' `at' `yahoo.com'> on Monday November 22, 1999 @04:54AM (#1513304) Homepage Journal
    Cray computers -cost- more than that! The name alone is worth more than $100,000!

    Mind you, if SGI are flogging Cray off cheap, I'm offering $10! That's right, I'll offer a whole $10 to buy Cray from SGI, no questions asked. And I bet I could make it profitable =and= bleeding-edge, too. All I ask is the chance for SGI to prove me wrong! :)

  • ...because we still need those funny-looking IRIX boxes for computational chemistry. Most of the powerful commercial software is written for IRIX, and the big companies like MSI have been rather reluctant to port to Linux.

    With all the news coming from SGI, it doesn't look like they will be going anywhere in the future.

    Even if the software eventually gets ported to Linux, I bet it will have to be repurchased, and this might be a real obstacle, especially for academic institutions, since it is rather expensive.

    ________________________________
    If encryption is outlawed, only outlaws will have encryption.

  • It never seemed like Silicon Graphics had a solid financial or technological interest in Cray when they bought the company. The two products don't pair well together, the supercomputing and workstation technologies are totally different, and Cray had been floundering around for years. The computing trend of the time was toward the desktop, and Cray was the opposite.

    Now that they're simply SGI, it hasn't taken quite as long to figure that out. We need Cray, but we need one that can turn a profit in the private sector.
  • > Cray's rivals, such as Hitachi, Fijitsu and Thinking Machines took huge chunks of its markets,
    > even the IBM machines were a threat.

    Simply not true. Each of the cited examples had quite a small chunk of the worldwide market. Hitachi and Fujitsu (NEC as well) had quite healthy market shares in Japan, but nowhere else. At the time of the merger, as far as I recall, there was only one serious Japanese supercomputer in the United States (NEC at HARC). IBM was only a real threat where they were the incumbent or had key software advantages (e.g., CATIA).

    Question: if Thinking Machines had such a huge chunk of the market, where is their hardware now?

    -Dean
  • Ummm... you are joking, aren't you?

    Linux has been well known for its sucking at SMP. All Beowulf is really doing is clustering of CPUs. All you are really doing is throwing stuff out onto the EXTREMELY slow network medium; to get any real performance benefits you have to rewrite your apps so they stay local to your memory, do minimal IPC, etc. Lots of software does clustering, not just Linux; hell, even NT does clustering.

    Quad Xeon smoking a Cray??? Are you sure you haven't been smoking? Seriously, look again at what you said; damn, you are funny... Go check out top500.org and see how many Linux boxes there are compared to Crays, and then check the CPU counts against the performance numbers; nothing more need be said.

    As for SGI dumping Irix, check out http://www.sgi.com/developers/index.html#irix; for the next year they are spending more on development and have more developers working on Irix than on Linux.

    Security... well, how about this one: Irix and Solaris are the only B2-classified OSes out there. Irix had some EXTREMELY stupid things in it a couple of years ago; of course, I remember lots of VERY stupid things Linux distributions had in them over the years, and many more of them.

    Scalability... Irix scales to a 1024-processor SMP box using NUMA. What does Intel do currently... 32 (I think)? That's about 32 times smaller than an O2K; and if that's not enough, you can add Beowulf-type clustering on top of that if you wish, so you could have ten 1024-processor boxes with super-fast IPC within each box and then add the slow clustering network on top of that.

    Linux may be starting to tear up the 3D gaming market, but it hasn't broken into the heavy-duty 3D market yet... NT, Irix, and Solaris seriously dominate this market, and until some of the big-time software (Maya, Softimage, etc.) is ported to Linux, that's the way it will probably stay.
  • Cray was carrion before Beowulf happened.

    The killer for Cray was when large custom boxes full of cheap commodity processors started to appear (Connection Machine et al.). Cray's previous expertise in making hugely clever processors just couldn't compete with Fordian economics and a huge fab churning out Intel games boxes by the bucketload.

    Now Beowulf takes it all a step further and replaces a weird box full of standard CPUs with a room full of standard boxes.

    Another (quite reasonable) opinion is that Cray *was* Seymour Cray, and without him they just lost direction.

  • As posted by a gracious AC exactly two months ago,

    ---------------

    Irix 6.x:

    everything is included in the software distribution; download the latest GCC from freeware.sgi.com

    Irix 5.3:

    download gcc from http://reality.sgi.com/ariel/freeware
    download headers from http://www.interlog.com/~kcozens/sgi/gcc-irix.html

    ------------------

    If these are not sufficient, please contact me directly and we can get to the bottom of it.

    -Dean Johnson (dtj@sgi.com)
  • You've hit it on the head. This is the real reason for the decline of the supercomputing market. The end of the cold war and the rise in performance of general purpose systems has made the traditional supercomputer obsolete except in some extreme niche markets. I predict that in the future, the only participants in this market will be companies like IBM which can afford to use their supercomputer business as a place to develop new technology but not necessarily make any money. (I spent 6 years in Intel's Supercomputer Systems Division).

  • We had a bunch of Sun reps at our university come tout their products a month ago. One guy stated that when SGI bought Cray, Sun bought the one part of Cray that was still making money. They bought a group that was producing this crossbar architecture using Sparc chips.

    The architecture of the 10000 line is different from that of the 3000/4000/etc. line.

    Apparently the new upcoming 10000s are super sweet. More than 64 procs (128+?). The Starfire (10000) also has some nice advantages, like dynamically allocating processors to multiple "virtual" machines running in one single box...

    anyway, the Sun rep backs up your statement.

    --ed
  • You are right, but you're also wrong. I agree that Linux SMP sucks --at least the 2.2 series. But a quad Xeon III can offer a good fraction (~40%) of the performance of a Cray T3D any day, at an order of magnitude less $$$. The 12-Xeon III cluster I sysadmin can go head to head with a T3 easily, over 100BaseT Ethernet! (we just got Gigabit Ethernet --give us a week ;-).

    At the low end, a compact Beowulf can easily compete with a Cray. Most scientific codes these days use MPI anyway, which can be ported easily from UNICOS or Irix to Linux. At the high end, or for specialized applications, we have a ways to go. But with better SMP support and better support for fast networking (SCI, Gigabit Ethernet, Fibre Channel), Linux is getting there, fast. I wouldn't put money into SGI --except maybe to short them ;-)...
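
    (To make the portability point concrete, here is a minimal sketch of the message-passing style most of these codes use. The reduction is just a stand-in for real per-node work, and nothing in it is UNICOS-, Irix-, or Linux-specific, so any MPI implementation should build it unchanged.)

    /* Minimal MPI sketch: each rank computes a partial result, and
     * MPI_Reduce combines them on rank 0. The "partial result" here
     * is just the rank number, standing in for real per-node work. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        double local, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = (double)rank;            /* stand-in for real work */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks: %g\n", size, total);

        MPI_Finalize();
        return 0;
    }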

    engineers never lie; we just approximate the truth.
  • > if SGI are flogging Cray off cheap, I'm offering $10!

    And I bet you'd be buying a bloody big debt with your $10. SGI wouldn't be selling if they were making money with Cray.

    Regards, Ralph.
  • by Greg Koenig ( 92609 ) on Monday November 22, 1999 @06:29AM (#1513320)
    In my opinion, in order to put this into perspective, you need to look at the history of the subject at hand.

    The first types of supercomputers were faster and better than typical computers because of the design and features put into them. They used faster components which were custom-built (and thus a lot more expensive) and had features like vector units which made them attractive for scientific applications (but again, more expensive). Then people started to think about how they could make supercomputers with the same or faster performance while bringing the cost of producing them down. Rather than using expensive custom-built processors that had to be submerged in cooling fluid, or vector units that manipulate large arrays in a single operation, they started to develop new designs for supercomputers.

    One new type of machine was the SMP-based system, such as the Cray PowerChallenge type of machine. In these machines, many processors share a common memory, just like in your 2-way or 4-way desktop boxes now. With these types of machines, the lack of vector units isn't such a big deal, since you can instead just separate your array into N different portions (where N = the number of processors) and apply your vector operation in parallel over the processors in the system. The problem with these types of computers is that scaling up to large numbers of processors is difficult, since contention for the system bus (to talk between the CPU and memory or I/O) gets complicated with the larger number of processors.

    Another new type of machine was the Massively Parallel Processor (MPP) machine, such as the Cray T3D and T3E. In these types of machines, many processors (~1024) are interconnected with a very fast network. Each processor has its own individual memory, so the system can be scaled up to much greater numbers of processors. The problem is that now, instead of having a single common shared memory, you have all these distributed memories and you have to use message-passing techniques to get your data distributed around, which is a pain.

    So, this led researchers such as John Hennessy (at Stanford) to come up with a new architecture that uses Distributed Shared Memory (DSM). To the applications programmer, things appear to be one large shared memory (although if you touch certain parts of memory, access times are slower than touching other locations, since they have to be fetched from a remote machine). In fact, what actually happens is that each processor still has its own local memory, but a controller on a very fast interconnect card coupled with each processor examines memory references, and if it sees you are using memory that is not local to your processor, fetches the desired section of memory from the remote processor. So, it's sort of an MPP-type system that appears to the programmer as sort of an SMP-type system. This is what SGI/Cray sells as the Origin 2000. It's still cheaper to produce than traditional vector machines which use custom CPUs and memories (since it uses more commodity CPUs and components), but at the same time offers good relative performance.
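
    (As a toy illustration of the SMP approach described above -- split the array into N portions, one per processor -- here is a sketch using POSIX threads. The thread count and array size are arbitrary choices for illustration; a real code would match the thread count to the CPU count. The catch, as noted, is that all of these threads contend for the same bus to memory, which is what limits how far the approach scales.)

    /* Split an array into NTHREADS chunks and apply the same operation
     * to each chunk in parallel -- the poor man's vector unit on SMP. */
    #include <stdio.h>
    #include <pthread.h>

    #define N        1000000
    #define NTHREADS 4

    static double a[N], b[N];

    static void *scale_chunk(void *arg) {
        long t  = (long)arg;
        long lo = t * (N / NTHREADS);
        long hi = (t == NTHREADS - 1) ? N : lo + N / NTHREADS;
        for (long i = lo; i < hi; i++)
            a[i] = 2.0 * b[i];           /* the "vector" operation */
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < N; i++) b[i] = (double)i;

        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, scale_chunk, (void *)t);
        for (long t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);

        printf("a[%d] = %g\n", N - 1, a[N - 1]);
        return 0;
    }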

    Now, in the late '80s, Seymour Cray decided that building supercomputers out of commodity components wasn't the right way to go. His opinion was that, all things being equal, you could always make a faster supercomputer if you used more expensive components and designed your supercomputer with that goal in mind (i.e., use SRAM for all memories, use the fastest technology in your CPU, etc.). To that end, he created a company called Cray Computer which was separate from Cray Research (i.e., Seymour was in charge of Cray Computer and had nothing to do with Cray Research). Cray Research produced computers such as the PowerChallenge and T3E, while Cray Computer continued to make expensive vector-type computers. Unfortunately, what ended up happening was that Cray Computer folded because their machines were so expensive and the performance gain you got from them did not justify the greater cost. (Really, the only places that bought these types of computers were "spook sites" like the NSA, to the best of my knowledge.)

    The prevailing idea is that this trend towards computers that offer decent performance while costing significantly less will continue. This is the idea behind clusters such as the Beowulf or, more importantly, clusters like the NT Supercluster at NCSA. The NT Supercluster differs from a Beowulf in that it uses a more costly network adapter (specifically, a Myrinet adapter from Myricom [myri.com]) to allow internode communication to take place at higher bandwidths and lower latencies than standard Ethernet. No, the performance of these types of machines is nowhere near what you get from a machine like the Origin 2000, but the idea is that you get comparable performance at a huge reduction in cost. Additionally, because the components used to construct these clusters are commodity components, everybody will be producing these components and continuing to improve their performance. So, the speed of cluster-based computing relative to machines like the Origin improves over time. [Disclaimer: I am one of the people who helped develop the technology in the NT Supercluster, so I have some bias.]

    To say that SGI ruined Cray is no more true than to say that they ruined MIPS. The reason that people are not that interested in MIPS processors any more is that Intel processors are a commodity now. Everybody uses them, so the overall industry trend is to make Intel and Intel-related technologies faster and better since everybody works together in a sort of de facto way. Yes, probably the MIPS design is a much better processor design than the Intel design (it wouldn't be difficult), but the key thing is that everybody in industry is using Intel. This is the same reason that building supercomputers out of commodity components (i.e., clusters) will probably be the way things work in the future.
  • Note the URL for the headers for Irix 5.3

    Though the initial post was really abusive, I have to admit that SGI is a classic example of a case where the left hand does not know what the right hand is doing:

    You ask an SGI salesman about their x86 prices; they answer with a booklet about Origin.
    You ask an SGI salesman about their Linux systems; they answer with a booklet on Irix.
    In both cases they also require using conventional mail or phone in order to contact them, and do not supply an email contact.

    I did have the intention of getting some info on their x86 systems, but I gave up and threw all the materials (as listed above) into the round folder.

    I would say that SGI needs a major cleanup amidst its salesd...s. Otherwise the AC's abusive description may prove right.

    Having "cool developers (TM)" is not enough. And having "cool hardware (TM)" is not enough either. You can send all the developer's work along with the hardware to /dev/null if the salesforce has not even heard about it (which appears to be the case).

    BTW: if this is the attitude they applied to Cray, I am not amazed that its market value dropped 6-7 times over 3 years...
    Want to know why SGI can't give you the IDO? They LICENSED PARTS OF THE TECHNOLOGY FROM OTHER COMPANIES; i.e., some company other than SGI gets money for each compiler they sell. If I licensed tech to SGI and got a nice chunk of change for each license they sell, do you think I'd say, "Sure, give it away for free; I don't care about me making money, I care about SGI making money"? On the SGI Linux mailing list people have mentioned opening up parts of the source code; well, the problem they run into is that years ago they licensed tech from other companies, and the other companies (AT&T, for instance) haven't let them open up their tech. I'd bet the price they'd sell 6.5 to you for ($600) is less than what it costs them in licensing for the IDO (if I remember right, 5 years ago it was something like $3k).

    I'm open to debate, and I'd actually be really interested in an actual GOOD reason for them to be dogged about this, especially when their hands are tied due to licensing, etc.

    Another point: SGI wants 5.3 dead. It's NOT Y2K compliant and they are NOT supporting it anymore. I don't blame them at all, since 6.2 came out 4+ years ago (it's like complaining about a company not making a 386 BIOS Y2K compliant; get over it). What version of Linux was out then? NT 4 wasn't even out yet (I remember playing with the beta at the time). If you want old hardware to run, start supporting the Linux/MIPS project, pay the $600 for Irix 6.5, or check out the suggestion below.

    Have you tried getting gcc running on 5.3 by using headers from Linux? Check out Ariel Faigon's website at SGI, http://reality.sgi.com/ariel/freeware/, for GCC, and go to http://www.interlog.com/~kcozens/sgi/gcc-irix.html for the binutils (ar, etc.) and linker. Use the header files from a Linux dist, and if everything works well you should be off to the races. (As I said before, 5.3 is so dead that it's turned into oil along with the dinosaurs, so I haven't been able to validate this myself.)
  • Here's a very nice Cray FAQ! [pipex.com] All you ever wanted to know about 'em.
  • SGI sold off the Cray (née CS6400) "Starfire" that has evolved into the E10000 server. It was/is a 64-way Solaris box that competed with the Origin servers, so it was kill it or sell it. --Gannett
  • Ummm... I'd upgrade to a new Cray; the last one I saw installed was installed years ago, and the cray.com website doesn't list that model anymore. I don't doubt your Xeon cluster gets 40% of a T3D, but I bet that T3D would kick your 486's butt if we compare apples to apples, or maybe T3Es to Xeons :) Of course, a person could probably make an app that would smoke a T3D on a 486 with a proper (totally legit) program... i.e., running vi on a 486 might actually be faster, since it wouldn't have to worry about context switching, etc.... the same way people complain about an Onyx being dog slow compared to their Pentium box running Quake.

    The biggest beef with Intel (other than shoddy manufacturing) is that they can't push data around fast enough. We put in SGI O2Ks for the sole purpose of being able to push files around fast enough (6 FC controllers out to lots of EMC storage); we don't do anything CPU-intensive, just get a file and push it out, but we have to have that much CPU power to drive all of the I/O. I have yet to see an Intel box able to do that.

    The biggest problem I see with Beowulf is that it doesn't do very well in the large-memory department. NUMA is the way to go here: if I have to access memory on a node 12 hops away, it takes a lot more time than going directly to it. As you mention, FC, Gigabit, etc. put a band-aid on top of it for a while, but aren't a very elegant solution to the problem as a whole.

    I wouldn't put much money into SGI at this time either, but only time will tell what shakes out. This is a very turbulent time for the industry; the only one that is really profiting is Sun, and really only because they are eating into old SGI, HP, DEC, etc. customers. How long that will continue can only be guessed at.
  • > I'm going to give SGI one last chance here in the next 20 minutes... I'm going to call them and tell them
    > what I will do if they don't provide it to me. If they don't break down I will do as I say. This is a true crock of shit.

    I have repeatedly requested over the last two months that you (and others having the same problem) contact me personally so that we can get this resolved. Let's get this resolved! I can't read minds, and I don't think management will spring for a CD-ROM to carpet-bomb every person in the US to ensure that the right party gets what they need.

    -Dean Johnson (dtj@sgi.com)
  • You know, that could have been a T3E --I am not the one running the benchmarks ;-)... It sounds like your applications are much more I/O intensive than the stuff I am used to (computational fluid dynamics).

    In our applications, a Linux-powered Xeon III-based Beowulf can compete with (slightly older) O2Ks *at the same number of CPUs*. Again, YMMV: as you say, the pipes in these SGIs are much, much fatter/faster than anything we can throw in on a Beowulf.

    But the important thing, IMHO, is that the price/performance ratios of Beowulfs are now enabling a new class of applications, with *dedicated* hardware built to fit the software requirements rather than the other way around.

    With the amount of money and man-hours being thrown into Beowulf enabling technologies (fast networking, maintenance schemes, HA, process migration, etc., etc.) I think we're approaching a shift along the lines of the old workstation/mainframe schism: cheap dedicated machines (Beowulfs now, workstations back then) versus very expensive, generic heavy iron (supercomputers now, mainframes back then). In the end, the largest mindshare (number of applications/developers) and the better price/performance ratio will win.

    I am siding with the 'wulfs ;-)...

    engineers never lie; we just approximate the truth.
  • True, but MSI [msi.com] does have ports of most of their programs for IBM RS/6000; some even have ports for DEC/Alpha stations.

    About platforms and computational chemistry, look at the list of ports for CHARMM(Serial machines [nih.gov], Parallel machines [nih.gov]); this list includes Beowulf clusters, CRAY and Intel supercomputers, and most UNIX workstations.


    Granted, this is only one program (available with source), and many of the visualization programs are SGI-specific [Quanta is also available for RS/6000]. Porting these applications would not be impossible; the companies will just have to wait for a large enough crowd guaranteed to use that port, otherwise it would not be economically feasible.
    [i.e., EVERYBODY OUT THERE START BUGGING MSI [or other vendor] ABOUT LINUX PORTS; hehe, even if you know nothing about chemistry, just call them up and bug them about a Linux port, just try not to sound too stupid]
  • the list of CHARMM serial ports is at:
    http://www.lobos.nih.gov/Charmm/c27n2/install.html#Machines


    ...I can't seem to get a space included in hyperlink here on slashdot
  • I'm very surprised that there has been no mention of the (I thought) famous phrase "No one will survive the attack of the killer Micros!" [tuxedo.org] by Eugene Brooks. In the late '80s, Eugene described how micros were going to kill off all other types of computers.

    One of the few references that I can find on the web is here [tera.com] in a 1990 paper.

    Basically, this handwriting has been on the wall for well over a decade, and one can only hope that SGI recouped their investment in the first few years after purchasing Cray.

  • I have the privilege of working with Cray systems, as well as working for a company that was owned at one point by Cray and then by SGI. We're an independent company now, but we still hear plenty of things from down in Eagan, MN (Cray HQ)... (We're a private supercomputing firm, where we sell time on supercomputers to companies that need time on the systems without wanting to foot the bill for owning and operating the beasts.)

    In my experience, Cray MPP systems are worth every penny of their performance. I'm doing research on Beowulf clusters right now, and I'm finding that no matter how many processors you add to one, no matter what magic you weave with the networking, it's not going to match the performance of a Cray T3E.

    Crays will continue to have a market, where people need to run LARGE applications (we do a hell of a lot of seismic data processing... hundreds of gigabytes of data are going through our T3E-900) at high speed. Beowulfs will have their market too, where people need to run large applications at a lower speed for a lower cost.

    In my opinion, the Cray/SGI merger should never have happened. Hopefully, the company that looks to be buying Cray won't drive it into the shitter (they had bought Thinking Machines when they went under... look where they are now), and they'll get back into the market. Dust off and update the plans for the T3F, maybe introduce a lower-cost MPP system to bridge the gap between Beowulfs and the T3E.

    At any rate, we're not getting rid of our Crays for a long, long time. :)

    Please note, I am not speaking for the company I work for in any capacity whatsoever, I'm not even going to name them. If you want to buy supercomputing time, however, do drop an email...

  • (Ok, I'm posting late, so nobody will probably read this anyway, but....)

    The T3D has been end-of-lifed - get a T3E. Also, how many processors were in that T3D? I seriously doubt you're going to compete with a T3E, however. It's hard to argue with 2048 DEC Alphas at 600 MHz...

    Also, let's talk about memory bandwidth. I believe that the present T3E model gets 45 gigabits/second *BETWEEN* processors. I'd like to see you get that to local memory in your Beowulf cluster. Does that make a difference? Yes. In raw CPU speed, your Beowulf cluster may win. But that doesn't matter if your CPU's are idle half the time waiting for the RAM. Some of our (Cray's - well, technically I work for SGI but I wish I worked on the Cray side of the split - at least I still have access to their machine room :) customers will buy Beowulf. The biggest ones won't. The reason is that poor memory bandwidth could double your run time. While that makes hardly any difference for a 5 minute benchmark, for a 2 week MPI job, that could be a bit of a letdown if it now takes a month.

    The other thing is NUMA - a Beowulf cluster is not a NUMA environment - you can't DMA from one node into another node without kernel intervention. On the T3E (and the SMP based vector Crays) you can do this. On the Origin (SGI designed) ccNUMA boxes, you can actually allocate several hundred gigabytes of RAM into a pthreads job and access it all normally. You won't be doing this on Beowulf any time soon.

    Now, most of that doesn't make a difference for lower end customers. However, Cray has never targeted the low end of anything :) Check out the Top 500. There are a lot of Crays. There aren't too many Beowulf clusters.
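
    (To give a flavor of what "without kernel intervention" buys you, here is a sketch in the style of the T3E's SHMEM library, written against the later OpenSHMEM spelling of the interface; the original Cray calls differed slightly, e.g. initialization via start_pes. One PE deposits a value directly into a neighbor's memory with a single put; there is no matching receive on the remote side.)

    /* One-sided remote write, SHMEM-style: each PE deposits a value
     * directly into the symmetric variable "counter" on its neighbor.
     * Written against the OpenSHMEM descendant of the Cray interface. */
    #include <stdio.h>
    #include <shmem.h>

    long counter = 0;   /* symmetric: same address on every PE */

    int main(void) {
        shmem_init();
        int me   = shmem_my_pe();
        int npes = shmem_n_pes();

        long value = me + 1;
        /* write into the next PE's memory -- no receive call and no
         * kernel involvement on the remote side */
        shmem_long_put(&counter, &value, 1, (me + 1) % npes);

        shmem_barrier_all();   /* wait until everyone's put has landed */
        printf("PE %d got %ld\n", me, counter);

        shmem_finalize();
        return 0;
    }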

  • Aaarrgh, I had a long reply ready to submit and then Netscape/Slashdot froze on me.

    Anyways, yes, I used to read comp.arch quite a bit back then, and I seem to remember that's where Eugene Brooks first made that post. I seem to remember Henry Spencer used it as his .sig line for quite a while thereafter (which is how the quote got so much exposure). I also seem to remember that at the time Eugene Brooks was mainly referring to how microprocessor-based workstations like Sun SPARCs and HP PAs were going to eat the lunch of minis and mainframes (using more discrete components) from IBM, Digital, and others. I remember seeing the performance range curves for the different computer classes and how the KM curve was increasing a lot faster than the mini and mainframe curves and would eventually overtake them.

    I remember seeing those same curves a few years later, used to show how the gap left at the low end by the KMs opened the doors for portable/wearable computers such as pen tablets. As the user interface issues with wearable computers get resolved, their widespread adoption will increase and they too will become commodities (the Palm Pilot is the tip of the iceberg). It will take longer for the wearables (let's call them tracys, after Dick Tracy watches) to overtake the KMs, since the KMs can take advantage of fixed data transmission mediums. You'll always be able to put more data through fiber than through air. However, when you can put an Origin2000 equivalent on a watch (using molecular circuits, Drexlerian rod logic, quantum gates, or whatever), the KMs will be pushed back to a small niche indeed.

    No one will survive the Attack of the Killer Tracys.


    Of course, by then the computers are small enough that you can get them as cranial implants with spinal and optic nerve taps; however, the upgrades are a real pain. Worse yet, everybody's always worrying about whether the generated heat or the coils for receiving power from the external battery cause brain cancer. After all, if a new vision correction treatment doesn't work, you can always get your eyes cloned and replaced. However, nobody wants to take a chance on getting their brains fried; you can't replace that!

    "Duh, my brain hurts."
    "Oh, we'll just have to get it removed."

"You'll pay to know what you really think." -- J.R. "Bob" Dobbs

Working...