SGI Negotiating Cray Research Sale
Aviast writes "SGI is in talks with the Gores Technology Group to sell
the Cray Research unit of SGI. Read the [Yahoo News] story
here.
SGI bought Cray three-and-a-half years ago for $700 million. According
to this story Gores originally offered $100 million for Cray, but
has since lowered its offer." Rumors about this have been floating around for weeks. Looks like they *may* become reality, but the deal is apparently still a long way from done.
Hmmm (Score:2)
Perhaps Cray computers are not as valuable now that we have Beowulf and other clustering technology which can give you the same amount of raw processing (with a little extra latency) as the big iron?
--
did SGI just cripple cray? (Score:1)
No biggie for SGI (Score:1)
That price sounds low. (Score:2)
Massive parallel computing and SGI (Score:3)
One interesting thing that stuck in my mind was how the CEO, you know, what's-his-name, said that the advantage of Cray Machines was in the architecture.
This would keep the mean old Beowulf at bay.
He was talking about how in the PCs under (or above, doesn't really matter) our desks, the processor is powerful and the pipe is small, which limits the amount of data that can be pushed through. Crays are fast with huge pipes, making them perfect for big data-crunching applications (like simulating wind shear in a cumulonimbus thunderstorm in real time).
Now, how long will it be before the architecture (finally) in our PCs changes to something that will be more along the lines of this system? 10 years? more? less? Who knows.
Man, I rambled a lot. I know that I had a point somewhere in there.
--
No surprise. (Score:2)
Cray Tech (Score:1)
Cray (Score:1)
Linux will save em (Score:1)
them. I doubt they'll make any money off this
new age PC market. But then again, who needs
SGI? Linux only brings another segment of the
"daddy can I have a new computer" market to
power. They'll never buy SGI's anyway.
--------------------------------------
slashdot: you still have the wrong logo.
SGI sold parts of Cray to Sun some years ago (Score:1)
AFAIK a bit later Sun came out with the E10000, which incidentally had a high-performance crossbar architecture....
But SGI had a much better idea: Windows NT...
(we all know the disaster that was for SGI...)
--
Re:Massive parallel computing and SGI (Score:2)
The decline of SGI and Cray began long before Beowulf. Cray's rivals, such as Hitachi, Fujitsu and Thinking Machines, took huge chunks of its markets; even the IBM machines were a threat. There simply wasn't enough momentum behind UNICOS for it to compete with Cellular IRIX even within SGI.
SGI are struggling as much as they can, but really they are only delaying the inevitable. They are being attacked on all sides, HP and IBM are crushing them in the VLDB market, Sun are making ever larger inroads into rendering, Compaq's professional workstations are forcing SGI out of CAD shops, the latest kit from Apple is competitive in the DV space... SGI need a miracle to save them.
> This would keep the mean old Beowulf at bay.
Beowulf doesn't really compare to a Cray, since it lacks common memory across all of its nodes. Beowulf is more analogous to PVM.
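The distinction can be sketched in a few lines: in the message-passing model that PVM, MPI and Beowulf clusters share, each process owns its memory outright and results travel as explicit sends and receives. The sketch below uses Python's multiprocessing.Pipe as a stand-in for a network send between cluster nodes; the function names are illustrative, not any real PVM/MPI API.

```python
# Sketch: message passing between processes with private memory, the
# model Beowulf/PVM/MPI programs use. multiprocessing.Pipe stands in
# for a network send between cluster nodes.
from multiprocessing import Process, Pipe

def worker(conn, chunk):
    # This process owns `chunk`; no other process can touch it.
    # Results must be shipped back explicitly, like an MPI send.
    conn.send(sum(x * x for x in chunk))
    conn.close()

def parallel_sum_squares(data, nworkers=4):
    # Deal the data out round-robin, one private chunk per worker.
    chunks = [data[i::nworkers] for i in range(nworkers)]
    pipes, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        p = Process(target=worker, args=(child, chunk))
        p.start()
        pipes.append(parent)
        procs.append(p)
    # Gather the partial results with explicit receives.
    total = sum(conn.recv() for conn in pipes)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum_squares(list(range(1000))))
```

Note that nothing here is shared: the cost of the sends and receives is exactly the latency penalty the parent post attributes to clusters.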
More layoffs from SGI in the future (Score:2)
Re:Massive parallel computing and SGI (Score:2)
My limited experience has been that large datasets will blow out the cache on PCs and send the performance into the crapper.
High-performance memory systems cost money. PC vendors don't seem to be interested in building balanced systems.
Re:Hmmm (Score:3)
Beowulf clusters still can't touch the Cray supercomputer market, except where vector processors were sold into markets that weren't a good match in the first place. The price/performance of a Beowulf cluster is much better than that of any Cray machine ever built, but a vector processor running the right codes can make the difference between something being infeasible or feasible.
Re:SGI knows they are dying, only Linux will save (Score:1)
As a dyed-in-the-wool Linux user/developer, I haven't seen too many machines that can outperform the 108-node R10K Cray we have in one of our buildings here (yes, there are some Beowulf clusters out there, but they can't keep up in the memory bandwidth area). And remember that Linux-on-Intel is terribly limited by the architecture of the bus, etc.
This kind of FUD is not helping...
Re:Die SGI Die! (Score:1)
> Cray office.
My door is happily right where it has always been. Actually, I got the latch fixed after the assimilation. Thinking about it, I don't know of a single door removed in the whole Eagan complex previously known as Cray Park.
This is crazy! (Score:3)
Mind you, if SGI are flogging Cray off cheap, I'm offering $10! That's right, I'll offer a whole $10 to buy Cray from SGI, no questions asked. And I bet I could make it profitable =and= bleeding-edge, too. All I ask is the chance for SGI to prove me wrong! :)
I hope SGI doesn't fade away too soon... (Score:2)
With all the news coming from SGI, it doesn't look like they will be going anywhere in the future.
Even if the software eventually gets ported to Linux, I bet it will have to be repurchased, and this might be a real obstacle, especially for academic institutions, since it is rather expensive.
________________________________
If encryption is outlawed, only
surprise, surprise... (Score:2)
Now that they're simply SGI, it hasn't taken quite as long to figure that out. We need Cray, but we need one that can turn a profit in the private sector.
Re:Massive parallel computing and SGI (Score:1)
> even the IBM machines were a threat.
Simply not true. Each of the cited examples had quite a small chunk of the worldwide market. Hitachi and Fujitsu (NEC as well) had quite healthy market shares in Japan, but nowhere else. At the time of the merger, as far as I recall, there was only one serious Japanese supercomputer in the United States (NEC at HARC). IBM was only a real threat where they were the incumbent or had key software advantages (i.e. CATIA).
Question: if Thinking Machines had such a huge chunk of the market, where is their hardware now?
-Dean
Re:SGI knows they are dying, only Linux will save (Score:1)
Linux has been well known for its poor SMP performance. All Beowulf is really doing is clustering CPUs. All you really are doing is throwing stuff out onto an EXTREMELY slow network medium; to get any real performance benefit you have to rewrite your apps so they stay local to your memory, do minimal IPC, etc. Lots of software does clustering, not just Linux; hell, even NT does clustering.
A quad Xeon smoking a Cray??? Are you sure you haven't been smoking? Seriously, look again at what you said; damn, you are funny... Go check out top500.org and see how many Linux boxes there are compared to Crays, then compare CPU counts to performance. Nothing more need be said.
For SGI dumping Irix, check out http://www.sgi.com/developers/index.html#irix: for the next year they are spending more on development and have more developers working on Irix than on Linux.
Security... well, how about this one: Irix and Solaris are the only B2-classified OSes out there. Irix had some EXTREMELY stupid things in it a couple of years ago; of course, I remember lots of VERY stupid things Linux distributions had in them over the years, and many more of them.
Scalability... Irix scales to a 1024-processor SMP box using NUMA; what does Intel do currently... 32 (I think), which is about 32 times smaller than an O2K. And if that's not enough, you can add Beowulf-type clustering on top of that if you wish, so you could have ten 1024-processor boxes with superfast IPC inside each box and the slow clustering network on top of that.
Linux may be starting to tear up the 3D gaming market, but it hasn't started into the heavy-duty 3D market yet... NT, Irix and Solaris seriously dominate this market, and until some of the big-time software is ported to Linux (Maya, Softimage, etc.), that's the way it will probably stay.
Re:Massive parallel computing and SGI (Score:1)
Cray was carrion before Beowulf happened.
The killer for Cray was when large custom boxes full of cheap commodity processors started to appear (Connection Machine et al.). Cray's previous expertise of making hugely clever processors just couldn't compete with Fordian economics and a huge fab churning out Intel games boxes by the bucketload.
Now Beowulf takes it all a step further and replaces a weird box full of standard CPUs with a room full of standard boxes.
Another (quite reasonable) opinion is that Cray *was* Seymour Cray, and without him they just lost direction.
Re:Die SGI Die! (Score:1)
---------------
Irix 6.x:
all included in the soft distribution, download latest GCC from freeware.sgi.com
Irix 5.3:
download gcc from http://reality.sgi.com/ariel/freeware
download headers from http://www.interlog.com/~kcozens/sgi/gcc-irix.htm
------------------
If these are not sufficient, please contact me directly and we can get to the bottom of it.
-Dean Johnson (dtj@sgi.com)
Re:Super computer market shrinking (Score:1)
That's what I heard... (Score:1)
We had a bunch of Sun reps at our university come tout their products a month ago. One guy stated that when SGI bought Cray, Sun bought the one part of Cray that was still making money. They bought a group that was producing this crossbar architecture using Sparc chips.
The architecture is different from the 3000/4000/etc line to the 10000 line.
Apparently the new upcoming 10000s are super sweet. More than 64 procs (128+?) The starfire (10000) also has some nice advantages, like dynamically allocating processors to multiple "virtual" machines running in one single box...
anyway, the Sun rep backs up your statement.
--ed
Re:SGI knows they are dying, only Linux will save (Score:1)
At the low end, a compact Beowulf can easily compete with a Cray. Most scientific codes these days use MPI anyway, which can be ported easily from UNICOS or Irix to Linux. At the high end, or for specialized applications, we have a ways to go. But with better SMP support and better support for fast networking (SCI, Gigabit Ethernet, Fibre Channel), Linux is getting there, fast. I wouldn't put money into SGI, except maybe to short them.
engineers never lie; we just approximate the truth.
You is crazy! (Score:1)
And I bet you'd be buying a bloody big debt with your $10. SGI wouldn't be selling if they were making money with Cray.
Regards, Ralph.
history/future of supercomputing (Score:5)
The first supercomputers were faster and better than typical computers because of the design and features put into them. They used faster components which were custom-built (and thus a lot more expensive) and had features like vector units which made them attractive for scientific applications (but again, more expensive). Then people started to think about how to build supercomputers with the same or better performance while bringing the cost of producing them down. Rather than using expensive custom-built processors that had to be submerged in cooling fluid, or vector units to manipulate large arrays in a single operation, they started to develop new designs for supercomputers.
One new type of machine was the SMP-based system, such as the Cray PowerChallenge. In these machines, many processors share a common memory, just like in your 2-way or 4-way desktop boxes now. With these machines, the lack of vector units isn't such a big deal, since you can instead just separate your array into N portions (where N is the number of processors) and apply your vector operation in parallel across the processors. The problem with these computers is that scaling up to large numbers of processors is difficult, since contention for the system bus (used to talk between the CPUs, memory and I/O) gets complicated as the processor count grows.
Another new type of machine was the Massively Parallel Processor (MPP) machine, such as the Cray T3D and T3E. In these machines, many processors (~1024) are interconnected with a very fast network. Each processor has its own individual memory, so the system can be scaled up to much greater numbers of processors. The problem is that now, instead of having a single common shared memory, you have all these distributed memories, and you have to use message-passing techniques to get your data distributed around, which is a pain.
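The "separate your array into N portions" idea can be sketched in a few lines, using a process pool as a stand-in for N processors attacking one array (the names here are illustrative only, not any real SMP runtime):

```python
# Sketch of the data-parallel decomposition described above: with N
# processors instead of a vector unit, split the array into N pieces
# and apply the same operation to each piece concurrently.
from multiprocessing import Pool

def scale_chunk(args):
    chunk, factor = args
    # Each worker applies the "vector operation" to its own slice.
    return [factor * x for x in chunk]

def parallel_scale(data, factor, nprocs=4):
    # Split the array into nprocs contiguous portions.
    size = (len(data) + nprocs - 1) // nprocs
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(nprocs) as pool:
        results = pool.map(scale_chunk, [(c, factor) for c in chunks])
    # Reassemble the pieces in order.
    return [x for chunk in results for x in chunk]

if __name__ == "__main__":
    print(parallel_scale([1, 2, 3, 4, 5, 6, 7, 8], 10))
```

On a real SMP the chunks would live in one shared memory and no copying would occur; the decomposition pattern is the same either way.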
So this led researchers such as John Hennessy (at Stanford) to come up with a new architecture that uses Distributed Shared Memory (DSM). To the applications programmer, things appear to be one large shared memory (although touching certain parts of memory is slower than touching others, since they have to be fetched from a remote machine). What actually happens is that each processor still has its own local memory, but a controller on a very fast interconnect card, coupled with each processor, examines memory references; if it sees you are using memory that is not local to your processor, it fetches the desired section of memory from the remote processor. So it's sort of an MPP-type system that appears to the programmer as sort of an SMP-type system. This is what SGI/Cray sells as the Origin 2000. It's still cheaper to produce than traditional vector machines, which use custom CPUs and memories (since it uses more commodity CPUs and components), but at the same time it offers good relative performance.
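A toy model of the DSM idea, with purely illustrative names (this is not any real DSM interface): one flat address space where every address has a "home" node, and reads of addresses homed elsewhere count as remote fetches.

```python
# Toy model of distributed shared memory: the programmer sees one flat
# address space, but each address lives on some node, and touching a
# non-local address triggers a (slower) remote fetch.
class ToyDSM:
    def __init__(self, node_id, num_nodes, words_per_node):
        self.node_id = node_id
        self.words_per_node = words_per_node
        # All nodes' memories; in real hardware only our own node's
        # slice would be directly addressable without the interconnect.
        self.memory = [[0] * words_per_node for _ in range(num_nodes)]
        self.remote_fetches = 0

    def _home_node(self, addr):
        # Which node's local memory holds this address.
        return addr // self.words_per_node

    def read(self, addr):
        home = self._home_node(addr)
        if home != self.node_id:
            # Non-local: the interconnect controller would fetch this
            # word from the remote node, at a higher latency.
            self.remote_fetches += 1
        return self.memory[home][addr % self.words_per_node]

    def write(self, addr, value):
        self.memory[self._home_node(addr)][addr % self.words_per_node] = value

dsm = ToyDSM(node_id=0, num_nodes=4, words_per_node=8)
dsm.write(3, 42)    # address 3 is local to node 0
dsm.write(20, 7)    # address 20 is homed on node 2
assert dsm.read(3) == 42 and dsm.remote_fetches == 0
assert dsm.read(20) == 7 and dsm.remote_fetches == 1
```

The programmer's view is uniform (one `read`/`write` interface), which is exactly the "MPP that looks like an SMP" point above; only the latency differs by address.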
Now, in the late 80's, Seymour Cray decided that building supercomputers out of commodity components wasn't the right way to go. His opinion was that, all things being equal, you could always make a faster supercomputer if you used more expensive components and designed your supercomputer with that goal in mind (i.e., use SRAM for all memories, use the fastest technology in your CPU, etc.). To that end, he created a company called Cray Computer which was separate from Cray Research (i.e., Seymour was in charge of Cray Computer and had nothing to do with Cray Research). Cray Research produced computers such as the PowerChallenge and T3E, while Cray Computer continued to make expensive vector-type machines. Unfortunately, Cray Computer ended up folding because their machines were so expensive that the performance gain did not justify the greater cost. (Really, the only places that bought these types of computers were "spook sites" like the NSA, to the best of my knowledge.)
The pervading idea is that this trend towards computers that offer decent performance while costing significantly less will continue. This is the idea behind clusters such as the Beowulf or, more importantly, clusters like the NT Supercluster at NCSA. The NT Supercluster differs from a Beowulf in that it uses a more costly network adapter (specifically, a Myrinet adapter from Myricom [myri.com]) to allow internode communication to take place at higher bandwidths and lower latencies than a standard Ethernet. No, the performance of these types of machines is nowhere near what you get from a machine like the Origin 2000, but the idea is that you get comparable performance at a huge reduction in cost. Additionally, because the components used to construct these clusters are commodity components, everybody will be producing these components and continuing to improve their performance. So, the speed of cluster-based computing relative to machines like the Origin improves over time. [Disclaimer: I am one of the people who helped develop the technology in the NT Supercluster, so I have some bias.]
To say that SGI ruined Cray is no more true than to say that they ruined MIPS. The reason that people are not that interested in MIPS processors any more is that Intel processors are a commodity now. Everybody uses them, so the overall industry trend is to make Intel and Intel-related technologies faster and better since everybody works together in a sort of de facto way. Yes, probably the MIPS design is a much better processor design than the Intel design (it wouldn't be difficult), but the key thing is that everybody in industry is using Intel. This is the same reason that building supercomputers out of commodity components (i.e., clusters) will probably be the way things work in the future.
Re:Die SGI Die! (Score:1)
Though the initial post was really abusive I have to admit that SGI is a classic example of a case when the left hand does not know what the right hand does:
You ask a SGI salesman on their x86 prices, they answer with a booklet about Origin
You ask a SGI salesman on their Linux system, they answer with a booklet on Irix
In both cases they also require using conventional mail or phone in order to contact them and do not supply email for contact.
I did intend to get some info on their x86 systems, but I gave up and threw all the materials (as listed above) in the round folder.
I would say that SGI needs a major cleanup amidst its salesd...s. Otherwise the AC abusive description may prove right.
Having "cool developers (TM)" is not enough. And having "cool hardware (TM)" is not enough either. You can send all the developer's work along with the hardware to
BTW: if this is the attitude they applied to Cray, I am not amazed that its market value dropped 65 times over 3 years...
Re:SGI ruined Cray like they ruined everything els (Score:1)
I'm open to debate, and I'd actually be really interested in an actual GOOD reason for them to be dogged, especially when their hands are tied due to licensing, etc.
Another point: SGI wants 5.3 dead; it's NOT Y2K compliant and they are NOT supporting it anymore. I don't blame them at all, since 6.2 came out 4+ years ago (it's like complaining about a company not having a Y2K-compliant 386 BIOS; get over it). What version of Linux was out then? NT 4 wasn't even out yet (I remember playing with the beta at the time). If you want old hardware to run, start supporting the Linux/MIPS project, pay the $600 for Irix 6.5, or check out below.
Have you tried to get gcc running on 5.3 by using headers from Linux? Check out Ariel Faigon's website at SGI, http://reality.sgi.com/ariel/freeware/, for GCC, and go to http://www.interlog.com/~kcozens/sgi/gcc-irix.htm for headers.
Y'all Should Read the FAQ! (Score:1)
Re:SGI sold parts of Cray to Sun some years ago (Score:1)
Re:SGI knows they are dying, only Linux will save (Score:1)
The biggest beef with Intel (other than shoddy manufacturing) is that they can't push data around fast enough. We put in SGI O2Ks for the sole purpose of being able to push files around fast enough (6 FC controllers out to lots of EMC storage). We don't do anything CPU intensive (get a file, push it out), but we have to have that much CPU power to drive all of the I/O. I have yet to see an Intel box able to do that.
The biggest problem I see with Beowulf is that it doesn't do very well in the large-memory department; NUMA is the way to go here. If I have to access memory on a node 12 hops away, it takes a lot more time than going directly to it. As you mention, FC, Gigabit, etc. put a bandaid on it for a while, but they aren't a very elegant solution to the problem as a whole.
I wouldn't put much money in SGI at this time either, but only time will tell what shakes out. This is a very turbulent time for the industry; the only one really profiting is Sun, and really only because they are eating into old SGI, HP, DEC, etc. customers. How long that will continue can only be guessed at.
Re:BZZZZZTTT, WRONG! (Score:1)
> what I will do if they don't provide it to me. If they don't break down I will do as I say. This is a true crock of shit.
I have repeatedly requested over the last two months that you (and others having the same problem) contact me personally so that we can get this resolved. Let's get this resolved! I can't read minds, and I don't think management will spring for carpet-bombing every person in the US with a CD-ROM to ensure that the right party gets what they need.
-Dean Johnson (dtj@sgi.com)
Re:SGI knows they are dying, only Linux will save (Score:1)
In our applications, a Linux-powered Xeon III-based Beowulf can compete with (slightly older) O2Ks *at the same number of CPUs*. Again, YMMV: as you say, the pipes in these SGIs are much, much fatter/faster than anything we can throw in on a Beowulf.
But the important thing, IMHO, is that the price/performance ratios of Beowulfs are now enabling a new class of applications, with *dedicated* hardware built to fit the software requirements rather than the other way around.
With the amount of money and man-hours being thrown into Beowulf enabling technologies (fast networking, maintenance schemes, HA, process migration, etc., etc.) I think we're approaching a shift along the lines of the old workstation/mainframe schism: cheap dedicated machines (Beowulfs now, workstations back then) versus very expensive, generic heavy iron (supercomputers now, mainframes back then). In the end, the largest mindshare (number of applications/developers) and the better price/performance ratio will win.
I am siding with the 'wulfs
engineers never lie; we just approximate the truth.
Re:I hope, SGI doesn't fade away too soon... (Score:1)
About platforms and computational chemistry, look at the list of ports for CHARMM(Serial machines [nih.gov], Parallel machines [nih.gov]); this list includes Beowulf clusters, CRAY and Intel supercomputers, and most UNIX workstations.
Granted, this is only one program (available with source), and many of the visualization programs are SGI-specific [Quanta is also available for the RS/6000]. Porting these applications would not be impossible, but the companies will have to wait for a large enough crowd guaranteed to use a port; otherwise it would not be economically feasible.
[i.e. EVERYBODY OUT THERE START BUGGING MSI [or other vendor] ABOUT LINUX PORTS; hehe, even if you know nothing about chemistry, just call them up and bug them about a linux port, just try not to sound too stupid]
Re: hyperlink error (Score:1)
http://www.lobos.nih.gov/Charmm/c27n2/install.h
...I can't seem to get a space included in hyperlink here on slashdot
No one will survive the attack of the Killer Micros (Score:1)
One of the few references that I can find on the web is here [tera.com] in a 1990 paper.
Basically, this handwriting has been on the wall for well over a decade, and one can only hope that SGI recouped their investment in the first few years after purchasing Cray.
The future of Cray (Score:1)
In my experience, Cray MPP systems are worth every penny of their performance. I'm doing research on Beowulf clusters right now, and I'm finding that no matter how many processors you add, no matter what magic you weave with the networking, it's not going to match the performance of a Cray T3E.
Crays will continue to have a market, where people need to run LARGE applications (we do a hell of a lot of seismic data processing... hundreds of gigabytes of data are going through our T3E-900) at high speed. Beowulfs will have their market too, where people need to run large applications at a lower speed for a lower cost.
In my opinion, the Cray/SGI merger should never have happened. Hopefully, the company that looks to be buying Cray won't drive it into the shitter (they had bought Thinking Machines when they went under... look where they are now), and they'll get back into the market. Dust off and update the plans for the T3F, maybe introduce a lower-cost MPP system to bridge the gap between Beowulfs and the T3E.
At any rate, we're not getting rid of our Crays for a long, long time. :)
Please note, I am not speaking for the company I work for in any capacity whatsoever, I'm not even going to name them. If you want to buy supercomputing time, however, do drop an email...
Re:SGI knows they are dying, only Linux will save (Score:1)
The T3D has been end-of-lifed; get a T3E. Also, how many processors were in that T3D? I seriously doubt you're going to compete with a T3E, however. It's hard to argue with 2048 DEC Alphas at 600 MHz...
Also, let's talk about memory bandwidth. I believe the present T3E model gets 45 gigabits/second *BETWEEN* processors. I'd like to see you get that to local memory in your Beowulf cluster. Does that make a difference? Yes. In raw CPU speed your Beowulf cluster may win, but that doesn't matter if your CPUs are idle half the time waiting for RAM. Some of our (Cray's; well, technically I work for SGI, but I wish I worked on the Cray side of the split, and at least I still have access to their machine room :) customers will buy Beowulf. The biggest ones won't. The reason is that poor memory bandwidth can double your run time. While that makes hardly any difference for a 5-minute benchmark, for a 2-week MPI job it's a bit of a letdown if it now takes a month.
The other thing is NUMA: a Beowulf cluster is not a NUMA environment; you can't DMA from one node into another without kernel intervention. On the T3E (and the SMP-based vector Crays) you can. On the Origin (SGI-designed) ccNUMA boxes, you can actually allocate several hundred gigabytes of RAM in a pthreads job and access it all normally. You won't be doing that on Beowulf any time soon.
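The "allocate it and access it all normally" model being contrasted with message passing can be illustrated on a single machine with Python's multiprocessing.shared_memory. This is only a stand-in: real ccNUMA hardware does this transparently for one job across a whole machine, but the sketch shows the key property, namely that a second CPU's store is visible through a plain load, with no send/receive anywhere.

```python
# Illustration of the shared-address-space model: two processes share
# one buffer with no explicit messages. multiprocessing.shared_memory
# (Python 3.8+) is a single-machine stand-in for what ccNUMA hardware
# does across a whole Origin-class box.
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the existing segment by name and store into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 123          # a plain store, no send/recv needed
    shm.close()

def demo():
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=writer, args=(shm.name,))
        p.start()
        p.join()
        return shm.buf[0]     # a plain load sees the other process's store
    finally:
        shm.close()
        shm.unlink()

if __name__ == "__main__":
    print(demo())
```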
Now, most of that doesn't make a difference for lower end customers. However, Cray has never targeted the low end of anything :) Check out the Top 500. There are a lot of Crays. There aren't too many Beowulf clusters.
Re:No one will survive the attack of the Killer Mic (Score:1)
Anyways, yes, I used to read comp.arch quite a bit back then and I seem to remember that's where Eugene Brooks first made that post. I seem to remember Henry Spencer used it as his
I remember seeing those same curves a few years later, used to show how the gap left at the low end by the KMs opened the doors for portable/wearable computers such as pen tablets. As the user interface issues with wearable computers get resolved, their widespread adoption will increase and they too will become commodities (the Palm Pilot is the tip of the iceberg). It will take longer for the wearables (let's call them tracys, after Dick Tracy watches) to overtake the KMs, since the KMs can take advantage of fixed data-transmission media. You'll always be able to put more data through fiber than through air. However, when you can put an Origin 2000 equivalent on a watch (using molecular circuits, Drexlerian rod logic, quantum gates or whatever), the KMs will
be pushed back to a small niche indeed.
No one will survive the Attack of the Killer Tracys.
Of course by then, the computers are small enough that you can get them as cranial implants with spinal and optic nerve taps, however the upgrades are a real pain. Worse yet, everybody's always worrying about whether the generated heat or the coils for receiving the power from the external battery cause brain cancer. After all, if a new vision correction treatment doesn't work, you can always get your eyes cloned and replaced. However, nobody wants to take a chance on getting their brains fried, you can't replace that!
"Duh, my brain hurts."
"Oh, we'll just have to get it removed."