Tera Completes Acquisition of Cray
dewey writes "Tera, a new kid on the supercomputing block, has successfully completed its acquisition of the supercomputing pioneer, Cray (formerly owned by SGI). The new company will take Cray's name.
Tera has a press release from a month ago that spells out some of the details of the deal.
"
Re:imagine... (Score:1)
Big Iron supercomputers such as the ones made by IBM, Cray/Tera, SGI, etc., have high bandwidth and low latency interconnects between processors/nodes for those jobs that *can* run on more than one processor, but still rely on each other. These interconnects, oh by the way, aren't as cheap as NICs either.
As a followup, let's say I have a 200 processor Cray T3E. What would be smarter: buying 4 more 200 processor T3Es and linking them in a Beowulf cluster, or buying 800 T3E Alphas and sticking them in my original box? Which solution is faster?
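Not the poster, but the tradeoff can be sketched with a toy bulk-synchronous cost model. Everything below is a made-up illustration - the latencies and message counts are assumptions, not measured T3E or commodity-network figures:

```python
# Toy bulk-synchronous model for the question above: per-step time is
# compute (split across processors) plus one round of neighbour
# messages. Latency figures are illustrative guesses, not real specs.

def step_time(procs, work_s, latency_s, msgs_per_step):
    """One simulation step: perfectly parallel work plus messaging."""
    return work_s / procs + latency_s * msgs_per_step

work = 1.0  # seconds of serial compute per step
# 1000 Alphas in one box behind a fast interconnect (~1 us):
single_box = step_time(1000, work, 1e-6, 4)
# 5 boxes of 200 CPUs, with inter-box traffic over slow links (~100 us):
cluster = step_time(1000, work, 100e-6, 4)
print(single_box < cluster)  # the single box wins on latency alone
```

Same processor count either way; under these assumptions the single box wins purely because every message over the slow inter-box links eats orders of magnitude more time.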
It's the bandwidth, stupid (Score:1)
Apps run faster when their particular bottlenecks get widened, at least up to the point where some other constraint becomes the bottleneck. Some bottlenecks are more costly to widen than others. You don't want to build a machine with (say) huge memory bandwidth if some other factor will then limit performance and cause those expensive wires to stand idle.
"Supercomputing" has become a hard term to define. In my book, you're supercomputing if your app's performance bottleneck is bandwidth to its big data sets, you're running on a system with huge amounts of bandwidth, and all of the other potential bottlenecks in the system have been overengineered so that the big bandwidth is still the most common bottleneck. This kind of computing is expensive, so the app had better be important to somebody.
This is why vectors are still cool after all these years. VLIW, superscalar, etc. are all fine, but they don't scale up to saturate big per-processor bandwidths without hitting some other limiting factor first.
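One way to see the bandwidth argument is a minimal roofline-style calculation: attainable throughput is capped either by peak compute or by memory bandwidth times arithmetic intensity. The numbers below are illustrative, not any real machine's specs; the "triad" kernel is the classic STREAM-style example:

```python
# Roofline sketch: achievable FLOP rate = min(peak compute,
# memory bandwidth * flops-per-byte). Illustrative numbers only.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Streaming triad a[i] = b[i] + s*c[i]:
# 2 flops per 24 bytes moved (three 8-byte doubles).
intensity = 2 / 24
rate = attainable_gflops(peak_gflops=4.0, bandwidth_gbs=10.0,
                         flops_per_byte=intensity)
print(rate)  # well under the 4 GFLOP/s peak: bandwidth-bound
```

With numbers like these, the memory pipes saturate long before the functional units do - which is exactly the regime where vector machines, built around feeding those pipes, earn their keep.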
Re:Tera's multithreaded architecture (Score:1)
--
Re:Cray... Who cares. (Score:1)
The only thing that really competes with Cray on the high end is the SGI O2K... hmmm, wonder where they got that technology?
Binder
Cray (Score:1)
two cents on the dollar (Score:1)
...which SGI bought for $780M.
How the mighty have fallen!
Here are the details (Score:1)
The details, spelled out: T-E-R-A
--
Intel a supercomputer power? (Score:1)
Intel dropped out of the supercomputer market a while ago, right after they produced ASCI Red. Sure, their processors are used by other companies to build supercomputers, but the same is true for the Compaq Alpha, and soon for AMD's Athlon.
There are some new kids on the supercomputer block, Linux-using kids, even. Too bad Slashdot gives the old, stodgy, non-open-source supercomputing companies lots more press.
Re:Tera's multithreaded architecture (Score:1)
No, car crashes are an example where clusters do just fine. While the Tera does do a bit better with load balancing on codes like that, cluster hardware is so cheap that you can just buy lots more cluster and not worry much about load balance. That's the problem that has already kept Tera's MTA from succeeding in the market: it isn't cost-effective for any problem.
Re:Cray... Who cares. (Score:1)
Uh, SGI developed the O2k before they bought Cray, and there is lots of competition on the high end. For example, for the FSL weather forecasting bid, SGI bid an O2k, but the winner was a cluster running AlphaLinux [hpti.com].
Re:I hope they keep the Cray name (Score:1)
You're getting a little confused with the CRAY-4. That was actually a different company - Cray Computer Corp. (as opposed to Cray Research) - which went bankrupt in 1995.
OpenBSD on the Tera (Score:1)
Re:Cray... Who cares. (Score:1)
There's no market - Why supercomputing is dying (Score:1)
imagine... (Score:1)
What ever happened to creativity? (Score:1)
Ok, I admit I would be doing everything to get rid of a name like 'Tera' too, but this is insane: taking over another company just because you're not creative enough to make up a name of your own. People like that just shouldn't have access to these amounts of money.
Re:Cray (Score:1)
Not by a long shot. There are a lot of problems for which Beowulf clusters are totally worthless. Real supercomputing consists of converting CPU-bound problems to IO-bound ones, and solving them on appropriate hardware. And IO is what x86 hardware is worst at.
There is still a very healthy market for real supercomputers, and there are no signs of erosion.
It's the name ... (Score:1)
And again, the boss-weenies don't want to worry about support; their jobs are on the line. Most of the end users don't want to worry about making the @#$%^&$ computer work; they just want to switch it on and compute. Buying packaged systems is OK by them, be those systems Tera's or a conventional cluster.
Re:Tera not a new company, just a dogged one (Score:1)
I hope they keep the Cray name (Score:1)
tcd004
LostBrain [lostbrain.com]
Cray... Who cares. (Score:1)
crays G4s and beowulf (Score:1)
Call it CRATER (Score:1)
Supercomputing? Is the secrecy reqd? (Score:1)
Re:Tera not a new company, just a dogged one (Score:1)
Old logo... (Score:1)
Re:Cluster != Supercomputer (Score:1)
Let Cray rest in peace - sorry (Score:1)
Maybe Tera should acquire a better marketing dept. (Score:1)
> The architecture solves most, if not
> all, of the problems that have inhibited
> running applications in parallel on multiple
> processors, such as irregular memory access
> patterns, synchronization, and load balancing.
That's right, folks... no longer do you need to synchronize your functions or watch for threading collisions... Tera's revolutionary new MTA takes the thinking out of coding!
I want to sit on the Cray! (Score:2)
The corporate status symbol was to have a Cray supercomputer. These were polygonal towers (a good shape for minimizing slow wire lengths) with an encircling bench at the base. The job interviewers would invite you to sit on the bench as the ultimate Geek perk.
A decade later, on a trip to China, companies would show me their Galaxy supercomputers, identical in shape to the Cray-1, but a little larger. (Probably identical circuits too.)
Tera is not the new kid! (Score:2)
"old-age" in the computer industry. Someone there has deep pockets, because according to their reports they've only shipped a couple of machines for revenue.
Re:I hope they keep the Cray name (Score:2)
Here's hoping Tera/Cray at least re-evaluate some of his designs.
Re:Cray Computer != Cray Research (Score:2)
Re:Tera's multithreaded architecture (Score:2)
Apparently, a lot of people here like to shout "beowulf cluster" and don't understand much about the problems of massively parallel computing.
> The bottleneck is inter-processor communication. If all you are doing is trying to brute-force a cipher, the processors are almost independent and easily reach the theoretical aggregate performance figures. But if you are doing complex physical simulations, you can end up waiting for data most of the time and using only a fraction of your theoretical parallel power.
Well spoken!
We've just been through an evaluation cycle where we pitted all major HPC systems and most Intel/Linux clustering solutions against each other. While I can't post numbers (yet), the result is that Intel/Linux clusters come out at the bottom, mostly because of high latencies in the interconnect. I think it is not only the hardware per se, but also immature drivers.
We looked at Myrinet, Giganet and SCI. All of them were pretty close in performance, but none comes near the more established SMP, NUMA or clustering technologies under traditional Unices. Our benchmark is a molecular dynamics code with a reasonably large solvated protein.
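The latency effect the parent describes falls out of the standard alpha-beta message cost model. All figures below are illustrative guesses, not the parent's benchmark numbers:

```python
# Simple alpha-beta message-cost model: t(n) = latency + n/bandwidth.
# For the small messages typical of MD halo exchange, latency
# dominates, so a cluster NIC stack can lose badly to SMP/NUMA even
# at comparable peak bandwidth. Figures are illustrative only.

def msg_time(nbytes, latency_s, bandwidth_bps):
    return latency_s + nbytes / bandwidth_bps

nbytes = 1024  # a small halo-exchange message
cluster = msg_time(nbytes, latency_s=60e-6, bandwidth_bps=100e6)
smp     = msg_time(nbytes, latency_s=2e-6,  bandwidth_bps=400e6)
print(cluster / smp)  # the cluster pays over 10x more per message
```

Under these assumptions the raw bandwidth gap is only 4x, but the per-message cost gap is an order of magnitude - entirely down to software and NIC latency.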
Re:Names? Slogans? (Score:2)
Re:Supercomputers are dead (Score:2)
And once more: Big Iron still has its place. It's not as prevalent as it used to be, but standard PC architecture has design limits that prevent it from ever being usable for certain tasks (basically, anything that requires low-latency, high-bandwidth I/O). The supercomputing center next door just got a new Big Iron that's now the most powerful computer in Europe. It cost about $30,000,000. Look at the specs [lrz-muenchen.de], especially the I/O part. A Beowulf for the same money would be a huge, mostly useless pile of junk when faced with the kind of problems this machine is designed to solve.
Re:Supercomputers are dead (Score:3)
Spoken like someone obviously unfamiliar with the technology. IBM is aiming for a new petaflop box - how many P-III/1GHz machines do you want to string together to match that?
On top of that, what are you going to connect the nodes together with? I don't think gigabit ethernet is going to handle the load you're putting on it. Ultra-high-speed bus handling is part of the magic that makes supercomputers truly super.
Beowulf is neat, and I'd love to experiment with it myself. It can even potentially replace a mainframe for transaction processing, although what OS you'd run for the task is beyond me. However, supercomputers are in their own unique class, and a few (million) Linux boxen aren't going to be able to compete in power, cost, usability, maintainability, or any other arena that requires the true Big Iron.
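The petaflop arithmetic above is easy to make concrete. This back-of-the-envelope sketch assumes roughly 1 GFLOP/s peak per node (generous for a P-III/1GHz) and, even more generously, ignores all interconnect and efficiency losses:

```python
# Back-of-the-envelope: how many ~1 GFLOP/s nodes make a petaflop?
# Assumes perfect scaling with zero communication overhead, which
# no real cluster achieves.

peak_per_node = 1e9  # FLOP/s, assumed peak for one node
target = 1e15        # 1 petaflop
nodes = target / peak_per_node
print(int(nodes))  # a million nodes before any parallel overhead
```

A million commodity boxes, before a single byte of inter-node traffic is accounted for - which is the point: at that scale the interconnect, not the CPUs, is the machine.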
Re:Supercomputers are dead (Score:4)
As a concept, supercomputing is and will remain tremendously viable and important - we'll always (well, OK, for the foreseeable future) be able to engineer fatter, lower-latency interconnects within a box than between boxes over some sort of network. Still, the network guys are closing the gap, so much so that many former supercomputer customers are now using clusters of one sort or another to replace supercomputers.
As a business proposition, supercomputing is suicidal. The tidal wave of Moore's law and decently performing commodity hardware and software is firmly against you, not to mention these things take a LOT of man-years to develop and sort out, making it much harder to present customers with a value proposition that makes them willing to part with megabucks. Years ago, it was true that custom silicon gave you a real advantage at the high end - but no longer. The bottom line is that lower cost solutions are becoming "good enough" for the hard problems most people are interested in, so the market is shrinking even as development and manufacturing costs soar. (Remember how much of the economic viability of technology is determined by volume, something these guys will never have, almost by definition.)
Cray has been a walking corpse for years (seriously, go back and see when was the last year they made money - I'll bet it was nearly a decade ago) - even most of their industrial customers have abandoned supercomputers, leaving only a few high end science projects and the spooks as their market. My guess is that if it weren't for the NSA et al, these guys would have been out of business 5-10 years ago.
Tera not a new company, just a dogged one (Score:4)
I was one of the designers of the CM-5 supercomputer at Thinking Machines Corp. back in the '80s. One of the terrible things about designing a supercomputer is that it takes so long that Moore's law has moved the underlying technology out from under you before you're done. We had to throw away a mostly-completed microprocessor design when it became clear that we would be beaten by commodity microprocessors, and re-engineer the rest of the system to use a big pile of commodity SPARC chips instead of a big pile of our custom whiz-bangs. It was an enormous waste and really bad for morale (mine, anyway). So we ended up slipping one generation.
I find it hard to imagine what it must be like to work at Tera. I read Burton Smith's first paper, proposing what has now become the MTA, in 1985. Computing has gotten a thousand times faster since then, and they're still plugging away. Imagine the number of designs they have had to throw away! They've probably had to redesign the system every 18 months, just to stay even, without ever being able to build it, until they managed to deploy one machine last July. It's an incredibly long slog, especially in an industry like computers that rewards sprinters, not sloggers.
Names? Slogans? (Score:4)
Anyone else see a vicious cycle?
Whaddya think they're going to use for a slogan:
Cray: As cool as we used to be?
Cray: Not your mother's Supercomputer?
Tera: We put the T in T3E?
Cray: 100% SGI-free?
Cray: More expensive than a failed Presidential campaign. (starring John McCain)?
Cray: We were chewing numbers when Beowulf was just a gleam in some cheapskate's eye?
Cray: Buy Big Blue? We've got chunks of IBM in our stool!?
Cray: Stop drooling and buy one?
Cray: Just plain sexy.
Tera's multithreaded architecture (Score:5)
The bottleneck is inter-processor communication. If all you are doing is trying to brute-force a cipher, the processors are almost independent and easily reach the theoretical aggregate performance figures. But if you are doing complex physical simulations, you can end up waiting for data most of the time and using only a fraction of your theoretical parallel power.
This is especially true of things that don't break down nicely into regular blocks. For a wind tunnel simulation you can assign different sections of the tunnel to different processors, and each one communicates only with its neighbors with nice, predictable access patterns. For something like a simulation of a car crash this will not work too well, though.
Tera's architecture is based around a high throughput communication fabric. The cost of this architecture is the latency - it can take many cycles from the time you request a piece of data stored in another processor until it traverses the switch fabric and the result comes back. To get around this problem each processor runs many threads with very fine granularity - it switches to a different thread every instruction cycle. By the time the next instruction for the same thread is scheduled for execution the results of a remote memory access are already available, without wait states. Each of these "virtual processor" threads is not particularly fast but the total throughput is very high.
This presents the programmer with a simple shared-memory multithreaded programming model. No need to reengineer your program to a specific message passing architecture supported by the target machine.
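The latency-hiding idea described above can be captured in a few lines. The latency and thread counts below are illustrative, not Tera's actual specs: if a remote load takes L cycles and the processor issues one instruction from a different ready thread every cycle, then with at least L ready threads the result is back before the issuing thread comes around again, so it never stalls.

```python
# Sketch of the MTA latency-hiding idea: round-robin issue among
# hardware threads. A thread stalls only if the memory latency
# exceeds the number of cycles spent running the other threads.
# Latency and thread counts are illustrative, not Tera's specs.

def stall_cycles_per_access(mem_latency, nthreads):
    """Cycles wasted waiting per remote access, with one-instruction-
    per-cycle round-robin switching among nthreads threads."""
    return max(0, mem_latency - nthreads)

print(stall_cycles_per_access(mem_latency=128, nthreads=16))   # 112: still stalls
print(stall_cycles_per_access(mem_latency=128, nthreads=128))  # 0: fully hidden
```

Each individual thread crawls, but as long as there are enough of them the pipeline never waits on memory - which is exactly the trade the MTA makes: single-thread speed for aggregate throughput.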
----