Silicon Graphics

Tera Completes Acquisition of Cray

dewey writes "Tera, a new kid on the supercomputing block, has successfully completed its acquisition of the supercomputing pioneer, Cray (formerly owned by SGI). The new company will take Cray's name. Tera has a press release from a month ago that spells out some of the details of the deal. "
  • by Anonymous Coward
    Beowulf clusters of *any* machine can work if the particular problem you're trying to solve is embarrassingly parallel. If you can break your problem up into 14 sections and send one to each node of the cluster, good for you. But if those 14 sections rely on the ongoing results of the other sections, then the latency involved in communicating between nodes brings the efficiency of the entire system close to nil (I'm talking Real-World Peak Performance, not theoretical).

    Big Iron supercomputers such as the ones made by IBM, Cray/Tera, SGI, etc., have high bandwidth and low latency interconnects between processors/nodes for those jobs that *can* run on more than one processor, but still rely on each other. These interconnects, oh by the way, aren't as cheap as NICs either.

    As a followup, let's say I have a 200-processor Cray T3E. What would be smarter: buying 4 more 200-processor T3Es and linking them in a Beowulf cluster, or buying 800 T3E Alphas and sticking them in my original box? Which solution is faster? (A toy illustration of the embarrassingly-parallel case is sketched below.)
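
    What "embarrassingly parallel" amounts to in practice is something like the following toy Python sketch; the 14 chunks and the dummy workload are made-up illustrations, not anyone's real code:

    # Each chunk is fully independent, so there is no inter-node traffic
    # and the speedup is close to linear in the number of workers.
    from multiprocessing import Pool

    def work_on_chunk(bounds):
        lo, hi = bounds
        # Stand-in for real work, e.g. searching one slice of a keyspace.
        return sum(1 for k in range(lo, hi) if k % 7 == 0)

    if __name__ == "__main__":
        chunks = [(i * 1000000, (i + 1) * 1000000) for i in range(14)]
        with Pool(processes=14) as pool:      # pretend each worker is a node
            partials = pool.map(work_on_chunk, chunks)
        print(sum(partials))                  # combine results once, at the end

    If each chunk instead needed the running results of the others, every step would turn into a round of messages, and the interconnect latency would dominate.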
  • by Anonymous Coward
    There's always a limiting factor on the performance of an application on a given computer system. It might be floating-point or integer throughput on the processors, or the effective instruction issue rate, or memory hierarchy bandwidth, or I/O bandwidth, or something else. But there's always something, since serious applications take more than zero time to run.

    Apps run faster when their particular bottlenecks get widened, at least up to the point where some other constraint becomes the bottleneck. Some bottlenecks are more costly to widen than others. You don't want to build a machine with (say) huge memory bandwidth if some other factor will then limit performance and cause those expensive wires to stand idle.

    "Supercomputing" has become a hard term to define. In my book, you're supercomputing if your app's performance bottleneck is bandwidth to its big data sets, you're running on a system with huge amounts of bandwidth, and all of the other potential bottlenecks in the system have been overengineered so that that big bandwidth is still the most common bottleneck. This kind of computing is expensive, so the app had better be important to somebody.

    This is why vectors are still cool after all these years. VLIW, superscalar, etc. are all fine, but they don't scale up to saturate big per-processor bandwidths without hitting some other limiting factor first. (A back-of-the-envelope version of the bandwidth argument is sketched below.)
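
    The bandwidth-vs.-compute tradeoff above boils down to a one-line min(); the numbers in this little Python sketch are invented for illustration, not measurements of any real machine:

    # Whichever resource runs out first caps the achievable rate
    # (the same idea later popularized as the "roofline" picture).
    peak_flops     = 4.0e9    # what the functional units could do, FLOP/s
    mem_bandwidth  = 1.0e9    # bytes/s the memory system actually delivers
    flops_per_byte = 0.25     # arithmetic intensity of a streaming kernel

    achievable = min(peak_flops, mem_bandwidth * flops_per_byte)
    print("compute limit:   %.1e FLOP/s" % peak_flops)
    print("bandwidth limit: %.1e FLOP/s" % (mem_bandwidth * flops_per_byte))
    print("achievable:      %.1e FLOP/s" % achievable)  # memory-bound here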

  • I think you could safely take "beowulf cluster" and "about the problems of massively parallel computing" out of your first sentence, and probably be even closer to the truth.


    --
  • This may be true, but even today, after being neglected for a few years, there is nothing that will compete with a Cray. Intel's proprietary prototype will indeed beat it, but it is really just a cluster of computers anyway; for more general tasks it would not perform well.

    The only thing that really competes with Cray on the high end is the SGI O2k... hmm, wonder where they got that technology?

    Binder
  • Nice idea... but now that Beowulf clusters are available en masse and target the same market as Cray, I suspect that they weren't interested in Cray for the supercomputing side. Maybe they're planning on making systems which supercool conventional chips and package them for the home user? That's a HUGE untapped market if they can hit the right price point.
  • Tera gets Cray for $15M cash (plus stock and loans),
    a company SGI bought for $780M.
    How the mighty have fallen!

  • The new company will take Cray's name. Tera has a press release from a month ago that spells out some of the details.

    The details, spelled out: T-E-R-A
    --

  • Intel dropped out of the supercomputer market a while ago, right after they produced ASCI Red. Sure, their processors are used by other companies to build supercomputers, but the same is true for the Compaq Alpha, and soon for AMD's Athlon.

    There are some new kids on the supercomputer block, Linux-using kids, even. Too bad Slashdot gives the old, stodgy, non-open-source supercomputing companies lots more press ;-)


  • For a wind tunnel simulation you can assign different sections of the tunnel to different processors and each one communicates only with its neighbors with nice, predictable access patterns. For something like a simulation of a car crash this will not work too well, though.


    No, car crashes are an example where clusters do just fine. While the Tera does do a bit better with load balancing on codes like that, cluster hardware is so cheap that you can just buy lots more cluster and not worry much about load balance. That's the problem that has already kept Tera's MTA from succeeding in the market: it's not cost-effective for any problem.


  • The only thing that really competes with Cray on the high end is the SGI O2k... hmm, wonder where they got that technology?


    Uh, SGI developed the O2k before they bought Cray, and there is lots of competition on the high end. For example, for the FSL weather forecasting bid, SGI bid an O2k, but the winner was a cluster running AlphaLinux [hpti.com].
  • First off - yes, they're keeping the Cray name. They'll be called "Cray Inc."

    You're getting a little confused with the CRAY-4. That was actually a different company - Cray Computer Corp. (as opposed to Cray Research) - which went bankrupt in 1995.

  • Well... at least on the power supplies. On the Tera, they run OpenBSD/mvme68k
  • Intel has been out of supercomputing for some time. Their TFLOPS machine was the last, and it was not a "product" (a one-time prototype). Their last product was the Paragon XP/S.
  • The real reason that supercomputing appears to be dying is that there is no market. There is only one actual customer - the U.S. government. Many of what used to be supercomputing applications can now be served with standard commercial systems. Does anyone know if IBM really makes any money on their supercomputer systems? I bet not. Great vehicles for doing R&D, however.
  • A Beowulf cluster of Cray machin... oh, wait, that's the whole reason why they are in trouble.
  • The new company will take Cray's name.

    Ok, I admit I would be doing everything to get rid of a name like 'Tera' too, but this is insane: taking over another company just because you're not creative enough to make up a name of your own. Such people just shouldn't have access to these amounts of money.
  • Nice idea... but now that Beowulf clusters are available en masse and target the same market as Cray

    Not by a long shot. There are a lot of problems for which Beowulf clusters are totally worthless. Real supercomputing consists of converting CPU-bound problems to IO-bound ones, and solving them on appropriate hardware. And IO is what x86 hardware is worst at.

    There is still a very healthy market for real supercomputers, and there are no signs of erosion.

  • Cray does still mean supercomputer to many, especially those who now make budgetary decisions. While low-cost clusters are a viable alternative, the more tightly coupled architectures have advantages for some applications. Memory latency and bandwidth have been problems for a long time; Tera's MTA is just one more approach to getting around those problems.

    And again, the boss-weenies don't want to worry about support; their jobs are on the line. Most of the end users don't want to worry about making the @#$%^&$ computer work; they just want to switch it on and compute. Buying packaged systems is OK by them, whether those systems are Tera's or a conventional cluster.

  • Question for you: as a TERA shareholder, did you really gain solid intellectual property from Cray? Was it worth it, at the low-level? Is there a solid mind share with SGI now? Are you thinking of a broader market strategy than before? Thanks!
  • Tera just doesn't sound as cool. I don't want Scarlett O'Hara crying over my super-computer.

    tcd004
    LostBrain [lostbrain.com]

  • Who gives a hoot about Cray anymore? They didn't go to multiprocessor systems early enough and lost it all. Now IBM and Intel are the big supercomputer powers. The old Crays were cool and ahead of their time, but they haven't done anything meaningful lately.
  • A G4 is as fast as an old Cray (duh, I know); Beowulf clusters running on Alpha processors and IBM SPs are the supercomputers now.... I was just curious: if you used a bunch of G4s running Yellow Dog Linux, optimized as a Beowulf cluster, what would you call it? A G-cluster, a G-spot, or a G-string? You are a geek if your network interface is in promiscuous mode more than you are. SP1
  • That would be a nice name! Yeah!
  • The article says that the combined company will have the best of vector processing from Cray and MTA from Tera. This, I feel, is something that could have happened much sooner and didn't really need a merger of companies. Supercomputers are no longer a genie's lamp; they are pretty commonplace, and there are lots of ways of building them. Heck, even PCs are now multiprocessor, and as another user has proposed, a Beowulf cluster might be a solution. While I reserve comment on that, something similar has been done by CDAC (the Centre for Development of Advanced Computing) [cdac.org.in] in India: after the US government banned exports of Crays in the 1980s, they developed a supercomputer, PARAM, within 3 years using 486 chips (which of course the US government had not banned the export of) by exploiting the massively parallel concept.
  • Thanks for the elucidation! Just a couple of comments: 1) Morale is excellent. 2) First multiprocessor (2P) machine was placed at SDSC in April of 1998, not last year. Last July was when the 8 processor machine was accepted. The 4P had already been there for 6 months.
  • Y'all are still using the old SGI logo. The new one suits both you and the current SGI better.
  • I agree that a cluster does not make a supercomputer - but claiming that a supercomputer has to use shared memory is something different. A supercomputer can range from a single-processor vector machine, to a shared-memory machine with a moderate number of processors (e.g. the good old Y-MP), to an MPP shared-nothing architecture (optionally with vector units). The difference between clusters and MPP shared-nothing architectures is the underlying communication. Clusters usually utilize off-the-shelf equipment, while those MPP beasts use a specialized communications architecture (keywords: meshes, hypercubes, fat trees, full crossbars (if you have the money), etc.). So please don't confuse multiprocessor shared-memory machines with supercomputers. (A rough hop-count comparison of a few of these topologies is sketched below.)
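
    To put rough numbers on the "underlying communication" point, here is a small sketch (in Python, with arbitrary example node counts) of worst-case hop counts for a square 2-D mesh, a hypercube, and a full crossbar, using the standard diameter formulas:

    import math

    def mesh_diameter(n):        # square 2-D mesh, sqrt(n) x sqrt(n) nodes
        side = int(math.sqrt(n))
        return 2 * (side - 1)

    def hypercube_diameter(n):   # n assumed to be a power of two
        return int(math.log2(n))

    def crossbar_diameter(n):    # full crossbar: any node is one hop away
        return 1

    for n in (64, 256, 1024):
        print(n, mesh_diameter(n), hypercube_diameter(n), crossbar_diameter(n))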
  • I hate to say it, but Cray had been in trouble for more than a decade. Already in the early '90s Cray had problems keeping up with the vector processors from the big three Japanese companies (NEC, Fujitsu and Hitachi) - at that time their only advantage was their lead in software tools - in raw processing power the NEC SX3 (I am not even talking about the Monte4 variant) ran circles around the Y-MPs. So whenever someone (in the US mainly) dared to order a vector machine from someone other than Cray, they would start whining about "dumping" and unfair competition.

    The other problem which emerged for Cray was the advent of high-performance workstations. In some cases these superscalar workstations could outperform any vector processor (not all algorithms vectorize), so their machines were also attacked from the low end. To make things worse, the advent of MPP systems (sometimes using these aforementioned processors) put additional pressure on Cray from the high end. But worst of all: the cold war ended, and the majority of supercomputer customers did not have that much money to spend anymore. The remaining money went into the really hot (at that time) MPP machines (TMC's CM2 and CM5, Intel's Hypercube and Paragon, ...). But even TMC went out of business (although I still think the CM2 was one of the coolest machines of its time ;-) ).

    So finally (having lost the race to build the fastest vector processor) Cray came up with the T3D (the E came later). But using off-the-shelf CPUs doesn't make these machines any cheaper - now the communication is much more complex, the software (just to run those babies) is something entirely new, and finally: not all problems can be parallelized!

    IMHO nowadays supercomputers can only be built by the big players in the market, who can afford to lose money on them: write off some to marketing and some to R&D. Cray would have been better off if they had been bought by one of the big players. I wish Tera/Cray good luck - but I am afraid it will be just another life extension for one of the most influential computer manufacturers of the past. Without serious amounts of money this will just defer the inevitable... Just a note: supercomputing had been in trouble since the late '80s - anyone remember the ETA10?
  • From Tera's site:

    > The architecture solves most, if not
    > all, of the problems that have inhibited
    > running applications in parallel on multiple
    > processors, such as irregular memory access
    > patterns, synchronization, and load balancing.

    That's right, folks... no longer do you need
    to synchronize your functions or watch
    for threading collisions... Tera's
    revolutionary new MTA takes the thinking
    out of coding!
  • When I was job hunting in the early eighties,
    the corporate status symbol was to have a Cray
    super-computer. These were polygonal towers
    (good shape for minimizing slow wire lengths)
    with an encircling bench at the base. The job
    interviewers would invite you to sit on the bench
    as the ultimate Geek perk.

    A decade later, on a trip to China, companies
    would show me their Galaxy supercomputers,
    identical in shape to the Cray-1, but a little
    larger. (Probably identical circuits too.)

  • They've been around since the mid-80s,
    "old-age" in the computer industry.
    Someone there has deep pockets,
    because they've only shipped a couple of
    machines for revenue according to their reports.

  • Tera just doesn't sound as cool.
    I thought that was partly the point of the takeover. Spend a few bucks and get a great name, even if it has been abused of late. As to why they might want Cray's Research, In 1994 the CRAY-4 was operating at a clock rate of 1 GHz before Seymour Cray had to abandon it.
    Here's hoping Tera/Cray at least re-evaluate some of his designs.
  • That machine was under development by the Cray Computer Corporation (CCC), which went bankrupt. What Tera bought was Cray Research. Two different companies, and Cray Research never bought the remainders of CCC. The two companies had split when they decided that they didn't have the resources to develop two new machines at the same time. Seymour Cray went with the new company to develop the Cray-4.
    Cheers! I was not aware of that.
  • Apparently, a lot of people here like to shout "beowulf cluster" and don't understand much about the problems of massively parallel computing.

    The bottleneck is inter-processor communication. If all you are doing is trying to brute force a cipher the processors are almost independent and easily reach the theoretical aggregate performance figures. But if you are doing complex physical simulations you can end up waiting for data most of the time and using only a fraction of your theoretical parallel power. Well spoken!

    We've just been through an evaluation cycle where we pitted all major HPC systems and most Intel/Linux clustering solutions against each other. While I can't post numbers (yet), the results are that Intel/Linux clusters come out at the bottom, mostly because of high latencies in the interconnect. I think it is not only the hardware per se, but also immature drivers.

    We looked at Myrinet, Giganet and SCI. All of them were pretty close in performance, but don't come near more established SMP, NUMA or clustering technologies under traditional Unices. Our benchmark is molecular dynamics code with a reasonably large solvated protein.

  • Announcing the 2002 Cray Research model T3r4. If the name doesn't make you laugh, you should stop reading now. You have an obvious inability to truly appreciate the power of our newest offering. Go find someone who laughs and ask him to hit you with a clue-by-four. Good! The new T3r4 comes in three standard massively para.....

  • The age of "big iron" has passed, and those few companies that continue to try and fight this trend are doomed to an ignoble failure.

    And once more: Big Iron still has its place. It's no longer as prevalent as it used to be, but standard PC architecture has design limits that prevent it from ever being usable for certain tasks (basically, anything that requires low-latency, high-bandwidth I/O). The supercomputing center next door just got a new Big Iron that's now the most powerful computer in Europe. It cost about $30,000,000. Look at the specs [lrz-muenchen.de], especially the I/O part. A Beowulf for the same money would be a huge, mostly useless pile of junk when faced with the kind of problems this machine is designed to solve.

  • Spoken like someone obviously unfamiliar with the technology. IBM is aiming for a new peta-flop box - how many P-III/1GHz machines do you want to string together to:

    1. Achieve the 1 peta-flop goal
    2. Overcome the massive computational overhead in networking that many nodes
    3. Store, cool, and power the cluster

    On top of that, what are you going to connect the nodes together with? I don't think gigabit Ethernet is going to handle the load you're putting on it. Ultra-high-speed bus handling is part of the magic that makes supercomputers truly super. (A back-of-the-envelope node count for point 1 is sketched below.)

    Beowulf is neat, and I'd love to experiment with it myself. It can even potentially replace a mainframe for transaction processing, although what OS you'd run for the task is beyond me. However, supercomputers are in their own unique class, and a few (million) Linux boxen aren't going to be able to compete in power, cost, usability, maintainability, or any other arena that requires the true Big Iron.
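
    For point 1, the arithmetic alone is sobering. A rough back-of-the-envelope in Python, where the per-node throughput and the parallel efficiency are both assumptions (and generous ones):

    target_flops = 1.0e15   # 1 petaflop
    node_flops   = 1.0e9    # assume ~1 GFLOP/s usable per 1 GHz P-III node
    efficiency   = 0.5      # assume half of peak survives the networking overhead

    nodes = target_flops / (node_flops * efficiency)
    print("%d nodes" % nodes)   # ~2,000,000 commodity boxes to store, cool and power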

  • by dublin ( 31215 ) on Tuesday April 04, 2000 @08:21AM (#1152334) Homepage
    This discussion is confusing two important aspects of supercomputing: the viability of the supercomputing concept, and the viability of supercomputer companies.

    As a concept, supercomputing is and will remain tremendously viable and important - we'll always (well, OK, for the foreseeable future) be able to engineer fatter, lower-latency interconnects within a box than between boxes over some sort of network. Still, the network guys are closing the gap, so much so that many former supercomputer customers are now using clusters of one sort or another to replace supercomputers.

    As a business proposition, supercomputing is suicidal. The tidal wave of Moore's law and decently performing commodity hardware and software is firmly against you, not to mention these things take a LOT of man-years to develop and sort out, making it much harder to present customers with a value proposition that makes them willing to part with megabucks. Years ago, it was true that custom silicon gave you a real advantage at the high end - but no longer. The bottom line is that lower cost solutions are becoming "good enough" for the hard problems most people are interested in, so the market is shrinking even as development and manufacturing costs soar. (Remember how much of the economic viability of technology is determined by volume, something these guys will never have, almost by definition.)

    Cray has been a walking corpse for years (seriously, go back and see when they last made money - I'll bet it was nearly a decade ago) - even most of their industrial customers have abandoned supercomputers, leaving only a few high-end science projects and the spooks as their market. My guess is that if it weren't for the NSA et al., these guys would have been out of business 5-10 years ago.
  • Contrary to what it says at the top of this page, Tera is not a 'new kid on the supercomputing block'. They've been around since 1987, and their design has been around since before that.

    I was one of the designers of the CM-5 supercomputer at Thinking Machines Co. back in the '80s. One of the terrible things about designing a supercomputer is that it takes so long, Moore's law has moved the underlying technology out from under you before you're done. We had to throw away a mostly-completed microprocessor design when it became clear that we would be beaten by commodity microprocessors, and re-engineer the rest of the system to use a big pile of commodity SPARC chips instead of a big pile of our custom whiz-bangs. It was an enormous waste and really bad for morale (mine, anyway). So we ended up slipping one generation.

    I find it hard to imagine what it must be like to work at Tera. I read Burton Smith's first paper, proposing what has now become the MTA, in 1985. Computing has gotten a thousand times faster since then, and they're still plugging away. Imagine the number of designs they have had to throw away! They've probably had to redesign the system every 18 months, just to stay even, without ever being able to build it, until they managed to deploy one machine, last July. It's an incredibly long slog, especially for an industry like computers that rewards sprinters, not sloggers.

  • by technos ( 73414 ) on Tuesday April 04, 2000 @06:50AM (#1152336) Homepage Journal
    Cray --> SGI/CRAY --> SGI --> Tera --> Cray

    Anyone else see a vicious cycle?

    Whaddya think they're going to use for a slogan?

    Cray: As cool as we used to be?
    Cray: Not your mother's Supercomputer?
    Tera: We put the T in T3E?
    Cray: 100% SGI-free?
    Cray: More expensive than a failed Presidential campaign (starring John McCain)?
    Cray: We were chewing numbers when Beowulf was just a gleam in some cheapskate's eye?
    Cray: Buy Big Blue? We've got chunks of IBM in our stool!?
    Cray: Stop drooling and buy one?
    Cray: Just plain sexy.
  • by XNormal ( 8617 ) on Tuesday April 04, 2000 @07:16AM (#1152337) Homepage
    Apparently, a lot of people here like to shout "beowulf cluster" and don't understand much about the problems of massively parallel computing.

    The bottleneck is inter-processor communication. If all you are doing is trying to brute force a cipher the processors are almost independent and easily reach the theoretical aggregate performance figures. But if you are doing complex physical simulations you can end up waiting for data most of the time and using only a fraction of your theoretical parallel power.

    This is especially true of things that don't break down nicely into regular blocks. For a wind tunnel simulation you can assign different sections of the tunnel to different processors and each one communicates only with its neighbors with nice, predictable access patterns. For something like a simulation of a car crash this will not work too well, though.

    Tera's architecture is based around a high throughput communication fabric. The cost of this architecture is the latency - it can take many cycles from the time you request a piece of data stored in another processor until it traverses the switch fabric and the result comes back. To get around this problem each processor runs many threads with very fine granularity - it switches to a different thread every instruction cycle. By the time the next instruction for the same thread is scheduled for execution the results of a remote memory access are already available, without wait states. Each of these "virtual processor" threads is not particularly fast but the total throughput is very high.

    This presents the programmer with a simple shared-memory multithreaded programming model. No need to reengineer your program for a specific message-passing architecture supported by the target machine. (A toy model of the every-cycle thread switching is sketched below.)
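
    A toy model of that every-cycle thread switching (my own sketch in Python, not Tera's code, with a made-up remote-memory latency) shows how enough threads keep the pipeline busy:

    LATENCY = 8     # assumed cycles before a remote load's result is usable
    CYCLES  = 1000

    def pipeline_utilization(num_threads):
        ready_at = [0] * num_threads    # cycle at which each thread may issue again
        issued = 0
        for cycle in range(CYCLES):
            t = cycle % num_threads     # switch to a different thread every cycle
            if ready_at[t] <= cycle:
                issued += 1
                ready_at[t] = cycle + LATENCY
        return issued / float(CYCLES)

    for n in (1, 2, 4, 8, 16):
        # One thread sits mostly idle; at 8+ threads the latency is fully hidden.
        print("%2d threads -> utilization %.2f" % (n, pipeline_utilization(n)))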

    ----
