
Cray Introduces Adaptive Supercomputing 108

David Greene writes "HPCWire has a story about Cray's newly-introduced vision of Adaptive Supercomputing. The new system will combine multiple processor architectures to broaden applicability of HPC systems and reduce the complexity of HPC application development. Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'"
  • by dapantzman ( 595438 ) on Friday March 24, 2006 @03:31PM (#14990023)
    LJ had a good article on this a few months back.

    http://www.linuxjournal.com/node/8368/print [linuxjournal.com]
  • Unless 'computing power' is different from 'combined processor speed', I don't understand what Cray are up to here..

    Well yes, they are very different. Processor speed is clock rate, and it tells you precisely jack shit about how much work actually gets done. Computing power is better measured in operations per second, and integer and floating-point performance are typically measured separately. Even those raw numbers are usually pretty useless; hence benchmarks like SPECint and SPECfp, which supposedly exercise the CPU in a way closer to real-world use.
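    The distinction the comment draws can be seen in a minimal sketch (my own illustration, in Python for convenience): time a fixed number of floating-point operations and report delivered operations per second, a number that varies with workload and overhead even though the clock rate is constant.

    ```python
    # Rough sketch, not a real benchmark: clock rate alone says nothing about
    # delivered work, so we measure operations completed per second instead.
    import time

    def flops_estimate(n=1_000_000):
        """Time n multiply-adds and return an ops/second estimate."""
        x = 1.0000001
        acc = 0.0
        start = time.perf_counter()
        for _ in range(n):
            acc = acc * x + 1.0  # one multiply + one add per iteration
        elapsed = time.perf_counter() - start
        return (2 * n) / elapsed  # 2 floating-point ops per iteration

    ops_per_sec = flops_estimate()
    print(f"~{ops_per_sec:.2e} floating-point ops/sec (interpreter overhead included)")
    ```

    Real suites like SPECfp do this far more carefully, with workloads meant to resemble actual applications rather than a tight synthetic loop.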

  • by TubeSteak ( 669689 ) on Friday March 24, 2006 @03:38PM (#14990091) Journal
    The only difference I see is that they're relying on an intelligent compiler to decide which bits to send to which processing unit, but I'm not sure how much faith can be placed there.
    If you had read further, you would have noticed TFA talks about a new programming language called "Chapel".
    Chapel was designed as a language for rapid development of new codes. It supports abstractions for data and task parallelism, arrays (sparse, hierarchical, etc.), graphs, hash tables and so on.
    So they aren't relying on just a compiler, even though they are also going to support "legacy programming models."
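    To make the "abstractions for data parallelism" point concrete: the idea (sketched here in Python, not actual Chapel syntax) is that the programmer states *what* is applied to every element and the runtime decides *how* the work is partitioned across workers.

    ```python
    # Hypothetical illustration of a data-parallel abstraction, in the spirit
    # of what TFA attributes to Chapel -- this is NOT Chapel code.
    from concurrent.futures import ThreadPoolExecutor

    def stencil(x):
        """Some per-element computation."""
        return 0.5 * x + 1.0

    data = list(range(8))

    # The programmer expresses "apply stencil to every element"; the pool
    # decides how to split the work, preserving input order in the output.
    with ThreadPoolExecutor(max_workers=4) as pool:
        result = list(pool.map(stencil, data))

    print(result)
    ```

    The appeal for HPC is that the same high-level loop could, in principle, be mapped to scalar cores, a vector unit, or hardware threads without rewriting the application.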
  • by SillyNickName4me ( 760022 ) <dotslash@bartsplace.net> on Friday March 24, 2006 @03:47PM (#14990157) Homepage
    The story is interesting, but it's also full of near-bankruptcies, acquisitions, sell-offs, parent companies going under, and what not..

    The Cray we know now shares a name with the Cray that produced the famous supercomputers of old, and it has some nice technology of its own, but there the similarities stop.
  • Re:Good Motto (Score:3, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday March 24, 2006 @05:14PM (#14990809) Homepage Journal
    There are FPGAs over 250MHz now. There are times when such a beastie is useful, and times when it isn't. Not sure why the hell you'd want to put this in a commodity PC though. It couldn't possibly help more than a second processor, which would be cheaper - or a second core, which would be cheaper still.
  • Re:Buzz word. (Score:2, Informative)

    by scoobrs ( 779206 ) on Friday March 24, 2006 @05:32PM (#14990964)
    Did you RTFA at all?! The article is NOT about automatic parallelization by some special language. Most supercomputer customers are fully aware that writing applications which perform well on their supercomputer means writing some form of parallel code. The issue at hand is that some specialized problems run MUCH faster on one platform than another, whether that platform is primarily scalar, vector, threaded (hundreds of threads, not two), or even FPGA. The goal is an intelligent compiler that can recognize code segments that perform much better on another architecture and use it, across a single application, in a hybrid system. That's no small task!
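    The dispatch idea the comment describes can be caricatured in a few lines (my own toy sketch, not Cray's compiler): route regular, branch-free array math to a "vector" path and irregular, data-dependent code to a "scalar" path, inside one program.

    ```python
    # Toy sketch of hybrid dispatch -- an illustration of the concept only.
    import numpy as np

    def dispatch(segment, data):
        """Route a code segment to the execution style that suits it.

        `segment` is a hypothetical descriptor a compiler might produce:
        {"regular": bool, "scale": float}.
        """
        if segment["regular"]:
            # Long, branch-free loop: hand it to the vectorized path.
            return list(np.asarray(data) * segment["scale"])
        # Data-dependent control flow: stay on the scalar path.
        return [x * segment["scale"] if x > 0 else 0.0 for x in data]

    data = [1.0, -2.0, 3.0]
    print(dispatch({"regular": True, "scale": 2.0}, data))   # vectorized path
    print(dispatch({"regular": False, "scale": 2.0}, data))  # scalar path
    ```

    The hard part, as the comment says, is doing this recognition automatically and profitably across a whole application rather than on hand-labeled segments like these.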
  • Re:Old Story (Score:2, Informative)

    by LookoutforChris ( 957883 ) on Friday March 24, 2006 @07:31PM (#14991671) Homepage
    Just what I was going to say! Project Ultra-Violet [sgi.com] is what they're calling it.

    SGI has a 2nd generation product [sgi.com] based on this: RASC, a node board with 2 FPGA chips that integrates (with the same access to shared memory) with the rest of the machine's Itanium node boards.
  • by some damn guy ( 564195 ) on Saturday March 25, 2006 @12:42AM (#14992609)
    Cray already makes systems based on many thousands of Opteron processors. You can't beat them for scalar processing power. But what they also make, and still excel at, is specialized vector machines that can work alongside them. They're two good but different tools for different jobs. The improvement is to make the two even more integrated and more flexible.
