
Cray Introduces Adaptive Supercomputing

Posted by ScuttleMonkey
from the adapting-to-complexity dept.
David Greene writes "HPCWire has a story about Cray's newly-introduced vision of Adaptive Supercomputing. The new system will combine multiple processor architectures to broaden applicability of HPC systems and reduce the complexity of HPC application development. Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'"


  • Good Motto (Score:5, Insightful)

    by ackthpt (218170) * on Friday March 24, 2006 @03:04PM (#14989792) Homepage Journal

    Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'

    That's a good motto, but how often do you bend the will of your application, needs, or business to the limitations of the system? I've been sitting on something for a couple of weeks after telling someone, "You really should have accepted the information the other way, because this new way you want it is highly problematic (meaning: rather than pulling it with a simple SQL query, I'll have to write an app)."

    IMHO adapting to the needs of the user == customisation, which also == money. Maybe it's not a bad idea at that! :-)

    In certain cases, at run-time, the system will determine the most appropriate processor for running a piece of code, and direct the execution accordingly.

    This assumes, of course, that you have several processor types to choose from. If you don't, the answer is still 'throw more money at it, buy more hardware.'

    My head is still spinning from all the new buzzwords overheard at SD West 2006.
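The run-time dispatch idea quoted above can be sketched in a few lines. This is a hypothetical illustration, not Cray's actual scheduler or API: the trait names and backend labels below are mine, chosen only to mirror the scalar/vector/multithreaded/FPGA mix the article describes.

```python
# Hypothetical sketch of run-time processor selection: inspect a kernel's
# dominant trait and route it to the best-suited processor type.
# Trait names and backend labels are illustrative, not Cray's API.

def pick_backend(kernel):
    """Choose a processor type for a kernel based on its dominant trait."""
    if kernel.get("bit_level_ops"):         # e.g. sequence matching -> FPGA
        return "fpga"
    if kernel.get("regular_vector_loops"):  # dense linear algebra -> vector CPU
        return "vector"
    if kernel.get("irregular_memory"):      # pointer chasing -> multithreaded CPU
        return "multithreaded"
    return "scalar"                         # default: commodity scalar core

# A toy "application" described as a set of kernels with traits.
jobs = {
    "smith_waterman": {"bit_level_ops": True},
    "matmul": {"regular_vector_loops": True},
    "graph_walk": {"irregular_memory": True},
    "control_logic": {},
}

# The scheduler maps each kernel to a backend.
schedule = {name: pick_backend(traits) for name, traits in jobs.items()}
```

As the parent comment notes, this only helps if the machine actually contains those processor types; otherwise every kernel falls through to the scalar default.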

  • Re:Good Motto (Score:2, Insightful)

    by Kitsune818 (927302) on Friday March 24, 2006 @03:10PM (#14989846)
    They just left out the ending. It's really: 'The Cray motto is: adapt the system to the application - not the application to the system. Why? Because hardware costs more to change!'
  • by TubeSteak (669689) on Friday March 24, 2006 @03:26PM (#14989979) Journal
    After exhaustive analysis Cray Inc. concluded that, although multi-core commodity processors will deliver some improvement, exploiting parallelism through a variety of processor technologies using scalar, vector, multithreading and hardware accelerators (e.g., FPGAs or ClearSpeed co-processors) creates the greatest opportunity for application acceleration.
    So they're saying that instead of faster/more generalized processors, they want several specialized processors.

    Old ideas are new again.
  • And What If... (Score:4, Insightful)

    by Nom du Keyboard (633989) on Friday March 24, 2006 @03:46PM (#14990146)
    The new system will combine multiple processor architectures

    And what if I don't want multiple processor architectures, but instead just lots and lots of the single architecture my code is compiled for?

  • Re:And What If... (Score:4, Insightful)

    by flaming-opus (8186) on Friday March 24, 2006 @04:24PM (#14990428)

    The idea is that all the CPU types will be blades that use the same router, plug into a common backplane, and cable together the same way at the cabinet level. In all cases, I imagine there will be Opterons around the periphery, acting as I/O nodes and running the operating system. Then you plug compute nodes into the middle, where the compute nodes can be a bunch more Opterons, or vector CPUs, or FPGAs, or multithreaded CPUs. There will certainly be plenty of customers only interested in lots of Opterons on Cray's fast interconnect, and they just won't buy any of the custom CPUs.
  • Buzz word. (Score:3, Insightful)

    by mtenhagen (450608) on Friday March 24, 2006 @04:50PM (#14990624) Homepage
    While I must admit "Adaptive Supercomputing" sounds like a really cool buzzword, in practice the programmer will still need to adapt the application to the physical distribution of the system. Or are they going to dynamically rewire the switches?

    There have been several attempts (HPF, Orca, etc.) to automate parallelism, but most of them failed because a skilled programmer could create a much faster application within a few days. And remember that a 10% performance boost in these applications means thousands of dollars saved.

    So I suspect this is just a buzz word.
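The point about adapting code to the physical distribution can be made concrete. As a toy illustration (my own sketch, not code from HPF or Orca), here is the kind of data-layout decision a programmer makes by hand for a given machine, and that automatic parallelizers historically struggled to get right:

```python
# Toy sketch: block-distribute n array elements across p nodes, balanced
# to within one element. Choosing this layout (vs. cyclic or block-cyclic)
# is exactly the machine-dependent decision the comment above says
# programmers still make by hand.

def block_distribute(n, p):
    """Return per-node (start, end) half-open index ranges for n elements on p nodes."""
    base, extra = divmod(n, p)
    ranges, start = [], 0
    for rank in range(p):
        size = base + (1 if rank < extra else 0)  # first `extra` ranks get one more
        ranges.append((start, start + size))
        start += size
    return ranges
```

For example, `block_distribute(10, 3)` yields `[(0, 4), (4, 7), (7, 10)]`. Whether this beats a cyclic layout depends on the communication pattern and the interconnect, which is why a skilled programmer tuning for the actual hardware could beat the automated tools.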
  • Re:Buzz word. (Score:3, Insightful)

    by flaming-opus (8186) on Friday March 24, 2006 @06:10PM (#14991228)
    2 years behind in announcements, let's see who brings it to market first.

    Sadly, the answer is that it's not even a race. SGI brought forward their first step already, but won't get past that. You can now buy an FPGA blade to put in your Altix. While Cray is just now announcing a unified vision for this, they've already had their FPGA-based solution since they bought OctigaBay two years ago.

    As much as Cray is suffering financially, SGI is in much worse shape, with about $350 million in debt around their neck, which makes them an unlikely target for a buy-out, at least until they go through bankruptcy for a while. I doubt that SGI has any money to spend on long-term engineering efforts like a vector CPU. They hopped on the FPGA bandwagon because they could buy the parts from Xilinx, slap a NUMAlink on them, and stuff them into an Altix with relatively little investment. Thus far Cray has had a great deal of luck porting bioinformatics codes to the FPGA in the XD1 (Smith-Waterman alignment, if anyone cares). This is a market much more in line with SGI's strengths and somewhat new for Cray, who is used to selling machines with an entry-level price of $2 million.

    In any case, it's the logical path forward for Cray's four product lines, even if no one combines vector, FPGA, and multithreaded processors. They all benefit from being paired with Opteron nodes, and from reducing the number of parts Cray has to maintain. SGI is coming from the other direction, which is to add processor types to their interconnect foundation. It's still a good idea, but it's probably more capital-intensive than what SGI is capable of these days.
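The Smith-Waterman alignment mentioned above is worth a concrete sketch. Below is a minimal pure-Python version of the local-alignment scoring recurrence; FPGA ports gain their speed by evaluating many of these cells in parallel in hardware. The scoring parameters are illustrative defaults of mine, not anything Cray shipped.

```python
# Minimal Smith-Waterman local alignment (score only, linear gap penalty).
# Scoring values are illustrative; real bioinformatics codes use
# substitution matrices and affine gaps.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between strings a and b."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            rows[i][j] = max(0,                       # local: never go negative
                             rows[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                             rows[i - 1][j] + gap,    # gap in b
                             rows[i][j - 1] + gap)    # gap in a
            best = max(best, rows[i][j])
    return best
```

Each cell depends only on its three upper-left neighbors, so the anti-diagonals can be computed in lockstep, which is what makes the algorithm such a natural fit for FPGA systolic arrays.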
  • by hpcanswers (960441) on Friday March 24, 2006 @09:31PM (#14992143)
    Cray and SGI have both been losing money recently as more users flock to clusters, which tend to be cheaper and more flexible. Now both of them are offering this "adaptability" position. SGI is moving in the direction of blades so customers can choose their level of computing power; Cray will soon have a core machine that customers can build out from. What's interesting to note is that both of them are ultimately selling Linux on commodity processors (Itanium for SGI and Opteron for Cray) plus a proprietary network and a few other bells and whistles. It seems unlikely they'll be able to compete with Linux Networx or even *gasp* IBM.
