
Cray Introduces Adaptive Supercomputing

Posted by ScuttleMonkey
from the adapting-to-complexity dept.
David Greene writes "HPCWire has a story about Cray's newly-introduced vision of Adaptive Supercomputing. The new system will combine multiple processor architectures to broaden applicability of HPC systems and reduce the complexity of HPC application development. Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'"
  • by Eightyford (893696) on Friday March 24, 2006 @03:13PM (#14989868) Homepage
    Cray always made the coolest looking supercomputers. Here's an interesting bit of trivia:
    The Cray T3D MC cabinet had an Apple Macintosh PowerBook laptop built into its front. Its only purpose was to display animated Cray Research and T3D logos on its color LCD screen.
  • Go AMD (Score:2, Interesting)

    by dotfucked.org (952998) on Friday March 24, 2006 @03:19PM (#14989923) Homepage
    "All of these platforms will use AMD Opterons for their scalar processor base.'

    I'm just loving the vendors picking up on AMD.

    Their idea seems very interesting in theory. It sounds like HPC's version of the math co-processor->crypto accelerator idea.

    And at least they are not basing the userland on Unicos :)
  • by gordyf (23004) on Friday March 24, 2006 @03:20PM (#14989939)
    It seems like the idea of combining multiple architectures into a single machine is already being done -- we have fast general-purpose CPUs (single- and dual-core x86 offerings from AMD and Intel) paired with very fast streaming vector chips on video cards, which can be used as coprocessors for other, non-graphical operations.

    The only difference I see is that they're relying on an intelligent compiler to decide which bits to send to which processing unit, but I'm not sure how much faith can be placed there. Cray certainly has a lot of supercomputing experience, but relying on compiler improvements to make or break an architecture doesn't have a good track record. I'm curious to see how they fare.
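
    A minimal sketch of that dispatch idea (every name and the size cutoff below are made up for illustration, and the "accelerator" path is just plain C so the example runs anywhere): a host routine decides whether a loop is big and regular enough to be worth shipping to the fast unit, and otherwise keeps it on the scalar CPU.

    #include <stdio.h>
    #include <stdlib.h>

    #define OFFLOAD_THRESHOLD 4096   /* made-up cutoff; below this, offload overhead wins */

    static void scalar_saxpy(size_t n, float a, const float *x, float *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* Stand-in for the "ship it to the fast unit" path.  A real system would
     * enqueue this on the vector/stream hardware; here it is plain C. */
    static void accelerator_saxpy(size_t n, float a, const float *x, float *y)
    {
        scalar_saxpy(n, a, x, y);
    }

    /* The dispatch decision, made at run time instead of inside the compiler. */
    static void saxpy(size_t n, float a, const float *x, float *y)
    {
        if (n >= OFFLOAD_THRESHOLD)
            accelerator_saxpy(n, a, x, y);   /* big, regular work: worth shipping out */
        else
            scalar_saxpy(n, a, x, y);        /* small or irregular work: keep it local */
    }

    int main(void)
    {
        enum { N = 10000 };
        float *x = malloc(N * sizeof *x);
        float *y = malloc(N * sizeof *y);
        for (size_t i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(N, 3.0f, x, y);
        printf("y[0] = %.1f\n", y[0]);       /* prints 5.0 */
        free(x);
        free(y);
        return 0;
    }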

  • by user24 (854467) on Friday March 24, 2006 @03:23PM (#14989959) Homepage
    It seems the bulk of the article is bemoaning how inefficient single processor systems are, offering Cray's planned adaptive model as a solution, but surely we've already seen the way forward in regard to supercomputing, and that is distributed single (or dual) processor machines. As stated at zakon.org [zakon.org], "SETI@Home launches on 17 May (2001) and within four weeks its distributed Internet clients provide more computing power than the most powerful supercomputer of its time"
    Surely the computing environment hasn't changed so dramatically in 5 years as to make this type of achievement redundant?

    Unless 'computing power' is different from 'combined processor speed', I don't understand what Cray are up to here... perhaps someone can enlighten me?
  • by aussersterne (212916) on Friday March 24, 2006 @03:29PM (#14990010) Homepage
    I always thought that Thinking Machines [wikipedia.org] deserved the award for most "I feel like I live in the future" cool in their computers with the CM5 [mit.edu].
  • by morgan_greywolf (835522) on Friday March 24, 2006 @03:34PM (#14990059) Homepage Journal
    Another interesting bit of trivia. Apple Macintoshes have been designed using a Cray [wikipedia.org]. What's even more ironic is that according to that same link, Seymour Cray used a Mac to design the next Cray.
  • by deadline (14171) on Friday March 24, 2006 @03:42PM (#14990121) Homepage
    Cray finally figured it out. I have been saying for years:

    HPC/Beowulf clusters are about building machines around problems

    That is why clusters are such a powerful paradigm. If your problem needs more processors/memory/bandwidth/data access, you can design a cluster to fit your problem and only buy what you need. In the past you had to buy a large supercomputer with lots of engineering you did not need. Designing clusters is an art, but the payoff is very good price-to-performance. I even wrote an article on Cluster Urban Legends [clustermonkey.net] that explains many of these issues.

  • by flaming-opus (8186) on Friday March 24, 2006 @04:16PM (#14990372)
    They really aren't relying on compiler improvements, so much as passing the code through their vectorizing compiler and a tool for generating their FPGA codes. If those two steps fail to optimize very much, you bail out and send the code to the general-purpose (Opteron) processors.

    You're being fairly pedantic about the computer architecture anyway. Yes, pairing multiple processor types together is not new, but most MPP supercomputers use identical node types.

    The gist of this story is simpler than it sounds. Cray has 4 product lines with 4 CPU types, 4 interconnect routers, 4 cabinet types, and 4 operating systems. They would like to condense this down. The first step is to reuse components from one machine to the next. There are distinct advantages to keeping the 4 CPU types for various problem sets, but most everything else could be multi-purpose. From the sounds of things, it's using the next generation of the SeaStar router in all of the machines. Thus you use the same router chips, cabling, backplane, and frame for all the products. This reduces the number of unique components Cray has to worry about. If they go to DDR2 memory on the X1 and MTA, that further simplifies things, though I suspect they won't.

    Well, once you share parts, why not make a frame with a bunch of general-purpose CPUs for unoptimized codes, and a few FPGA or vector CPUs for the highly optimized codes? It allows customers more flexibility, and introduces Cray's mid-range customers to the possibility of using the really high-end vector processors currently reserved for the high-end X1 systems. It's also a win for the current high-end customers. On the current X1 systems, you have these very elaborate processors running the user's optimized application, but the vector CPUs also end up running scalar code like utilities and the operating system. These are tasks the vector CPUs aren't terribly good at, and you're using a $40,000 processor to run tasks a $1000 Opteron will do better. Even if the customer isn't interested in mix-n-match codes on the system (which I'm skeptical any Cray customer really is), you probably want to throw a few dozen Opteron nodes into the X1's successor, just to handle the OS, filesystems, networking, and the batch scheduler.
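
    For illustration, here is the kind of distinction that compile step has to make (plain C, nothing Cray-specific): the first loop is what a vectorizing compiler loves; the second is the sort of code you'd bail out on and leave to the scalar Opterons.

    #include <stdio.h>
    #include <stddef.h>

    /* Friendly to the vectorizer: independent iterations, unit stride. */
    void scale(float *restrict out, const float *restrict in, float a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = a * in[i];
    }

    /* Hostile to the vectorizer: each iteration depends on the previous one,
     * and the next address comes from memory (pointer chasing). */
    struct node { float v; struct node *next; };

    float walk(const struct node *p)
    {
        float sum = 0.0f;
        while (p) {
            sum += p->v;     /* loop-carried dependency */
            p = p->next;     /* address unknown until the previous load finishes */
        }
        return sum;
    }

    int main(void)
    {
        float in[4] = {1, 2, 3, 4}, out[4];
        scale(out, in, 2.0f, 4);

        struct node c = {3.0f, NULL}, b = {2.0f, &c}, a = {1.0f, &b};
        printf("scale: %.1f %.1f %.1f %.1f  walk: %.1f\n",
               out[0], out[1], out[2], out[3], walk(&a));
        return 0;
    }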
  • by flaming-opus (8186) on Friday March 24, 2006 @04:19PM (#14990392)
    They used to, and the X1 still holds true to that. If you take the skins off, it is a marvel of stainless steel, plumbing, and just plain fantastic mechanical engineering. The XT3 and MTA, however, are just more rectangular racks. The XD1 is just a dull 3U rackmount.
  • Re:Good Motto (Score:5, Interesting)

    by dildo (250211) on Friday March 24, 2006 @04:19PM (#14990394)
    It is possible to build computers that are optimized for certain kinds of calculations.

    For example, Gerald Sussman of MIT (a computer scientist) and Jack Wisdom (a physicist) decided they wanted to do long-term modelling of the solar system's evolution. Long-term modelling of a multi-body system requires a fantastic amount of calculation. What is the best way to do it?

    Sussman and Wisdom came up with a crafty idea: build a computer that is specially configured at the hardware level to do the modelling. Sussman and his colleagues decided that with off-the-shelf parts they could build a computer that would be at least as capable of modeling this system as a supercomputer. The result was the Digital Orrery, a relatively cheap computer that gave great results. (It is now featured in the Smithsonian museum.)

    Think of it: if your computer is going to be doing the Fast Fourier Transform 6.02x10^23 times per day, why not build a superfast chip that does nothing but the FFT rather than expressing it in software? It's a pretty cool idea. I think this is the sort of thing that Cray claims to want to do with its motto.
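
    As a rough illustration (ordinary C99, not anything Sussman or Cray actually used), this is the kind of software baseline you would be freezing into silicon: a plain radix-2 FFT kernel.

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    /* Recursive radix-2 Cooley-Tukey FFT; n must be a power of two.
     * Result is left in buf; tmp is scratch of the same length. */
    static void fft_rec(double complex *buf, double complex *tmp, size_t n)
    {
        if (n < 2)
            return;
        for (size_t i = 0; i < n / 2; i++) {      /* split into even/odd samples */
            tmp[i]         = buf[2 * i];
            tmp[i + n / 2] = buf[2 * i + 1];
        }
        fft_rec(tmp, buf, n / 2);                 /* FFT of the even half */
        fft_rec(tmp + n / 2, buf + n / 2, n / 2); /* FFT of the odd half */
        const double pi = acos(-1.0);
        for (size_t k = 0; k < n / 2; k++) {      /* butterfly: combine the halves */
            double complex w = cexp(-2.0 * I * pi * (double)k / (double)n);
            buf[k]         = tmp[k] + w * tmp[k + n / 2];
            buf[k + n / 2] = tmp[k] - w * tmp[k + n / 2];
        }
    }

    int main(void)
    {
        double complex x[8] = {1, 1, 1, 1, 0, 0, 0, 0};   /* simple test signal */
        double complex scratch[8];
        fft_rec(x, scratch, 8);
        for (int k = 0; k < 8; k++)
            printf("X[%d] = %6.3f %+6.3fi\n", k, creal(x[k]), cimag(x[k]));
        return 0;   /* compile with: cc -std=c99 fft.c -lm */
    }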
  • by sketerpot (454020) <sketerpot@@@gmail...com> on Friday March 24, 2006 @04:20PM (#14990396)
    There are actually processors out there with compilers which can compile a few bottleneck C/C++ functions into hardware on an integrated FPGA. This expands the CPU instruction set in application-specific ways and can, in some cases, give absolutely enormous speedups.

    In other words, they're working on processors which are programmed in general-purpose languages, but which adapt their hardware to the specific program.
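
    As a sketch of what those tools chew on (the pragma below is invented for illustration; real C-to-FPGA flows each have their own way of marking a function), the candidate is typically a small, fixed-width, bit-twiddling kernel like a CRC step:

    #include <stdint.h>
    #include <stdio.h>

    /* #pragma synthesize_to_fpga  -- hypothetical marker, not a real directive.
     * A tight loop with a fixed trip count and no pointers is easy to unroll
     * into a hardware pipeline. */
    uint32_t crc32_step(uint32_t crc, uint8_t byte)
    {
        crc ^= byte;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
        return crc;
    }

    int main(void)
    {
        const uint8_t msg[] = "123456789";
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < sizeof msg - 1; i++)
            crc = crc32_step(crc, msg[i]);
        printf("crc32 = %08x\n", crc ^ 0xFFFFFFFFu);  /* CRC-32 check value: cbf43926 */
        return 0;
    }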

  • Re:Good Motto (Score:2, Interesting)

    by imgod2u (812837) on Friday March 24, 2006 @05:13PM (#14990802) Homepage
    Look on Xilinx's website. The Virtex-4s (although currently having supply problems) go up to 500MHz (though you probably don't want to run anything at that speed, considering that's probably the reg-to-reg limit). These things are better system-on-a-chip solutions than any ASIC could be, considering what they offer: integrated microprocessor, bus architecture, peripheral interfaces, and non-volatile and volatile memory, with enough pins (BGA package) to expand with off-chip components. Actel even offers mixed-signal FPGAs where you can have your analog and digital circuitry all programmed onto one chip. These things are the future.
  • Re:Good Motto (Score:2, Interesting)

    by imgod2u (812837) on Friday March 24, 2006 @05:16PM (#14990832) Homepage
    FPGAs aren't the golden answer currently. Most if not all FPGAs have issues with being used this way. They are programmable, but they're not made or intended to be reprogrammed in the field (despite their name). The majority have a programmable life of maybe 1000 flashes, with flash-based FPGAs (ProASIC from Actel, for instance) having a life of maybe 100 flashes. They're basically a poor man's ASIC more than anything else. The technology would have to improve significantly, in a much different direction than what the main FPGA market is targeted at, before they can be used as adaptive circuit components while live.
  • by flaming-opus (8186) on Friday March 24, 2006 @05:51PM (#14991107)
    The X1 processor is already a coprocessor. Not in the sense that it's on a different piece of silicon from the scalar unit, but that the vector CPU's instruction stream is distinct from the scalar unit's. In past Cray systems, some CPUs used the same functional units for the scalar unit and vector unit (T90), while some (J90) used distinct scalar units. The X1 is a vector unit bolted on the side of a MIPS scalar core, with synchronization logic and multi-ported register files to support multi-streaming. I don't know what latency there is for the scalar unit reading/writing a vector register, but I can definitely imagine a vector coprocessor linked to an Opteron with coherent HyperTransport. Maybe in Black Widow, rather than Cascade. Cray has been touting how much faster Black Widow will be at scalar codes than the X1.

    The trick, of course, is how do you get the Opteron and the vector processor to share access to memory? No way does HyperTransport have enough bandwidth to feed the vector unit. You don't want the scalar unit to have to read memory through HyperTransport through the vector unit. Do you give them distinct memories that are connected in some form of NUMA?
    The current X1 uses 32-channel RDRAM for 4 CPUs. Assuming the Black Widow processor is twice as fast, and only 1 vector CPU per node: to provide the same bandwidth per flop, you need at least 12-channel XDR memory, or 4-channel XDR2. The Opteron keeps going with dual-channel DDR2, and has one HyperTransport channel connecting the register files, one for NUMA memory, and one to talk to the SeaStar(s). Also, do the vector units and the scalar processors share the same interconnect controllers? You would want more than 1 SeaStar for each vector node, maybe 4.

    Hmm. I'm sure there are technical hurdles a-plenty, but it sounds good on paper.
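
    To make the method concrete (every number below is an assumption picked for illustration, not a Cray spec), the arithmetic is just "hold bytes per flop constant and count channels":

    #include <stdio.h>

    int main(void)
    {
        /* assumed baseline: an X1-class node */
        double base_gflops       = 12.8;   /* assumed per-CPU peak, GFLOP/s   */
        double base_channels     = 8.0;    /* 32 RDRAM channels / 4 CPUs      */
        double rdram_gb_per_chan = 1.6;    /* assumed GB/s per RDRAM channel  */

        double bytes_per_flop = (base_channels * rdram_gb_per_chan) / base_gflops;

        /* assumed successor: twice the flops, one vector CPU per node */
        double next_gflops     = 2.0 * base_gflops;
        double needed_gb_per_s = bytes_per_flop * next_gflops;

        double xdr_gb_per_chan = 2.1;      /* assumed GB/s per XDR channel    */
        printf("bytes/flop to preserve : %.2f\n", bytes_per_flop);
        printf("bandwidth needed       : %.1f GB/s\n", needed_gb_per_s);
        printf("XDR channels needed    : %.1f\n", needed_gb_per_s / xdr_gb_per_chan);
        return 0;
    }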
