Cray Introduces Adaptive Supercomputing

David Greene writes "HPCWire has a story about Cray's newly-introduced vision of Adaptive Supercomputing. The new system will combine multiple processor architectures to broaden applicability of HPC systems and reduce the complexity of HPC application development. Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'"
This discussion has been archived. No new comments can be posted.

  • Good Motto (Score:5, Insightful)

    by ackthpt ( 218170 ) * on Friday March 24, 2006 @03:04PM (#14989792) Homepage Journal

    Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'

    That's a good motto, but how often do you bend the will of your application, needs or business to the limitations of the system? I've been sitting on something for a couple of weeks after telling someone, "You really should have accepted the information the other way, because this new way you want it is highly problematic" (meaning: rather than knocking it out with a simple SQL query, I'll have to write an app).

    IMHO adapting to the needs of the user == customisation, which also == money. Maybe it's not a bad idea at that! :-)

    In certain cases, at run-time, the system will determine the most appropriate processor for running a piece of code, and direct the execution accordingly.

    This assumes, of course, that you have X number of processors to choose from. If you can't do it, the answer is still 'throw more money at it, buy more hardware.'

    my head is still spinning from all the new buzzwords overheard at SD West 2006.

    • Re:Good Motto (Score:2, Insightful)

      by Kitsune818 ( 927302 )
      They just left out the ending. It's really: "The Cray motto is: adapt the system to the application - not the application to the system. Why? Because hardware costs more to change!"
      • So true, but you gotta admit, we've seen much worse bullshit fly.

      • Oh, I don't know about that... The new IBM Blade Server-H has blades that can be hot-swapped between Xeon and PowerPC so you can reconfigure on the fly. Later this year they will be adding Cell blades to the mix for calculation-intensive requirements.
    • by TubeSteak ( 669689 ) on Friday March 24, 2006 @03:26PM (#14989979) Journal
      After exhaustive analysis Cray Inc. concluded that, although multi-core commodity processors will deliver some improvement, exploiting parallelism through a variety of processor technologies using scalar, vector, multithreading and hardware accelerators (e.g., FPGAs or ClearSpeed co-processors) creates the greatest opportunity for application acceleration.
      So they're saying that instead of faster/more generalized processors, they want several specialized processors.

      Old ideas are new again.
      • by sketerpot ( 454020 ) <sketerpotNO@SPAMgmail.com> on Friday March 24, 2006 @04:20PM (#14990396)
        There are actually processors out there with compilers which can compile a few bottleneck C/C++ functions into hardware on an integrated FPGA. This expands the CPU instruction set in application-specific ways and can, in some cases, give absolutely enormous speedups.

        In other words, they're working on processors which are programmed in general-purpose languages, but which adapt their hardware to the specific program.
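
        For a sense of what such a flow consumes, here is a purely illustrative bottleneck in plain C (not any particular vendor's toolchain input): a bit-counting hot loop whose fixed, branch-free structure is the kind of thing that maps well onto FPGA logic.

        #include <stddef.h>
        #include <stdint.h>

        /* Illustrative hot spot: count the set bits in a buffer. In software
         * this is a loop of shifts and masks; a C-to-gates tool could flatten
         * the body into a small adder tree clocked once per input word. */
        uint64_t popcount_buffer(const uint64_t *buf, size_t nwords)
        {
            uint64_t total = 0;
            for (size_t i = 0; i < nwords; i++) {
                uint64_t x = buf[i];
                x = x - ((x >> 1) & 0x5555555555555555ULL);
                x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
                x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
                total += (x * 0x0101010101010101ULL) >> 56;
            }
            return total;
        }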

    • This assumes, of course, that you have X number of processors to chose from. If you can't do it, the answer is still 'throw more money at it, buy more hardware.'

      Except that in this case, one of the options they are promoting is FPGAs - Field Programmable Gate Arrays - which can literally be adapted to the problem by reprogramming them. They are, in effect, an infinite number of processors to choose from.

      Look for FPGAs showing up as coprocessors on more prosaic AMD Opteron motherboards in the near future.
      • Are FPGAs fast now? That was always the problem before: you could create a custom processor, but it wouldn't run faster than 15MHz. If your FPGA can't keep up with the data stream, then what's the point? You'd probably be better off trying to hand-roll some custom microcode for one of your commodity processors. It's not as flexible as an FPGA, but it'll be faster, and may be more appropriate for certain classes of problems.
        • Re:Good Motto (Score:2, Interesting)

          by imgod2u ( 812837 )
          Look on Xilinx's website. The Virtex-4s (although currently having supply problems) go up to 500MHz (though you probably don't want to run anything at that speed, considering that's probably the reg-to-reg limit). These things are literally better system-on-a-chip solutions than any ASIC could be, considering what they offer: integrated microprocessor, bus architecture, peripheral interfaces, and non-volatile and volatile memory, with enough pins (BGA package) to expand with off-chip components. Actel eve
        • Re:Good Motto (Score:3, Informative)

          by drinkypoo ( 153816 )
          There are FPGAs over 250MHz now. There are times when such a beastie is useful. There are times when they aren't. Not sure why the hell you'd want to put this in a commodity PC though. It couldn't possibly help more than a second processor, which would be cheaper - or a second core, which would be cheaper still.
          • Cray makes supercomputers. While some 'supercomputers' may contain commodity PCs, they are not commodity PCs, and are generally described as 'clusters', not supercomputers. And Cray does not make supercomputers out of commodity PCs. You wouldn't want to put it in a commodity PC, which is why nobody implied anything about commodity PCs until you pulled that connection out of thin air.
          • How about if you want to do arithmetic on 704 bit integers? Sure, you can break it up into smaller chunks and do multi-precision arithmetic in software on a general purpose computer, but with an FPGA you can simply declare whatever bus width you want and define whatever processing architecture you need for a given task.
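
            For comparison, a minimal sketch of the software side of that trade-off: 704-bit addition done the general-purpose way, as eleven 64-bit limbs with the carry rippling through a loop. On an FPGA you could instead declare a single 704-bit adder and skip the carry chasing entirely.

            #include <stdint.h>

            #define LIMBS 11  /* 11 x 64 = 704 bits, little-endian limb order */

            /* r = a + b for 704-bit unsigned integers; returns the carry out.
             * The carry ripples through the loop one limb at a time, which is
             * exactly the overhead a custom-width FPGA datapath avoids. */
            uint64_t add704(uint64_t r[LIMBS], const uint64_t a[LIMBS],
                            const uint64_t b[LIMBS])
            {
                uint64_t carry = 0;
                for (int i = 0; i < LIMBS; i++) {
                    uint64_t s = a[i] + carry;
                    uint64_t c1 = (s < carry);   /* overflow from adding the old carry */
                    r[i] = s + b[i];
                    uint64_t c2 = (r[i] < s);    /* overflow from adding b[i] */
                    carry = c1 | c2;
                }
                return carry;
            }
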
      • Re:Good Motto (Score:2, Interesting)

        by imgod2u ( 812837 )
        FPGA's aren't the golden answer currently. Most if not all FPGA's have issues with being used this way. They are programmable, but they're not made or intended to be programmed in the field (despite their name). The majority have a programmable life of maybe 1000 flashes with flash-based FPGA's (ProASIC from Actel for instance) having a life of maybe 100 flashes. They're basically a poor-man's ASIC more than anything else. The technology would have to improve significantly in a much different direction
        • Um, no... Most FPGAs (Xilinx and Altera being the leaders) are static RAM based so can be programmed an unlimited number of times.
        • Dunno about other makes, but I know the Xilinx Spartan-3 FPGAs are RAM-based and reprogrammed on every startup. I believe this applies to Xilinx's higher-end FPGAs too.

          The problem I see is that place-and-route processing of a design is SLOW, so you would have to be running a pretty long job for customisations to be worth it.
    • This assumes, of course, that you have X number of processors to choose from. If you can't do it, the answer is still 'throw more money at it, buy more hardware.'

      Uh, sure. That would be the assumption. Just like back in the day you had a math coprocessor for floating-point operations, and now it's on the chip; or disk compression duties used to be handled by a Stacker board. Or today you have a graphics accelerator to handle 3D video.

      Sounds like a sound business too. Instead of having completely different
    • Re:Good Motto (Score:5, Interesting)

      by dildo ( 250211 ) on Friday March 24, 2006 @04:19PM (#14990394)
      It is possible to build computers that are optimized for certain kinds of calculations.

      For example, Gerald Sussman of MIT (a computer scientist) and Jack Wisdom (a physicist) decided they wanted to do long-term modelling of the solar system's evolution over time. Long-term modelling of a multi-body system requires a fantastic amount of calculation. What is the best way to do it?

      Sussman and Wisdom came up with a crafty idea: build a computer that is specially configured at the hardware level to do the modelling. Sussman and his colleagues decided that with off-the-shelf parts they could build a computer that would be just as capable of modeling this system as a supercomputer, or more so. The result was the Digital Orrery, a relatively cheap computer that gave great results. (It is now featured in the Smithsonian museum.)

      Think of it: if your computer is going to be doing the Fast Fourier Transform 6.02x10^23 times per day, why not build a superfast chip that does nothing but the FFT rather than express it as software? It's a pretty cool idea. I think this is the sort of thing that Cray claims to want to do with its motto.
      • 6.02x10^23 times per day

        Avogadro's number. I get you.

        Also, 1000 TFLOPS is only about 10^15 operations per second, or roughly 10^20 per day, which is still much, much less than 10^23; the universe is very complex compared to our computing power.

        Now, it is possible to set up the networking between processors to compute a specific calculation quickly. There is an architecture called a systolic array that basically unrolls a loop into a long pipeline. If you arrange processors into a grid and you chain certain processors together depending on the job they each have to do,
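
        A toy software model of that idea, assuming a transposed-form FIR filter (one simple flavour of such pipelines; true systolic designs also avoid broadcasting the input, but the cell-to-cell flow of partial sums is the same): each "cell" owns one coefficient, multiplies the incoming sample, adds the partial sum handed over by its neighbour, and latches the result for the next tick.

        #include <stdio.h>

        #define TAPS 4

        static double partial[TAPS - 1];   /* partial sums latched between cells */

        /* One clock tick: the loop body is what each hardware cell would do in
         * parallel. Partial results march one cell per tick toward the output. */
        double fir_tick(const double coeff[TAPS], double x)
        {
            double y = coeff[0] * x + partial[0];            /* output cell */
            for (int i = 0; i < TAPS - 2; i++)               /* interior cells */
                partial[i] = coeff[i + 1] * x + partial[i + 1];
            partial[TAPS - 2] = coeff[TAPS - 1] * x;         /* last cell */
            return y;
        }

        int main(void)
        {
            const double coeff[TAPS] = {0.25, 0.25, 0.25, 0.25};  /* moving average */
            for (int n = 0; n < 8; n++)
                printf("y[%d] = %g\n", n, fir_tick(coeff, 4.0));  /* settles at 4 */
            return 0;
        }
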
    • 'The Cray motto is: adapt the system to the application - not the application to the system.'

      Wasn't it essentially that motto that once gave us the CISC architecture, which most people today agree wasn't such a great idea...?

      • Wasn't it essentially that motto that once gave us the CISC architecture ...?

        No. I admit I'm no expert, but this is my understanding of it. CISC had several very good reasons (at the time) behind it:

        1) Keep the code footprint as small as possible, since memory (disk and RAM) was expensive.
        2) Try to factor out the most common "overhead" instructions (i.e. load and store) which many thought were actually crowding out the important code.
        3) Make compiler writing easier (but not necessarily easier to optimize fo
        • The first argument, I agree, was a good one: it did reduce the code footprint.

          However, the way I see it, points 3 and 4 (and possibly 2, depending on what you meant by "overhead") are basically attempts to do in hardware what can be done in software. Perhaps these were valid points at the time when memory was expensive, as you say, but at least in the long run didn't turn out to be very effective. Yet, we are still stuck with CISC processors because of x86 backward compatibility (although they internally

    • Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'

      That's a good motto


      Yeah. But having spent a good part of my career sweating over scientific
      codes line by line to make them "vectorize" on the Cray, I wonder.
  • Cray? (Score:3, Funny)

    by Eightyford ( 893696 ) on Friday March 24, 2006 @03:10PM (#14989847) Homepage
    I didn't even know Cray still existed. Maybe it was Sony's "emotion engine" [wikipedia.org] that almost killed them. ;)
    • If you are going to link something, maybe it should at least reference something relating to what you are talking about in that context. What does the EE have to do with Cray? The wikipedia article does not mention them.

      For instance, John F Kennedy [wikipedia.org].. I'm sure he had emotions [bergen.com]. What does that have to do with anything?
      • It's a joke; before the PS2 came out, Sony was claiming the processor had crazy power and would be displacing supercomputers, etc.
  • by Eightyford ( 893696 ) on Friday March 24, 2006 @03:13PM (#14989868) Homepage
    Cray always made the coolest looking supercomputers. Here's an interesting bit of trivia:
    The Cray T3D MC cabinet had an Apple Macintosh PowerBook laptop built into its front. Its only purpose was to display animated Cray Research and T3D logos on its color LCD screen.
  • Go AMD (Score:2, Interesting)

    "All of these platforms will use AMD Opterons for their scalar processor base.'

    I'm just loving the vendors picking up on AMD.

    Their idea seems very interesting in theory. It sounds like HPC's version of the math co-processor->crypto accelerator idea.

    And at least they are not basing the userland on Unicos :)
  • by gordyf ( 23004 ) on Friday March 24, 2006 @03:20PM (#14989939)
    It seems like the idea of combining multiple architectures into a single machine is already being done -- we have fast general purpose CPUs (single and dual core x86 offerings from AMD and Intel), paired with very fast streaming vector chips on video cards, which can be used like a coprocessor for non-graphical operations.

    The only difference I see is that they're relying on an intelligent compiler to decide which bits to send to which processing unit, but I'm not sure how much faith can be placed there. Cray certainly has a lot of supercomputing experience, but relying on compiler improvements to make or break an architecture doesn't have a good track record. I'm curious to see how they fare.
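
    At its crudest, the run-time half of that decision could be no more than a size heuristic. The sketch below is entirely hypothetical (the accelerator path is a stand-in, the threshold is an invented tuning knob, and none of the names correspond to a real Cray or GPU API); it only shows the shape of the choice being made.

    #include <stddef.h>
    #include <stdio.h>

    /* Fallback path: a plain scalar loop. */
    static void daxpy_scalar(size_t n, double a, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    /* Stand-in for a vector unit or accelerator path; a real system would hand
     * the loop to different hardware, but this keeps the sketch self-contained. */
    static void daxpy_accel(size_t n, double a, const double *x, double *y)
    {
        daxpy_scalar(n, a, x, y);
    }

    /* Only ship work to the accelerator when the problem is big enough to
     * amortize the offload overhead. */
    static void daxpy_dispatch(size_t n, double a, const double *x, double *y)
    {
        const size_t offload_threshold = 1u << 16;   /* made-up cutoff */
        if (n >= offload_threshold)
            daxpy_accel(n, a, x, y);
        else
            daxpy_scalar(n, a, x, y);
    }

    int main(void)
    {
        double x[4] = {1, 2, 3, 4}, y[4] = {0};
        daxpy_dispatch(4, 2.0, x, y);
        printf("%g %g %g %g\n", y[0], y[1], y[2], y[3]);
        return 0;
    }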

    • The only difference I see is that they're relying on an intelligent compiler to decide which bits to send to which processing unit, but I'm not sure how much faith can be placed there.

      If you had read further into the article, you would have noticed that TFA talks about a new programming language called "Chapel".

      Chapel was designed as a language for rapid development of new codes. It supports abstractions for data, task parallelism, arrays (sparse, hierarchical, etc.), graphs, hash tables and so on.

      So, they aren't re

    • Cray certainly has a lot of supercomputing experience, but relying on compiler improvements to make or break an architecture doesn't have a good track record.

      Seriously. Just ask Transmeta. Or Intel (think Itanic).
    • by flaming-opus ( 8186 ) on Friday March 24, 2006 @04:16PM (#14990372)
      They really aren't relying on compiler improvements, so much as passing the code through their vectorizing compiler and a tool for generating their FPGA code. If those two optimization steps fail to find much to optimize, you bail out and send it to the general-purpose (Opteron) processors (see the loop sketch at the end of this comment).

      You're being fairly pedantic about the computer architecture anyway. Yes, pairing multiple processor types together is not new, but most MPP supercomputers use identical node types.

      The gist of this story is simpler than it sounds. Cray has 4 product lines with 4 CPU types, 4 interconnect routers, 4 cabinet types, and 4 operating systems. They would like to condense this down. The first step is to reuse components from one machine to the next. There are distinct advantages to keeping the 4 CPU types for various problem sets, but most everything else could be multi-purpose. From the sounds of things, it's using the next generation of the SeaStar router in all of the machines. Thus you use the same router chips, cabling, backplane, and frame for all the products. This reduces the number of unique components Cray has to worry about. If they go to DDR2 memory on the X1 and MTA, that further simplifies things, though I suspect they won't.

      Well, once you share parts, why not make a frame with a bunch of general-purpose CPUs for unoptimized codes, and a few FPGA or vector CPUs for the highly optimized codes? It allows customers more flexibility, and introduces Cray's mid-range customers to the possibility of using the really high-end vector processors currently reserved for the high-end X1 systems. It's also a win for the current high-end customers. On the current X1 systems, you have these very elaborate processors running the user's optimized application, but the vector CPUs also end up running scalar codes like utilities and the operating system. These are tasks the vector CPUs aren't terribly good at, and you're using a $40,000 processor to run tasks a $1,000 Opteron will do better. Even if the customer isn't interested in mix-n-match codes on the system (which I'm skeptical any Cray customer really is), you probably want to throw a few dozen Opteron nodes into the X1's successor, just to handle the OS, filesystems, networking, and the batch scheduler.
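
      To make the "fails to optimize" case concrete, here are two loops of the sort such a triage step has to sort out; this is just illustrative C, not Cray's tooling. The first has independent iterations and maps cleanly onto a vector (or FPGA) pipeline; the second carries a value from one iteration to the next, so it would be left to the scalar Opterons.

      /* Independent iterations: a vectorizing compiler can run these in parallel. */
      void saxpy(int n, float a, const float *x, float *y)
      {
          for (int i = 0; i < n; i++)
              y[i] = a * x[i] + y[i];
      }

      /* Loop-carried dependence: each iteration needs the previous result,
       * so this stays on the scalar processor. */
      float recurrence(int n, float *s)
      {
          for (int i = 1; i < n; i++)
              s[i] = s[i] + 0.5f * s[i - 1];
          return s[n - 1];
      }
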
      • I wonder whether the Cascade vector processor will really be a stand-alone vector processor or actually a co-processor?

        Paradoxically, I believe that the major problem in making a really good vector processor is in pushing the envelope in single-thread performance as well (Amdahl's law and all that). In comparison, parallelism is easy. So in that sense, it would make sense for Cray to rely on AMD and x86 market volume to get good single-thread performance very cheaply, and then concentrate resources on making
        • The X1 processor is already a coprocessor. Not in the sense that it's on a different piece of silicon from the scalar unit, but that the vector CPU's instruction stream is distinct from the scalar unit. In past Cray systems, some CPUs used the same functional units for the scalar unit and vector unit (T90), while some (J90) used distinct scalar units. The X1 is a vector unit bolted on the side of a MIPS scalar core, with synchronization logic, and multi-ported register files to support multi-streaming. I d
  • It seems the bulk of the article is bemoaning how inefficient single-processor systems are, offering Cray's planned adaptive model as a solution, but surely we've already seen the way forward in regard to supercomputing, and that is distributed single (or dual) processor machines. As stated at zakon.org [zakon.org], "SETI@Home launches on 17 May (2001) and within four weeks its distributed Internet clients provide more computing power than the most powerful supercomputer of its time"
    Surely the computing environment has
    • Unless 'computing power' is different to 'combined processor speed', I don't understand what Cray are up to here..

      Well yes, they are very different. Processor speed is clock rate and tells you precisely jack shit about how much work can actually be done. Computing power is better measured in operations per second. Typically we measure integer and floating point performance separately. Even those benchmark numbers are usually pretty useless; hence we have the SPECint and SPECfp benchmarks which suppose

      • OK, I shouldn't have said anything about processor speed. Am I right in thinking that there's still no reason this adaptive approach will necessarily be any better than a distributed project?
        • Depends on how parallelizable the problem is. Clusters work well on jobs that are highly parallelizable, and in which there are small inputs for large outputs. Supercomputers are better for jobs with large data sets that are not easily split up into smaller pieces that can be sent to cluster nodes. The major advantage is that all the processors have direct access to main memory instead of having to fetch data across the network.
        • No, you're not right.

          You're ignoring one *major* element of scientific computing vs. something like SETI@home. SETI requires *very* little communication over the network and *no* communication between the peers doing the computation. The kind of scientific computation that someone would do on a high end cluster or supercomputer (say, a weather model or a model of an airplane) requires a great deal of communication and, even more important, synchronization. Think about what happens if you have 10,000 proc
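
          For flavour, here is a stripped-down sketch of the per-timestep neighbour exchange a domain-decomposed model does. The MPI calls are standard, but the 1-D strip layout, sizes and stencil are invented for illustration; the point is that every rank must synchronize with its neighbours every step, which SETI@home never has to do.

          #include <mpi.h>
          #include <stdlib.h>

          #define NLOCAL 1024   /* interior points owned by this rank (made-up size) */

          int main(int argc, char **argv)
          {
              MPI_Init(&argc, &argv);
              int rank, nprocs;
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

              /* local strip of the domain plus one ghost cell on each side */
              double *u = calloc(NLOCAL + 2, sizeof *u);
              int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
              int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

              for (int step = 0; step < 100; step++) {
                  /* swap boundary values with both neighbours before advancing */
                  MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                               &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 0,
                               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                  MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 1,
                               &u[0], 1, MPI_DOUBLE, left, 1,
                               MPI_COMM_WORLD, MPI_STATUS_IGNORE);

                  for (int i = 1; i <= NLOCAL; i++)   /* toy in-place stencil sweep */
                      u[i] = 0.5 * (u[i - 1] + u[i + 1]);
              }

              free(u);
              MPI_Finalize();
              return 0;
          }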

        • "ok, I shouldn't have said anything about processor speed. am I right in thinking that there's still no reason this adaptive approach will neccessarily be any better than a distributed project?"

          REally 2 different tools for 2 different problems. Distributed computing is really only useful for highly parallel tasks that require a LOT of computation on very little raw data. Furthermore, it has to be on data you don't mind shipping off to joe anonymous to be processed.

          Current and future crays are typically run
        • It depends on the problem that you are trying to solve. Distributed systems are great for problems that can be broken up into a lot of unrelated smaller problems. They aren't so good when that isn't the case, as in climate modelling or nuclear explosion simulations.
    • I don't understand what Cray are up to here.. perhaps someone can enlighten me?

      Imagine a beowulf cluster of Cray supercomputers!

  • by Anonymous Coward
    so that I can play Duke Nukem Forever on it.
  • by __aaclcg7560 ( 824291 ) on Friday March 24, 2006 @03:27PM (#14989992)
    So if I want to run Mine Sweeper, Cray will adapt one of their supercomputers to the requirements of this game? Sweet!
  • LJ had a good article on this a few months back.

    http://www.linuxjournal.com/node/8368/print [linuxjournal.com]
  • Cray as a company in general is amazing; they have been around forever and they don't sell bulk crap computers, go figure... The inspiration behind Cray is definitely worth a good study for future computer industry companies.
  • by deadline ( 14171 ) on Friday March 24, 2006 @03:42PM (#14990121) Homepage
    Cray finally figured it out. I have been saying for years:

    HPC/Beowulf clusters are about building machines around problems

    That is why clusters are such a powerful paradigm. If your problem needs more processors/memory/bandwidth/data access, you can design a cluster to fit your problem and only buy what you need. In the past you had to buy a large supercomputer with lots of engineering you did not need. Designing clusters is an art, but the payoff is very good price-to-performance. I even wrote an article on Cluster Urban Legends [clustermonkey.net] that explains many of these issues.

    • That's not really true either.
      You only buy as much hardware as you need, but hardware is only half the cost of a computer like that. Infrastructure (physical, hardware, administrative, and management), software, and planning are a big part of the cost. Every time you install a capability-class machine, you plan for it several years in advance, make space/power/cooling available for it. Hire people to manage the machine. Port your applications to the machine. Install the machine, test and benchmark. Then admi
  • And What If... (Score:4, Insightful)

    by Nom du Keyboard ( 633989 ) on Friday March 24, 2006 @03:46PM (#14990146)
    The new system will combine multiple processor architectures

    And what if I don't want multiple processor architectures, but instead just lots and lots of the single architecture my code is compiled for?

    • Then buy one of the older designs built on a single architecture?
      This announcement is telling us that they are planning to make new designs available, which is not the same as saying that the old designs are no longer available.
    • Re:And What If... (Score:4, Insightful)

      by flaming-opus ( 8186 ) on Friday March 24, 2006 @04:24PM (#14990428)

      The idea is that all the CPU types will be blades that all use the same router and plug into a common backplane, and that the cabinets all cable together the same way. In all cases, I imagine there will be Opterons around the periphery, as I/O nodes and running the operating system. Then you plug in compute nodes in the middle, where the compute nodes can be a bunch more Opterons, or vector CPUs, or FPGAs, or multithreaded CPUs. There will certainly be plenty of customers only interested in lotsa Opterons on Cray's fast interconnect, and they just won't buy any of the custom CPUs.
    • Because, if you did, you'd realize that this environment has its own toolchain and isn't for pre-compiled binaries.
    • Then you only buy one processor type (presumably the Opteron). As for precompiled binaries, assuming they meet the appropriate requirements (yet to be determined because predicting what APIs will be necessary in a few years is pointless), they'll run just fine. However, optimizing for your target platform is likely to get you better performance regardless of whether the target comes from Cray or anyone else.
  • Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'

    That seems like a good idea, but you end up with a "one trick pony" that does only one thing really well. Once that application is end-of-life or no longer needed, your million-dollar machine is a worthless piece of lounge furniture unless it can be reconfigured for some other application.

    To me, it doesn't seem like a good investment. Then again, that's probably why I'm not building super
    • That seems like a good idea, but you end up with a "one trick pony" that does only one thing really well.

      From my understanding most supercomputers are built to do one task... whether it be folding proteins, simulating nuclear explosions, predicting weather, simulating the big bang, or various other number-crunching tasks. By the time they are done, they have to move on to the next project... mostly because of being out of date every 5 years.
    • I think a more precise interpretation would be:

      (iteratively) adapt the system to the application (mix)
    • I think the idea is to have pluggable "blades" that are interchangeable and reconfigurable for a task. So if you had a problem that required vector processing, you would remove all your scalar components (scalar-based blades) and plug in components geared for vector processing (vector-based blades). I assume the computer or compiler would figure out which part of the program to send to each component. So the idea is to be easily reconfigurable for whatever your problem happens to be.
  • Wow! I'd sure like to have a Beowulf Cluster of...uhm...wait a minute.
  • Buzz word. (Score:3, Insightful)

    by mtenhagen ( 450608 ) on Friday March 24, 2006 @04:50PM (#14990624) Homepage
    While I must admit "Adaptive Supercomputing" sounds like a really cool buzzword, in practice the programmer will still need to adapt the application to the physical distribution of the systems. Or are they going to dynamically rewire the switches?

    There have been several attempts (HPF, Orca, etc.) to automate parallelism, but most of them failed because a skilled programmer could create a much faster application within a few days. And remember that a 10% performance boost in these applications means thousands of dollars saved.

    So I suspect this is just a buzzword.
    • Re:Buzz word. (Score:2, Informative)

      by scoobrs ( 779206 )
      Did you RTFA at all?! The article is NOT about automatic parallelization by some special language. Most supercomputer customers are fully aware that writing applications which perform well for their supercomputer requires writing some form of parallel code. The issue at hand is that some specialized problems perform MUCH faster on one platform than another, whether it's primarily scalar, vector, threading (hundreds, not two), or even FPGA. The goal is an intelligent compiler that can recognize code segm
      • So the compiler needs to decide which code to run on which CPU; that also requires deciding on which system the data is stored and how communication takes place. Basically leading to the same idea as HPF.

        If Cray gets this working that would be great, of course, but I doubt it. Combining several different CPUs is a good idea, but this is already occurring. As noted in another reply, SGI calls this "Multi-Paradigm Computing", which I consider a much better buzzword.

        Using 'adaptive' implies some sort of autom
        • Cascade is expected to include heterogeneous processing at the node level, with fast serial, vector and highly multithreaded capability, all in the same cabinet.

          They could've probably been more clear, since it was a bit of a dry read. The idea is a steady progression from heterogeneous computing between nodes of different types to a tightly-coupled, single system with coprocessors on the same bus during Phase 3 for the government's Cascade research program due in 2009/2010. I believe that's nothing at

    • SGI announced their version, "Multi-Paradigm Computing", in July 2004. Cray is 2 years behind the game.
      http://www.sgi.com/features/2004/july/project_ultraviolet/ [sgi.com]
      • Re:Buzz word. (Score:3, Insightful)

        by flaming-opus ( 8186 )
        2 years behind in announcements, let's see who brings it to market first.

        Sadly the answer is that it's not even a race. SGI brought forward their first step already, but won't get past that. You can now buy an FPGA blade to put in your Altix. While Cray is just now announcing a unified vision for this, they've already had their FPGA-based solution since they bought OctigaBay 2 years ago.

        As much as Cray is suffering financially, SGI is in much worse shape, and they have about $350 million in debt around the
  • by n0mad6 ( 668307 )
    ...welcome our new adaptive supercomputer overlords.
  • Cray and SGI have both been losing money recently as more users flock to clusters, which tend to be cheaper and more flexible. Now both of them are offering this "adaptability" position. SGI is moving in the direction of blades so customers can choose their level of computing power; Cray will soon have a core machine that customers can build out from. What's interesting to note is that both of them are ultimately selling Linux on commodity processors (Itanium for SGI and Opteron for Cray) plus a p
  • Design the Cray to first read & analyze the software to be run + its hardware requirements, then load it into a ROM chip specific to the Cray operation.
