Self-wiring Supercomputer

redcone writes "New Scientist is reporting on an experimental supercomputer made from Field Programmable Gate Arrays (FPGAs) that can reconfigure itself to tackle different software problems. It is being built by researchers in Scotland. The Edinburgh system will be up to 100 times more energy efficient than a conventional supercomputer of equivalent computing power. The 64-node FPGA machine will also need only as much space as four conventional PCs, while a normal 1 teraflop supercomputer would fill a room. Disclaimer: At this point in time, the software needed to run it, which is the key to the project, is vaporware."
This discussion has been archived. No new comments can be posted.
  • FWIW, this is not a new idea. FPGAs (i.e. dynamically reconfigurable logic chips) have been around for about 20 years now, and have allowed hardware developers to produce custom hardware in many situations. The key, you see, is that hardware designed for a specific task is almost always going to perform that task better than a general-purpose processor. That's why the SaarCore [saarcor.de] can outperform a P4, and why your computer has a custom-built GPU.

    As a result, the idea of runtime-dynamic hardware sounds great. Unfortunately, the issue developers run into with a runtime-dynamic processor is knowing how to configure the chip. One tack is to allow programs to load chip designs themselves, thus creating specific hardware for that individual program. The downside to this tack is that someone must go through the time-consuming task of manually writing the chip in a Hardware Description Language such as VHDL or Verilog. Most programmers aren't going to do this when they can get the program out faster with a general-purpose CPU.

    This has led to another tack: using software to analyze a program and automatically create a machine to optimize it. This is conceptually similar to the Java JIT method, but is more complex by far. A lot of research is being done in this area (as this story shows), but I wouldn't hold my breath for now.

    Another design that makes a lot of sense is the concept of "hardware on demand": imagine you had a library of accelerator chip designs. Whenever a program needs a particular form of common hardware acceleration (e.g. GPU, sound, DSP, etc.), the onboard FPGAs could be reconfigured to meet the demand. This wouldn't have the same punch as task-specific hardware, but it would provide an inexpensive method for obtaining a bundle of hardware that would otherwise be extremely expensive and use up a lot of bus space.
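
    For concreteness, here is a minimal Verilog sketch of the kind of task-specific block the comment above says someone must hand-write in an HDL; the module name and widths are illustrative, not from the article or the thread. A multiply-accumulate (MAC) unit is a classic example of logic that outruns a general-purpose CPU once it is baked into hardware.

    // Illustrative only: a hand-written, task-specific datapath.
    // Computes a running sum of products, one multiply-accumulate per clock.
    module mac #(parameter W = 16) (
        input  wire           clk,
        input  wire           rst,
        input  wire [W-1:0]   a,
        input  wire [W-1:0]   b,
        output reg  [2*W+7:0] acc   // extra-wide accumulator absorbs overflow
    );
        always @(posedge clk) begin
            if (rst)
                acc <= 0;
            else
                acc <= acc + a * b;
        end
    endmodule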
    • FWIW, this is not a new idea. FPGAs (i.e. dynamically reconfigurable processors) have been around for about 20 years now

      You are correct that this is not a new idea; however, I think the original idea [wikipedia.org] for this type of machine was developed in 1936 by Alan Turing [wikipedia.org].

    • by ikewillis ( 586793 ) on Wednesday June 01, 2005 @12:46PM (#12695528) Homepage
      Building a supercomputer that runs vaporware seems like a rather foolhardy exercise indeed.

      GenoByte [genobyte.com] has found a more novel use for FPGAs, which they call "evolvable hardware." Much like our own brains, the neural networks on the FPGAs reconfigure the way they interconnect on the fly; commonly used paths are reinforced while less frequently used ones atrophy.

      Here are some cool pictures:

      The CAM-BRAIN machine, a big box full of FPGA boards: http://www.genobyte.com/images/machine.jpg [genobyte.com]

      Neural network layout for the XC6216 FPGA: http://www.genobyte.com/images/chip.JPG [genobyte.com]

      All in all this approach is substantially faster than modelling large neural networks on a general purpose processor. In the GenoByte approach, the neural network is implemented as physical circuits.

    • How does this vapor supercomputing box compare to real hardware, such as a Blue Gene node, which is a big pile of interconnected custom PPC chips, all in a box already the size of a 30" TV?
    • First heard about reconfigurable computing here on /. The company Starbridge [starbridgesystems.com] is still going strong and long past the vaporware stage.
    • Considering that where consumers generally demand performance most is in games, I can see this as being very helpful. I was just randomly thinking about it after looking at several images on the site you linked to, and then it hit me: if you combine the ray-tracing core with an FPGA-like pixel shader, you could get some highly complex scenes with potentially much less overhead and lower speed requirements, thus making the chips cooler and smaller.

      And on a personal note, I would love to see that mixed with aug
  • by Scoria ( 264473 ) <`slashmail' `at' `initialized.org'> on Wednesday June 01, 2005 @11:36AM (#12694683) Homepage
    From this point forward, no Terminator references will be permitted. ;-)
  • Being Built (Score:3, Funny)

    by NETHED ( 258016 ) on Wednesday June 01, 2005 @11:36AM (#12694687) Homepage
    Well, if it's being built, then they MUST have some sort of plan. It's not like we're in the tech bubble again.
  • some resources (Score:5, Informative)

    by professorhojo ( 686761 ) * on Wednesday June 01, 2005 @11:37AM (#12694701)
    The Wikipedia article on FPGA: http://en.wikipedia.org/wiki/FPGA [wikipedia.org]

    A great list of resources from WP on FPGA, if anyone's interested in reading more:
  • by DoraLives ( 622001 ) on Wednesday June 01, 2005 @11:37AM (#12694704)
    These things will be a sort of pinkish grey with a funny convoluted surface appearance, weigh a few pounds, and float in tanks of clear liquid.
    • by Dr. Weird ( 566938 ) on Wednesday June 01, 2005 @12:07PM (#12695034)
      And as a result give consistently unreliable, biased answers for even the simplest of numeric tasks. I love progress.
    • I think the vast majority of comments are missing the mark. There seems to be a lack of knowledge about FPGAs. It's not self-aware, and doesn't really configure itself per se. FPGAs are just easily customizable hardware, an alternative to spending $300k on an ASIC. You "program" the hardware with a Hardware Description Language like VHDL or Verilog. So if you can have custom processing blocks of HDL and dynamically load them onto the FPGAs, I guess it's like "wiring itself". But conceptually it isn't a lot more
  • by Anonymous Coward on Wednesday June 01, 2005 @11:37AM (#12694706)
    Longhorn to be released.
  • by Anonymous Coward
    And just before it starts sending people to the past
  • Mmmmmmm... (Score:5, Funny)

    by 10000000000000000000 ( 809085 ) on Wednesday June 01, 2005 @11:39AM (#12694725)
    field programmable gatorade...
  • Nup (Score:3, Funny)

    by Mattygfunk1 ( 596840 ) on Wednesday June 01, 2005 @11:42AM (#12694758)
    ... is vaporware.

    It's not vaporware until it doesn't arrive ;)


    • by ari_j ( 90255 )
      It's only vaporware because they haven't written a Common Lisp for the machine yet. After all, the perfect programming language to target a self-reconfiguring machine is one that can reconfigure itself to keep up with the machine.
      • I think Smalltalk is a veritable chameleon compared to Lisp. It's the class libraries, doan'chyano.

        I like Lisp [first program I wrote in Lisp was code that accepted an EBNF syntax and generated a Lisp parser] but not that much.
  • by mjsottile77 ( 867906 ) on Wednesday June 01, 2005 @11:42AM (#12694761)
    The Cray XD-1 (http://www.cray.com/products/xd1/index.html [cray.com]) has already been on the market with FPGA 'application accelerators'. This isn't really news.

    Besides, FPGAs have two issues that make them good only for a very specific set of apps. Number 1, they don't currently have great floating point performance - this is a killer for most scientific apps. Number 2, they are hard to feed because the rate they can compute at versus the rate memory can feed them is quite skewed. Regardless, they're still very promising. The reconfigurable computing team at LANL (http://rcc.lanl.gov/ [lanl.gov]) has done some very cool things with FPGA-based systems.

    • Number 1, they don't currently have great floating point performance - this is a killer for most scientific apps.

      Eh? An FPGA is a blank slate. You can code an FPU into it. Are you referring to the underlying hardware performance being slow for FP calculations?

      Number 2, they are hard to feed because the rate they can compute at versus the rate memory can feed them is quite skewed.

      This is true of all computers except for custom designed supercomputers. Most general purpose CPUs sit and do nothing for a
      • Actually, you have to pick an FPGA "core" that supports an FPU. They are available, but the basic FPGA only supports fixed-point math in most cases. And in many cases that is perfectly acceptable. Pipelined CPUs were developed to avoid the wait states for memory, so that work can go on while waiting on memory. Having an on-chip memory controller which allows access to multiple memory locations at the same time also helps with this problem, as does using dual-port memory which can be read from/written
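
        A behavioral Verilog sketch of the dual-port memory the comment above mentions, since it is the piece most readers will not have seen: two independent ports touch the same storage in the same clock cycle. The module name and widths are illustrative, not from the thread.

        // Dual-port RAM: port A reads/writes, port B reads concurrently.
        module dpram #(parameter AW = 8, DW = 32) (
            input  wire          clk,
            input  wire          we_a,
            input  wire [AW-1:0] addr_a,
            input  wire [DW-1:0] din_a,
            output reg  [DW-1:0] dout_a,
            input  wire [AW-1:0] addr_b,
            output reg  [DW-1:0] dout_b
        );
            reg [DW-1:0] mem [0:(1<<AW)-1];
            always @(posedge clk) begin
                if (we_a) mem[addr_a] <= din_a;  // port A write
                dout_a <= mem[addr_a];           // port A read
                dout_b <= mem[addr_b];           // port B read, same cycle
            end
        endmodule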
        • The basic FPGA only supports gate-level operations. When you pick a "core", you're talking about picking an ALU implementation, which really has nothing to do with the on-chip hardware. Some cores implement fixed-point (generally the simpler ones). Some cores implement float. But an FPGA in and of itself implements truth-table operations and block interconnects; it's way below the level of math operations.

          (embedded systems designer in a current life, including doing FPGA and ASIC design)
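
          To make the truth-table point concrete, here is a behavioral Verilog model of a 4-input LUT, the primitive being described. On a real part the 16-bit INIT value comes from the configuration bitstream; exposing it as a parameter is purely for this sketch.

          // A 4-input lookup table: the inputs simply index a 16-entry truth table.
          module lut4 #(parameter [15:0] INIT = 16'h8000) (  // 0x8000 = 4-input AND
              input  wire [3:0] sel,
              output wire       out
          );
              assign out = INIT[sel];
          endmodule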
        • Actually, you have to pick an FPGA "core" that supports an FPU. They are available, but the basic FPGA only supports fixed-point math in most cases.

          Ugh. My poor head is reeling at this confusing statement. Are you trying to say that you need to choose a proper software core with an FPU (I agree), or that FPGAs naturally have a given set of math functions? The former is true; the latter is not. Your FPGA is only as good as the processor core you load into it. If you load a multi-pipelined, FPU-capable behem
          • Considering how much a Cray costs, I'd think they could splurge for a few Virtexes to load an FPU on.

            Well, the XD-1 isn't your normal Cray machine; it's the product of a Canadian company called Octiga Bay that Cray bought last year. The XD-1 is basically a blade-based Opteron cluster with a custom InfiniBand interface (renamed "Rapid Array" for inexplicable marketing reasons) and the capability to add a couple FPGAs per blade. It is not a mainframe with vector processors and huge amounts of memory b

    • The FPU will be as good as you design it to be.
      In comparison to custom-built hardware an FPGA sucks: it's big, slow and power-hungry. If you were to take a standard FPU and build it into an FPGA, it would be useless.
      But the whole point is that you don't need a general-use FPU that can do everything. You only have one that can do the one specific operation that you need. E.g. if you need to add three 158-bit floating-point numbers, you have a bit of logic that can do that. It's 158 bits wide, has 3 inputs and
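
      The comment is cut off, but the shape of the block it describes is easy to sketch. A real floating-point version would need vendor FP cores, so this hedged example shows the fixed-point analogue of the same idea: a datapath exactly as wide as the problem, with exactly the ports the problem needs.

      // A one-off block: add three W-bit numbers in a single operation.
      module add3 #(parameter W = 158) (
          input  wire [W-1:0] a,
          input  wire [W-1:0] b,
          input  wire [W-1:0] c,
          output wire [W+1:0] sum   // two extra bits absorb the carries
      );
          assign sum = a + b + c;
      endmodule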
  • by TinheadNed ( 142620 ) on Wednesday June 01, 2005 @11:44AM (#12694776) Homepage
    How does the PlayStation 3 manage 2.2 teraflops without being the size of a house, then?
  • Skywhatnow (Score:3, Funny)

    by FidelCatsro ( 861135 ) <fidelcatsro&gmail,com> on Wednesday June 01, 2005 @11:46AM (#12694793) Journal
    This sounds like Skynet, a self-wiring supercomputer that will go on to dominate the world...
    Please, everyone, go to the place and dump the thing in some molten metal before it's too late... We don't want another awful Terminator sequel
  • Don't worry, just let the machine write its own software.
  • by amstrad ( 60839 ) on Wednesday June 01, 2005 @11:47AM (#12694806)
    Has it already wired itself to imagine a Beowulf cluster of itself?
  • by booyah ( 28487 ) on Wednesday June 01, 2005 @11:47AM (#12694810)
    I have this killer project going; it's all open source.

    It's called Duck Nuckem Tournament 2012. It's multiplayer, will run on the Phantom console, and is all open source.

    Of course, the project itself is vaporware as of the time of this writing....
  • hypercomputer (Score:3, Informative)

    by AeiwiMaster ( 20560 ) on Wednesday June 01, 2005 @11:48AM (#12694823)
    "No one has ever tried to build a big supercomputer with these chips before," Parsons says.
    That is wrong: Star Bridge Systems (http://www.starbridgesystems.com/ [starbridgesystems.com]) has been selling the Hypercomputer for some years now.
  • Turing and so forth (Score:2, Interesting)

    by Jesus 2.0 ( 701858 )
    It's been a while since I learned about this stuff, but hasn't it been mathematically proven that it's futile to try to write a program that, presented with an arbitrary (expressible) problem, will write a program to solve that problem?
    • Who said this is an arbitrary problem? It is a very specific one: converting code in some high-level language into hardware wiring.

      Your argument would apply equally well (or equally poorly) to compiling code: "since solving the arbitrary problem is futile, we can't convert a high-level set of instructions into machine code"; but as before, the problem is not arbitrary. The language --> hardware problem is more difficult than the language --> machine instructions one, but not insurmountably so, and ther

    • Sort of... (Score:3, Insightful)

      by fireboy1919 ( 257783 )
      What has been proven is that there are problems for which it is impossible to automatically write a program to solve. Further, this is an NP-hard problem, meaning that you can't even know for sure if you're ever going to get a solution, or how long it will take.

      However you can usually make a good estimate with approximate solutions of how close you are to the real solution, and how much longer it will take. Obviously this only works with programs that have some form of error evaluation criteria. This
      • What has been proven is that there are problems for which it is impossible to automatically write a program to solve. Further, this is an NP-hard problem, meaning that you can't even know for sure if you're ever going to get a solution, or how long it will take.

        What on earth are you talking about? You seem to be confusing the notions of recursive decidability, recursive enumerability, and NP-hard problems. I'll sort it out for you:

        A set of numbers is recursively enumerable if there is a recursive fu
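
        The reply is cut off, but the standard distinction it is reaching for can be stated precisely; this is textbook computability theory, not anything from the article:

        \begin{itemize}
          \item $A \subseteq \mathbb{N}$ is \emph{decidable} (recursive) if its characteristic
                function $\chi_A$ is computable.
          \item $A$ is \emph{recursively enumerable} if $A = \operatorname{dom}(\varphi_e)$ for some
                partial computable function $\varphi_e$, i.e.\ some program halts on exactly
                the members of $A$.
          \item The halting set $K = \{\, e : \varphi_e(e)\downarrow \,\}$ is recursively enumerable
                but not decidable. NP-hardness, by contrast, classifies the \emph{cost} of
                problems that are typically decidable; it is a different axis entirely.
        \end{itemize}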
  • by anzha ( 138288 ) on Wednesday June 01, 2005 @11:52AM (#12694867) Homepage Journal
    Once the 64-node machine is built, the designers will try to transfer several existing supercomputer programs onto the new hardware using these tools. "If we can get these [programs] to work, we'll know that we have a general purpose solution," Parsons says.
    [Emphasis added]

    So, this is still vapourware.

    LARC [nasa.gov], at NASA, built an FPGA supercomputer. Here's a link [nasa.gov] to a related paper from 2002. Note: it's a PDF.

    Additionally, Cray builds an FPGA-based supercomputer in its XD-1 [cray.com]. It's definitely a non-vapourware project, since they've sold over 15 of them. Yes, yes, it also uses Opterons, but they're paired with FPGAs.

    Additionally, prior to Seymour Cray's death at the hands of a drunk driver, he was looking into FPGAs as his next stab at supercomputing.

  • by Pollux ( 102520 ) <speter AT tedata DOT net DOT eg> on Wednesday June 01, 2005 @11:52AM (#12694869) Journal
    Here's an idea...I think I'll go into the car manufacturing business. I'll build myself a brand new car with an extremely efficient engine that gets 400 miles to the gallon. It's a small engine and a lightweight car, but can still transport a family of four!

    Disclaimer: At this point in time, the software needed to run it, which is the key to the project, is vaporware.

    Except there's one little problem...the gas needed to run it, which is the key to making this engine so efficient, hasn't been invented yet. But as soon as it is, we'll take the market by storm!
  • Will it be able to self-repair and come after Captain Blaze and Johnny Lunchpail and the rest of the Good Crew of the S.S. Intrepid with renewed and terrifying mechanical fervor?

    Why, Oh why do we build these mad inventions? When will we ever learn the folly of mocking Mother Nature?


    -FL

  • by Knightman ( 142928 ) on Wednesday June 01, 2005 @11:57AM (#12694928)
    There's a company that has been selling this type of system for a couple of years.

    They also have their own language called Viva to be able to program the computer.

    Link: http://www.starbridgesystems.com/ [starbridgesystems.com]
  • Could you imagine a Beowulf cluster of ... ah, never mind...
  • Stretch (Score:2, Interesting)

    by Kontinuum ( 866086 )
    As already mentioned, the biggest problem with FPGAs is the difficulty/time in writing the logic. While that's not necessarily a big problem for a major supercomputing center or a CS research center, it (along with cost) is a problem that prevents FPGAs from being routinely adopted by end-users such as people in the applied research community.

    One idea to get around this has been advanced by (among others) Stretch, Inc. [stretchinc.com]. The summary is that their compiler analyzes your C code and decides what can be mor
  • Here's my Slashdot fortune cookie for today as I was reading this article:

    The meta-Turing test counts a thing as intelligent if it seeks to devise and apply Turing tests to objects of its own creation. -- Lew Mammel, Jr.

    Coincidence?

  • I'm confused ... I thought the Xbox 360 had 1 teraflop, and the PS3 had 2 teraflops of computing power. Now it says a teraflop machine takes a room? From the pictures, it seems like the Xbox and PS3 are both well under 1 ft^3. Floating-point operations/sec isn't like MHz: higher always means better ... if it can do more operations/sec, then by definition it is faster.
    • Now it says a teraflop machine takes a room? From the pictures it seems like the xbox and ps3 are both well under 1 ft^3.

      It might turn out that Sony's and Microsoft's numbers were more marketing than machine. Remember, it's not a lie if you think it's true. That's why marketing droids are programmed to be callous, aggressive, and gullible.

      MD - Marketing Droid
      HE - Hardware Engineer

      MD: So how many Terrafowls will it do?
      HE: Terra-whats?
      MD: You know, how many libraries of Congress can it process?
      HE: Tw

    • Game consoles and video chip makers mean something different when they talk FLOPS than the HPC community does.

      Game consoles and video chips operate primarily in single precision (32 bit) mode, hence the high numbers.

      HPC generally requires double precision (64-bit) and that is the number used when discussing systems with that application in mind.
    • The majority of the "FLOPS" in the "computing power" numbers for the PS3 and Xbox 360 are GPU shader operations, not general-purpose CPU floating point operations.

      The CPUs of each system are both approximately 0.2 TFLOPS.

  • Slashdot ran previous [slashdot.org] articles [slashdot.org] on StarBridge Systems [starbridgesystems.com] and their Hypercomputers that are built on massive parallel FPGA processors. And their operating system/Programming Environment, Viva [starbridgesystems.com] is not vaporware. I can't find a reference to it, but I'm fairly certain the French department of energy already purchased one for researching nuclear blast yields.

    Despite the initial handwaving about having these on our desktops, I think it's going to be a while before that happens. Still, it's a very cool idea.
  • At this point in time, the software needed to run it, which is the key to the project, is vaporware.

    They should start a SourceForge project - we'll all chip in, and send patches and code, won't we campers? Here's my contribution:

    #include <stdio.h>
    Who's next?
  • FTFA: The system under construction at the Edinburgh Parallel Computing Centre - part of Edinburgh University, UK - will use Field Programmable Gate Array (FPGA) chips instead of conventional microprocessors.

    Has anyone noticed that when Scotland does well, we say they are from the UK/GB, but if they did something bad, it would be "Edinburgh University, Scotland"? E.g.:
    Coulthard is winning! This British driver is .....
    Coulthard has come last, and what a shame it is for this Scottish driver...

  • Consider how much of the computing resources (in multiprogramming systems) are now spent on context switching. How much more would be spent dumping and reloading not only the contents of the registers, as in current systems, but the whole instruction set architecture (ISA)?

    Unless there's some way to dynamically optimize and/or compromise between different running processes (which would inevitably include the OS kernel), this technology has a great potential to be much slower than the usual set-up (this may
  • Although using FPGAs for reconfigurable computing applications still has a number of drawbacks, utilizing FPGAs for embedded applications is some really cool stuff.

    For example, an entire system can be dynamically built right into the FPGA -- including processor, OPB, memory buses, and any other devices such as interrupt controllers, timers, etc. Aside from RAM and Flash, you almost have an entire embedded system built right into a chip.

    Earlier this spring I had the opportunity to work on a project that re
  • "No one has ever tried to build a big supercomputer with these chips before," Parsons says.

    Like a serious flu season, this sort of thing happens again every few years, when a new generation of grad students and faculty think it's a really neat idea. The thing is, when all is said and done, these things probably still are not cost-effective right now. Sufficiently powerful FPGAs are expensive, and custom hardware is expensive. Furthermore, they are a pain to program, and even if they work perfectly they
  • Powerful (by the numbers) hardware, but no software, is the hallmark of general-purpose FPGA computing. It was exactly the same story 15 years ago, when I wrote the host/client apps for an embedded FPGA/DSP image coprocessor in Silicon Valley. Still no one has changed the story. The reason is that the FPGA is inherently parallel, but all complex application development is procedural. The underlying gates are programmed in Verilog, VHDL, or another hardware description language, which doesn't map well to the procedura
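
    To make the parallel-versus-procedural mismatch concrete: in an HDL, every block is hardware that is live on every clock edge, so the two counters in this hedged Verilog sketch advance simultaneously rather than one statement after another.

    // Both always-blocks run concurrently; there is no notion of one
    // executing "before" the other, unlike statements in a C program.
    module two_counters (
        input  wire       clk,
        input  wire       rst,
        output reg  [7:0] evens,
        output reg  [7:0] odds
    );
        always @(posedge clk)             // counter 1
            if (rst) evens <= 0;
            else     evens <= evens + 2;

        always @(posedge clk)             // counter 2, in parallel
            if (rst) odds <= 0;
            else     odds <= odds + 3;
    endmodule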
  • by AndyGasman ( 695277 ) on Wednesday June 01, 2005 @12:39PM (#12695435) Homepage Journal
    Ah, the fame and fortune...

    As a software design engineer at Nallatech, I'm pretty chuffed we came up on Slashdot.
    Not wanting to come across as a pedant...

    "software needed to run it, which is the key to the project, is vapourware"
    This is not the case: Nallatech's software is capable of providing the intercommunication (DIMEtalk [nallatech.com]), the low-level control (FUSE [nallatech.com]) and the algorithm implementation (double [nallatech.com] and single [nallatech.com] precision floating-point cores, as well as a new tool, currently in beta, to simplify their use by developers).

    "Nallatech, a company that makes software tools for FPGA programmers".
    This is true; however, we do an equal amount of hardware and firmware development.

    More info:
    Read our white paper [nallatech.com] about supercomputing for the oil and gas industry; reg required, I'm afraid.
    The footprint of this thing could be tiny: you can get 9 Virtex-II Pro FPGAs (using BenBLUE-3 [nallatech.com] modules) on a BenERA [nallatech.com] carrier card, and you can get 4 BenERAs into a cPCI rack, so to get 64 FPGAs you just need 2 standard cPCI racks. Since you can get 4 cPCI racks into your standard 19" server rack, that would kick out a massive 2 teraflops.

    Though I can't help but think Cell processors might kick our asses, at least a little bit, anyway. Sorry about all the links to Nallatech; I'm just pointing folk to the info. Oh, by the way, I think the 1 teraflop for 64 FPGAs is a very conservative estimate.
  • Is it about time we started working on the positronic brain? http://en.wikipedia.org/wiki/Positronic_brain

    Too bad Asimov was sooo wrong about them with regard to the so-called Three Laws. http://www.asimovonline.com/asimov_FAQ.html

    I wonder if you could buy insurance for "rapid positronic cascade failure"
  • Blue Gene is about 2.4 TFLOPS per rack (give or take half a teraflop), so if your "room" is a small kitchen closet, you have 5 TFLOPS in there. Most supercomputers are over-engineered to be dynamically configurable, because they are so expensive, but then you can add maybe 10% more investment cost to have something that is even more dynamic. Nowadays you can have a bedroom (20ft x 20ft) filled with racks of Blue Genes getting you about 36 TFLOPS+. Now with newer supercomputers with dual cores and QDR, a
  • It can only be a matter of time [kuro5hin.org] now. Although, in all likelihood, it will wind up spending most of its time optimizing Gentoo settings.
  • I've already got a piece of hardware that consumes a lot less power and can be reconfigured to solve ANY provably solvable problem. It also unfortunately relies on software "under development".

    It's called a PENCIL.

    Next week, I plan on holding a press conference when I announce my future-proof technology update, called PAPER. Existing PENCIL software will be fully compatible with PAPER, however document transfer from the previously used TABLETOP and CLAY-TABLET will require third-party software known as
  • "But can you do it?" cried Loonquawl. . . .

    "No, . . . But I'll tell you who can," said Deep Thought.

    "I speak of none other than the computer that is to come after me," intoned Deep Thought, his voice regaining its accustomed declamatory tones. "A computer whose merest operational parameters I am not worthy to calculate - and yet I will design it for you. . . . And I shall name it also unto you. And it shall be called ... The Earth."
  • here we come!
  • We're all suffering from the von Neumann bottleneck. We've all had pipe dreams of a new, much more efficient way of doing things. I had mine back in 1981-1982, and I call it the BitGrid [blogspot.com]; your name and specifics may vary, but it's probably also non-von Neumann.

    If you can express the algorithms you need in a non-serial form, and get them to operate in a data-flow or other architecture which can operate on all of the data at the same time, you can really kick up your compute performance.

    Of course, as long as
