Star Bridge FPGA "HAL" More Than Just Hype 120

Gregus writes "Though Star Bridge Systems has been mentioned or discussed in previous /. articles, many folks (myself included) presumed that its promises were hype or a hoax. Well, the good folks at NASA Langley Research Center have been making significant progress with this thing. They have more info and videos on their site, beyond the press release and pictures posted here last year. So it's certainly not just hype (though $26M for the latest model is a bit beyond the $1,000 PC target)."

  • by Anonymous Coward on Saturday February 15, 2003 @01:31PM (#5308972)
    According to this presentation [nasa.gov], the NSA is involved with two projects.

    Going from 4 GFLOPS in Feb '01 to 470 GFLOPS in Aug '02 for ten FPGAs, that's roughly 120 times faster in a little over a year. Not bad.

    Any thoughts on what this means for crypto cracking capability?

  • by $$$$$exyGal ( 638164 ) on Saturday February 15, 2003 @01:36PM (#5308989) Homepage Journal
    Here's the Google search for just the word: "hypercopmuter [google.com]"

    Your original search: hypercopmuter returned zero results.
    The alternate spelling: hypercomputer returned the results below.

    Here's a Feb 1999 Wired article [wired.com] that explains what Star Bridge considers a hypercomputer.

    --naked [slashdot.org]

  • More information (Score:5, Informative)

    by olafo ( 155551 ) on Saturday February 15, 2003 @01:38PM (#5308991)
    More technical information can be found in MAPLD Paper D1 [klabs.org] and other reports [nasa.gov]. NASA Huntsville, NSA, USAF (Eglin), University of South Carolina, George Washington University, George Mason University, San Diego Supercomputer Center, North Carolina A&T and others have Star Bridge Hypercomputers they are exploring for diverse applications. The latest Star Bridge HC contains Xilinx FPGAs with 6 million gates, compared to the earlier HAL-Jr with only 82,000 gates. Costs are nowhere near $26 million; NASA spent approximately $50K for two Star Bridge systems.
  • FPGA experiences (Score:5, Informative)

    by goombah99 ( 560566 ) on Saturday February 15, 2003 @01:48PM (#5309035)
    I've brushed up against reconfigurable computing engineers in various applications over the years. The last one was trying to process laser radar returns coming in at gigabits per minute so we could do real-time 3-D chemical spectroscopy of the atmosphere at long range. The problem with conventional hardware was that the buses were too slow, the data rate was too fast to cache, and there was too much data to archive on disk. You couldn't efficiently break the task across multiple CPUs, since just transferring the information from one memory system to the next would become the bottleneck and break the system.

    FPGAs worked pretty well here because they could handle the fire-hose data rate from front to back. Their final output was a small number of processed bytes, which could then go to a normal computer for display and storage.
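
    Just to illustrate the shape of that front-to-back processing in software terms, here is a toy Python sketch that reduces a (simulated) fire hose of samples to a few summary bytes without ever buffering the stream. It is purely illustrative and has nothing to do with the actual lidar hardware.

        import random
        import struct

        def sample_stream(n):
            """Stand-in for the fire hose: yields raw samples one at a time."""
            for _ in range(n):
                yield random.random()

        def reduce_stream(stream):
            """Front-to-back processing: keep running statistics, never buffer the stream."""
            count, total, peak = 0, 0.0, 0.0
            for x in stream:
                count += 1
                total += x
                peak = max(peak, x)
            return struct.pack("!ff", total / count, peak)   # a few processed bytes out

        summary = reduce_stream(sample_stream(1_000_000))
        print(len(summary), "bytes out of a million samples")   # 8 bytes for a million samples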

    The problems the engineers had were twofold. First, in the early chips there were barely enough gates to do the job; in the later ones from Xilinx there were plenty of transistors, but they were really hard to design properly. The systems got into race conditions, where you had to use software to figure out the dynamic properties of the chip to see whether two signals would arrive at the next gate in time to produce a stable response. You had to worry about where on the chip two signals were coming from. It was ugly, and either you accepted instability or failed prototypes, or you put in extra gates to handle synchronization, which slowed the system down and wasted precious gates.
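
    As a rough sketch of the kind of check that timing software performs (greatly simplified, with made-up numbers), you sum the routing delays along the paths converging on a gate and flag the design if the latest arrival leaves less margin than the required setup window:

        # Toy arrival-time check; real static timing analysis is far more involved.
        CLOCK_PERIOD_NS = 10.0
        SETUP_NS = 0.8

        def arrival(path_delays_ns):
            """Total delay of a signal path, summed over its routing segments."""
            return sum(path_delays_ns)

        def meets_timing(path_a, path_b):
            """Do both signals reach the gate early enough to settle before the clock edge?"""
            latest = max(arrival(path_a), arrival(path_b))
            return latest <= CLOCK_PERIOD_NS - SETUP_NS

        # Two signals converging on one gate via different routes across the chip.
        print(meets_timing([2.1, 3.0, 1.2], [4.0, 4.9]))   # True  -> stable
        print(meets_timing([2.1, 3.0, 1.2], [6.5, 3.4]))   # False -> potential race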

    Still, my impression at the time was: wow, here is something that is going to work; it's just a matter of getting better hardware compilers. Since then, Los Alamos has written a compiler that compiles C to hardware and takes into account all the details it used to take a team of highly experienced engineers/artists to solve.

    Also, someone leaked a project going on at National Instruments that really lit up my interest in this. I don't know what ever became of it, maybe nothing, but the idea was this. National Instruments makes a product called "LabVIEW", a graphics-based programming language whose architecture is based on "data flows" rather than procedural programming. In data flow, objects emit and receive data asynchronously. When an object detects that all of its inputs are valid data, it fires, does its computation (which might be procedural in itself, or might be a hierarchy of data-flow subroutines hidden inside the black box of the object), and emits its results as they become valid. There are no "variables" per se, just wires that distribute emitted data flows to other waiting objects. The nice thing about this language is that it's wonderful for instrumentation and data collection, since you don't always know when data will become available or in what order it will arrive from different sensors. Also, there is no such thing as a syntax error, since it's all graphical wiring, no typing, so it is very safe for industrial control of dangerous instruments.
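
    To make the data-flow idea concrete, here is a minimal Python sketch of a node that fires only once all of its named inputs hold valid data. The class and wiring here are made up for illustration and have nothing to do with LabVIEW's actual internals.

        class Node:
            """A data-flow node: fires its function once every input port holds a value."""
            def __init__(self, func, inputs):
                self.func = func
                self.ports = {name: None for name in inputs}   # None means "no data yet"
                self.listeners = []                            # downstream (node, port) pairs

            def wire_to(self, node, port):
                self.listeners.append((node, port))

            def receive(self, port, value):
                self.ports[port] = value
                if all(v is not None for v in self.ports.values()):   # all inputs valid -> fire
                    result = self.func(**self.ports)
                    self.ports = {name: None for name in self.ports}  # reset for the next firing
                    for dst, dst_port in self.listeners:
                        dst.receive(dst_port, result)

        # Wire up: an adder feeds a printer; data may arrive on the adder's ports in any order.
        adder = Node(lambda a, b: a + b, inputs=["a", "b"])
        printer = Node(lambda x: print("sum =", x), inputs=["x"])
        adder.wire_to(printer, "x")

        adder.receive("b", 3.0)    # nothing happens yet, port "a" is still empty
        adder.receive("a", 4.5)    # both ports valid -> fires -> prints "sum = 7.5"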

    Anyhow, the idea was that each of these "objects" could be dynamically blown onto an FPGA. Each would be a small enough computation that it would not have design complications like race conditions, and all the objects would be self-timed with asynchronous data flows.

    The current state of the art seems to be that no one is widely using the C-to-hardware or flow-control languages. Instead they are still using these hideous dynamical modelling languages that don't meet programmers' needs because they require too much knowledge of the hardware. I don't know why; maybe the alternatives are just too new.

    However, these things are not a panacea. For example, recently I went to the FPGA engineers here with a problem in molecular modeling of proteins. I wanted to see if they could put my Fortran program onto an FPGA chip. They could not, because 1) too much stored data was required, and 2) there was not enough room for the whole algorithm. So I thought maybe they could put some of the slow steps onto the FPGA chip: for example, given a list of 1000 atom coordinates, return all ~1 million pairwise distances. This too proved unworkable, for a different reason. When these FPGA chips are connected to a computer system, the bottleneck of getting data into and out of them is generally worse than that of a CPU (most commercial units sit on PCMCIA slots or the PCI bus), so the proposed calculation would be much faster on an ordinary microprocessor, since most of the time is spent on reads and writes to memory! There was, however, one way they could do it faster: pipeline the calculations, say 100 or 1000 deep, so that you submit one array and then go pick up the answer to the array you asked about 1000 arrays ago. That would have complicated my program too much to be useful.
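
    For reference, the pairwise-distance kernel in question is only a few lines of NumPy on a conventional machine; this is a sketch of the calculation as described above, not of anything the FPGA folks actually built.

        import numpy as np

        def pairwise_distances(coords):
            """All N x N pairwise distances for an (N, 3) array of atom coordinates."""
            diff = coords[:, None, :] - coords[None, :, :]   # (N, N, 3) displacement vectors
            return np.sqrt((diff ** 2).sum(axis=-1))         # (N, N) distance matrix

        atoms = np.random.rand(1000, 3)        # 1000 atoms -> a million pairwise distances
        distances = pairwise_distances(atoms)
        print(distances.shape)                 # (1000, 1000)

    Even on a CPU, most of the time here goes to memory traffic rather than arithmetic, which is exactly the point the engineers were making about reads and writes dominating.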

    These new FPGAs are thus exciting because they are getting so large, and have so much onboard storage and such fast internal buses, that a lot of the problems I just mentioned may vanish.

    My knowledge of this is about a year out of date, so I apologize if some of what I said is not quite state of the art, but I suspect it reflects the commercially available world.

  • by KingPrad ( 518495 ) on Saturday February 15, 2003 @01:53PM (#5309055)
    There is a lot of work being done on adaptive computing involving a combination of a general CPU and an FPGA. The CPU takes care of normal work, but processing-intensive repetitive tasks can be pushed onto the FPGA. The FPGA is basically reconfigured as needed into a dedicated signal processor which can churn through a set of complex instructions in a single step rather than in a few dozen clock cycles on a general-purpose CPU.

    The way it works, then, is that a board is built with a normal CPU and an FPGA next to it. At compile time a special compiler determines which algorithms would bog down the processor and develops a single-cycle hardware solution for the FPGA. That information then becomes part of the program binary, so at load time the FPGA is configured accordingly, and when necessary it processes all that data, leaving the CPU free. The FPGA can of course be reconfigured several times during the program, the point being to adapt as necessary. The time to reconfigure the FPGA is unimportant when running a long program doing scientific calculations and such.
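
    Purely to illustrate the software side of that pattern (not any particular vendor's toolchain), here is a toy Python sketch in which a runtime routes a hot kernel to whatever accelerated implementation was configured at load time and falls back to the plain CPU version otherwise. All names are invented.

        def cpu_dot(a, b):
            """Plain software fallback for the hot kernel."""
            return sum(x * y for x, y in zip(a, b))

        class AcceleratorTable:
            """Maps kernel names to whatever implementation was configured at load time."""
            def __init__(self):
                self.kernels = {}

            def configure(self, name, impl):
                # In a real system this is where the FPGA bitstream would be loaded.
                self.kernels[name] = impl

            def call(self, name, *args, fallback=None):
                impl = self.kernels.get(name, fallback)
                return impl(*args)

        accel = AcceleratorTable()
        accel.configure("dot", cpu_dot)   # stand-in for a compiler-generated hardware kernel

        print(accel.call("dot", [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], fallback=cpu_dot))   # 32.0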

    It's a pretty nifty system. Some researchers have working compilers, and they have found 6-50x speedups on many operations. The whole program won't speed up that much, of course, but it leaves the main CPU more free when running repetitive scientific and graphics programs.

    You can find information in the IEEE archives or search Google for 'adaptive computing'. It's a neat area with a lot of promise.

  • by zymano ( 581466 ) on Saturday February 15, 2003 @02:52PM (#5309262)
    From what I have read about reconfigurable chips in Scientific American and on other websites, while they can do wonders for certain applications, they still can't match the wiring of a 'vector processor'. Vector chips are very efficient, and I have always wondered why the industry has turned its back on them. The Linux/Intel solution is not as efficient as everyone thinks: too much heat, and networking the chips has its difficulties. The Japanese NEC vector supercomputer is now way ahead of the USA. If you don't believe me, go here and read what top US scientists say; it's a good article. Go down three articles. NewsFactor Portal [newsfactor.com]

"God is a comedian playing to an audience too afraid to laugh." - Voltaire

Working...