Field Programmable Gate Arrays at MIT
Rhys Dyfrgi writes "There is an article in this month's Scientific American about the Raw microchip. Based around field-programmable gate arrays, they claim it will reach speeds between 10 and 15 gigahertz by the year 2010. Because it's an FPGA, it can be instantly reconfigured to perform any task. It is one of the central items for the Oxygen Project."
You can buy this stuff now (Score:1)
Not MIT (Score:1)
Pardon the hype (Score:1)
...say, that's one nice "handy" you got there. (Score:2)
Joe admits to being flattered, maybe a little curious, but is happily married and suggests you put your handy away before somebody sees the two of you and starts spreading rumors.
From 1996, a story (Score:1)
Maybe not incredibly technical (this is a _story_, not a proposal), but the idea isn't unheard of. It's a question of being able to make use of a vast number of extra gates: very much a neural net problem rather than a Von Neumann architecture.
I suspect the 'maybe logic' I went on about in the story might be as important a concept: it fascinates me that in _all_ the digital circuits we depend on, there's the capacity for non-boolean logic values. This simply depends on the analog characteristics of the digital circuits, which in some cases are quite predictable and in other cases not. But the resolution is phenomenal and there's no delay time for calculating relationships. I've been meaning to torture some random CMOS logic chips with non-logic values and see what comes out the output. Has anybody done this? So far I only know that inverters are relatively linear, which is hardly surprising.
Re:Don't post crap until you read this!!! (Score:2)
Re:Don't post crap until you read this!!! (Score:2)
"The glorious MEEPT would like to bring all the divided factions of linux into one big divided faction." - The Glorious MEEPT!!
"Since I have done been..." (Score:1)
Re:You forgot one! (Score:1)
also: I hate Jon Katz (and variations thereof)
Don't post crap until you read this!!! (Score:2)
1) Does it run Linux?
2) How about a Beowulf cluster? (a variation goes like "Damn, wouldn't a Beowulf cluster of (insert computer chip, iBook, Red Hat stock, anything really, here) be sweet?")
3) This isn't news for nerds!
4) FIRST POST
5) MEEPT
6) MS Sucks
7) Apple sucks
8) Where can I get the source code?
9) No source code? Damn, this thing is a piece of shit. (Note: source code is required even if the item under discussion does not have source code; it's a freedom thing.)
If I have missed any, please feel free to contribute to the standard Slashdot response. Once we have a good list, we should work on a program that will automatically go to a story and post one of these comments at random, saving valuable time for the people who would otherwise have had to let their brains rot while they typed.
Compiler, Handy 21 example, multitasking... (Score:1)
One thing that troubled me about the Handy 21 example: the author states that he has a pager, a cell phone, and a Palm Pilot, and that he could tell his Handy 21 to become one of those devices. Wouldn't he rather have a device that did ALL of those things? Ummm... I think Motorola has a CDMA StarTAC with a Rex clip-on that does all of that and more. Today. Not in the "near future".
Along those same lines, can this architecture be used for general-purpose computing? Or, for that matter, multitasking? What's the use of an architecture that can be highly specialized if you're trying to do non-specific things with it?
I really didn't buy the article. It all seemed way too pie-in-the-sky, with no real accomplishments and nothing new to report.
Re:Starbridge Systems is also working on this (Score:3)
or not not... (Score:4)
missing the point.
Creating a chip architecture/micro-architecture is a function of 4 fundamental tradeoffs:
Cycle time, Work per cycle, Area, and Time to market.
FPGAs have chosen low work per cycle. In the past, CPUs chose high work per cycle.
Now they are going in the direction of lower work per cycle (deeper pipelines, more latency).
Just a question of what you want.
Clock rate is just one choice of many, and has little to do with some magic FPGA architecture.
In fact, with today's FPGAs, 200MHz is fast; compare that to your 450MHz Pentium III...
The main architectural advantage of FPGAs is that a block of logic only needs to exist when you are using it. This is simply a form of caching. Instead of having all the HW there (but slower), you have only the subset you need (so it's faster). However, if you factor in the "misses" (the time where logic has to be reprogrammed), it's a much more complicated problem which doesn't have such an "obvious" solution...
Just like there are data sets that blow a CPU cache, there are probably algorithms that make re-programmability a liability.
On the issue of efficiency, FPGAs just have underused programmability and routing logic instead of the underused HW functional units of other architectures. Depends on the problem you are trying to solve...
-slew
ooooohh how promising this technology is.... (Score:1)
Somehow, if and when this eventually reaches the marketplace, I foresee a system that is competitive within itself and with others, and where the features are broken due to an upgrade in some obscure module. I could foresee an AOL-Oxygen, but if you decided to breathe that, you might poison another person who breathes MS-Oxygen when you try to talk together. But then again, AT&T-Oxygen is really poisonous, as they own the network....
The scientists who give these speeches always make it sound so cool......
Last week's news (Score:1)
SA had something about chips that could rewire themselves on the fly about a year ago as well.
It's the memory (Score:1)
Of course this means that you need to invent a new microarchitecture for every problem you want to solve, and that is why reconfigurable computing has not caught on. Very few people have the skill to create an efficient microarchitecture, and even for experts it takes a great deal of time. Software rocks because the microarchitecture is already defined; you know what the rules are. This gives most folk enough structure to solve their problem.
In regard to "caching", I've yet to see an application that actually benefits from dynamic reconfiguration at run-time as far as performance goes. In regard to cost, there are many shipping commercial applications of FPGAs that choose the FPGA configuration at boot time, or between operating modes. This isn't the same as reconfiguration as part of the execution of an algorithm. If an algorithm does reap a benefit from reconfiguration, it will be because of the FPGA's proximity to external memory, not because of the wacky logic you can build.
FPGAs may rock (as in world-record performance) for certain computing tasks, but for time-to-market DSPs still rule.
FPGA's have been around for a while (Score:1)
Re:FIRST POST!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! (Score:1)
Re:Don't post crap until you read this!!! (Score:1)
Hype (Score:1)
with only promises and no products. I mean, I know it's a research project, but they don't even have a good compiler, and they claim they'll be THE chips of the future. Now I know why the horrible monster called X spread around - it must be the MIT flair for hype.
Re:Genetic algorithms and FPGA (Score:1)
This is NOT fluff. (Score:1)
Starbridge Systems' HAL (Score:1)
What's next? (Score:1)
21 (Score:1)
Re:Lacking a little imagination? No. (Score:1)
I've known about those for a while now, but I've only just heard of FPGAs recently. Since FPGAs are so much closer to reality than those are, I'm wondering what else I've missed.
Re:Don't post crap until you read this!!! (Score:1)
Yeah, what the hell is MEEPT? (Score:1)
It's been bugging me. Is it some MS-related thing?
Just a thought... (Score:1)
From what I'm seeing in the above comments, FPGAs aren't really that good at general-purpose computing. Great. How about we use FPGAs as add-on co-processors, programmed by the software that supports their use? Such a system could be used for video acceleration, sound processing, and algorithmic acceleration (think: encryption, simulation, etc.), and it would be worlds better for performance, because there is still a traditional CPU in the machine.
Anyways, just my US$.02
-- ioctl
Does it run Linux? (Score:1)
How about a Beowulf cluster? (Score:1)
This isn't news for nerds (Score:1)
FIRST POST (Score:1)
MEEPT (Score:1)
MS SUCKS (Score:1)
Apple sucks (Score:1)
Where can I get the source code? (Score:1)
Had to be said (Score:1)
This isn't new (Score:1)
Re:Starbridge Systems is also working on this (Score:1)
FPGAs can do some neat things, but you are not going to build a fast general purpose computer out of FPGAs. They are relatively slow and make inefficient use of silicon. They do a good job on control and glue logic, plus you can fix design errors and add features without having to rework the hardware.
This is Necessary, But We Need 3D Architectures (Score:1)
The comments I am going to make are not about what we will need in 5 or 10 years, but about what we will need at some unnamed future time when we finally need it. And they are not about the Raw architecture itself, but about architectures that follow its general theory.
And yes, I want quantum computers, but let's leave them out of this discussion.
End Disclaimer
We need this type of chip because, really, our current chip architectures can only scale so far. They have internal bottlenecks, and IO bottlenecks, and though we keep squeezing more and more out of them, WE CAN'T KEEP IT UP FOREVER.
So, what is the best architecture theory we have? What is the most we can squeeze into and out of a processor of a given size?
Well, ultimately, that is a question of IO. Personally, I think this form of chip is the best 2D approach (though maybe some wacky fractal approach might be better), but even it is limited by its own IO, and keeping all the processor units busy becomes harder and harder with each row and column you add (at a damn fast rate).
So, what I am ranting about is that we need this, but we need it to have depth as well (at some future time when we can build such a thing), so that we have a smaller bottleneck, because a square's area/perimeter ratio is much worse than a cube's volume/surface ratio.
ex:
A square processor with 1,000,000 cells has 3,996 external cells for IO, though the 4 cells in the corners only pipe to other external cells and aren't really useful (though you leave them in, just in case).
A cubic processor with 1,000,000 cells has 58,808 external cells for IO, with 8 corner cells.
The average distance between cells is also MUCH smaller in the cube, allowing for more efficient internal communication.
Which one is going to have an easier time connecting to the outside world?
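Those figures check out if you take the 1,000,000 cells as a 1000x1000 square and a 100x100x100 cube; here's a quick Python sketch to verify them (the grid dimensions are my assumption, since they're the only ones that give a million cells):

def square_boundary(n):
    # n x n array: total cells minus the interior (n-2) x (n-2) block
    return n * n - (n - 2) ** 2

def cube_boundary(n):
    # n x n x n array: total cells minus the interior (n-2)^3 block
    return n ** 3 - (n - 2) ** 3

print(square_boundary(1000))   # 3996 boundary cells out of 1,000,000
print(cube_boundary(100))      # 58808 boundary cells out of 1,000,000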
my $0.02
-Crutcher
Optical Computers / Coolant (Score:1)
On cooling: silicon-based chips could still be built 3D if one laid a lattice of a VERY heat-conductive material through them (like gold or platinum) and possibly dunked them in some nitrogen.
In short, let's get some hairy golf balls into our computers.
-Crutcher
Re:hmmm... (Score:2)
Re:Genetic algorithms and FPGA (Score:1)
Re:Starbridge Systems is also working on this (Score:1)
Starbridge Systems is also working on this (Score:2)
Starbridge Systems (Score:1)
Starbridge Systems [starbridgesystems.com]
http://www.starbridgesystems.com/home/mainpage.
Reprogramming (Score:2)
In addition to FPGAs' lower native gate speed and the inefficiencies of their cell-based logic vis-à-vis full-custom processors, there's a serious problem with the time it takes to reprogram an FPGA. To put this in perspective, let's say that the time to perform computational work can be expressed as AX + B, where A is the time to perform an operation, X is the number of times the operation is repeated before moving on to a different operation, and B is the time to program that operation into the processor. For a traditional processor, B is zero. For an FPGA, A might be smaller than it is for the traditional processor, but B is very large. It doesn't take a rocket scientist[1] to figure out, therefore, that FPGAs win when X is large, i.e. when a task is very highly repetitive. There are a lot of tasks that fit this mold - audio and video processing, discrete-element simulations, etc. - but many of the most common everyday computational tasks you and I might face do not. For those cases, reprogramming overhead would be a killer.
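To make the AX + B tradeoff concrete, here's a minimal sketch in Python of the break-even repetition count. The constants are invented purely for illustration, not measured figures for any real part:

# FPGA wins when A_fpga * X + B_fpga < A_cpu * X,
# i.e. when X > B_fpga / (A_cpu - A_fpga).
A_cpu  = 10e-9   # 10 ns per operation on a conventional CPU (assumed)
A_fpga = 2e-9    # 2 ns per operation once the FPGA is configured (assumed)
B_fpga = 50e-3   # 50 ms to program the operation into the FPGA (assumed)

break_even = B_fpga / (A_cpu - A_fpga)
print("FPGA wins after %.0f repetitions" % break_even)   # ~6,250,000

With these made-up numbers the task has to repeat over six million times before reconfiguration pays off, which is exactly why highly repetitive signal-processing workloads fit and everyday interactive tasks don't.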
Is there hope? Yes, absolutely. Lots of people are working on faster reprogramming, because it's known to be the One Big Problem in reconfigurable computing. Even better, work on partial reprogrammability is increasing. This is really cool because it would essentially allow you to dedicate part of the processor to functions you always need[2], and then use the rest to cache logic very much as data is cached now. In its simplest form, this could mean that all the parts of a traditional processor except for the actual functional units are permanent, and the cached items are instructions much like the instructions we have today. Need a population-count instruction? Allocate logic space and an opcode, reprogram the space, and voila! When you no longer need that instruction it'll fall out of the cache to be replaced by another instruction you do need. Of course, when the von Neumann model itself becomes the bottleneck then maybe the cached items would have interfaces other than instruction opcodes and register files, but defining those interfaces to allow the sort of logic-caching I've described is still a major conceptual problem worthy of a doctoral thesis or two.
[1] What's so special about rocket scientists, anyway? There are plenty of professions nowadays requiring greater knowledge and skill.
[2] The permanent part could even be implemented full-custom style, while the reprogrammable part remains cell-based. Altera had something called an SPGA which was like this, but I can't find it any more.
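Riffing on the logic-caching idea above, here's a toy sketch in Python: an LRU cache of "instruction" logic slots, where each miss pays a reprogramming penalty. The slot count and costs are invented for illustration; real partial reconfiguration is far messier.

from collections import OrderedDict

REPROGRAM_COST = 1000   # arbitrary time units to load one logic block
EXECUTE_COST   = 1      # time units per use once the block is resident

class LogicCache:
    def __init__(self, slots):
        self.slots = slots
        self.resident = OrderedDict()   # opcode -> True, in LRU order
        self.time = 0

    def run(self, opcode):
        if opcode in self.resident:
            self.resident.move_to_end(opcode)      # refresh LRU position
        else:
            if len(self.resident) >= self.slots:
                self.resident.popitem(last=False)  # evict least recently used
            self.resident[opcode] = True
            self.time += REPROGRAM_COST            # the "miss" penalty
        self.time += EXECUTE_COST

cache = LogicCache(slots=2)
for op in ["popcount"] * 500 + ["crc32"] * 500 + ["popcount"] * 500:
    cache.run(op)
print(cache.time)   # 1500 executions + 2 reprograms = 3500 time units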
Re:"Instruction Set Architecture" sic (Score:2)
Re:This is NOT fluff. (Score:2)
That is incorrect. Gate speeds for FPGAs are _lower_ than for full-custom silicon.
Reconfigurable computing (Score:1)
Re:Genetic algorithms and FPGA (Score:1)
When they finished the training and cracked open the black boxes, they found that the networks were nearly identical to the corresponding parts of the brain. The visual systems had organized into layers and hemispheres, while the linguistic network was organized geographically by word type--verbs in one area, nouns in another--and by sub-type (object/subject, proper and common nouns, etc.). AFAIK, they haven't figured out how it happened.
There are some interesting implications in all of this.
Re:hmmm... (Score:1)
forget this.... (Score:1)
Yeah, those and the "if you're a devoted fan of [insert company, "geek" celebrity, etc.] you would have seen this already" and/or "/.'s so slow.. " blah, blah, blah
Also, this seems inevitable when some new software is released, or in response to the d.net cracking situation:
"I tried to tell them already, but they didn't want [my input]/[to give me credit]/[to add another name to their credits list]".. the been-there-done-that attitude.
Re:From 1996, a story (Score:1)
> linear, which is hardly surprising
There are problems, though. Standard CMOS logic (HC and friends) is, not surprisingly, optimized for binary transitions. The chips have a small linear range centered around 2.5 volts, but draw significant current there. This is termed "class A current" and arises when the input hovers around the logic threshold. The current is quite high because the two output transistors on the chip are fighting to pull the output in opposite directions. When you do your experiment, you probably want to put a resistor on the power line going into the chip, about 1k, so you don't fry it. Packaged crystal oscillators work this way. For most things, though, you'll be better off with an opamp.
Re:Sounds a bit too good to be true. (Score:2)
One thing you can say for them: like the company in the earlier article, they have no shortage of self-confidence!
Re:Sounds a bit too good to be true. (Score:1)
DSP Chips? (Score:1)
I remember at that time DSPs were exalted for their reprogrammability and speed. It seems that all most consumers got out of it were software-driven WinModems... (although I know there are a lot of specialized DSP applications out there).
Will FPGA chips be relegated to similarly specialized tasks (like video compression or speech recognition), or will they truly be useful for general-purpose computers?
Re:Fluff (Score:1)
Re:Not MIT (Score:1)
-awc
not really feasible yet (Score:1)
Don't get me wrong. FPGAs are great for prototyping, but for real speed, ASICs will always be the best.
Re:DSP Chips? (Score:1)
WinModems use the x86 processor to do DSP operations, something it's not very good at. Other modems have a DSP chip that is optimized for just those sorts of operations, but less reprogrammable than most microprocessors.
Jim
Speculation on a fast Linux+FPGA platform (Score:1)
Ask yourself the question: what is the fundamental architectural difference between an open source environment and the traditional proprietary-binary environments of the last two decades? Are there any new assumptions in this new era that enable something different? Well, obviously, the OS and hardware can now reasonably expect, if not mandate, access to the source code, not just the binary code.
Putting aside all the well-characterized FSF/Raymond reasoning for a sec., could you take advantage of this source-code availability to somehow build a faster, more efficient platform?
I've wondered whether this is indeed possible with FPGAs.
Since with FPGAs you can actually configure the circuits to perform a specific algorithm, which might be faster than the same algorithm run on general-purpose circuitry, and since the FPGA can be reprogrammed as often as needed by having a C compiler generate the appropriate netlist and send it to the FPGA, why not build a run-time Linux environment that recompiles the FPGA circuitry on the fly for the specific tasks (processes) being executed?
I've glossed over the various reasons why this isn't a cake-walk as anyone with FPGA expertise would realize, but I would be interested in an FPGA expert's assessment of either A) why this will never work, or B) what the top barriers to overcome would be. Is the gap between general purpose CPU clock rates and FPGA clock rates too great to ever realistically be surmounted in such a scheme? There's a big payoff for FPGA vendors if they could ever figure out how to make a competitive general-purpose platform; from an open source perspective, a platform that requires open source to deliver faster performance than Wintel would likewise be quite attractive.
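As a pure thought experiment, here's roughly what the dispatch layer of such a platform might look like in Python. Every callable here (compile_to_netlist, program_fpga, run_on_fpga, run_on_cpu) is a hypothetical stand-in, not any real vendor API:

import hashlib

netlist_cache = {}   # source hash -> compiled netlist (hypothetical)

def run_kernel(source_code, args, compile_to_netlist, program_fpga,
               run_on_fpga, run_on_cpu):
    # Because the runtime has the source (the open source assumption
    # above), it can compile hot kernels to hardware and cache the result.
    key = hashlib.sha1(source_code.encode()).hexdigest()
    netlist = netlist_cache.get(key)
    if netlist is None:
        try:
            netlist = compile_to_netlist(source_code)   # slow: place & route
            netlist_cache[key] = netlist
        except Exception:
            return run_on_cpu(source_code, args)        # graceful fallback
    program_fpga(netlist)    # reconfiguration cost paid here
    return run_on_fpga(args)

The hard parts the parent asks about (the clock-rate gap, and place-and-route times measured in minutes or hours rather than microseconds) all live inside compile_to_netlist and program_fpga.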
--LP
Re:Sounds a bit too good to be true. (Score:1)
Hopefully low-level commands will be implemented as libraries rather than being built into each compiler, as that will make it easier to change libraries if a faster or better one comes out. With any luck, we'll end up using a nice high-level language, at least for most apps.
---
Re:DSP Chips? (Score:1)
NO. FPGAs are specifically made so that they can be used for any application. How they end up being used will depend on cost/performance.
---
Re:Don't post crap until you read this!!! (Score:1)
---
Re:This is Necessary, But We Need 3D Architectures (Score:1)
A cubic CPU made from current materials would need to be cooled a LOT, and would need to incorporate materials through which heat flows quickly.
---
Re:the problem with 3D chips... (Score:1)
---
Re:Don't post crap until you read this!!! (Score:1)
---
Re:DSP Chips? (Score:1)
I dunno about that, but there is no way a DSP is as versatile as an FPGA. IIRC, FPGAs effectively _physically reconfigure themselves_, so you could, for example, tell one to be a ZX80 one minute and a 6502 the next, or even both on the same substrate with something else controlling them.
Can't do that with a DSP.
Re:Don't post crap until you read this!!! (Score:1)
Genetic algorithms and FPGA (Score:3)
The cooler part is that no one can figure out how the chip works; he didn't implement a clock -- one evolved, using fewer components than the simplest example given in any engineering text. There are a few components in there that don't seem logically necessary, but their removal results in a non-functional chip. Theoretically, the evolutionary procedure allowed the chip to exploit subtle properties of the materials used in its composition, like the small resistance changes caused by heat or electromagnetic induction.
It's a good read if you're interested.
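For anyone who hasn't seen the technique, a bare-bones genetic algorithm looks something like the Python sketch below. It evolves a bitstring against a software scoring function; in the hardware experiments described above, the fitness step would instead load the bitstring into a real FPGA and measure its behavior. This is a generic illustration, not the experiment's actual code.

import random

GENOME_LEN  = 64    # stand-in for an FPGA configuration bitstream
POP_SIZE    = 50
GENERATIONS = 200
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # The real experiments scored a programmed chip's behavior; here we
    # just count bits matching an arbitrary target pattern.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]    # keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), "/", GENOME_LEN)   # usually converges to 64/64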
Spooky... (Score:1)
http://www.sciam.com/1999/0899issue/0899agarwalbox1.html [sciam.com]
;)