Field Programmable Gate Arrays at MIT

Rhys Dyfrgi writes "There is an article in this month's Scientific American about the Raw microchip. Based around field programmable gate arrays, they claim it will reach speeds between 10 and 15 gigahertz by the year 2010. Because it's an FPGA, it can be instantly reconfigured to perform any task. It is one of the central items for the Oxygen Project."
  • by Anonymous Coward
    It's called "reconfigurable computing" and has been around for years. Take a look at http://www.embedded-solutions.ltd.uk/Projects/storm.htm
  • by Anonymous Coward
    It's not all of MIT. At first, it was just the Media Lab, humiliating the rest of us. Now LCS, once a good, strong, proud lab, has picked up Dertouzos as leader. Dertouzos has been doing everything he can to turn LCS into another Media Lab. It gets funding, but it makes us embarrassed by our school. In the long run, it'll reduce our school to crap.
  • by Anonymous Coward
    As an MIT student, I have to say I'm really embarrassed about this. Traditionally, MIT didn't churn out this type of crap hype. Then the Media Lab showed up, and showed us they were willing to do anything for funding. Now LCS has picked up Dertouzos as its head, and it's slowly starting to do the same thing. Most of LCS is still good, and the AI Lab is still clean, but parts of LCS and the Media Lab have been infested with this hype machine.
  • I like the scenario they describe: "You are on business in Paris. You excuse yourself from the meeting, go outdoors, pull out your handy and ask it to contact Joe."

    Joe admits to being flattered, maybe a little curious, but is happily married and suggests you put your handy away before somebody sees the two of you and starts spreading rumors.
  • The First Computer Person [airwindows.com]
    Maybe not incredibly technical (this is a _story_, not a proposal), but the idea isn't unheard of. It's a question of being able to make use of a vast number of extra gates: very much a neural net problem rather than a von Neumann architecture.
    I suspect the 'maybe logic' I went on about in the story might be as important a concept. It fascinates me that in _all_ the digital circuits we depend on, there's the capacity for non-boolean logic values. This simply depends on analog characteristics of the digital circuits, which in some cases are quite predictable and in other cases not, but the resolution is phenomenal and there's no delay time for calculating relationships. I've been meaning to torture some random CMOS logic chips with non-logic values and see what comes out the output. Has anybody done this? So far I only know that inverters are relatively linear, which is hardly surprising :)
  • Unfortunately, there's a bit too little of #5 lately, and too much of all the other crap.
  • MEEPT!! is an eccentric user of slashdot who has been posting for nearly two years or so. He used to post a lot more in earlier times, but ever since Slashdot killed his account, forcing him to create a new one, he seems to be a less frequent visitor. His posts are usually insightful, but he presents his arguments/information in a humorous or unusual way, such as haikus and other poems, leading many people to misunderstand him. He's also not a Linux zealot, which gets him in trouble with many of the fanatics around here. I personally find him humorous.

    "The glorious MEEPT would like to bring all the divided factions of linux into one big divided faction." - The Glorious MEEPT!!
  • What a way to start; the lesson is: preview before posting
  • ah yes, or the, damnit you posted this one last week

    also: I hate Jon Katz(and variations thereof)
  • Since I have been researching Slashdot for over 20 years now with a broad range of test subjects from around the world, I have discovered that these comments are posted, without fail, no matter what the article, and usually before the poster has read the article. I will cover them here so no one needs to say them and waste more valuable space:

    1) Does it run Linux?
    2) How about a Beowulf cluster (a variation goes like "Damn, wouldn't a Beowulf cluster of (insert computer chip, iBook, Red Hat stock, anything really, in here) be sweet?")
    3) This isn't news for nerds!
    4) FIRST POST
    5) MEEPT
    6) MS Sucks
    7) Apple sucks
    8) Where can I get the source code?
    9) No source code? Damn, this thing is a piece of shit. (Note: source code is required even if the item under discussion does not have source code; it's a freedom thing.)

    If I have missed any, please feel free to contribute to the standard Slashdot response. Once we have a good list, we should work on making a program that will automatically go to a story and post one of these comments at random, saving valuable time for the people who would have had to spend that time letting their brain rot while they typed.
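
    Since the comment above practically specifies the program, here is a throwaway sketch of it in Python. The list of responses comes straight from the comment; actually fetching a story and posting to it are left out, so this only picks the comment to post.

        import random

        # The standard Slashdot responses, per the list above.
        CANNED_RESPONSES = [
            "Does it run Linux?",
            "Damn, wouldn't a Beowulf cluster of these be sweet?",
            "This isn't news for nerds!",
            "FIRST POST",
            "MEEPT",
            "MS Sucks",
            "Apple sucks",
            "Where can I get the source code?",
            "No source code? Damn this thing is a piece of shit.",
        ]

        def auto_comment():
            # Pick one of the standard responses at random.
            return random.choice(CANNED_RESPONSES)

        print(auto_comment())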
  • I think the big caveat is at the end of the article, which seems to suggest that a compiler for this kind of architecture might be really difficult to develop, and difficult to make deliver the theoretical performance gains of the new architecture.

    One thing that troubled me about the Handy 21 example: the author states that he has a pager, a cell phone and a Palm Pilot. He could tell his Handy 21 to become one of those devices. Wouldn't he rather have a device that did ALL of those things? Ummm... I think Motorola has a CDMA StarTAC with a Rex clip-on that does all of that and more. Today. Not in the "near future".

    Along those same lines, can this architecture be used for general-purpose computing? Or, for that matter, multitasking? What's the use of having an architecture that can be highly specialized if you're trying to do non-specific things with it?

    I really didn't buy the article. It all seemed way too pie-in-the-sky with no real accomplishments and nothing new to report.


  • by substrate ( 2628 ) on Monday July 26, 1999 @02:16AM (#1784899)
    Right now I hold Starbridge Systems [starbridgesystems.com] in as much esteem as I hold American Computer [accpc.com]. Starbridge Systems makes provably bogus performance claims. They're not overstated, nor are they misleading; they're bogus. Maybe they do have something, but their marketing department is overzealous and/or stupid; either way, having such obviously false information sure makes it look like a scam.
  • by slew ( 2918 ) on Sunday July 25, 1999 @08:04PM (#1784900)
    People who have programmed FPGAs realize that claims that FPGAs run at a faster clock rate miss the point.

    Creating a chip architecture/micro-architecture is a function of 4 fundamental tradeoffs: cycle time, work per cycle, area, and time to market.

    FPGAs have chosen low work per cycle. In the past, CPUs chose high work per cycle. Now they are going in the direction of lower work per cycle (deeper pipelines, more latency). Just a question of what you want.

    Clock rate is just one choice of many, and has little to do with some magic FPGA architecture. In fact, with today's FPGAs, 200MHz is fast; compare that to your 450MHz Pentium III...

    The main architectural advantage of FPGAs is that a block of logic only needs to exist when you are using it. This is simply a form of caching. Instead of having all the HW there (but slower), you have only the subset you need (so it's faster). However, if you factor in the "misses" (the time where logic has to be reprogrammed), it's a much more complicated problem which doesn't have such an "obvious" solution (see the sketch after this comment)...

    Just like there are data sets that blow a CPU cache, there are probably algorithms that make re-programmability a liability.

    On the issue of efficiency, FPGAs just have underused programmability and routing logic instead of underused HW functional units in other architectures. Depends on the problem you are trying to solve...

    -slew
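
    To make the caching analogy above concrete, here is a minimal back-of-the-envelope model in Python, in the spirit of average-memory-access-time formulas. The cycle times, miss rate, and reconfiguration cost are all invented for illustration; they are not measurements of any real FPGA or CPU.

        # Average cost per operation when logic may have to be swapped in:
        #   t_avg = t_op + miss_rate * t_reconfigure
        # (analogous to average memory access time with cache misses)

        def avg_op_time(t_op_ns, miss_rate, t_reconfig_ns):
            return t_op_ns + miss_rate * t_reconfig_ns

        # Hypothetical numbers: a 450MHz CPU always has its logic present,
        # while an FPGA occasionally pays a multi-millisecond reprogramming hit.
        cpu = avg_op_time(t_op_ns=2.2, miss_rate=0.0, t_reconfig_ns=0.0)
        fpga = avg_op_time(t_op_ns=5.0, miss_rate=1e-6, t_reconfig_ns=10e6)

        print("CPU : %.1f ns/op" % cpu)   # 2.2 ns/op
        print("FPGA: %.1f ns/op" % fpga)  # 5.0 + 10.0 = 15.0 ns/op: rare misses still dominate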
  • It's amazing how whenever I read about this stuff I get a warm fuzzy feeling. All of this great technology close at hand, a computer that uses a natural interface, human voice, and one that helps bring information closer to you at the beckoning of your voice. Then reality slams in.
    Somehow, if and when this eventually reaches the marketplace, I foresee a system that is competitive within itself and with others, and where the features are broken due to an upgrade in some obscure module. I could foresee an AOL-Oxygen, but if you decided to breathe that then you might poison another person who breathes MS-Oxygen if you try to talk together; but then again, AT&T-Oxygen is really poisonous, as they own the network....
    The scientists who give these speeches always make it sound so cool......
  • There's a whole lot of computer stuff in the current SA, but it came out a week ago. Did no one post it 'cause everyone assumed that someone else had, or are submissions really that backed up?
    SA had something about chips that could rewire themselves on the fly about a year ago as well.


  • The main architectural advantage of FPGAs is that a block of logic only needs to exist when you are using it.
    Really, the architectural advantage of FPGAs for computing is the custom memories you can build in FPGAs, not the operators. Most problems that require high performance also require high memory bandwidth. Think of image processing, compression, and recognition tasks. By building custom blocks of on-chip memory, the FPGA lets you tailor the microarchitecture to the problem.

    Of course this means that you need to invent a new microarchitecture for every problem you want to solve, and that is why reconfigurable computing has not caught on. Very few people have the skill to create an efficient microarchitecture, and even for experts it takes a great deal of time. Software rocks because you already have the microarchitecture defined; you know what the rules are. This gives most folks enough structure to solve their problem.

    In regard to "caching", I've yet to seen an application that actually benefits from dynamic reconfiguration at run-time as far as performance goes. In regard to cost, there are many shipping comercial applications of FPGAs that choose the FPGA configuration at boot time, or between operating modes. This isn't the same as reconfiguration as part of the exeuction of an algorithm. If an algorithm does reap a benefit from reconfiguration, it will be because of the FPGA's proximity to external memory, not because of the wacky logic you can build.

    FPGAs may rock (as in world-record performance) for certain computing tasks, but for time-to-market DSPs still rule.

  • But this application of them looks promising. I was in the DSP business for a while, and we used FPGAs on our boards. Unfortunately it is still pretty expensive technology, but give it time.
  • 'Fraid not. I don't understand the obsession with those, anyway. Maybe you should actually check the comment board before saying that. The main site said there were no comments yet, and the "read more" section said there were 4. 5 now, at least.
  • Not quite so common but, "The Patent Office sux!!!"
  • Wow, that was an unashamed self-promotion with only promises and no products. I mean, I know it's a research project, but they don't even have a good compiler, and they claim they'll be THE chips of the future. Now I know why the horrible monster called X spread around; it must be MIT's flair for hype.
  • Also, it wouldn't work when hooked up to another computer... it seemed to be using some feedback from the, er, emissions of his box.
  • Guess no one else has been paying attention. FPGAs can run at much higher clock frequencies than current CPUs because of fundamental architectural differences. This is what the new HAL hyper-computer (and others) are based on. I read the SA article, and its ideas are ground-breaking, with a reasonable plan to enable them. Amazing synchronicity that Transmeta's latest patent application would also come out today; read the fine print and you will see that it is also about software changing hardware. What all these advances have in common is increasing the association of data with processing (the operands with the operators).
  • Isn't this *exactly* what the HAL machines being produced by Starbridge Systems are doing?
  • What's the next (r)evolution in processors after this? How many technologies are possible that'll keep pushing the limit?

  • Is the 21 postfix a reference to the next century?

  • Well yeah, but those are all obvious. I mean, what's next for silicon beyond the quantum, microminiaturization, and biological/chemical stuff?

    I've known about those for a while now, but I've just heard of FPGAs recently. Since FPGAs are so much closer to reality than those are, I'm wondering what else I've missed.

  • You got that right - Thanks!
  • I only started reading /. in January and this has been bugging me. Is it some MS related thing?
  • Correct me if I'm wrong, but it seems the article was implying that FPGA-based systems would replace current tech for CPUs. This all seems like a bunch of hype to me.

    From what I'm seeing in the above comments, FPGAs aren't really that good at general-purpose computing. Great. How about we use FPGAs as add-on co-processors, programmed by the software that supports their use? Such a system could be used for video acceleration, sound processing, and algorithmic acceleration (think: encryption, simulation, etc.), and it would be worlds better for performance, b/c there is still a traditional CPU in the machine.

    Anyways, just my US$.02

    -- ioctl
  • Ah, but does your comment run Linux?
  • OK, so who's up for a Beowulf cluster?
  • It's old. It's been said before.
  • Well, not quite.
  • Whatcha getting at?
  • Yep. They sure do.
  • But not as much as microsoft.
  • Ah. So where can I get the source code for your article? It sure is a freedom thing. Oh, and while you're at it, where can I get the source for my bacon sandwich?
  • Get my point? You just spent several minutes pointing out the "frequently occurring posts". You could be finding something better to do.
  • Oxford University has been doing this kind of stuff for ages; go to the Oxford University Wolfson Research Laboratory [ox.ac.uk]. I don't have the full link off hand, but you'll find their research in the Parallel Computing department.
  • Oh yes, the SBS HYPE-Computer. These were the turkeys who were comparing 4-bit integer addition on their system to floating-point arithmetic on a real computer.

    FPGAs can do some neat things, but you are not going to build a fast general purpose computer out of FPGAs. They are relatively slow and make inefficient use of silicon. They do a good job on control and glue logic, plus you can fix design errors and add features without having to rework the hardware.

  • First, a disclaimer:
    The comments I am going to make are not about what we will need in 5 or 10 years, but about what we will need at some unnamed future time when we finally need it. And they are not about the Raw architecture, but about architectures that follow its general theory.

    And yes, I want quantum computers, but let's leave them out of this discussion.
    End disclaimer.

    We need this type of chip because, really, our current chip architectures can only scale so far. They have internal bottlenecks and IO bottlenecks, and though we keep squeezing more and more out of them, WE CAN'T KEEP IT UP FOREVER.

    So, what is the best architecture theory we have? What is the most we can squeeze into and out of a processor of a given size?

    Well, ultimately, that is a question of IO. Personally, I think this form of chip is the best 2D approach (though maybe some wacky fractal approach might be better), but even it is limited by its own IO, and keeping all the processor units busy becomes harder and harder with each row and column you add (at a damn fast rate).

    So, what I am ranting about is that we need this, but we need it to have depth as well (at some future time when we can build such a thing), so that we have a smaller bottleneck, because the area/perimeter ratio is much worse than the volume/surface ratio.

    Example (checked in the sketch after this comment):
    A square processor with 1,000,000 cells has 3,996 external cells for IO, though the 4 cells in the corners only pipe to other external cells and aren't really useful (though you leave them in, just in case).

    A cubic processor with 1,000,000 cells has 58,808 external cells for IO, with 8 corner cells.

    The average distance between cells is also MUCH smaller, allowing for more efficient internal communication.

    Which one is going to have an easier time connecting to the outside world?

    my $0.02
    -Crutcher
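
    Those I/O cell counts are straightforward to verify; here is a quick Python check (the 1,000,000-cell grid sizes are the ones from the comment):

        def surface_cells_square(n):
            # Cells on the perimeter of an n x n grid: n^2 - (n-2)^2 = 4n - 4.
            return n * n - (n - 2) ** 2

        def surface_cells_cube(n):
            # Cells on the surface of an n x n x n grid: n^3 - (n-2)^3.
            return n ** 3 - (n - 2) ** 3

        print(surface_cells_square(1000))  # 3996 I/O cells for a 1,000,000-cell square
        print(surface_cells_cube(100))     # 58808 I/O cells for a 1,000,000-cell cube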
  • For our FASTEST computers, we cannot go with pure optical computers, because there are conditions in which an optical switch is going to be SLOWER than an electrical one, and will have to work at higher energy levels. And if we reach the level where we NEED 3D chips, that will be a liability.

    On cooling: silicon-based chips could still be built 3D, if one laid a lattice of a VERY heat-conductive material through them (like gold or platinum) and possibly dunked them in some nitrogen.

    In short, let's get some hairy golf balls into our computers.

    -Crutcher
  • what did you think of the book?
  • Unfortunately, the "subtle properties" aren't consistent from chip to chip. That is, the chip manufacturer guarantees that certain properties are the same from chip to chip, and this search process wound up using properties that weren't in that set.
  • So do I (their website also looks very amateurish), but it's interesting anyway (to me); it's not just the speed that makes this technology interesting.
  • I believe Starbridge Systems [starbridgesystems.com] has already sold a supercomputer with an FPGA processor and is developing what they call "a new personal supercomputer that will change the PC industry forever". This has already been posted on /. before. Still very interesting though. They also say "SBS's Hypercomputer systems can emulate virtually any hardware, including other supercomputers", so running Linux or *BSD on it shouldn't be much of a problem :)
  • Also try looking here; I think they were mentioned on /. a while back. They claim to have a "Hypercomputer" that is rated at an overall sustained performance of 100 TFLOPS (100 trillion 32-bit floating-point operations per second). They also say that it takes up 4 cubic feet of space, weighs 150 lbs., and plugs into a 110-volt wall outlet. Pretty cool.

    Starbridge Systems [starbridgesystems.com]

    http://www.starbridgesystems.com/home/mainpage.htm
  • I'm pretty sure this [starbridgesystems.com] is the company you remember.

    http://www.starbridgesystems.com/home/mainpage.htm
  • [Apologies if this sounds familiar to long-time readers; I've said this before and I believe it bears saying again.]

    In addition to the lower native gate speed and inefficiencies of cell-based logic of FPGAs vis-à-vis full-custom processors, there's a serious problem with the time it takes to reprogram an FPGA. To put this in perspective, let's say that the time to perform computational work can be expressed as AX+B, where A is the time to perform an operation, X is the number of times the operation is repeated before moving on to a different operation, and B is the time to program that operation into the processor. For a traditional processor, B is zero. For an FPGA, A might be smaller than it is for the traditional processor, but B is very large. It doesn't take a rocket scientist[1] to figure out, therefore, that FPGAs win when X is large, i.e. when a task is very highly repetitive. There are a lot of tasks that fit this mold - audio and video processing, discrete-element simulations, etc. - but many of the most common everyday computational tasks you and I might face do not. For those cases, reprogramming overhead would be a killer. (A toy calculation of the crossover point follows this comment.)

    Is there hope? Yes, absolutely. Lots of people are working on faster reprogramming, because it's known to be the One Big Problem in reconfigurable computing. Even better, work on partial reprogrammability is increasing. This is really cool because it would essentially allow you to dedicate part of the processor to functions you always need[2], and then use the rest to cache logic very much as data is cached now. In its simplest form, this could mean that all the parts of a traditional processor except for the actual functional units are permanent, and the cached items are instructions much like the instructions we have today. Need a population-count instruction? Allocate logic space and an opcode, reprogram the space, and voila! When you no longer need that instruction it'll fall out of the cache to be replaced by another instruction you do need. Of course, when the von Neumann model itself becomes the bottleneck then maybe the cached items would have interfaces other than instruction opcodes and register files, but defining those interfaces to allow the sort of logic-caching I've described is still a major conceptual problem worthy of a doctoral thesis or two.

    [1] What's so special about rocket scientists, anyway? There are plenty of professions nowadays requiring greater knowledge and skill.

    [2] The permanent part could even be implemented full-custom style, while the reprogrammable part remains cell-based. Altera had something called an SPGA which was like this, but I can't find it any more.
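
    The AX+B model above is easy to play with. A small Python sketch; all the constants here are invented purely to show the crossover, not taken from any real hardware:

        # Time to do X repetitions of one operation: A*X + B,
        # where A is per-operation time and B is one-time setup cost.
        A_CPU, B_CPU = 10.0, 0.0            # ns/op; no reprogramming cost
        A_FPGA, B_FPGA = 2.0, 1000000.0     # faster per op, but huge setup

        def total_time(a, b, x):
            return a * x + b

        # The FPGA wins once A_FPGA*X + B_FPGA < A_CPU*X,
        # i.e. X > B_FPGA / (A_CPU - A_FPGA).
        crossover = B_FPGA / (A_CPU - A_FPGA)
        print("FPGA wins for X > %d repetitions" % crossover)  # 125000

        print("at X=200000: CPU %.0f ns, FPGA %.0f ns"
              % (total_time(A_CPU, B_CPU, 200000),
                 total_time(A_FPGA, B_FPGA, 200000)))

    With these made-up numbers the FPGA only pays off after 125,000 repetitions of the same operation, which is exactly the "highly repetitive task" condition described above.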
  • Actually, the use of ISA to mean "Instruction Set Architecture" may well predate its use to mean "Industry Standard Architecture". In any case, both meanings have been around for a long time and we might as well get used to them. It wouldn't be the only time the same acronym had two very distinct meanings. ATM, anyone?
  • >FPGAs can run at much higher clock frequencies than current CPUs because of fundamental architectural differences.

    That is incorrect. Gate speeds for FPGAs are _lower_ than for full-custom silicon.
  • Sure, FPGAs are incredible things: rewire them to be whatever you want. Same thing goes for Erector sets. Hell, I'd bet you could build an interstellar rocket out of Erector sets if you had enough of them, and knew how to do it....
  • Something similar occasionally happens with artificial neural networks. I've read about two separate projects, one involving vision and the other language processing, where the researchers trained the networks to mimic (simplified) human abilities. Pattern recognition and visual attention on the one hand, some understanding of grammar on the other. In both cases the networks were more or less randomly wired at the outset and trained in the dark (without direct intervention by the researchers).

    When they finished the training and cracked open the black boxes, they found that the networks were nearly identical to the corresponding parts of the brain. The visual systems had organized into layers and hemispheres, while the linguistic network was organized geographically by word type--verbs in one area, nouns in another--and by sub-type (object/subject, proper and common nouns, etc.). AFAIK, they haven't figured out how it happened.

    There are some interesting implications in all of this.
  • It was an okay book. I preferred Shakespeare, but it wasn't bad.
  • ah yes, or the . . .you posted this one last week
    Yeah, those and the "if you're a devoted fan of [insert company, "geek" celebrity, etc.] you would have seen this already" and/or "/.'s so slow.. " blah, blah, blah


    Also, this seems inevitable when some new software is released, or in response to the d.net cracking situation:
    "I tried to tell them already, but they didn't want [my input]/[to give me credit]/[to add another name to their credits list]".. the been-there-done-that attitude.
  • > So far I only know that inverters are relatively
    > linear, which is hardly surprising :)

    There are problems, though. Standard CMOS logic (HC and friends) is, not surprisingly, optimized for binary transitions. The gates have a small linear range at about 2.5 volts, but draw significant current there. This current is termed "class-A current" and arises when the input hovers around the logic threshold. It is quite high because the two output transistors on the chip are fighting to pull the output in opposite directions. When you do your experiment you probably want to put a resistor on the power line going into the chip, about 1k, so you don't fry it. Crystal oscillators built from logic gates work this way. For most things you'll be better off with an opamp though :-)
  • Yes, it reminds me of another programmable gate array article posted on slashdot a couple of months ago. That one was a bit difficult to believe also. Personally, I wonder about the complexity of the "compiler" that will run these things.

    One thing you can say for them: like the company in the earlier article, they have no shortage of self-confidence!
  • Mmm. I've programmed Verilog before. While it puts a C-like interface over programming at the hardware level, that doesn't make the job of creating a TCP/IP driver for a NIC *that* much simpler. You're still dealing with things at the level of 'if you get this bit, change this register', and that's a bit too low for most programmers to be happy.
  • Wasn't this the promise of DSP chips when they were introduced a few years ago?

    I remember at that time DSPs were exalted for their reprogrammability and speed. It seems that all most consumers got out of it were software-driven WinModems... (although I know there are a lot of specialized DSP applications out there).

    Will FPGA chips be relegated to similar specialized tasks (like video compression or speech recognition) or will they truly be useful for general-purpose computers?
  • 2010? I think it might be a little sooner.
  • Transfer to CMU =)

    -awc
  • FPGAs with high gate counts (e.g. the Flex 10kXXX [altera.com] series) are expensive, and physically quite large. The other problem is that the compilers for FPGAs pretty much suck at laying out the wiring between gates. You end up with layouts that not only waste most of the space available on the FPGA, but are also an order of magnitude slower than you want.
    Don't get me wrong. FPGAs are great for prototyping, but for real speed, ASICs will always be the best.
  • DSP chips are optimized mainly for speed in operations like multiply-and-accumulate (a tiny example follows this comment). They are incredibly more programmable than the analog circuits they replace, but far from suitable for general computing applications such as running an OS.

    WinModems use the x86 processor to do DSP operations, something it is not very good at. Other modems have a DSP chip that is optimized for just those sorts of operations, but less reprogrammable than most microprocessors.

    Jim
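
    For the curious, the multiply-and-accumulate kernel mentioned above is the bread and butter of DSP work. Here is what it looks like in plain Python (a toy FIR filter; the sample and coefficient values are made up):

        # An FIR filter is just a chain of multiply-accumulate (MAC) steps;
        # a DSP chip typically does each step in a single cycle.
        def fir(samples, coeffs):
            acc = 0.0
            for s, c in zip(samples, coeffs):
                acc += s * c  # one MAC per filter tap
            return acc

        print(fir([1.0, 0.5, 0.25, 0.125], [0.4, 0.3, 0.2, 0.1]))  # 0.6125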
  • In one of my less-paranoid, more-creative moments a few months ago I found myself thinking about Open Source and Linux -- that there might be some potential for the Linux platform enabled by FPGAs that I haven't seen described or pursued elsewhere.

    Ask yourself the question: What is the fundamental architectural difference of an open source environment versus the traditional proprietary-binary environments of the last two decades? Are there any new assumptions in this new era that enable something different? Well, obviously now the OS and hardware can reasonably expect, if not mandate, access to the source code, not just the binary code.

    Putting aside all the well-characterized FSF/Raymond reasoning for a second, could you take advantage of this source-code availability to somehow build a faster, more efficient platform?

    I've wondered whether this is indeed possible with FPGAs.

    Since with FPGAs you can actually configure the circuits to perform a specific algorithm, which might be faster than the same algorithm performed by general-purpose circuitry, and since the FPGA can be reprogrammed as often as needed by having a C compiler generate the appropriate netlist and send it to the FPGA, why not build a run-time Linux environment that recompiles the FPGA circuitry on the fly for the specific tasks (processes) being executed?

    I've glossed over the various reasons why this isn't a cake-walk as anyone with FPGA expertise would realize, but I would be interested in an FPGA expert's assessment of either A) why this will never work, or B) what the top barriers to overcome would be. Is the gap between general purpose CPU clock rates and FPGA clock rates too great to ever realistically be surmounted in such a scheme? There's a big payoff for FPGA vendors if they could ever figure out how to make a competitive general-purpose platform; from an open source perspective, a platform that requires open source to deliver faster performance than Wintel would likewise be quite attractive.

    --LP
  • It's true, the coding paradigm will be different, on the low level, since it will have to deal directly with logic gates, rather than having some set commands. This also means that it would be easy to implement emulation of any given processor, since the low-level command set can simply be exchanged. I'm sure that when it is released, low-level libraries will be released with it, to make it easier to code for. After all, who wants to write telecommunications suites in AND gates?

    Hopefully low-level commands will be implemented as libraries, rather than being built into each compiler, as that will make it easier to change libraries if a faster/better one comes out. With any luck, we'll end up using a nice, high-level language, at least for most apps.
    ---
  • Will FPGA chips be relegated to similar specialized tasks (like video compression or speech recognition) or will they truly be useful for general-purpose computers?


    No. FPGAs are specifically made so that they can be used for any application. How they end up being used will depend on the cost/performance.
    ---
  • Indeed. The only way I can think of, off hand, would be an optical computer, using holograms to place the paths, etc. This is being researched in several places already. And, the heat produced by light is low enough that little cooling is needed.

    To make cubic CPUs using current materials, it would need to be cooled a LOT, and use materials through which heat flows quickly.
    ---
  • To make 3D chips: use holography. It'll only work for optical computers, but there were articles on it two years ago, so it's probably been researched for longer. AFAIK, it's not near production yet, but with some of the recent advances in optical computing (the optical transistor, for instance), it may be just around the corner.
    ---
  • Okay, now I know who MEEPT is. What still confuses me is why people post it. Are they saying that the article is hard to understand?
    ---
  • "Wasn't this the promise of DSP chips when they were introduced a few years ago?"

    I dunno about that, but there is no way a DSP is as versatile as an FPGA. IIRC, FPGAs effectively _physically reconfigure themselves_, so you could, for example, tell one to be a ZX80 one minute, and a 6502 the next, or even both on the same substrate with something else controlling them.

    Can't do that with a DSP.
  • by crackd ( 71957 ) on Sunday July 25, 1999 @10:02PM (#1784966)
    There's a really interesting article in New Scientist, Creatures from Primordial Silicon [newscientist.com], on the application of genetic algorithms to FPGA chips. In this case, the researcher was able to let natural selection design a chip that was capable of producing a 5V output when he said "Start" and stopping that output when he said "Stop".

    The cooler part is that no one can figure out how the chip works; he didn't implement a clock -- one evolved, using fewer components than the simplest example given in any engineering text. There are a few components in there that don't seem logically necessary, but their removal results in a non-functional chip. Theoretically, the evolutionary procedure allowed the chip to utilize subtle properties of the materials used in its composition, like the small resistance changes caused by heat or electromagnetic induction. (A bare-bones sketch of the genetic algorithm technique follows this comment.)

    It's a good read if you're interested.
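
    The genetic algorithm itself is nothing exotic; here is a minimal sketch of the technique in Python. In the real experiment each genome is a configuration bitstream scored by loading it into an FPGA and measuring the output; here a toy match-the-target fitness function stands in for that hardware step.

        import random

        GENOME_LEN, POP_SIZE, GENERATIONS = 64, 50, 200
        # Stand-in for "the chip behaves correctly": score a genome by
        # how many bits match a fixed target configuration.
        TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def mutate(genome, rate=0.02):
            return [1 - g if random.random() < rate else g for g in genome]

        def crossover(a, b):
            cut = random.randrange(1, GENOME_LEN)
            return a[:cut] + b[cut:]

        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
               for _ in range(POP_SIZE)]
        for gen in range(GENERATIONS):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == GENOME_LEN:
                break
            survivors = pop[:POP_SIZE // 2]  # selection: keep the fitter half
            pop = survivors + [mutate(crossover(random.choice(survivors),
                                                random.choice(survivors)))
                               for _ in range(POP_SIZE - len(survivors))]

        print("best fitness after %d generations: %d/%d"
              % (gen + 1, fitness(pop[0]), GENOME_LEN))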
  • Does this remind you of the Cyberdyne Systems chip from Terminator 2 or what?

    http://www.sciam.com/1999/0899issue/0899agarwalbox1.html [sciam.com]

    ;)

"Pok pok pok, P'kok!" -- Superchicken

Working...