Soft Processors in FPGAs?

cybergibbons asks: "We're students in the Department of Electrical and Electronic Engineering, Imperial College, and are carrying out some research for Altera into FPGAs, softcore processors, and hardware/software co-design. Most embedded systems are a combination of hardware (for performance) and software (for versatility), and the design of these systems is getting more and more complex. Previously, the hardware and software were partitioned at the early stages of design, leading to sub-optimal solutions. New languages such as SystemC and Handel C aim to merge hardware and software design into one common language combining high-level algorithm design and low-level RTL design, the ultimate goal being to allow conventional C++ programs to be synthesized directly into working systems, without any human intervention. However, what we seem to have found is a lot of marketing spiel and conceptual papers with no practical ideas. Is anyone using any of these new tools? Are any of the current co-design tools any good? Do you think a computer can partition designs effectively into hardware and software? What features would you like to see in future tools? Do you envision any amazing new applications for FPGAs using new co-design tools?"
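For readers who haven't seen what "hardware described in C++" looks like, here is a minimal SystemC sketch: an 8-bit counter written as a clocked RTL-style module, plus a tiny testbench. The module and signal names are made up for illustration, and this only shows the modelling/simulation side of SystemC, not synthesis.

```cpp
#include <systemc.h>

// A simple 8-bit counter described RTL-style in SystemC.
SC_MODULE(Counter) {
    sc_in<bool>         clk;    // clock input
    sc_in<bool>         reset;  // synchronous reset
    sc_out<sc_uint<8> > count;  // current count value

    sc_uint<8> value;

    void tick() {                 // runs on every rising clock edge
        if (reset.read()) value = 0;
        else              value = value + 1;
        count.write(value);
    }

    SC_CTOR(Counter) : value(0) {
        SC_METHOD(tick);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[]) {
    sc_clock               clk("clk", 10, SC_NS);
    sc_signal<bool>        reset;
    sc_signal<sc_uint<8> > count;

    Counter c("counter");
    c.clk(clk); c.reset(reset); c.count(count);

    reset = true;
    sc_start(20, SC_NS);   // hold reset for two cycles
    reset = false;
    sc_start(100, SC_NS);  // let it count for ten cycles

    cout << "final count: " << count.read() << endl;
    return 0;
}
```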
  • by Tumbleweed ( 3706 ) on Thursday July 10, 2003 @07:33PM (#6410996)
    The Commodore-One Reconfigurable Computer [c64upgra.de]

    From the About page:

    The Commodore One computer started off in 2002 as an enhanced adaptation of the Commodore 64, the best-selling computer model of all time (Guinness Book of World Records). While retaining almost all of the original's capabilities, the Commodore One adds modern features, interfacing and capabilities. The C-One fills a gap in the hobbyist computer market.

    During development, it evolved into a re-configurable computer, a new class of computer in which the chips no longer have dedicated tasks. The two main chips carry out different tasks depending on the needs of the program. The technology used is called FPGA - field programmable gate arrays. These chips can be programmed to do the tasks that the chips of the C-64 or other computers did. It's not emulation, but a re-implementation of chips that have not been available for many years.

    The one thing that is not contained in the FPGAs is the main processor - it would take too much space, resulting in too high a cost. To maintain flexibility, the CPU resides on a card that can be exchanged by the user, as simply as plugging in a PCI card.

    After a cold start, the FPGA programs are loaded from a mass-storage device like a hard drive, disk drive or a Compact Flash card. What's described in one short sentence is a giant leap in computer technology: the hardware can be altered by the user without even opening the computer. The FPGA programs - so-called 'cores' - turn the C-One into clones of famous 80's computers like the C64, VIC-20, Plus/4, TI-99/4A, Atari 2600, Atari 400/800 series, Sinclair Spectrum, ZX81, Schneider CPC and many more. It can of course also be a completely new computer with specs unknown to these milestones in computer history. That's what the C-One 'native mode' will be - read more on the Specifications page.

    The estimated price will be about 249,- EUR. The user will need to supply an ATX style case, ATX power supply, drive(s), PS/2 keyboard, mouse and SVGA capable monitor.

    ---

    Too cool!
  • by stevew ( 4845 ) on Thursday July 10, 2003 @07:42PM (#6411058) Journal
    I think you'll find that Handel C, SystemC, etc. are not likely to be very popular with the run-of-the-mill designer. Most of these folks (me included) don't see the need for yet another design language. We get along fine with Verilog or VHDL.

    There HAS been a need identified for verification constructs in the languages - Vera and E are examples of stand-alone verification languages used in conjunction with the RTL design in Verilog/VHDL. With the advent of Superlog/SystemVerilog, these languages are being folded into Verilog itself. This gives the best of both worlds: old IP still runs, and new constructs are added to make verification easier.

    So while there are a few adopters of SystemC, etc., I suspect they will fall by the wayside as SystemVerilog becomes a reality. It seems the EDA industry is going to back SystemVerilog over the other choices, and most designers seem to feel this is the best solution.
  • by brejc8 ( 223089 ) * on Thursday July 10, 2003 @07:47PM (#6411096) Homepage Journal
    System/Handel C[++] are languages with two good points.

    First, they allow software engineers to design hardware with minimal training; secondly, they allow fantastically fast simulations. The ultimate system, where you feed in an MP3 decoder written in C and get a player with software out the other end, is years (I think more than 10) away.

    C is not a natural language for describing hardware. It creates large, slow designs with very little transparency to the generated design. Transparency is important because a small-looking piece of C code can generate a large, slow design while a larger piece of code can generate a smaller, faster one. While transparency is partly implicit in computer programs (you have a vague idea of what the compiler will generate from your code), in hardware it is very easy to be way off.

    FPGAs are getting more and more popular and powerful. There are already numerous CPU designs available [opencores.org], and the current methods of creating them (mainly Verilog and VHDL) seem to generate much better results than SystemC ones.

    As for soft computers, I really like the idea [man.ac.uk]. I would not be surprised to see some FPGA parts on the next 3D cards or CPUs. They allow hardware structures to replace complex code (e.g. I was trying to write code that can effectively be done with a piece of CAM; hash tables are just a way of emulating CAM in programs - see the sketch after this thread).

    To conclude: yeah, C-based methods will become more popular, but only because management likes them. They produce appalling designs, but as silicon area becomes nearly free, and in areas where speed does not matter and you need to do a billion simulation runs to test the design, they will look more attractive to software engineers trying to do hardware. But at the end of the day this is all cheating. If you want to design hardware you really have to learn what structures exist at the bottom level; only then, when you know what your compiler will produce for you, can you effectively make use of such languages. If companies can't afford to get proper engineers to make hardware they may fetch some C monkey to do it instead, but I think if you want to become one, you are selling yourself short.
    • I respectfully beg to differ ;-)

      I think the argument that high level languages can't be used to design hardware rests mainly on the premise that software is sequential, whereas hardware is completely parallel. Handel C (and perhaps SystemC?) uses communicating sequential processes, much like VHDL, to allow the expression of both parallel and sequential algorithms. I don't agree with your assertion that software engineers can learn Handel C and become hardware engineers, because it requires a change of ap
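Regarding the CAM point a couple of comments up: a content-addressable memory compares a key against every stored entry in parallel and hands back the matching address in one cycle, which software typically approximates with a hash table. A toy sketch, using invented keys and nothing beyond the standard library:

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// A CAM matches a key against all entries in parallel and returns the
// matching address in a single cycle. In software we emulate that
// associative lookup with a hash table: hashing plus a short probe
// sequence instead of a massively parallel compare.
int main() {
    // key -> "address" of the matching entry
    std::unordered_map<std::uint32_t, std::size_t> cam_emulation;
    cam_emulation[0xDEADBEEF] = 0;  // entry 0 holds key 0xDEADBEEF
    cam_emulation[0xCAFEF00D] = 1;  // entry 1 holds key 0xCAFEF00D

    std::uint32_t key = 0xCAFEF00D;
    auto hit = cam_emulation.find(key);
    if (hit != cam_emulation.end())
        std::cout << "key matched entry " << hit->second << "\n";
    else
        std::cout << "miss\n";
    return 0;
}
```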

  • by cait56 ( 677299 ) on Thursday July 10, 2003 @08:10PM (#6411237) Homepage

    What I find intriguing about these systems is the possibility of starting with some custom hardware, and lots of C code to perform logic, and interface with the custom hardware.

    Now, if I knew that interface was solid, I'd just want a processor and a custom system chip that did the usual memory/peripheral interfacing and whatever my special functions required. If I weren't all that worried about chip count I might even put the special logic entirely on the memory bus as a separate device.

    But more frequently I have some complex logic that I probably want to do in hardware, but I'm not sure, and I'm not 100% certain that the algorithm is correct. For speed of development and refinement I need to express that logic in a very conventional language such as C.

    But planning ahead, I still define the function to fit within a state machine, to have defined inputs and outputs, and to avoid doing things that won't implement well in hardware (like complex wanderings through main memory).

    I now have a module that is suitable to be implemented in hardware or software. Since I can do either, I take the first draft in software.

    By the time the code stabilizes I'll be confident that the algorithm is correct and I'll know how important optimization is to the overall performance of the system. I can now prioritize which modules are most worthy of translating into hardware.

    With a soft core I can shift resources between "software" and "hardware" without having to re-layout the board, design new pinouts, etc. In other words, I actually can shift resources. Try getting a board layout changed on a successful product just to optimize things sometime.

    If the "hardware"/"software" boundaries ever stabilize a lot, I can even consider going back to a conventional processor core and full hardware solution. More likely the market will demand totally new features by then.

  • Bottom line:
    FPGAs can't handle the complexity of the programs we write today for general computer use
    FPGAs are much more expensive in terms of power and cost than an equivalent processor/ASIC/special-purpose chip
    FPGAs are exceedingly slow. To make them worthwhile, the hardware algorithm they contain must be massively parallel, or must not require speeds that modern processors can attain - which leads back to the point that FPGAs cannot yet represent extremely complex systems.
    Those people who have an in
    • Some of your points make me question whether you understand the concepts here.

      FPGAs can't handle the complexity of the programs we write today for general computer use

      Sure they can! You can implement a general-purpose CPU inside an FPGA; it would be slow and expensive compared to a real CPU, but it would be able to run any general-purpose algorithm you can think of. There is no fundamental reason why you can't do anything with an FPGA; it all comes down to the PARTICULAR task at hand, and how best t


    • FPGAs are exceedingly slow.

      FPGAs can be exceedingly slow if not applied properly. For a particular patent-pending project I'm working on, there isn't a standard CPU or DSP engine available in the free world that's fast enough to do what an admittedly rather large FPGA can do easily. (Lots of parallel RF DSP stuff.) There is a lot to be said for choosing the right tool for the job.

        • Well, after designing this whiz-bang custom DSP, for which FPGAs are pretty slick, wouldn't it be implemented as an ASIC for the final product and be ever so much faster? (Can anyone say exactly what the difference is? Like, if soft core A is implemented on an FPGA and as an ASIC at 1 GHz, will the FPGA perform like the ASIC at 100 MHz, or 1 MHz, or even less?)
        • I'm not sure if I interpret your question correctly, the last sentence is hard to parse. ;-)

          But if you have an FPGA clocked at 100 MHz and an ASIC clocked at 100 MHz, then they will be equivalent in speed, naturally (if they have the same logic). The difference is that an ASIC can be clocked a lot faster than an FPGA (FPGAs are typically limited to a few hundred MHz currently, AFAIK) and that the ASIC can be made smaller and draws less power.

          A big reason for not getting an ASIC done is because it's extremely exp

        • It depends on the production volume and whether or not your product must be easily field-upgradable. An ASIC is not amenable to field upgrades; the config memory for an FPGA can be flashed in the field by <gasp>the user!</gasp>


  • FPGAs will always be slower, more power-hungry, and require more transistors than dedicated RISC processors at the same cost. Economically, one vendor would release a "standard" architecture, all developers would aim for that platform, and the software base would require everyone to use that architecture. This is true of the x86, and to a lesser extent the PowerPC, SPARC and 68060. If an FPGA chip can "become" any of these chips at a minimal loss of performance, there is great business potential, but there's gr
    • OT: Code Morphing (Score:3, Interesting)

      by MBCook ( 132727 )
      I've always wondered why Transmeta hasn't extended code morphing to other cores. Why not release a core that runs PPC code? Why not make a core that can run BOTH PPC and x86 code? Think about this: Apple could use Transmeta processors (since in the past they haven't been as fast as x86 processors anyway), but software like VirtualPC could run at the equivalent of full speed by using different code morphing. Or you could do things the other way on the PC side. It would allow people to program on one platform a
      • Re:OT: Code Morphing (Score:2, Interesting)

        by bmac ( 51623 )
        I've wondered why you can't target the Transmeta ML directly. It has to exist as its own, separate entity, right? Why not have an OS that targets it directly? And, lastly, why would you create a chip and not write an OS for it in its native code?

        More questions than answers here...
        • Linus has commented on this before (when someone asked basically the same question as you on LKML) - here's his reply:

          The translations are usually _better_ than statically compiled native code (because the whole CPU is designed for speculation, and the static compilers don't know how to do that), and thus going to native mode is not necessarily a performance improvement.

          Another problem he mentioned was that the Crusoe core can't be 'reflashed' to emulate another CPU in the production versions, so you'd
  • An approach which I think is promising for co-design is to use Constraint Programming. You build a declarative model (of the algorithm you want to partition) and the intelligent system finds the optimal partition through search, much in the same way that layout is currently done (see the toy sketch after this comment).

    If you have access to IEEE papers you can read an article about codesign with Constraint Satisfaction [computer.org], or if not you can always ask Google about the subject [google.ie].
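To make the partitioning idea concrete, here is a toy sketch rather than a real constraint solver: each task can go to hardware or software, a constraint caps the FPGA area, and the "solver" simply enumerates every assignment and keeps the fastest feasible one. Task names and all the numbers are invented; a real constraint-programming tool would prune this search instead of brute-forcing it.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Toy hardware/software partitioning: each task has a software runtime,
// a faster hardware runtime, and an FPGA area cost. The constraint is a
// fixed area budget; the objective is minimum total runtime.
struct Task { const char* name; double sw_time, hw_time, hw_area; };

int main() {
    std::vector<Task> tasks = {
        {"fft",      9.0, 1.0, 40.0},
        {"filter",   6.0, 1.5, 25.0},
        {"protocol", 2.0, 1.8, 30.0},
    };
    const double area_budget = 70.0;

    double   best_time = 1e30;
    unsigned best_mask = 0;
    // Enumerate all 2^n hardware/software assignments (bit set = hardware).
    for (unsigned mask = 0; mask < (1u << tasks.size()); ++mask) {
        double time = 0.0, area = 0.0;
        for (std::size_t i = 0; i < tasks.size(); ++i) {
            if (mask & (1u << i)) { time += tasks[i].hw_time; area += tasks[i].hw_area; }
            else                  { time += tasks[i].sw_time; }
        }
        if (area <= area_budget && time < best_time) { best_time = time; best_mask = mask; }
    }

    std::cout << "best partition (total time " << best_time << "):\n";
    for (std::size_t i = 0; i < tasks.size(); ++i)
        std::cout << "  " << tasks[i].name << " -> "
                  << ((best_mask & (1u << i)) ? "hardware" : "software") << "\n";
    return 0;
}
```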

  • by Muad'Dave ( 255648 ) on Friday July 11, 2003 @08:38AM (#6413907) Homepage

    ...softcore processors...

    Are these the processors they sell on late-night infomercials on ski^H^H^H Cinemax?

  • The Free-IP [free-ip.com] project, put together by a friend of mine, aimed to do just this. They have a 6502 design, and a RISC processor as well. Unfortunately, other contributions were sparse, and Real Work has gotten in the way of him creating and giving away any other designs.
  • System Verilog [eedesign.com]

    Pretty much an HDL of the future that's still being developed; it has all the best features and none of the bloat of VHDL and some other HDLs.
