Technology

Open-Source Processors 125

clay pigeon writes "This EE Times article covers the development of open-source processors. No doubt exciting news for hardware hackers and those with a need to know about every last detail of their systems, but how will this affect the hardware industry? Can open-source hardware duplicate the success of the open-source software movement?" I'm not holding my breath. Fabrication facilities are a lot more expensive than CD-ROM presses (or, more accurately, internet connections). But I still hope it happens. It would be an interesting market if everyone worked together on the designs, but built their own chips.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Field Programmable Gate Arrays (FPGAs) provide a cheap way to discover and experiment with hardware design. (You don't need access to a chip fabrication plant to produce your design, and you can easily re-program your chip instead of fabricating a new chip for each design.)

    If open-source hardware is ever to become as widespread as open-source software, it will most likely be via FPGAs.

    But just as gcc helped spur open-source software, we need open-source tools for hardware design (tools for Electronic Design Automation (EDA)).

    This includes schematic capture tools, hardware description language (HDL) parsers, HDL simulators, and circuit synthesis tools.

    http://www.opencollector.org [opencollector.org] has a list of some open-source EDA software under development.

    Once there is an infrastructure of open-source EDA tools, this will spur on the development of open-source hardware building blocks or cores (see http://www.opencores.org [opencores.org]).

    Looking forward to the day open-source hardware will have as vibrant a community as open-source software...
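
    As a rough sketch of how far the free part of that flow already reaches (the module and signal names below are invented for illustration), a design this small can be written, compiled, and simulated entirely with an open-source Verilog simulator such as Icarus Verilog:

    `timescale 1ns / 1ns

    // count4.v -- a 4-bit counter, small enough for any free simulator
    module count4 (input clk, input reset, output reg [3:0] count);
        always @(posedge clk)
            if (reset) count <= 4'd0;         // synchronous, active-high reset
            else       count <= count + 4'd1;
    endmodule

    // count4_tb.v -- trivial testbench: generate a clock, watch the counter
    module count4_tb;
        reg clk = 0, reset = 1;
        wire [3:0] count;

        count4 dut (.clk(clk), .reset(reset), .count(count));

        always #5 clk = ~clk;                 // 10 ns clock period

        initial begin
            $monitor("t=%0t reset=%b count=%d", $time, reset, count);
            #12 reset = 0;                    // release reset after one edge
            #100 $finish;
        end
    endmodule

    Something like "iverilog count4.v count4_tb.v && vvp a.out" is enough to run it -- no fab, and no proprietary tools, anywhere in the loop.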

  • by Anonymous Coward
    When FPGAs get cheap enough, this is exactly what will happen. See e.g. Xilinx [xilinx.com], who have an impressive product line going up to several million gates. Unfortunately, the big ones cost hundreds of dollars right now.
  • The point is NOT to make your own CPU. At the hobbyist level that IS out of the price range. The point is to make designs available that multiple people may want to take advantage of, especially if the group is small enough that commercial development is not feasible (or if it is, only at very high prices because of low volumes).

    My current project is an example of that. I'm working on a cockpit for a model airplane controller. The model airplane controls are usually just two sticks and some buttons on a standard radio. I'm interfacing full rudder pedals and throttles to the controller via a microcontroller. There is a video feed from the model going into the display unit (looking for a cheap Fresnel lens for it). Stage 2 is to feed telemetry back over the audio channel, and use a small FPGA to do video overlay and drive the instruments. If I have room, I may ditch the microcontroller and move it all to the FPGA after it's all working.
  • The point is not to displace Intel or AMD.

    The point is to learn and have fun.

    Hardware designs for FPGAs can be shared with other FPGA designers and enthusiasts. There is no requirement for mass production.

    Following your reasoning, there would be no point in Linus Torvalds writing Linux. If he had anything to contribute to OS design, he should have gotten a job at Microsoft or Sun or Apple or wherever. (Well, arguably he didn't contribute anything new to OS design... but why would you discourage him from trying to write Linux in the first place??)

  • > You don't see processors which are *designed* at a hardware level to only work for a particular operating system.

    Only partly true. I'm guessing you haven't seen the "Winboard". (I think that's what they were called...) Much like the famous "winmodem", these are motherboards which have been stripped down to make the hardware cheaper, with the guts of the controllers moved into software. Because the drivers are (of course) proprietary, these computers will run only under Windows!

    To be exact, I think the CPUs are general purpose, but the chipsets, and perhaps even the memory, do not have standard, open interfaces, and therefore cannot be used under any other "alternative" OS.

    Neat stuff, eh?

    --

  • >Designed chips is very much like programing.

    Yeah, if you want gigantic, slow, hot, wasteful chips.

    Of course, when it comes to performance and efficiency, designing chips is not very much like programming (I assume; I'm not a chip designer). But the same applies to high-performance software such as a kernel scheduler or memory management: then everyday programming isn't very much like that kind of programming either.

    Most of the time you will get away with standard knowledge and not looking beyond the horizon of your high-level language. When it comes to performance, you have to know about pipelining and cache lines, and help the ignorant compiler by modifying your source to fit the desired assembler output, or directly include a few lines of assembler in the critical parts. So, in that sense, designing chips is like programming.

  • There are lots of people here talking about how open source won't work for hardware designs since getting a chip design through a fab is a) so damn expensive and b) so slow that it doesn't allow experimenting. So what? Reconfigurable hardware is out there and has been for quite some time.

    You get a standard programmable logic device from some company like Altera (there are more, this is just the one I remember right now) and then you can program it with any chip design you want (within the complexity boundary set by the chip in question). These can do from 25MHz up to 100MHz, so you won't be able to replace your 1GHz Athlon with it, though. Still it's usable for a lot of things. Remember, PCI runs at 33MHz or 66MHz. An MP3 decoder can be done with a lot less than 25MHz.

    The software (Windows, since the control interface of the PLD is usually proprietary) compiles your Verilog or VHDL code into the required form, which can be uploaded into the chip. So hardware can be designed much like software, with the same round-trip times. And no, you won't have to write an adder using NAND gates; you just say in VHDL "a <= b + c". And once you have a nice working design, you can use the same source to get it through a fab.
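
    For instance, here is the Verilog equivalent as a sketch (the module name is invented): a whole 8-bit adder is one assignment, and the synthesis tool, not the designer, decides how to map it onto the device's logic cells.

    // add8.v -- behavioral adder; no NAND gates in sight
    module add8 (input [7:0] a, input [7:0] b, output [7:0] sum, output carry);
        assign {carry, sum} = a + b;   // the synthesizer picks the adder structure
    endmodule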

    But often you don't want a fab at all. Fabs are expensive and production has a long latency (something like 4 to 6 weeks). You can just give the PLD a finished ROM and use that as a production system. One MP3 player was only two months on the market before being replaced by a successor, so it was shipped with PLDs. A fab only becomes really cost-efficient if you have really large numbers to produce, the product stays on the market long enough, or the speed of a PLD isn't enough. I guess we'll see a lot more PLDs in shipped products in the future.

    There are quite a number of CPUs for PLDs available already. Some PLD companies license CPU designs for use with their devices; e.g. I have an Altera APEX device here in front of me (not mine, a bit expensive) which comes with a NIOS soft core, which is configurable: choose 16 or 32 bits, the number of address bits, how many registers, the number of positions the shifter can shift in one cycle, ... depending on how much room you need on the device for your own design (I'm not a hardware designer myself (yet); I have to program the NIOS).

    Other designs are freely available under open source / free licenses (lots of stuff, not restricted to CPUs), including a SPARC CPU with peripherals by the European Space Agency under LGPL. If your PLD is large enough, you can put a whole computer on that chip (CPU, ROM, RAM, serial port etc.).

    And for starters, you can get a complete development board with a PLD and software for about $150. A small CPU (no 32 bit thing with lots of registers) would fit, and it's enough for some hacking to get up to speed.

    Some links for open source hardware: OpenCores [opencores.org], Free IP [free-ip.com], Google Web Directory - Computers > Hardware > Open Source [google.com], LEON-1 [estec.esa.nl] (the ESA SPARC core).

  • > The startup costs for me to start making open source software are very low (assuming I have a computer)

    Startup costs to start making open source hardware: use a simulator and/or get a board with a programmable logic device (less than two hundred dollars). Low cost, assuming you have a computer (with a parallel or serial port for downloading to hardware).

    > the cost of failure/screwup is low (an hour or so of my time to compile).

    Cost of failure/screwup: time to fix and recompile. Then simulate or download to the PLD. Low cost.

    > What are you going to start with? A chip that beeps out 'hello world' in Morse over a piezo?

    Apparently making CPUs is rather easy, since that is what most textbooks on hardware design in, say, VHDL give as the first complete example that goes beyond a simple ALU. Go look at the source of an 8-bit processor that is available for download. It's amazingly simple in a high-level language like VHDL.
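
    To give a feel for how small such a thing can be, here is a hypothetical sketch (in Verilog rather than VHDL) of a toy 8-bit accumulator machine; the opcodes, widths, and four-instruction repertoire are invented for illustration rather than taken from any particular downloadable core:

    // acc8.v -- toy 8-bit accumulator CPU: 4-bit opcode, 4-bit address,
    // 16 bytes of unified program/data memory. Illustrative only; a real
    // FPGA version would use separate instruction and data memories.
    module acc8 (input clk, input reset);
        reg [7:0] mem [0:15];                    // program and data
        reg [7:0] acc;                           // accumulator
        reg [3:0] pc;                            // program counter

        wire [7:0] instr = mem[pc];
        wire [3:0] op    = instr[7:4];
        wire [3:0] addr  = instr[3:0];

        localparam LOAD = 4'h0, ADD = 4'h1, STORE = 4'h2, JNZ = 4'h3, HALT = 4'hF;

        always @(posedge clk) begin
            if (reset) begin
                pc  <= 4'd0;
                acc <= 8'd0;
            end else begin
                case (op)
                    LOAD:    begin acc <= mem[addr];       pc <= pc + 4'd1; end
                    ADD:     begin acc <= acc + mem[addr]; pc <= pc + 4'd1; end
                    STORE:   begin mem[addr] <= acc;       pc <= pc + 4'd1; end
                    JNZ:     pc <= (acc != 8'd0) ? addr : pc + 4'd1;
                    HALT:    pc <= pc;                     // spin in place
                    default: pc <= pc + 4'd1;              // treat unknown opcodes as NOP
                endcase
            end
        end
    endmodule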

  • It's an interesting idea but who exactly would pay for testing of these devices? It's (relatively) inexpensive to lay out the device on a computer, but getting the idea to silicon is a massive, expensive undertaking.

    Whereas software is just bits and bytes being worked with, the expense of working with tangible materials seems to me to be cost-prohibitive.

  • Who cares? If you want an Ogg Vorbis accelerator, there's nothing to stop you from implementing that in an FPGA RIGHT NOW. Go ahead. Isn't it open source? You only have to port it to VHDL or Verilog.
    So now you've done that. You got a working FPGA. What are you going to do with it? Riiiight...we should all go about building our own audio hardware accelerators.

    Well, it might work if you actually make CPUs, where people can just plug them into their motherboards. Even then, it's really kind of pointless. If I have the "code" for a SPARC, and I give it to you, that's still almost worthless.
    1) You have to fabricate this damn chip. Do you have US$250,000 kicking around? That's about how much you'll need for a 0.25um mask set (yup, not even cutting edge). Minimum 6-8 weeks to get this done.
    2) You want to make changes to a working design? HAHA! Another quarter million to fab that change. And another 6-8 weeks.
    3) Oops! The new design didn't quite work. Why not? Do you know how to debug? Let's say yes. Do you have access to a high-speed IC tester to check it out? $5 million a pop. Or maybe you can rent one. That's about $300/hour.

    By this time, you should be thinking "Why don't I just walk over to the local computer shop 10 minutes away and plunk down $200 for a new K7?"

    I mean, if I think I had something to contribute to CPU design, I might (wild idea), GO GET A JOB at AMD, or Intel or wherever...

    There's not too much value in open-sourcing your CPU design. The Verilog or VHDL code may be verified to be 100% correct, but that code doesn't mean that your silicon is 100% correct.
  • For the people who don't know how Systems-on-Chip are made:

    You start off with Intellectual Property (i.e. the design, documentation, etc.) of parts of a system. That means buses, CPUs (e.g. ARM, MIPS...), memory controllers, peripheral I/O. You munge it together, and voila. Because you have to pay for the IP, the chip costs more. ARM Ltd. is waging war against copycats of their instruction sets. An open standard would help, in other words.

    Btw, licensing costs of blocks of IP in large volumes are on the order of $1 each.
  • Some of the good high-speed research (like the multi-GHz 64-bit F-RISC stuff that has been going on at RPI) uses the new fabs from IBM and the like... it's hard to find a school fab that can do SiGe or GaAs...
    --
  • Some of the most successful chip makers in the world do not own a single fab. They simply outsource their manufacturing to a foundry. C'mon people, haven't you heard of PMC-Sierra?

    I also don't agree with the complaint about chips being too complicated for a self educated "hacker" to design. You can break chips down into functional units that are easy to understand, and create them using a high-level design language. The interesting part is that some self-educated kid might actually find a better way of doing something than the industry "standard".

    jc
  • This is useful for small companies which are designing a system where a processor is an insignificant part of the hardware design. Just as in software, where you can use other people's libraries, these cores are designed to be a subset of a larger project which is implemented on an FPGA or actually fabbed. It's nice to be able to focus on the unique design aspects of a project rather than buying a license from Intel, Motorola, ARM, etc. for a CPU core to embed. These cores for the most part aren't for Joe Linux User to play with. They can be a useful learning tool in an architecture class in college, or played around with to gain experience, but their real value is to the two-man engineering shops.
  • You're not changing the CPU

    I never said anything about changing the CPU...I was replying to a comment that "...the last thing I want to do is recompile my CPU each time there is an upgrade". Recompiling is software, not hardware. And my reply was clearly talking about how Transmeta recompiles their source.

    I never said it was Open Source hardware. It's not.
  • Yeah, I remember reading that on Slashdot not too long ago. It might have been lumped together with other stuff, but I have seen it here before.
  • Now that's something people could contribute to and use in a variety of projects..
  • Obviously you've never made a chip before.

    The logic portion is exactly like writing code; you're just writing in Verilog.

    And there's plenty of room for self-expression and style.
  • There is a list of open source projects and benchmarks at http://www.ics.uci.edu/~sumitg/CadPages.html#Bench [uci.edu]

    Open source projects are a big boon for research groups. The biggest problem that research groups have is that they do not have access to industry-standard processors and do not have the resources to design one from scratch. So a whole set of these in the public domain will mean a lot to the research community.

    Also, to all the software people here who do not know much about hardware design: open source hardware designs are just as exciting to the hardware community as open source software is to the software community.

    To clarify/correct what some posters have said - we do NOT fiddle around with gates and low-level stuff like that. That is only done to optimize a design (it's like writing assembly code for core kernel routines). A lot of processor blocks are designed in high-level hardware description languages, such as VHDL and Verilog.

    Increasingly, people are beginning to use C or variants of C (see http://www.cecs.uci.edu/~specc [uci.edu]) for hardware design and simulation. We are working on a high-level synthesis tool that generates hardware starting from C - check it out at http://www.cecs.uci.edu/~spark [uci.edu]

    Sumit
  • I am a student at Texas A&M University and we are doing a Senior Design Project with an FPGA

    doh.. how the hell did I hit a 'g' instead of a 'd'? Thats what I get for rushing.. oh well.. no Aggie jokes plz ;)

    JOhn
  • You don't need a whole system to develop hardware. I have an Altera FPGA sitting right here which I can play with. I have personally made a 5-bit processor with addition, division, multiplication, blah, blah.. on board. No, there is no OS which supports my design. That's not what it is for. The FPGA has inputs and outputs. I can do the rest.

    Think about it. Instead of using a crate of ICs consisting of AND gates, NOR gates, etc. to implement digital logic, just put it all in a nice FPGA and you're done. In addition, I have even seen some FPGAs which mount on breadboards.

    JOhn
  • In CpE school, we designed and tested CPUs and subsystems at little cost (mostly paper and power). You don't fab something in the dark to see how it works!! That's absurd. Of course the development cycle would be way too expensive. You design and test using simulators (like HSPICE). If it works in the simulator, then it will probably work when fabbed. You fab it to see just how fast you can clock it or how hot it really gets (the software will even give you good guesses for these, too). The logic gates and even dynamic components are easily simulated. The CAD software does the work of figuring out if the traces are too close, or if things will fit. The simulator tests inputs to outputs and clocking.

    Having the fab in the debugging and rough optimization cycle is crazy. That's what software is for.

    The fab is further down the road than some people think. It's for nailing down the 'real world' results.

    I hope this helps clarify some things, and perhaps those who know more can elaborate.

    --Phil
  • The good folks at sparc.org [sparc.org] are happy to share the inner workings of the SPARC architecture with you. Of course, there's a licensing fee to manufacture the SPARC chip commercially...

    It always amuses me when I read some pundit going on about the Sun "proprietary" architecture. Yeah, right. And Intel is an OpenSource company.

    Anyway, there's a good repository of documents [sparc.org] on the SPARC architecture available for download.

  • Wired magazine, January 2001, raves about desktop manufacturing by 2005.
  • Not quite right. The chip is designed by Sun and fabricated by TI.
    --
  • You're not changing the CPU, you're changing the way it responds to instructions. That could almost still be called an OS. It's still software.

    Agreed, it would be cool, but it's not open-source hardware.

    ----------
  • Using the same logic, cooking is the same as sheep breeding. I mean, anyone can microwave a TV dinner, but to prepare a good meal you need to know your pans, ovens, spices, and meats.

    In the same sense, anyone can lock a couple of sheep in a barn and wait for them to do what nature intended them to do, but to do it more efficiently you have to pretty up the sheep to make them attractive to the ram.

    Wait.. what was my point again?

    Oh, yes. I think your point is very moot. You don't need much skill to get simple, yet effective, software to work (VB), while you do need a lot of skill to get a chip to work at all. Even if you get past the HDL writing, the synthesis and physical compiler tools are not nearly as easy to use as VB. There is no "Build" button. It takes skill to produce something that can be sent to the fab.

    Now, FPGAs usually come with VC-like IDEs, and it's possible to make very simple designs with only basic HDL/digital logic knowledge. The software equivalent in complexity would be a CGI counter, though. Hardly useful for a future open-source hardware community.


    ----------
  • Nobody is trying to hide (to a degree) the design of the latest CPUs from less developed, or hostile, countries. Just look up a magazine like IEEE Micro, and you can see a lot of details on how most of the popular CPUs work.

    The US is trying to keep the manufactured CPUs away, though, because that's the real difference between a high-tech country and a low-tech one -- the capability to produce these chips. The bulk of the expense is the manufacturing cost.

    Open-source is not going to help a poor country much. They could use it to educate people, maybe, but they still wouldn't be able to produce any designs they come up with.

    ----------
  • The designs are run through simulations, on many different levels, thousands of times before they even get close to a fab. It would be rather inefficient if a chip would have to be created physically after every bug fix :).

    The simulation tools are a little on the pricey side. Students can get cheap, but stripped down, tools from FPGA companies, like Altera, though. Enough to play around, implement and simulate simple designs.

    ----------
  • You can't just "compile" your schematics and test them.

    Actually, yes, you can, and that's in fact how it works :). You run a compiler, and then you run a test in a simulator to see if it works.

    You have a point regarding the rest, though. The cost of actually producing a chip is prohibitive to enthusiasts.

    I imagine very few people on the street could tell you how to create an XOR operation with a simple set of logic gates.

    Not to sound nit-picky, but XOR is a simple logic gate :).
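
    (It takes four of them, for anyone curious; a gate-level sketch in Verilog, using the language's built-in nand primitive:)

    // xor_from_nands.v -- the classic four-NAND exclusive-OR
    module xor_from_nands (input a, input b, output y);
        wire n1, n2, n3;
        nand (n1, a, b);      // n1 = ~(a & b)
        nand (n2, a, n1);     // n2 = ~(a & ~b)
        nand (n3, b, n1);     // n3 = ~(~a & b)
        nand (y,  n2, n3);    // y  = (a & ~b) | (~a & b) = a ^ b
    endmodule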


    ----------

  • Several research labs are now working on flexible transistors, including some that can be "printed" using an inkjet-like printer. The idea is a future where you can "download" processors. Seems like the perfect match for an open source processor design.
  • I am a hardware engineer. Moreover, I am a logic design engineer. I sit in front of my computer and write Verilog RTL all day.

    Writing a Verilog model of a piece of hardware in an open source style with a community of designers is completely possible. In fact, it would be very easy, but there are issues that have prevented this from actually taking off in any type of widespread fashion.

    Getting the model and code together is not the problem. Organizing that code into a CVS structure for the open source community is also not a problem. It works just like software.

    ISSUE NUMBER 1

    I have worked with FPGAs from both Xilinx and Altera, and I don't think you are considering the drawbacks of these systems.

    The main one of these is memory. Most FPGA design kits do not have any type of off-chip storage. (ie, no SRAM on the FPGA board.) This means that you are forced to store any variables and data either

    • in FPGA-based RAM arrays
    • in the host system's (ie, PC) memory

    Anyone who has worked with FPGAs knows that you begin to lose a lot of your available resources as soon as you try to implement any kind of RAM in the gate array. This makes it not an attractive option.
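
    (For reference, this is roughly what such an on-chip RAM looks like in Verilog -- the module name is made up -- and whether the tools map it onto dedicated memory blocks or spend general-purpose logic cells on it is exactly the resource problem described above.)

    // ram256x8.v -- small synchronous RAM the synthesizer has to put somewhere
    module ram256x8 (
        input            clk,
        input            we,
        input      [7:0] addr,
        input      [7:0] wdata,
        output reg [7:0] rdata
    );
        reg [7:0] mem [0:255];
        always @(posedge clk) begin
            if (we) mem[addr] <= wdata;
            rdata <= mem[addr];              // registered (synchronous) read
        end
    endmodule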

    The only other option then is to store the data used by the custom hardware in the system memory/disks. This will work, but it will mean a hell of a lot of system CPU-to-FPGA transactions. Slow and cumbersome, but workable.

    Let's back off of the whole FPGA thing for now, because we don't really need any FPGA to come up with the design for some hardware.

    It is possible to build the model only in Verilog (or VHDL). This can be simulated and debugged on a computer with the right tools. (There are free tools available out there for Linux for all of this.)

    But... even though we could do this... what then? The project would have to be useful enough and draw enough interest for someone to foot the manufacturing bill. Either that, or it gets sold as a non-efficient FPGA-based design that the user has to compile and upload to the FPGA board.

    The performance of this hardware would suck. It would need to be built as an ASIC to work optimally.

    ISSUE NUMBER 2

    Even if someone does foot the bill for manufacturing, the performance would be horrible. The major reason for this is that the project is going to be built on some type of silicon. What type? That all depends on who does the manufacturing.

    Anyone who works in hardware design knows that many pieces of circuitry have to be tweaked depending on the electrical characteristics of the silicon. This would be a hindrance to an open source hardware project for which the contributors don't know anything about the silicon that will eventually house this device.

    ISSUE NUMBER 3

    Hardware clocking? Obviously, this thing will need some sort of clock in it to push it along and make it operate. What kind of clock? How fast?

    This (again) is something that will depend on:

    • silicon characteristics
    • system architecture
    If you don't know how much time you have to accomplish (for example) a binary comparison of two values, then you don't know how to build this comparator. Do you have enough time to do it all in one clock cycle? Does it need to accomplish the task across four clock cycles? I don't know. Neither will anybody else.
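
    To make the trade-off concrete, here is a hypothetical sketch of the same 32-bit equality check done both ways; the interface and the four-cycle split are invented purely for illustration:

    // One-cycle version: fine if the clock period leaves room for a full
    // 32-bit compare plus routing delay.
    module eq32_single (input [31:0] a, input [31:0] b, output equal);
        assign equal = (a == b);
    endmodule

    // Four-cycle version: one byte per clock, for a faster clock or slower
    // silicon. Inputs a and b must be held stable until done pulses.
    module eq32_multi (
        input             clk,
        input             start,        // pulse high with a and b valid
        input      [31:0] a,
        input      [31:0] b,
        output reg        done,
        output reg        equal
    );
        reg [1:0] step;
        reg       busy;
        always @(posedge clk) begin
            done <= 1'b0;
            if (start) begin
                busy  <= 1'b1;
                step  <= 2'd0;
                equal <= 1'b1;
            end else if (busy) begin
                if (a[step*8 +: 8] != b[step*8 +: 8])
                    equal <= 1'b0;           // any mismatching byte clears it
                if (step == 2'd3) begin
                    busy <= 1'b0;
                    done <= 1'b1;            // equal is final when done is high
                end
                step <= step + 2'd1;
            end
        end
    endmodule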

    SUMMARY

    Opensource designs are possible, but not easy, and not likely. So many aspects of the design are directly influenced by the final materials, packaging, and system board that it becomes too problematic to develop a useful, stable, robust device.

    A piece of software is not affected by the type of disk that it is stored on. A piece of hardware is influenced dramatically by the type of silicon it will be stored on.

    Opensource hardware will never be.

    SirPoopsalot
    To send me an email, remove the SPAM's and replace the -at- with @.

  • How much does it cost to recompile a piece of code after modifying the source? Basically nothing. How much does it cost to fab a new chip after modifying the design? Millions of dollars.

    I'm not saying that there aren't advantages to open source hardware (and there are precedents [vhdl.org]). But the advantages aren't as big as for software because not as many people can realistically participate. Let's look at some of the classic stated advantages of open source software:

    • Anybody can fix it when it breaks. Not so with hardware. Even if you could identify the problem in the hardware description (which is often very difficult, even for people with an intimate knowledge of the design and more debugging resources than the Joe Average Hacker), writing a patch gets you nothing. You're not going to pay to fab a new run of wafers, so you still have to wait for the vendor. Plus, they will still have to verify that your patch works in all cases and doesn't cause new problems, and design verification takes more time than design itself.
    • You can tweak it for your needs. Again, are you going to pay to fab custom chips with that hardware gnutella client? I don't think so. There's also the problem of verification, which is really difficult and time consuming.
    • "Given enough eyeballs, all bugs are shallow" (Linus' law). Given that you don't have either the above two advantages, how many of you are going to take the time to a) learn Verilog/VHDL and b) get to know the code well enough to debug hardware failures? Also, how many of you have a logic analyzer, oscilloscope, etc.? I'm thinking the number of eyeballs involved from the general hacker community will be within epsilon of zero.

    Of course if you're talking about reconfigurable computing [io.com] then everything changes.

    [Just in case you're wondering, I work in the Alpha microprocessor group at Compaq [compaq-alpha.com]. Read into that whatever you like.]

  • Mayhap, then, simpler software? Back-to-Basics? Do we really need all the overhead Windows requires? And... maybe a processor would be simple, but what if you made *banks* of them? What if you could roll them off on a plastic tape, as many as you needed?
  • I'm gonna just guess you haven't ever done any hardware layout. The Verilog code is just one itsy-bitsy step. Take, for example, a full adder.

    module FA(s,cout,a,b,cin);            // 1-bit full adder, structural style
    output s,cout;
    input a,b,cin;
    // NAND2 and XOR2 are cells from a gate library, not Verilog built-ins
    NAND2 n1(na1,a,b), n2(na2,xr1,cin), n3(cout,na1,na2);  // cout = ab + cin(a^b)
    XOR2 x1(xr1,a,b), x2(s,cin,xr1);                        // s = a ^ b ^ cin
    endmodule

    This is so far from actual hardware. You would never even use NAND and XOR gates for adders; you'd design it at the transistor level. A mirror adder, for example, uses something like 28 CMOS transistors.

    But that is still so far from actual hardware: you have to do layout. Sizing, routing, it's a lot of work. And Verilog code has next to no correlation with it. (A quick Google search turns up this guy's [berkeley.edu] assignment, which is a good example.)

    God does not play dice with the universe. Albert Einstein

  • This was a big part of the push for 3d printing, and you may not be that far off..

    http://slashdot.org/articles/00/09/28/175207.shtml
    This is where I read about it.
  • I think the idea of open-core design is pretty much useless at this stage due to the high cost of fabrication. However, what about designing chips that use a large amount of programmable logic that can be flash-upgraded? I'm sort of thinking along the lines of the Crusoe but perhaps taking the idea several steps further. If a significant amount of the chip can be 're-designed', then perhaps there would be some incentive for open-source development.. such as tweaking the chip for different tasks.
  • This is absurd..
  • This is exactly what the x-puter does. It transforms software instructions into hardware configurations.
    http://xputers.informatik.uni-kl.de/ [uni-kl.de]
    http://xputers.informatik.uni-kl.de/xputer/index_xputer.html [uni-kl.de]
  • We were doing a term paper here at the ETH. This involved designing our own chip. The problem I see with Open Source chips (at least ASICs): the software is too expensive! (because there is no Open Source software for those problems (silicon compiling, place & route, clock distribution, ERCs, etc.)) We were using about 4 different software tools which were provided by the corresponding companies for free (free as in beer) to use in student projects. As far as I understand the workings of these programs, they are really complex. A lot of routing, placing, etc. (NP problems).

    In hardware design you can't use 'hacking' in the regular sense: you have to do testing (simulation) too, and this is where another problem arises: that kind of testing takes a lot of experience (both human and in electronics, like test runs, which are even more expensive). Only then, once you are completely sure that your design (in VHDL, Verilog, etc.) is correct, does it go out for production. If the chip doesn't work, you lose a lot of money and time (about 3-4 months). I doubt that anybody would invest that much money (about $250,000 for one ASIC) and then release the result to the community!

    As for Open Source: a year ago I searched the web and found next to nothing (please correct me if I am wrong). As I said above, those software tools have to be very clean and bug-free (betas are next to useless!). So you'd have to use non-free software (which is very non-free, as in very expensive) to create a free design. I doubt that the Open Source gurus would welcome that :).

    Oh yes, if you are talking about a CPU: such projects involve about 300 people, and to be honest I don't think the bazaar approach works here, because some parts are done in hand layout and testing is very important (and who wants to do the testing? I am not talking about compiling or trying some new window manager; I mean testing of waveforms, interpretation of those results, etc. -- that testing is just a pain in the ...).

    I don't know too much about programmable chips, but I assume that the situation will be much better here...


    Oh, one more thing: I hope you don't think I tried to flame the Open Source Community or something like that --- I tried to state my experience...
  • Making processors is EXPENSIVE!!! g++ is free, a full blown silicon die fabrication plant costs in excess of US$1 billion.

    So maybe if every linux user on the planet contributed $10 we could get enough money and... oh wait... never mind.

  • Maybe it would work. All it takes is one person to have a stroke of genius and boom, we have a chip on par with AMD and Intel. It's just a thought, though.
  • http://www.opencores.org/ Open source hardware. It's always going to be aimed at the people who buy semiconductor IP off vendors like ARM, and incorporate it into their designs. If they can get the VHDL source for free, rather than having to pay royalties every time they manufacture... Of course it's not aimed at joe@public.com.
  • The notion of open source hardware coupled with conductive polymer technology could have big implications. You could download a chipset blueprint and print it using plastics. Of course, this won't be suitable for the microprocessor market, but it will definitely be big in things like small personal display units and other electronics like cell phones.
  • The only problem I see with reprogramming is the heat output. What happens when dnetc and the GIMP get into reconfiguration wars?

  • He's still right. Behind every great piece of open-source software are a bunch of programmers being paid by a corporation, university or non-profit to work on software between the hours of 9am and 5pm.

    Show me a coder who really codes for free, and I'll show you a derelict bum without a house, decent clothing, healthcare, and food, let alone a computer.

  • The simple fact is that there are only a few processor manufacturers, and this will not change as the market is not big enough to support more.

    The problem with the chip market is not the size, it's trying to break your way in when you have such high initial infrastructure costs. Secondly, people seem to be a bit leery of buying silicon from an untrusted producer.

    Therefore, holding copyright on the design of a chip is largely irrelevant, especially when the manufacturers essentially operate as a cartel. Have you seen Motorola designing an x86 chip recently? Didn't think so. With a few public exceptions the processor manufacturers largely stay away from each other.

    Exactly what defines an "x86 processor"? I would contend that the instruction set is mainly responsible for setting one processor apart from another. If you want to do something different and offer programmers something different from x86 (e.g. RISC), you use a different ISA more suited to your objectives, and the result is by definition not an x86. If, however, you're like AMD and want to break into the consumer market, you emulate the market leader and give customers what they're used to, i.e. the x86 ISA. That doesn't mean the core of your part has to look anything like a vanilla x86. In fact, you'll want something better and therefore something different, otherwise why would people bother switching to you?

    However, they will be receptive to open source as it would allow them to downgrade their expensive design engineering teams, outsource and up the shares dividends accordingly.

    Hmmm. Most people involved in OSS seem to do so because they can see something going back into the community, although I don't mean to generalize across the entire group of OSS developers. I don't see how you could get that with processors, unless the manufacturers start giving away chips for free. Either way, the differences between hardware and software production are too great to make any predictions at this stage.

    I think Open Source designed processors will be higher quality, but it won't make any difference to price or competition in the field.

    Well, we'll see :). The fact that you will have a lot of ideas about processors just floating around should see a big reduction in the R&D budgets of any startup chip manufacturers, which should ameliorate the problem outlined at the beginning of this post. I have a hard time predicting the viability of open source hardware when the verdict is still out on open source software.

  • The problem is processor manufacturing. You can implement a processor in an FPGA, of course, but such an implementation wouldn't run at speeds above the 33 MHz range.

    (I'm actually just doing this, so I know that's a realistic number.)

    In order to achieve higher speed you need custom chips - for which the plant will charge you in the $10k range (that's for getting *one* prototype made). Even that is still the ASIC range; for full-custom chips you need to pay more.

    Something which is in the Intel/AMD league requires access to specialized plants - such as Intel and AMD have. ;)

    This is where the true challenge in processor design is, too - not the architecture (cache, ALU, floating point units...) but the manufacturing technology. To make chips which can transport data from a register through some logic and into another register in 2 ns!

    So if you consider how close you are to making your own RAM chip as an open source project, that's about the same challenge as making an open source processor.

    It's a different story if you're looking for embedded processors which go on an FPGA - but what's the benefit to the user? Presumably you want to get something out of it, not just saving some chip design company a bit of money...

  • ...as long as you aren't looking for top-of-the-line. If you are a hobbyist or student, there are tools for you to use.

    You mention Altera. If you are a student you can get the evaluation board (for 5k to 70k gate FPGAs--maybe larger now because the technology is so rapidly advancing) for very cheap--like $200--and it is still a good deal even if you are not a student.

    Furthermore, you can freely download (as in beer, not in speech--the source is closed) the basic MAX+PLUS and Synopsys software [altera.com] for use with the evaluation board. This is perfect for the hobbyist. In fact, these boards are ideal for implementing simple microprocessor cores plus added custom logic, and they certainly beat wire-wrapping a bunch of 74HCT-series gates and registers and stuff.

    The good thing is that FPGA technology is moving so incredibly fast that eventually many more hobbyists could get involved. Plus, VHDL is supposed to be modular, so even if you can't synthesize an Athlon on your FLEX70K you could test and synthesize modules that perform some of its functions at a slower speed. Then if a bunch of hobbyists combined their modules they could come up with one or more open-source VHDL-based microprocessor designs.

    If companies with the resources or money to access fabs like the design, with open source they would have ZERO design costs. This isn't so far fetched--I'd say that the Pentium III and PowerPCs could be the Windows 2000 and Mac OS X of the hardware world, and it could be possible to do a Linux or BSD of the hardware world as well. Sure, fab costs are very high, but for a project of that scale, engineering/design costs also play a gigantic factor. I think it could result in lower cost, more interoperable hardware designs. The only resistance is the closed attitude of chip makers (even their support software is closed source despite being downloadable on the net--wouldn't Altera gain a huge amount of support and boost sales of their FPGAs if they allowed the open-source community to develop software tools for their products?)
  • First off - My knowledge on the topic of hardware is quite limited, so if any of this is erroneous, point it out.

    If this idea of open-source processor designs takes off, wouldn't it be possible to test (at least the Instruction Set Architecture) on Transmeta's Crusoe (or something similar)? It would seem to me that having an environment where people could design and implement their designs in software for testing would allow people a lot more flexibility.
  • If CPUs become open source, everyone will release their own version, and some are bound to work differently. Programmers may have to account for a lot of "Cyrixes" out there; who knows what people will do to modify the CPUs to better suit their needs yet cause widespread incompatibility issues which software developers would have to compensate for. Look how many distros of Linux are out there and then look how many times you have to release a package, creating different ones that work with each distro. I don't want to have to deal with that, and I don't want to start seeing "Required: AMD, Intel, or VIA/Cyrix CPU" when my CPU doesn't happen to be on that list... Look how bad off Super 7 was... Granted they will have the source available so they won't have to resort to reverse-engineering, but it just shows that no matter how close you are you can be so far off as to cause annoying incompatibilities with hardware and software.
  • I attended a presentation sponsored by my college's ECE department on what the speaker called "Elastic Core Microprocessors". The idea is that if a microprocessor were made with programmable gate arrays, it could change its design during program execution. This way, various functional blocks of the microprocessor could be created on the fly, as needed. If I'm doing a lot of floating point calculations, I can create a few extra floating point units. If I'm not, those gates could be used for other purposes. Heavily used functions could even be implemented in the chip's logic gates rather than in software.

    Effectively, this would make a CPU-Burner possible, although I think it would be a while before we can make high-end processors this way.
  • I think that for this to be viable for frequent on the fly reprogramming, you would have to have a RAM based PGA. As far as configuration wars, if you had a RAM based PGA, I would imagine that you could swap areas of the PGA to disk or other storage just like we swap memory from RAM to a swap file/partition. When GIMP requested resources for image processing, if there wasn't enough space available, you would simply swap out blocks that haven't been used lately. When those processes go to use those blocks again, they would be swaped back in at the expense of some other not-used-lately portion of the PGA.

    But then again, I only saw an hour-long talk on the subject, so I don't have any idea how practical these ideas are.
  • Yep, and you can download a VHDL description of a 32 bit sparc (ready for synthesis with synopsys) from the site of ESA (European Space Agency). No hardware floating point or integer division though. And if I'm not mistaken, the code is GPL'ed.
  • I think this sort of ties in with the saying, "Thinking digital rots the brain." :)

    I think that you might consider the whole HDL scene similar (though not at all the same) to code "generation" tools. I've seen several tools which generate code, but you could never use what it generates directly. It was either very suboptimal or sub-correct :) or both, and that means that software expertise is necessary to get meaningful results from the process.

    Perhaps as the state of the art progresses, these tools will become smart enough to be "trusted" like a compiler, so engineers can focus on other things. Although this is a danger, too. We might just have to focus on good designs, or (*gasp*) sane process methods.

  • I hate to trot out this cliche again, but it's possible that in 20 years a desktop MEMS construction set could perform low-production-rate personal manufacturing of custom chips from standard blanks.

  • And I would like to take credit for making that wonderful site. Here [abe-and-os-breweries.com] is another link to it, in case you're really lazy.
  • I think that this has more to do with students, perhaps, than with the average working Joe. Students do have access to fab rooms and so forth (in your better research-minded universities). Imagine developing an open standard for these core designs in a way that could conceivably turn out blazing chips from university labs. That's where I think this project would like to go...
  • Um, since you don't have a link, I can only assume you are talking about the "WinBoard" that showed up on Slashdot last April 1. I don't think we'll be seeing Winboards anytime soon, simply because of the chicken-and-egg problem. How do you load stuff from disk into memory (your OS) when the disk and memory controllers are part of your OS?

    Down that path lies madness. On the other hand, the road to hell is paved with melting snowballs.
  • It's more accessible than some of the other responses would lead you to believe. The tools are quite good, and the devices themselves project a simple synchronous digital world abstraction.

    Excellent basic synthesis and FPGA implementation tools are now $0-$55, several proto boards are $100-$200, the devices themselves are <$20, and with them you can build 16- and 32-bit processors that run at 33-67 MHz and beyond. Perfect for many embedded systems projects.

    Visit our web site, FPGA CPU News [www.fpgacpu.org] for further information. And/or join the mailing list [www.yahoogroups.com/group/fpga-cpu].

    See my Circuit Cellar magazine article series, "Building a RISC CPU and a System-on-a-Chip in an FPGA" [www.fpgacpu.org/xsoc/cc.html]. The corresponding free kit (which requires a $100 FPGA proto board from another company) is at [www.fpgacpu.org/xsoc/index.html].

    Or see my recent DesignCon'2001 paper on "Building a Simple FPGA-Optimized RISC Processor and System-on-a-Chip" [www.fpgacpu.org/soc-gr0040-001201.pdf]. This latter paper includes the annotated synthesizable Verilog source code of a simple CPU -- less than 200 lines of code.

    Learning to design digital systems with a hardware description language, and then progressing to design your own peripherals, and perhaps processors, is not a trivial undertaking, but neither is it any longer an exclusive and unattainable art, practiced only by high priests in well-funded semiconductor companies with in-house fabs.

    Jan Gray, Gray Research LLC
  • The reason open sourced software development works so well is because many people can help out on the project and put as little or as much time as they want. With open sourced hardware, not nearly as many people will be able to contribute to the cause due to lack of hardware resources. Don't get me wrong, it's a great idea, but I don't think it will get the same success as open sourced software.

    You are wrong; it will succeed to the same extent as open source software, for all the same reasons. Remember, this hardware is just a different kind of software: it's software for silicon. Its cost is asymptotically approaching zero, just like software. The design software is now free, and fabbing costs a tiny fraction of what it used to, due to competition. You can load it into FPGAs if you want to, and especially, you can simulate it.
    --

  • As I have tried to post twice, MIT Tech Review has an article regarding the ability to use an inkjet printer to print a semiconductor chip.

    It is a very interesting read and even speaks to the possibility of this allowing open source cpu's in the future. Imagine, downloading and printing the latest version of your cpu before upgrading to the n.n kernel it is targeted at :)


    The problem with this technology is that even if you can shrink an inkjet printer head to the point where you're laying down 0.18-micron dots, you'll have one heck of a time keeping the environment clean enough for your fabbed chip to work. This is one of the main reasons why semiconductor fabs are so expensive (though other factors definitely contribute).

    If you're printing at anything other than deep submicron linewidth... your chips will be useless. Maximum operating frequency, to a rough approximation, goes up as the inverse square of the linewidth. Make the linewidth 100 times wider, and your chip will run about 10000 times slower. You're not going to be playing quake on an inkjet-printed chip any time soon.

    This technology is extremely useful for making things like active-matrix displays and cheap "smart" devices, but that's about it.

    Now, the silver lining. You _can_ fab moderately complex chips using Field Programmable Gate Arrays. These have been around for a long time, and are frequently used for prototyping. You'll get something a tenth the complexity and a fifth the speed of a real chip, but that's still enough for many applications (program yourself a sound card with nifty new features, or emulate an older gaming system or computer in hardware). This will also give you a training tool for gearing up for the real thing (true IC design and fabrication).

    Fabbing ICs is impractical for individuals, but possible for organizations. It'll cost you about $500k or so once you have a finished, debugged design to produce a few hundred thousand chips (a couple of wafer runs; the first one or two runs will be buggy alpha silicon which you test and revise).
  • I'd be interested to know, though, for anyone who has experience with FPGAs, whether the best FPGA tech would be capable of creating even a general-purpose embedded CPU like the Z80.

    Sure, easily. Current top-of-the-line FPGAs let you synthesize designs with on the order of a million gates or so.

    People have been writing papers about chips with reconfigurable parts for many years. The idea continues to show promise, but the practical problems with actually _using_ something like this are great, and the benefits aren't amazingly spectacular.

    You can buy modules containing integrated processor cores and FPGA logic already from Altera and the other big FPGA companies.
  • One of the biggest issues might be fabrication costs as somebody else has pointed out. The bigger issue is finding a simulator and synthesis tool to do the job for less than current costs. I think more effort should go into Open Source simulators so that you don't have the minimum ~$5000 entry price (from Fintronic). Forget about synthesis. Tools from Synopsys cost upwards of $100k a license. If I knew how to write C and pure software I'd think about writing those tools. But for now I'm working full time designing ASICs.

    FPGAs are an alternative, but still not that cheap and they too have their drawbacks. Ever tried routing one so that all the registers get their data on time? Not very fun. Ever tried pushing one past 50MHz without having to pay $1000 a chip? (If my knowledge on the current state of FPGAs is incorrect, somebody please correct me.)

    If you have a small design, yes, you can use an FPGA. As a matter of fact, it's a neat idea. Too bad simulators cost so much (that $5000 thing, though Altera and Xilinx tools might be cheaper). If you want high volume - as another poster correctly stated - there's a $250k NRE (non-recurring engineering) charge just to set up, plus a per-unit charge for each ASIC (usually a couple of hundred dollars).

    Don't think that it's all that easy. In some ways Open Source could probably help - on a module level. As in, hey, I just wrote a really cool, high speed SDRAM controller. But on a full-chip level I think you'll find that it's a bit more difficult.
  • Right, it's like reprogramming the micro-instructions on the old Vax and PDP-11 machines... a little microprogramming, some wire-wrap... it's all good.
    --
  • I wouldn't put the CueCat in the same class of hardware as anything as complex as... oh, my alarm clock (granted, it is an X-10 controller, too). After all, the hardware on the CueCat could be replicated with a 556 and some other chips from The Smack [radioshack.com] (minus of course the proprietary tracking bits, which we didn't want anyway). Handheld scanners aren't all that hard to implement, though they are a lot cheaper if you get them free...

    The *real* hardware industry (pick a few uProcs, DSPs, comm chips, etc) generally does a really good job with design... The fact that Winmodems are crippled hardware isn't much more than limited requirements, with low cost being the driving factor. Processors, in most cases, can run any number of operating systems, given the right motivation on the part of the programing teams. Most O/Ss need the same basic hooks, so it usually isn't a big stretch (except for those recodes of the asm).

    The most important "openness" that hardware companies can provide is full interface specs on their products... register sets, timings, etc... with that information, a person or group could do almost whatever they want with a given chip (even those winmodems).

    As for bz2 on modems... software compression is nice, and easily added (of course both phone-link endpoints need to support it). The hardware compression offered by (for example) USR/3Com has been pretty impressive, relatively speaking, since you have a limited block size that you will work with. The adaptive compression used in bz2 may or may not be suited for that task, but that's another debate 8^)

    --
  • Flabdabb Hubbard said: "What will happen if the notoriously poor quality control standards of 'open source' reach the hardware level ?"

    What are you talking about? Free Software is often of quality as good as or better than proprietary software. Some companies (such as Red Hat) have put out poor products such as RH7.0 (but, remember, the file handle leak was caused by a piece of new Red Hat software which had not been released and subjected to the standard Free Software "many eyes"). This ought not to reflect poorly on the movement as a whole - the software they chose (such as the notorious beta compiler) was simply not the best of free software.

    FH continues: "Anyway, companies like Sun have tried this with sparc international, (no doubt someone will whinge about the license) but this is essentially an open source CPU."

    I have no idea what the license was, but it's almost certainly not anything like open source. Open source does *NOT* mean "you can look at the source code". It has a *SPECIFIC* meaning, which you can find at www.opensource.org. This *is* important. That you make this mistake indicates a deep lack of understanding of Free Software and Open Source software.
  • I am a student at Texas A&M University and we are going a Senior Design Project with an FPGA. It would be nice to see an open core for an SDRAM Controller or any other chip for that matter.

    Altera has many designs which you can download as MegaFunctions, but you can't see how they work. You just incorporate them into your design by feeding in inputs into the black box and reading outputs.

    I can't see why people would spend lots of time and money on developing open cores, though, unless they wanted quicker adoption by downstream developers. Perhaps a reference core which other people could tweak. It would be nice to see a GPL of some sort make sure the cores stay open... 'cuz if it doesn't stay open then what's the point? You could just be doing the fun grunt work for some other putz who will probably come along and market your work without giving you credit.

    Anyway, you guys can also check out opencores.org [opencores.org] for more info.

    JOhn
  • The problem with creating an open-source core is that making up a CPU design and running it through a fab is a lot harder and more expensive than hacking up a piece of software and posting it on the net.

    Software you can do at home - hardware takes infrastructure that most people don't have access to (fabs, NRE, synthesis and layout tools etc)

    After all, to build a real high-end CPU you need the work of maybe 100 people for a couple of years ...

    For these reasons I think that things like OpenCores make sense for companies but not so much for individuals (however much I wish it were otherwise :-( )

  • Wouldn't it be cool if a chip plant were the size of a CD-ROM drive! Instead of buying CPUs you'd download designs and burn them! NaCPUster!
  • I see little need of open sourced hardware. The hardware industry, unlike the software industry, has done a very good job in providing a powerful, reliable, flexible, and inexpensive product, and has done so without complex usage rules (read: license terms) that arbitrarily remove your rights to use the product under Fair Use rules.
  • This is indeed the bottom line. Even if you outsource the manufacture, you still need to pay several million dollars to manufacture each stepping to test it. A production CPU will only go through half a dozen steppings before it's a product, so you have to be very careful on each one. Software programming (and, particularly, open source programming) is 'hacking'; you just recompile and program until it works. With hardware you can't do that, because you can only 'compile' it a couple of times.
  • Nobody seems to have gotten it yet.

    The basic idea here is that high-volume embedded devices today usually consist of a CPU and various application-specific elements on a single chip. When designing an embedded system, therefore, you don't buy CPU chips, you buy the right to replicate a CPU design on your own chip. Since in many cases all you need is a rather vanilla CPU, having an open-source description of some reasonable CPU reduces product cost.

    This isn't for desktop systems. It's for appliances, handhelds, and such.

  • but rather for things like OpenHardware [openhardware.org], where hardware is built like it always has been, but we just get the specs so that drivers etc. can be written and improved.
  • But hasn't MIPS been doing something like this for a while now?

    Lemme see, an open-standards-based processor that can be individually implemented by individual companies, many of whom are willing to release very detailed specifications of how their processor works?

    Because isn't that what we want? I'm no electrical engineer, I don't understand how the implementation goes, but as long as it supports a common instruction set, and I can get precise documentation on how to bring the board up, that's pretty much all I care about.

    What befuddles me is that the MIPS processor (which is so well documented, easy to use, and widely available from companies that don't have a monopoly) has fallen so far behind the others, both in Linux kernel implementation and in its range of usage. Any opinions as to why?

  • Yeah, if you want gigantic, slow, hot, wasteful chips. If you want high-performace chips, like we're talking about here, you would at least have to design the datapath by hand. And have you ever synthesized memory, by chance? Probably not. I bet you would say "just get a memory compiler". What about clock generation and distribution? What about high speed digital signaling?

    Motorola's new ColdFire is 100% synthesized, including the on-chip cache. Of course they have probably helped the router a bit when placing the cache blocks and clock distribution. But the core by itself is a sea of gates.

  • Come on! What an insane idea - it takes FACTORIES to build chips: complicated multi-billion-dollar factories with clean rooms, tons of technicians operating custom-built machinery, not to mention distribution channels, QA testing, and supporting manufacturers to deliver compatible motherboards and components.

    No, I haven't read the story, but who's going to dedicate factory time to building a processor that is open source versus factory time to build a proprietary processor designed with YEARS of experience?

    Also, don't forget that the lifecycle of chips and software are drastically different. Upgrades to software tend to come every 6 months to a year - with patches in between. Upgrades to processors come more like every 3 to 5 years. Who's going to back the open source chip when they find a bug that calls for a major replacement?
  • yeah, like a CueCat. And WinModems, too. Although that's more a fault of the distributor, not the hardware creator.

    All humor aside, I agree, sort of. You don't see processors which are *designed* at a hardware level to only work with a particular operating system. This is demonstrated best by the fact that Dreamcasts run Linux, among other computing devices which were only ever designed to run Sega's proprietary operating system. Admittedly, some hardware peripheral companies are being less cooperative about releasing drivers for other operating systems... perhaps *there* we might be better off open-sourcing hardware. Open-source modems, for instance, might get better compression with bz2, and we could get up to 64k on a standard phone line (I don't need someone to tell me I'm talking out my ass, it's a pie-in-the-sky example). But processors? I think the companies there are already doing a pretty good job.

  • The reason open sourced software development works so well is because many people can help out on the project and put as little or as much time as they want. With open sourced hardware, not nearly as many people will be able to contribute to the cause due to lack of hardware resources. Don't get me wrong, it's a great idea, but I don't think it will get the same success as open sourced software.

    Designing chips is very much like programming.

    Yeah, if you want gigantic, slow, hot, wasteful chips. If you want high-performace chips, like we're talking about here, you would at least have to design the datapath by hand. And have you ever synthesized memory, by chance? Probably not. I bet you would say "just get a memory compiler". What about clock generation and distribution? What about high speed digital signaling?

    On the other hand, attitudes like yours are good for people like me who design chips for a living. You see, when people that think "designing chips is like programming" start trying to design chips, that is true job security for people who actually know what they're doing.

  • Development of software and development of hardware are radically different processes. Software development really costs nothing but the time invested in design, research, coding, and testing. Development of hardware, on the other hand, takes more than just time. You can't just "compile" your schematics and test them. There are expensive fabrication processes that require large amounts of money for equipment (both for production and testing). The amount of time is also a factor. It takes a LOT longer to design a processor than to produce even an operating system.

    Because of these factors, who would be willing to open up their processor designs? Also, if processor designs were open, how many people would realistically be able to make meaningful contributions? I imagine very few people on the street could tell you how to create an XOR operation with a simple set of logic gates. :-) (And I'm not saying I could either.) However, I'd say a large majority of people could pick up C++ in a few weeks (and they do), versus the years upon years of studying that hardware engineers suffer through.
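
    For what it's worth, the XOR example isn't that scary: a XOR b = (a AND NOT b) OR (NOT a AND b). In gate-level Verilog that's just a handful of primitives - a toy illustration, obviously, compared with a real processor:

    ```verilog
    // XOR built from AND/OR/NOT primitives: y = (a & ~b) | (~a & b)
    module xor_from_gates (
        input  wire a,
        input  wire b,
        output wire y
    );
        wire na, nb, t1, t2;

        not (na, a);
        not (nb, b);
        and (t1, a,  nb);
        and (t2, na, b);
        or  (y,  t1, t2);
    endmodule
    ```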

    Who knows. It could work. It'd just take some completely new thinking.

  • Here's RMS's take [linuxtoday.com] on free hardware.

    He thinks it's a good idea, but he points out several key issues that differentiate free hardware from free software.

  • hehe.

    Seriously, though, this model, if it works, will end the current conception of consumer processors as a secret technology as worthy of hiding as an Enigma machine.

    Furthermore, it gives less developed countries a chance to advance with innovative designs. Hardware making is expensive, but with a relatively small investment, a poor country could put out processors based on open-source models, perhaps specializing rather than trying to compete with the big boys, Intel and AMD.

  • I am not an electrical engineer. Maybe there are ways to make the chips easily, maybe it is still considered too difficult (particularly monetarily).

    That is beside the point. The ability to design the chip openly is valuable in and of itself. Perhaps it would never see the light of day. Who cares - if the idea behind the design is sound enough, it could very well be adapted into a product from a "viable" chipmaker. Maybe I'm not being realistic, but I don't see this as one of those "doing something because you can" deals... this could have merit, even if only from a pure science point of view. There's nothing wrong with cold, hard research - something can always be learned.

    Here's a question, though: Is there a way that if someone has a chip design, that design could be emulated in software to test the concept? Again, not being my field, I have no idea if doing such an emulation would even pan out anything useful... just an idea.
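
    To answer the question: yes, that's exactly what HDL simulators do. A design written in Verilog or VHDL can be exercised entirely in software, with a testbench driving inputs and checking outputs, long before anything is fabricated. A minimal self-contained sketch (a half adder plus a throwaway testbench, assuming any ordinary Verilog simulator):

    ```verilog
    // Half adder plus a tiny testbench: the whole "chip" runs purely in a
    // software simulator, no hardware required.
    module half_adder (
        input  wire a,
        input  wire b,
        output wire sum,
        output wire carry
    );
        assign sum   = a ^ b;
        assign carry = a & b;
    endmodule

    module tb;
        reg  a, b;
        wire sum, carry;

        half_adder dut (.a(a), .b(b), .sum(sum), .carry(carry));

        initial begin
            // Walk through all four input combinations and print the outputs.
            a = 0; b = 0; #10 $display("a=%b b=%b -> sum=%b carry=%b", a, b, sum, carry);
            a = 0; b = 1; #10 $display("a=%b b=%b -> sum=%b carry=%b", a, b, sum, carry);
            a = 1; b = 0; #10 $display("a=%b b=%b -> sum=%b carry=%b", a, b, sum, carry);
            a = 1; b = 1; #10 $display("a=%b b=%b -> sum=%b carry=%b", a, b, sum, carry);
            $finish;
        end
    endmodule
    ```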

  • The simple fact is that there are only a few processor manufacturers, and this will not change, as the market is not big enough to support more.

    Therefore, holding copyright on the design of a chip is largely irrelevant, especially when the manufacturers essentially operate as a cartel. Have you seen Motorola designing an x86 chip recently? Didn't think so. With a few public exceptions, the processor manufacturers largely stay away from each other.

    However, they will be receptive to open source, as it would allow them to downsize their expensive design engineering teams, outsource, and raise their share dividends accordingly.

    I think Open Source designed processors will be higher quality, but it won't make any difference to price or competition in the field.

    They fuck you up, your mum and dad.

  • You can test the logic (high level) design of circuits on an FPGA, but CBL (Clocked Boolean Logic) design is sensitive to races - unlike alternative logics that are delay-insensitive (and asynchronous!) such as Theseus' Null Convention Logic [theseus.com] (NCL).

    Having said that, rapid prototyping using FPGAs, such as Xilinx's contribution to artificial intelligence research [nanodot.org] can be neat.
  • What you say is true if you are building "the fastest microprocessors in the known universe". But FPGAs are rapidly getting extremely powerful, and their cost keeps dropping. A great deal of power, and the tools to develop code for them, is right now readily available at a price that is affordable to even your average college engineering student. And I think this is a good thing.

    Multi million gate, very fast FPGAs are already available, and I personally don't think it will be very many years before they are suitable for general purpose microprocessors, rather than just special purpose reconfigurable computers. And when that happens then:

    Anybody can fix it when it breaks. I have seen in the last couple of years a dramatic increase in the number of VHDL simulators available, and a corresponding decrease in their cost. When someone identifies the problem, they will just write a patch, and then you will download it into your FPGA/microprocessor/whatever hardware. This idea is already available with Xilinx Internet Reconfigurable Logic.

    You can tweak it to your needs. Just as with software, your typical hacker is not going to do extensive verification. They will do a quick simulation to test the changes they made, load the design into the FPGA, and try it out. And if it doesn't work right, just do another iteration.

    How many people are going to learn Verilog/VHDL? How many people have learned Java, Python, Perl... (this list is almost endless). And my HP16500 mostly sits in a corner gathering dust. I am now seeing the practice of embedding a logic analyzer into the FPGA, with a GUI front end on your computer.

  • See Internet Reconfigurable Logic [xilinx.com] from Xilinx. These chips are now in the multi million gate range.
  • In order to have a usable system, you also need:
    • A motherboard

      More people might be using PPC, MIPS, or StrongARM if there were inexpensive motherboards; the fact that there aren't should give the observer cause to say "Hmmm..."

    • BIOS
    • Support for standard buses like PCI, USB, ATA, and such (I2O? Firewire? DIMMS? EV6?)

      ... Which may be a wrench in the works of attempting motherboard implementation, if using the specs requires paying royalties or signing NDAs

    • Obviously NVidia video drivers that only get compiled for IA-32 won't work on the "Custom CPU 5000."

      I'm not sure just how far this issue extends...

    The point is that it's not good enough to have a slick new CPU design; you need a system around it to take advantage of it, or, quite frankly, just to allow it to function.

    Those that were prepared to "roll their own MIPS variant" to build some highly customized embedded system may find this a quite acceptable scenario. But anybody thinking that this leads to having VA Linux Systems selling F-CPU [f-cpu.org]-based systems any time soon is severely delusional...

  • by ocie ( 6659 ) on Tuesday February 06, 2001 @11:02AM (#451974) Homepage
    As an intermediate, I would think that programmable logic would be a big boost to computing. This could go in an add-on card, on the motherboard, or even on the CPU die. When you want to view jpegs, it gets configured as a jpeg accelerator. When you want to generate an MD5 digest, it gets configured for this.

    There are already companies working on this. At the design automation conference, I saw a company that was working with an addon card that would accelerate different algorithms. And it ran under Linux!!

    RTC magazine just had an article on the 5 big technologies. One of these was Linux, and the predicted 6th technology is CPUs with embedded FPGA.
  • by MikeFM ( 12491 ) on Tuesday February 06, 2001 @12:15PM (#451975) Homepage Journal
    This is hardly pathetic. To start off, this will no doubt drive the creation of even better open-sourced circuit development and simulation software. The better such software is, the lower the hardware costs required to develop and debug such designs. Secondly, a lot of new techniques are opening up that would allow fabrication to be much cheaper and more likely to happen at home.

    Sure, open-source hardware may take a while to get going, as the curve to get into it is higher in both dollars and education, but there have already been stabs at such things for years, and it's succeeded in some lesser hardware-hacking areas pretty well. Just remember, if it wasn't for the home hackers we might not have PCs at all. A lot of people never thought the PC could take off, for similar reasons, but it seems to be doing okay. They didn't think Linux could take off, but again it seems to have made a place for itself.
  • Would it really be that bad?

    Except for having to recompile the code, Transmeta is doing exactly that. You can upgrade your CPU. If they open their code-morphing software, then you could recompile your CPU (code). That would be cool.
  • You can learn to be an adequate coder by reading a few books, trying something, changing the code, trying something, heck, this is how both MS and the open source world started.

    This doesn't scale well to actual chip making though. You really need a thorough grounding in digital logic before you start throwing ands, nands, xors and ors around. There are no higher language equivalents like Perl or VB for chip making, it's just tedious gate and run after gate and run.

    Plus, there isn't as much room for self-expression in chip making either. Taking the Perl example again: using the language and reading the Camel book, you get a good idea of Larry Wall's mindset. Can you get this from a chip? No.

    But, if you're a geek with delusions of grandeur and have a few thousand dollars to throw away in a fab, don't let me stop you.
  • by Flabdabb Hubbard ( 264583 ) on Tuesday February 06, 2001 @11:02AM (#451978) Homepage Journal
    Although we all agree on the benefits that open-source software can bring, I am not sure that the same approach would extend to hardware.
    Hardware design, and chip design in particular, is an esoteric and hard-to-understand skill; debugging a race condition in a silicon chip can take a skilled technician many, many hours of painstaking labor.

    Contrast this with software development, which is more often than not simply drawing a pretty screen and filling in a few callbacks.

    What will happen if the notoriously poor quality control standards of 'open source' reach the hardware level? I mean, if your kernel doesn't compile, you simply back out the changes, but if your CPU won't boot, what do you do?

    Anyway, companies like Sun have tried this with SPARC International (no doubt someone will whinge about the license), but this is essentially an open-source CPU. It hasn't really caught on.

    I think this is just an attempt to garner publicity.

    Open Source should confine itself to the realm of software, where it makes eminent good sense.

  • by taniwha ( 70410 ) on Tuesday February 06, 2001 @11:20AM (#451979) Homepage Journal
    As others have pointed out, the bulk of most large chips these days are designed in high-level languages (Verilog, and to a lesser extent VHDL) which look a lot like existing programming languages.

    You can learn either language relatively quickly if you know some other programming languages - but using them for logic design does require a grounding in hardware that's outside of normal programming experience (wires, low-level concurrency, etc.) - and converting a working simulation into a physical device requires a lot more infrastructure than most people have on hand....
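
    A small illustration of that grounding: in Verilog, the two pieces of logic below exist and run at the same time, all the time - there is no "first statement executes, then the second" as in ordinary software. That concurrency is the mental shift that trips up programmers. (A toy sketch, not taken from any particular design.)

    ```verilog
    // Everything in this module operates concurrently, not sequentially.
    module concurrency_demo (
        input  wire       clk,
        input  wire [7:0] a,
        input  wire [7:0] b,
        output reg  [7:0] sum_reg,
        output wire [7:0] diff
    );
        // Continuous assignment: 'diff' is a wire permanently driven by this logic.
        assign diff = a - b;

        // Clocked process: 'sum_reg' is a flip-flop updated on every rising edge,
        // in parallel with the subtractor above.
        always @(posedge clk)
            sum_reg <= a + b;
    endmodule
    ```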

  • There are no higher language equivalents like Perl or VB for chip making, it's just tedious gate and run after gate and run.

    Um....no.

    There are many high-level design languages for chip design, such as VHDL [uni-hamburg.de], along with simulators for testing designs prior to fab.

    Designing chips is very much like programming.

  • by Zarquon ( 1778 ) on Tuesday February 06, 2001 @11:36AM (#451981)
    There's a lot of confusion in the responses posted here so far. Most of the people seem to be whining about how much fab facilities cost, and how developing software is so much easier. Nobody talks about how much presses (both CD and tree) cost, because we transmit our programs over the internet and run them on our own computers.

    Nobody seems to know about the equivalent for hardware: designs written in a hardware description language such as Verilog or VHDL can be worked on as a group. When you want to test a design, you download it to an FPGA. Complete development kits, including software and a protoboard, can be had from Altera or Xilinx for a few hundred dollars (less if you are a student). If you make a mistake, fix your VHDL and recompile.

    Also, people fail to consider that designs of this type are rarely on the level of AMD or Intel. We don't make 22-million-transistor designs, but if you want a custom hardware accelerator (say, an Ogg Vorbis accelerator, or hardware-accelerated encryption where you KNOW EXACTLY what it is doing - no NSA backdoors, no worry about getting specs from OEMs), this is the way to go about it. If your project gets popular, you can get the chips made in quantity as ASICs from any number of companies.
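
    As a flavor of what such a small, fully auditable core looks like, here is a hypothetical fragment in the spirit of the encryption-accelerator example: one registered pipeline stage that mixes a data word with a round key. The point is not the algorithm (this is NOT a secure cipher), it's that every wire and gate is there to be inspected.

    ```verilog
    // Hypothetical fragment of an "auditable" accelerator: one pipeline stage
    // that mixes a data word with a round key (XOR + rotate). Purely an
    // illustration of a visible, verifiable datapath -- not a real crypto core.
    module mix_stage (
        input  wire        clk,
        input  wire [31:0] data_in,
        input  wire [31:0] round_key,
        output reg  [31:0] data_out
    );
        wire [31:0] mixed = data_in ^ round_key;

        always @(posedge clk)
            // Rotate left by 7 after keying; registered so stages can be chained.
            data_out <= {mixed[24:0], mixed[31:25]};
    endmodule
    ```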

    Please, people, look into these things before you start flaming a project like this.
  • by SpanishInquisition ( 127269 ) on Tuesday February 06, 2001 @10:54AM (#451982) Homepage Journal
    The last thing I want is to have to recompile my CPU each time there is an upgrade.

  • by stomer ( 236922 ) on Tuesday February 06, 2001 @11:03AM (#451983)
    As I have tried to post twice, MIT Tech Review has an article [technologyreview.com] regarding the ability to use an inkjet printer to print a semiconductor chip.

    It is a very interesting read, and it even speaks to the possibility of this allowing open-source CPUs in the future. Imagine downloading and printing the latest version of your CPU before upgrading to the n.n kernel it is targeted at :)

    The speed is not yet that of a Pentium, but the researcher believes it will be someday. Wish /. hadn't rejected this twice; never understood why!
