System on a Chip Concurrent Development 41
An anonymous reader writes "The old silo method of chip development, with hardware and firmware developers barely interacting with each other, won't cut it in today's fast-moving industry. IBM DeveloperWorks has the third in a series of articles about system-on-a-chip design. The author, Sam Siewert, displays the development tools and processes that speed system on a chip design and get all your developers working together effectively."
Complete series (Score:5, Informative)
Link [ibm.com] to all the articles in order.
Re:Complete series (Score:2)
This statement, from the second article, worries me,
Guess that stuff we've been doing for over a decade wasn't SoC work then. They just looked like one. Sheeeze.... (sarcasm off)
Every time I read one of these IBM DeveloperWorks links I get annoyed. The articles are e
Re:Complete series (Score:3, Informative)
"The SoC emerged as a design concept as early as 2002."
Guess that stuff we've been doing for over a decade wasn't SoC work then.
You're right, that statement is uninformed fluff. Back in 1999, I remember going to a design conference, and SoC was THE big topic of the conference. SoC as a term predated that conference by a number of years. In many ways SoC is a lot like the term AJAX: it is a combination of technologies that are already in use, but now there is a marketing buzzword to identify that combination.
2002, a marketing odyssey (Score:2)
This gave me an idea. (Score:1, Funny)
Call me crazy, but that might be more efficient than just throwing more cores at the problem.
Re:This gave me an idea. (Score:2)
Comment removed (Score:4, Informative)
Re:This gave me an idea. (Score:4, Interesting)
Jon
Re:This gave me an idea. (Score:3, Informative)
Re:This gave me an idea. (Score:3, Interesting)
Since when is an FPGA not hardware? You may have variations, like RAM based, ROM based, OTP (one time programmable) or mask ROM (programmed in the fact
Re:This gave me an idea. (Score:2)
The discussion was about general-purpose CPUs like those used in PCs, and the value of adding an FPGA or a CPLD to a CPU core to allow new functions to be added to the CPU.
I was only stating that it is not really worth the cost in a PC-like system, since any function that gets used enough will eventually get moved into an ASIC, which in volume is much faster and cheaper than an FPGA.
Things like MMX, SSE, and 3D graphics chi
It's been done (Score:5, Informative)
Call me crazy, but that might be more efficient than just throwing more cores at the problem.
Here's one: Atmel's 20 MIPS processor + FPGA [atmel.com]
The problem is, fast processors are now SO cheap that the applications for a part like this are incredibly limited - you end up with the wrong FPGA and the wrong uP for more than it would probably cost you to buy the right architecture as discrete chips.
Re:This gave me an idea. (Score:5, Informative)
Were you thinking of something a lot different from the Xilinx Virtex 4 FX [xilinx.com], Altera Excalibur [altera.com] or Atmel part (referred to elsethread)?
nothing new here (Score:4, Informative)
Yes and no. (Score:4, Informative)
So it's nice having these kinds of articles around, as they tend to reinforce the obvious about current practices. You'd be surprised how many companies don't understand these things.
Two of the most important things (on TFA's list of recommendations) are, IMO:
"1. Identify hardware and firmware module owners to take responsibility through entire life cycle."
"2. Adopt configuration management version control (CMVC) tools that allow for feature addition branches and version tagging."
The other items are important. But these two stand out, as their impact has ramifications in the other areas.
For #1, this can be characterized as "Avoid the Netscape Development Model": there is no single owner, and everyone gets to make whatever mods they want. This leads to excessive code bloat, broken APIs, and no single person responsible for fixing a specific section. It's typically brought about by total mismanagement of the project by MBAs. It's truly amazing that the majority of managers out there simply don't understand this.
And sorry to pick on Netscape, but the stories involved with their engineering mismanagement here are rather noteworthy.
For #2, the right SCM selection is critical; the wrong one ends up costing not only money but, even worse, time. While branching and tagging are important (does any modern SCM lack them?), there are a lot of subtle issues you can't see from the marketing blurbs.
A superb example of the hidden costs is ClearCase. Regardless of whether you love CC or hate it, it ALWAYS soaks up a lot of resources. Aside from the servers required (assuming they don't go down at critical times), I have yet to see a ClearCase project which didn't have an abysmally low ratio of developers to SCM engineers.
Modern SCM systems (like bitkeeper) should have one SCM engineer per hundreds of developers (at least). With ClearCase, it seems like the ratio is in the tens (or perhaps a hundred if you're lucky).
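One firmware-side complement to the version tagging in recommendation #2 is embedding the SCM tag in the firmware image itself, so a shipped binary can always be traced back to the exact tagged revision. A minimal sketch, assuming the build system injects the tag (the macro name, tag string, and marker here are invented for illustration):

```c
#include <string.h>

/* FW_SCM_TAG would normally come from the build system, e.g.
 * -DFW_SCM_TAG=\"rel-2.1-branch@1234\"; the default below is a placeholder. */
#ifndef FW_SCM_TAG
#define FW_SCM_TAG "rel-2.1-branch@1234"
#endif

/* A findable marker string, so tools can grep the raw image for the tag. */
static const char fw_version_blob[] = "FWVER:" FW_SCM_TAG;

/* Report the tag (without the marker) over the debug console, test UI, etc. */
const char *fw_version(void)
{
    return fw_version_blob + strlen("FWVER:");
}
```

With something like this in place, a field unit's exact branch and revision is one console command away, which makes the branch-per-feature discipline above actually enforceable.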
Finally, what IS somewhat new (and not really mentioned in the article) is the incredible development speed you get from using Open Source. No Closed Source approach offers a faster development model, because you ALWAYS run into things that you didn't foresee; and with Closed Source you either cannot solve them at all, or it ends up costing you significant money (and additional time) to surmount the issue.
But with Open Source code, you can either solve the problem immediately, or someone else has come across the problem and has a solution already in place.
I pity (and avoid) the companies which don't understand all of the above points. I also prefer to work with their competitors, as I like having successes on my resume, not also-rans.
Erlang design principles? (Score:1)
Re:Erlang design principles? (Score:1)
But, needless to say, there are tradeoffs made in any design.
This is not groundbreaking. (Score:1)
But, how is this different from what Apple does with their BootRom?
I understand that in the realm of Windows-based PC architecture this is not common practice, but I would not consider it new or groundbreaking and, in fact, it seems fairly obvious.
Someone please post a synopsis for us laymen of what makes this innovative.
Everything old is new again (Score:5, Interesting)
The concept of software/hardware codesign is not a new one. I would say that every project I have worked on for the past 20 years has been a software/hardware codesign project, regardless of whether it was an SoC or some other hardware system. In every case, I have seen the software start running shortly after the new hardware is powered up. Fast software boot is a validation that you did the design correctly, but it should not be viewed as an amazing feat. It should be an expected outcome that was planned for. Those who design in "silos" are doing it the wrong way from the start and are asking for problems.
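The "software starts running shortly after power-up" outcome the parent describes comes from planning the bring-up sequence as part of the codesign. A hedged sketch of what such early bring-up code looks like; all register names, addresses, and bit values here are invented (on real silicon they come from the chip's datasheet, ideally via a header shared with the hardware team), and the register file is stubbed with an array so the sketch runs on a host PC:

```c
#include <stdint.h>

/* Host-side stand-in for the SoC's memory-mapped register file. */
static uint32_t regs[16];
#define CLK_CTRL   (&regs[0])   /* hypothetical clock control register  */
#define DDR_CTRL   (&regs[1])   /* hypothetical DRAM controller register */
#define UART_CTRL  (&regs[2])   /* hypothetical UART control register   */

#define CLK_ENABLE (1u << 0)
#define DDR_INIT   (1u << 0)
#define UART_EN    (1u << 0)

/* Typical early bring-up order: clocks, then memory, then a console,
 * so the team has printf-level visibility within seconds of power-on. */
static void board_init(void)
{
    *CLK_CTRL  |= CLK_ENABLE;   /* 1. bring up system clocks        */
    *DDR_CTRL  |= DDR_INIT;     /* 2. start DRAM initialization     */
    *UART_CTRL |= UART_EN;      /* 3. enable the debug UART console */
}
```

Because the register layout is agreed on up front, this same code can run against a simulator before silicon arrives and against the real board afterward.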
Within the last 5 years or so, there has been a lot of hype about System-on-Chip (SoC) development. There have been lots of SoC articles, and many companies trying to sell tools to develop SoCs. But when I read through most of the hype, I find nothing all that revolutionary about the tools and methods. Chip fabrication has reached the point where one can integrate a lot of functionality on a chip, but the design methodologies for successful designs are still basically the same as for past non-integrated designs. An SoC is less forgiving of mistakes since rework is very difficult or impossible, so the design verification needs to be thorough. Most design successes that I have seen have been more dependent on the thoroughness of the design team than on the tools used. The best tools in the world will not save you from the blunders of bad designers.
Re:Everything old is new again (Score:2)
A lot of time, money and effort goes into both processes, but I can't recall the last time I heard about a major hardware flaw.
Is it just easier to debug hardware designs, do they have special automated mechanisms for error-checking...
what gives?
Re:Everything old is new again (Score:5, Insightful)
Is there any reason hardware design has such a high standard compared to software?
Part of the reason is that hardware design by nature is less flexible than software to change, particularly as more hardware goes inside chips. The up front cost of a System-on-Chip can easily be more than 1 million dollars, and the cost of re-spinning a chip to fix a design flaw can be almost as high. And the month or two of time lost waiting for the re-spin also has a time-to-market cost. The high cost justifies the high standard.
I can't recall the last time I heard about a major hardware flaw.
Most hardware bugs are detected in the bring-up phases of a project; and by the time a company commits the money to hardware production, they have a fairly high confidence that the hardware is solid. Most major hardware bugs don't make it to production, so people don't hear about those failures. Hardware bugs that do make it to production either go unnoticed, are covered up through software workarounds, or get a lot of bad press (like the infamous Intel FDIV bug, or more recently the Xbox 360 instability problems).
Companies can afford to put out beta versions and patches of software, so a lot more software bugs are visible outside a company. There is still a cost of having software bugs, particular in critical systems and high volume applications, but fundamentally you can patch software. Patching hardware is possible only if you are lucky enough to have an accessible place to change the hardware behavior, or enough time and money to re-spin. For big complex hardware, designers attempt to put in some programmability to hopefully work around potential bugs, but it takes a lot of foresight to predict where things might go wrong.
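The "software workarounds" mentioned above usually live in the driver layer. A minimal sketch of the pattern, assuming a purely hypothetical erratum where a status register can return a transient bogus value, so the driver only trusts two consecutive matching reads (the register is stubbed with a variable so the sketch runs on a host):

```c
#include <stdint.h>

/* Stand-in for the real memory-mapped status register. */
static uint32_t fake_status;

static uint32_t read_status_raw(void)
{
    return fake_status;
}

/* Erratum workaround (hypothetical): re-read the status register until
 * two consecutive reads agree, filtering out transient garbage values. */
static uint32_t read_status(void)
{
    uint32_t a, b;
    do {
        a = read_status_raw();
        b = read_status_raw();
    } while (a != b);
    return a;
}
```

Once every driver goes through `read_status()` instead of raw reads, the erratum is invisible to the rest of the software, which is exactly why such bugs "go unnoticed" outside the company.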
With field programmable gate arrays (FPGAs) the hardware does look more like software, since you can compile and reload a design into the FPGA chip. I have seen hardware designers be less stringent about FPGA designs because of the increased flexibility. With FPGAs there is a power and cost tradeoff since they are more general purpose. For an equivalent design, an FPGA eats much more power than an SoC, and FPGAs are typically much more expensive in volume than SoCs. So FPGAs are not a viable solution for things like PDAs, cell phones, and MP3 players where SoCs are prevalent.
Re:Everything old is new again (Score:2)
Re:Everything old is new again (Score:1)
Any hardware bug that can be worked around in software is instantly reclassified as a software bug.
Seriously.
Re:Everything old is new again (Score:1)
It comes down to a few simple ideas:
1) someone has to manage and plan
2) discipline (see #1)
3) adhere to and document defined interfaces
4) get the s/w guys going on a simulator as fast as you can
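Points 3 and 4 above go together: once the hardware interface is defined and documented in one place, the software team can run against a host-side simulation long before silicon exists. A minimal sketch under invented names (the UART register layout and the log-based TX model are assumptions for illustration):

```c
#include <stdint.h>
#include <string.h>

/* --- shared, documented interface (point 3) --- */
uint32_t reg_read(uint32_t addr);
void     reg_write(uint32_t addr, uint32_t val);

#define UART_TX 0x1000u   /* hypothetical UART transmit register */

/* --- host-side simulator backend (point 4): models UART TX as a log --- */
static char   sim_uart_log[64];
static size_t sim_uart_len;

uint32_t reg_read(uint32_t addr)
{
    (void)addr;           /* nothing readable in this tiny model */
    return 0;
}

void reg_write(uint32_t addr, uint32_t val)
{
    if (addr == UART_TX && sim_uart_len < sizeof sim_uart_log - 1)
        sim_uart_log[sim_uart_len++] = (char)val;
}

/* --- firmware code, unaware it is running on a simulator --- */
static void uart_puts(const char *s)
{
    while (*s)
        reg_write(UART_TX, (uint32_t)*s++);
}
```

On real hardware, the same `reg_read`/`reg_write` declarations would be backed by actual MMIO accesses; the firmware above doesn't change at all.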
SOC... (Score:1)
Why don't we fix software development first. (Score:1, Funny)
FPGA + Embedded C + PCB (Score:1, Informative)
Check out Altium Designer: FPGA, embedded C, and PCB design all under the one hood, with all the handy things that come from having them there.
Break the Bottleneck (Score:2)
Transmeta (Score:1)
Re:Transmeta (Score:2)
What SoCs do is simplify the bring-up time of new boards. An SoC without a reference port of some software (system initialization, IRQ handling, device drivers) is almost useless. Without that, the customer has to do the work, which adds time. It's almost always far better to go with a different SoC that has this done rather than develop it from scratch, especially if a Linux port is available.
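The "IRQ handling" part of such a reference port is often little more than a dispatch table that the vendor ships working, so customers register handlers instead of writing the interrupt path from scratch. A hedged sketch of that glue; all names and the interrupt count are illustrative:

```c
#include <stddef.h>

#define NUM_IRQS 8   /* illustrative; real SoCs expose many more lines */

typedef void (*irq_handler_t)(void);
static irq_handler_t irq_table[NUM_IRQS];

/* Demo peripheral handler: count timer ticks. */
static int timer_ticks;
static void timer_irq(void) { timer_ticks++; }

/* Drivers register their handlers against an IRQ number. */
static void irq_register(int irq, irq_handler_t fn)
{
    if (irq >= 0 && irq < NUM_IRQS)
        irq_table[irq] = fn;
}

/* Called from the low-level exception vector with the active IRQ number;
 * unclaimed interrupts are silently ignored in this minimal model. */
static void irq_dispatch(int irq)
{
    if (irq >= 0 && irq < NUM_IRQS && irq_table[irq] != NULL)
        irq_table[irq]();
}
```

With this shipped and tested by the vendor, the customer's day-one work shrinks to writing peripheral handlers like `timer_irq` rather than debugging vector tables on new silicon.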
On the hardware si
Re:Transmeta (Score:1)
What about a custom processor and compiler generator that is optimized to your software algorithm / application? Seems this is what Tensilica [tensilica.com] is doing.
System-on-Chip (SoC) makes sense when you want high-volume production of electronic systems with specialized hardware / software / programming framework. For example, a chip that is a generic platform for a cell phone, with USB, Bluetooth, WiFi, GSM, camera, MPU, DSP, GPS circuitry. Typically the physical and data-link layer processing for this kind of app
ESL tools (Score:1)