System on a Chip Concurrent Development

An anonymous reader writes "The old silo method of chip development, with hardware and firmware developers barely interacting with each other, won't cut it in today's fast-moving industry. IBM DeveloperWorks has the third in a series of articles about system-on-a-chip design. The author, Sam Siewert, describes the development tools and processes that speed system-on-a-chip design and get all your developers working together effectively."
  • Complete series (Score:5, Informative)

    by Saiyine ( 689367 ) on Monday December 26, 2005 @05:17PM (#14341171) Homepage

    Link [ibm.com] to all the articles in order.

    • At least there's some mention of SoC in the first and second articles. The third article, as linked by the submitter, just says to use revision control and branching. Whoop-de-doo.

      This statement, from the second article, worries me,

      The SoC emerged as a design concept as early as 2002.

      Guess that stuff we've been doing for over a decade wasn't SoC work then. It just looked like one. Sheeeze.... (sarcasm off)

      Every time I read one of these IBM DeveloperWorks links I get annoyed. The articles are e

      • Re:Complete series (Score:3, Informative)

        by dprice ( 74762 )

        "The SoC emerged as a design concept as early as 2002."
        Guess that stuff we've been doing for over a decade wasn't SoC work then.

        You're right, that statement is uninformed fluff. Back in 1999, I remember going to a design conference, and SoC was THE big topic of the conference. SoC as a term predated that conference by a number of years. In many ways SoC is a lot like the term AJAX. It is a combination of technologies that are already in use, but now there is a marketing buzzword to identify that

      • Being a minor ARM fanatic, I'll note that ARM lists the ARM7500 system chip as a milestone [arm.com] in 1994 and the ARM7500FE in 1996. Perhaps not putting the system RAM on the chip excludes it from being a true SoC in IBM's terms. Unless IBM wishes us to believe that they invented the universe...
  • by Anonymous Coward
    Why not make a CPU with a built-in FPGA, then load bits of the kernel into that hardware?

    Call me crazy, but that might be more efficient than just throwing more cores at the problem.
    • by JKR ( 198165 ) on Monday December 26, 2005 @07:42PM (#14341791)
      Check out these guys [criticalblue.com] who're doing just about that. Way cool stuff (disclaimer, used to work there, etc.)

      Jon
    • FPGAs tend to have a much lower clock speed than most CPUs. What tends to happen in mainstream hardware is that a function is first done completely in software. As more and more users use that function, it gets moved into hardware. FPGAs are sort of a halfway step: functions that are not fast enough in software but not used by enough people to justify dedicated hardware.
      • FPGAs tend to have a much lower clock speed than most CPUs. What tends to happen in mainstream hardware is that a function is first done completely in software. As more and more users use that function, it gets moved into hardware. FPGAs are sort of a halfway step: functions that are not fast enough in software but not used by enough people to justify dedicated hardware.

        Since when is an FPGA not hardware? You may have variations, like RAM based, ROM based, OTP (one time programmable) or mask ROM (programmed in the fact

        • If you want to get nitpicky, how can you have a mask Read-Only Memory Field Programmable Gate Array?
          The discussion was about general-purpose CPUs like those used in PCs, and the value of adding an FPGA or a CPLD to a CPU core to allow new functions to be added to the CPU.
          I was only stating that it is not really worth the cost in a PC-like system, since any function that gets used enough will eventually get moved into an ASIC, which in volume is much faster and cheaper than an FPGA.
          Things like MMX, SSE, and 3D graphics chi
    • It's been done (Score:5, Informative)

      by seanadams.com ( 463190 ) * on Monday December 26, 2005 @10:04PM (#14342313) Homepage
      Why not make a CPU with a built-in FPGA, then load bits of the kernel into that hardware?

      Call me crazy, but that might be more efficient than just throwing more cores at the problem.


      Here's one: Atmel's 20 MIPS processor + FPGA [atmel.com]

      The problem is, fast processors are now SO cheap that the applications for a part like this are incredibly limited - you end up with the wrong FPGA and the wrong uP for more than it would probably cost you to buy the right architecture as discrete chips.
    • by Jerry Coffin ( 824726 ) on Monday December 26, 2005 @10:09PM (#14342330)
      Why not make a CPU with a built-in FPGA, then load bits of the kernel into that hardware?

      Were you thinking of something a lot different from the Xilinx Virtex 4 FX [xilinx.com], Altera Excalibur [altera.com] or Atmel part (referred to elsethread)?

  • nothing new here (Score:4, Informative)

    by wannasleep ( 668379 ) on Monday December 26, 2005 @05:57PM (#14341333)
    Hardware-software codesign is nothing new and revolutionary. It has been taught for years at Berkeley [berkeley.edu] and around the country. A bunch of links can also be found here [uci.edu].
    • Yes and no. (Score:4, Informative)

      by btarval ( 874919 ) on Monday December 26, 2005 @08:58PM (#14342076)
      I can't speak for what's taught in universities nowadays, but I can tell you that the decreased development time in industry is relatively new. So much so that I've seen a number of companies fall down when trying to figure out what to do. I've also seen a number of successes when they get things right. But people who understand the entire modern cycle (theory AND practice) seem to be the exception, not the norm.

      So it's nice having these kinds of articles around, as they tend to reinforce the obvious about current practices. You'd be surprised how many companies don't understand these things.

      Two of the most important things (on TFA's list of recommendations) are, IMO:

      "1. Identify hardware and firmware module owners to take responsibility through entire life cycle."

      "2. Adopt configuration management version control (CMVC) tools that allow for feature addition branches and version tagging."

      The other items are important. But these two stand out, as their impact has ramifications in the other areas.

      For #1, this can be characterized as "Avoid the Netscape Development Model". That is, there is no single owner, and everyone gets to make whatever mods they want. This leads to excessive code bloat, broken APIs, and no single person responsible for fixing a specific section. It's typically brought about by total mismanagement of the project by MBAs. It's truly amazing that the majority of managers out there simply don't understand this.

      And sorry to pick on Netscape, but the stories involved with their engineering mismanagement here are rather noteworthy.

      For #2, the right SCM selection is critical; the wrong one ends up costing not only money but, even worse, time. While branching and tagging are important (does any modern SCM not have them?), there are a lot of subtle issues which you can't see from the marketing blurbs.

      A superb example of the hidden costs is ClearCase. Regardless of whether you love CC or hate it, it ALWAYS soaks up a lot of resources. Aside from the servers required (assuming they don't go down at critical times), I have yet to see a ClearCase project which didn't have an abysmally low ratio of developers to SCM engineers.

      Modern SCM systems (like bitkeeper) should have one SCM engineer per hundreds of developers (at least). With ClearCase, it seems like the ratio is in the tens (or perhaps a hundred if you're lucky).

      Finally, what IS somewhat new (and not really mentioned in the article) is the incredible speed of development you get by using Open Source. No Closed Source approach offers a faster development model, because you ALWAYS run into things that you didn't foresee; and with Closed Source you either cannot solve them at all, or it ends up costing you significant money (and additional time) to surmount the issue.

      But with Open Source code, you can either solve the problem immediately, or someone else has already come across the problem and has a solution in place.

      I pity (and avoid) the companies which don't understand all of the above points. I also prefer to work with their competitors, as I like having successes on my Resume, and not also-rans.

  • Can Erlang-style message passing be built into something that small?
    • SoC, so the idea goes, leads to higher density of functionality for the same space/power. Therefore an equivalent SoC system ought to have more room for things like message queues (a rough sketch follows below). So, in theory, an SoC the size of your typical motherboard ought to handle a lot more messages than your typical motherboard ;)

      But, needless to say, there are tradeoffs made in any design.
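      As a rough illustration of the kind of message queue an SoC could carry between cores, here is a minimal single-producer/single-consumer mailbox in shared on-chip memory, sketched in C. The slot count, message size, and barrier macro are assumptions for the example, not anything from the article or the comments above.

      ```c
      #include <stdint.h>

      #define MBOX_SLOTS     16            /* power of two for cheap index wrap */
      #define MBOX_MSG_BYTES 32

      struct mbox {
          volatile uint32_t head;          /* written only by the producer core */
          volatile uint32_t tail;          /* written only by the consumer core */
          uint8_t msg[MBOX_SLOTS][MBOX_MSG_BYTES];
      };

      /* Placeholder for whatever memory barrier the target cores actually need. */
      #define dmb() __asm__ volatile("" ::: "memory")

      static int mbox_send(struct mbox *m, const uint8_t *data, uint32_t len)
      {
          uint32_t head = m->head;
          if (((head + 1) & (MBOX_SLOTS - 1)) == m->tail)
              return -1;                   /* queue full, caller retries later */
          for (uint32_t i = 0; i < len && i < MBOX_MSG_BYTES; i++)
              m->msg[head][i] = data[i];
          dmb();                           /* publish the payload before the index */
          m->head = (head + 1) & (MBOX_SLOTS - 1);
          return 0;
      }

      static int mbox_recv(struct mbox *m, uint8_t *out)
      {
          uint32_t tail = m->tail;
          if (tail == m->head)
              return -1;                   /* queue empty */
          for (uint32_t i = 0; i < MBOX_MSG_BYTES; i++)
              out[i] = m->msg[tail][i];
          dmb();                           /* read the payload before freeing the slot */
          m->tail = (tail + 1) & (MBOX_SLOTS - 1);
          return 0;
      }
      ```

      An Erlang-style layer (per-process mailboxes, selective receive) would sit on top of something like this; the point is only that the underlying structure is small enough to fit comfortably in on-chip SRAM.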
  • Ok, so I did RTFA but didn't understand a lot of it because I'm not a developer and ended up skimming through the parts I wasn't following.

    But, how is this different from what Apple does with their BootRom?

    I understand that in the realm of Windows-based PC architecture this is not common practice, but I would not consider it new or groundbreaking and, in fact, it seems fairly obvious.

    Someone please post a synopsis for us laymen of what makes this innovative.
  • by dprice ( 74762 ) <daprice.pobox@com> on Monday December 26, 2005 @06:03PM (#14341354) Homepage

    The concept of software/hardware codesign is not a new one. I would say that every project I have worked on for the past 20 years has been a software/hardware codesign project, regardless of whether it was an SoC or some other hardware system. In every case, I have seen the software start running shortly after the new hardware is powered up. Fast software boot is a validation that you did the design correctly, but it should not be viewed as an amazing feat. It should be an expected outcome that was planned for. Those who design in "silos" are doing it the wrong way from the start and are asking for problems.

    Within the last 5 years or so, there has been a lot of hype about System-on-Chip (SoC) development. There have been lots of SoC articles, and many companies trying to sell tools to develop SoCs. But when I read through most of the hype, I find nothing all that revolutionary about the tools and methods. Chip fabrication has reached the point where one can integrate a lot of functionality on a chip, but the design methodologies for successful designs are still basically the same as for past non-integrated designs. An SoC is less forgiving of mistakes since rework is very difficult or impossible, so the design verification needs to be thorough. Most design successes that I have seen have depended more on the thoroughness of the design team than on the tools used. The best tools in the world will not save you from the blunders of bad designers.

    • Is there any reason hardware design has such a high standard compared to software?

      A lot of time, money and effort goes into both processes, but I can't recall the last time I heard about a major hardware flaw.

      Is it just easier to debug hardware designs? Do they have special automated mechanisms for error-checking...

      what gives?
      • by dprice ( 74762 ) <daprice.pobox@com> on Monday December 26, 2005 @07:30PM (#14341736) Homepage

        Is there any reason hardware design has such a high standard compared to software?

        Part of the reason is that hardware design by nature is less flexible than software to change, particularly as more hardware goes inside chips. The up front cost of a System-on-Chip can easily be more than 1 million dollars, and the cost of re-spinning a chip to fix a design flaw can be almost as high. And the month or two of time lost waiting for the re-spin also has a time-to-market cost. The high cost justifies the high standard.

        I can't recall the last time I heard about a major hardware flaw.

        Most hardware bugs are detected in the bring-up phases of a project; and by the time a company commits the money to hardware production, they have fairly high confidence that the hardware is solid. Most major hardware bugs don't make it to production, so people don't hear about those failures. Hardware bugs that do make it to production either go unnoticed, get covered up through software workarounds, or get a lot of bad press (like the infamous Intel FDIV bug, or more recently the Xbox 360 instability problems).

        Companies can afford to put out beta versions and patches of software, so a lot more software bugs are visible outside a company. There is still a cost to having software bugs, particularly in critical systems and high-volume applications, but fundamentally you can patch software. Patching hardware is possible only if you are lucky enough to have an accessible place to change the hardware behavior, or enough time and money to re-spin. For big complex hardware, designers attempt to put in some programmability to hopefully work around potential bugs (a rough sketch of this idea follows after this comment), but it takes a lot of foresight to predict where things might go wrong.

        With field programmable gate arrays (FPGAs) the hardware does look more like software, since you can compile and reload a design into the FPGA chip. I have seen hardware designers be less stringent about FPGA designs because of the increased flexibility. With FPGAs there is a power and cost tradeoff since they are more general purpose. For an equivalent design, an FPGA eats much more power than an SoC, and FPGAs are typically much more expensive in volume than SoCs. So FPGAs are not a viable solution for things like PDAs, cell phones, and MP3 players where SoCs are prevalent.
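        To make the "programmability as a safety net" point above concrete, here is a hypothetical sketch in C of an errata workaround: firmware clearing a spare control bit at init time to disable a misbehaving hardware feature. The register name, address, and bit are invented for the illustration.

        ```c
        #include <stdint.h>

        #define CTRL_REG_ADDR    0x40001000u   /* assumed memory-mapped control register */
        #define CTRL_PREFETCH_EN (1u << 3)     /* assumed enable bit for the buggy feature */

        /* Called once during board bring-up, before the feature is ever used. */
        static inline void errata_disable_prefetch(void)
        {
            volatile uint32_t *ctrl = (volatile uint32_t *)CTRL_REG_ADDR;
            *ctrl &= ~CTRL_PREFETCH_EN;        /* turn the (hypothetically) broken feature off */
        }
        ```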

      • I can recall one major hardware bug, the F00F bug on Intel processors. Otherwise I would agree.
      • Is there any reason hardware design has such a high standard compared to software?

        Any hardware bug that can be worked around in software is instantly reclassified as a software bug.

        Seriously.

    • I agree with the parent 100% - the whole thing boils down to a few simple ideas:

      1) someone has to manage and plan
      2) discipline (see #1)
      3) adhere to and document defined interfaces (see the sketch after this list)
      4) get the s/w guys going on a simulator as fast as you can
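      A minimal sketch of ideas #3 and #4, assuming a made-up UART with two documented registers: one header owned jointly by the hardware and firmware teams, with accessors that can be pointed either at the real MMIO base or at a host-side simulator buffer, so the software work can start long before silicon exists.

      ```c
      #include <stdint.h>

      /* Documented register offsets that both teams agree on and version together. */
      #define UART_TXDATA_OFFSET  0x00u
      #define UART_STATUS_OFFSET  0x04u
      #define UART_STATUS_TX_BUSY (1u << 0)

      /* On target this points at the peripheral; on the host, at a buffer owned by
       * a simulator process that models the UART's behavior. */
      static volatile uint32_t *uart_base;

      static void uart_attach(volatile uint32_t *base) { uart_base = base; }

      static inline uint32_t reg_read(uint32_t offset)
      {
          return uart_base[offset / 4];
      }

      static inline void reg_write(uint32_t offset, uint32_t value)
      {
          uart_base[offset / 4] = value;
      }

      static void uart_putc(char c)
      {
          while (reg_read(UART_STATUS_OFFSET) & UART_STATUS_TX_BUSY)
              ;                              /* spin until the transmitter is free */
          reg_write(UART_TXDATA_OFFSET, (uint32_t)c);
      }
      ```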

  • I don't really want any more SoCks as I have enough to last me till next Xmas now, although Linux or NetBSD on SoCks would be cool :)
  • by Anonymous Coward
    Before corrupting hardware
  • by Anonymous Coward
    http://www.altium.com/ [altium.com]

    Check out Altium Designer: FPGA, embedded C, and PCB design all under one hood, with all the handy things that come from having them together.

  • Putting the CPU, chipset, BIOS, RAM, and device controllers onto the same chip would erase many of the major bottlenecks affecting today's high-end PCs. It's these bottlenecks, which things like PCI Express and SATA are trying to eliminate, that keep today's fast PCs from being as fast as they truly should be.
  • If Transmeta [transmeta.com] proved anything, it was that you get more performance-per-watt and flexibility by moving hardware into software. I guess that was the point behind the RISC processors that are now delivering up to 32 concurrent threads [sun.com] in hardware. Keeping the hardware simple gives you far greater potential. In light of this, I don't see how "system on a chip" can be a good thing?
    • I'm not sure if that's exactly what Transmeta proved. But I'll address your last question.

      What SoCs do is simplify the bring-up time of new boards. An SoC without a reference port of some software (system initialization, IRQ handling, device drivers) is almost useless. Without that, the customer has to do the work, which adds time. It's almost always far better to go with a different SoC that has this done rather than develop it from scratch. Especially if a Linux port is available.

      On the hardware si

    • What about a custom processor and compiler generator that is optimized to your software algorithm / application? Seems this is what Tensilica [tensilica.com] is doing.

      System-on-Chip (SoC) makes sense when you want high-volume production of electronic systems with specialized hardware / software / programming framework. For example, a chip that is a generic platform for a cell phone, with USB, Bluetooth, WiFi, GSM, camera, MPU, DSP, GPS circuitry. Typically the physical and data-link layer processing for this kind of app

  • As an SoC developer looking for solutions to help improve dev-team productivity, I'm finding that there is a new breed of tools emerging called Electronic System Level (ESL) tools. These tools help to automate tedious low-level work and facilitate co-development of various aspects of SoC development. When interacting with hardware from a microprocessor, or even when communicating between multiple microprocessors, I generally use memory-mapped registers. SpectaReg, a new tool from Productivity Design Tools [productive-eda.com]
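    For readers who haven't seen the pattern, a memory-mapped register block is often described as a C struct overlay like the one below; this is the kind of boilerplate that register-automation tools aim to generate. The DMA controller, its fields, and its base address are invented for the example and are not SpectaReg output.

    ```c
    #include <stdint.h>

    struct dma_regs {
        volatile uint32_t src;      /* 0x00: source address           */
        volatile uint32_t dst;      /* 0x04: destination address      */
        volatile uint32_t len;      /* 0x08: transfer length in bytes */
        volatile uint32_t ctrl;     /* 0x0C: bit 0 = start            */
        volatile uint32_t status;   /* 0x10: bit 0 = done             */
    };

    #define DMA0 ((struct dma_regs *)0x40020000u)   /* assumed base address */

    static void dma_copy(uint32_t src, uint32_t dst, uint32_t len)
    {
        DMA0->src  = src;
        DMA0->dst  = dst;
        DMA0->len  = len;
        DMA0->ctrl = 0x1u;                          /* kick off the transfer */
        while ((DMA0->status & 0x1u) == 0)
            ;                                       /* poll until complete   */
    }
    ```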

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...