ARM Offers First Clockless Processor Core

Sam Haine '95 writes "EETimes is reporting that ARM Holdings have developed an asynchronous processor based on the ARM9 core. The ARM996HS is thought to be the world's first commercial clockless processor. ARM announced they were developing the processor back in October 2004, along with an unnamed lead customer, which it appears could be Philips. The processor is especially suitable for automotive, medical and deeply embedded control applications. Although reduced power consumption, due to the lack of clock circuitry, is one benefit, the clockless design also produces a low electromagnetic signature because of the diffuse nature of digital transitions within the chip. Because clockless processors consume zero dynamic power when there is no activity, they can significantly extend battery life compared with clocked equivalents."

  • Soooo... (Score:5, Funny)

    by Morkano ( 786068 ) on Saturday April 08, 2006 @08:44PM (#15093048)
    Soooo... How many mHz does it run at?
  • Synchronisation? (Score:5, Interesting)

    by Poromenos1 ( 830658 ) on Saturday April 08, 2006 @08:44PM (#15093051) Homepage
    Can a processor like this do things like play sounds? If it doesn't have a clock, I don't think it could measure time accurately enough to reproduce the samples. What other drawbacks are there?
    • Re:Synchronisation? (Score:5, Informative)

      by EmbeddedJanitor ( 597831 ) on Saturday April 08, 2006 @08:52PM (#15093092)
      The peripherals (serial ports, sound, LCD,...) are still clocked. The core is synchronised with peripherals by peripheral bus interlocks.

      This is not really any different from the way a clocked core synchronises with peripherals. These days, devices like the PXA255 used in PDAs run independent clocks for the peripherals and the CPU. This allows for things like speed stepping to save power.

  • Horrible summary (Score:5, Informative)

    by Raul654 ( 453029 ) on Saturday April 08, 2006 @08:44PM (#15093052) Homepage
    I read the summary and cringed. (1) Don't call them clockless -- they're called asynchronous because, unlike a synchronous processor (one with a clock), all the parts of the processor aren't constantly starting and stopping at the same time. A typical synchronous processor can only run at a maximum frequency inversely proportional to the delay of its longest critical path - so if it takes up to 5 nanoseconds for information to propagate from one part of the chip to another, the clock cannot tick any faster than once every 5 nanoseconds. (2) One very serious problem in modern processors is clock skew [wikipedia.org] - if you have one central clock, the parts closest to the clock get the 'tick' signal sooner than the parts farther away, so the processor doesn't run perfectly synchronously.
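
    A quick illustration of that inverse relationship (an illustrative sketch; the numbers are invented):

    # Illustrative sketch (numbers invented): a synchronous design's clock
    # period can be no shorter than its worst-case critical-path delay.
    def max_clock_hz(critical_path_delay_s: float) -> float:
        return 1.0 / critical_path_delay_s

    # A 5 ns critical path caps the clock at 200 MHz:
    print(max_clock_hz(5e-9))  # 200000000.0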
    • VAX 8600 (Score:5, Interesting)

      by Tjp($)pjT ( 266360 ) on Saturday April 08, 2006 @09:01PM (#15093117)
      Maybe it's the first commercial clockless microprocessor. DEC's VAX-8600 [microsoft.com] was asynchronous. And it smoked for its day. I worked on some of the multi-variant, multi-source clock skew calculations for the simulator used to model the processor, among other duties. Very slick hardware for the time. External synchronous contexts are maintained, of course, for synchronous busses, but the internal processor speed is quicker in theory and cheaper on power since you have fewer switching transitions. Think of the fun in ECL logic back then. :)
    • Re:Horrible summary (Score:5, Informative)

      by Peter_Pork ( 627313 ) on Saturday April 08, 2006 @09:06PM (#15093135)
      Too late, they ARE called clockless CPUs:
        http://en.wikipedia.org/wiki/CPU_design#Clockless_CPUs [wikipedia.org]
      Yes, they are based on asynchronous digital logic, but calling them clockless is ok. They do NOT have a clock signal.

      One of the top problems in CPU design is distributing the clock signal to every gate. It is very wasteful. Clockless CPUs are a revolution waiting to happen. And it will. The idea is just better in every respect. It will take effort to reengineer design tools and retrain designers, but clockless designs are far superior (now that we really know how to make them, which is a recent development).
      • Re:Horrible summary (Score:4, Informative)

        by Raul654 ( 453029 ) on Saturday April 08, 2006 @09:12PM (#15093156) Homepage
        If memory serves, the big problem with these chips was the possibility of 'state explosion' - as the chip got bigger, the number of possible states increased exponentially. Has this been resolved?
        • You are confused (Score:5, Interesting)

          by comingstorm ( 807999 ) on Saturday April 08, 2006 @10:57PM (#15093427) Homepage
          I think the confusing part is that, in the terminology of conventional, "synchronous" design, "asynchronous logic" is used to mean "the combinatorial logic in a single stage". What conventional, clock-based design typically does is break the logic up into stages with clocked latches in between, thus limiting the depth of each "asynchronous" logic stage.

          Unfortunately, self-clocked design (like the ARM reported on here uses) is also sometimes called "asynchronous" logic design; however, this is a completely different kind of thing from the "asynchronous" combinatorial logic used in clock-based design. Self-clocked design also does combinatorial logic in latched stages, but uses a self-timed asynchronous protocol to run the latches instead of a synchronous clock. Basically, the combinatorial logic figures out when it's finished, and tells both the next stage ("data's ready, latch it") and the input latch from the previous stage ("I'm done; gimme some more data").

          To close the loop, each stage can wait until there's new data ready at its inputs, and space to put the output data. Thus, in the absence of some bottleneck, your chip will simply run as fast as it can.

          To overclock a self-timed design, you simply increase the voltage. No need to screw around with clock multipliers; as long as your oxide holds up, your traces don't migrate, and the chip doesn't melt...
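
          To make that handshake concrete, here's a toy software model (an illustrative sketch; the stage delays and values are invented). Bounded queues stand in for the request/acknowledge wires: a stage blocks until its input has data and its output has room, so the pipeline throttles itself with no global clock.

          # Toy model of a self-timed pipeline (illustrative; delays invented).
          # A bounded queue plays the role of the handshake: get() waits for
          # "data's ready", and put() on a full queue waits for "gimme more".
          import queue
          import threading
          import time

          def stage(work_s, inbox, outbox):
              while True:
                  item = inbox.get()       # wait for upstream "data's ready"
                  if item is None:         # sentinel: propagate shutdown
                      outbox.put(None)
                      return
                  time.sleep(work_s)       # the combinatorial logic "computing"
                  outbox.put(item + 1)     # blocks until downstream has room

          q0, q1, q2 = (queue.Queue(maxsize=1) for _ in range(3))
          workers = [threading.Thread(target=stage, args=(0.01, q0, q1)),
                     threading.Thread(target=stage, args=(0.03, q1, q2))]
          for w in workers:
              w.start()
          for i in range(5):
              q0.put(i)                    # producer stalls if the pipe is full
          q0.put(None)
          while (result := q2.get()) is not None:
              print(result)                # rate is set by the slowest stage
          for w in workers:
              w.join()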

          • Re:You are confused (Score:3, Interesting)

            by ultranova ( 717540 )

            Basically, the combinatorial logic figures out when it's finished, and tells both the next stage ("data's ready, latch it") and the input latch from the previous stage ("I'm done; gimme some more data").

            That sounds a bit like a dataflow language [wikipedia.org]. Maybe you could make a program that automatically converts a program written in such a language into a chip design? Then we'd only need desktop chip manufacturing to make true open-sourced computing a reality...

            But no, such chips would be illegal, since they wo

    • by Mateorabi ( 108522 ) on Saturday April 08, 2006 @09:12PM (#15093154) Homepage
      Processors like this do not have a clock. Each piece of the processor is self-timing, with handshaking done between components to pass the data (compare this with clocked processors, where you can assume the data is at your input and valid just by counting cycles.) Asynchronous processors don't have global 'cycles' when all components must pass data.

      But your assertion about the critical path is slightly off. Asynch processors still have a critical path. If you imagine the components as a bucket brigade and the data as the buckets, then they may no longer all be heaving the buckets at exactly the same time, but they will still be slowed down by the slowest man in the line. The difference is that the critical path is now dynamic. You don't have to time everything to the static, worst-case component on your chip. If you consistently don't use the slowest components (say, the multiply unit), then you will get a faster IPT (instructions per time) on average.

      And yes, you don't have clock skew any more, which is nice, but you now have to handshake data back and forth across the chip. Of course, putting decoupling circuitry in can help.
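
      A small numerical illustration of that dynamic-critical-path point (illustrative sketch; the latencies and instruction mix are invented):

      # Illustrative sketch (latencies and mix invented): a clocked design
      # pays the worst-case latency every cycle, while a self-timed one
      # pays each instruction's actual latency.
      latency_ns = {"add": 1.0, "load": 2.0, "mul": 5.0}   # hypothetical units
      mix = {"add": 0.60, "load": 0.35, "mul": 0.05}       # hypothetical workload

      clocked = max(latency_ns.values())   # the clock must fit the slowest unit
      self_timed = sum(latency_ns[op] * frac for op, frac in mix.items())

      print(f"clocked:    {clocked:.2f} ns/insn")    # 5.00
      print(f"self-timed: {self_timed:.2f} ns/insn") # 1.55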

      • Hasn't the commercial microprocessor industry already been flirting with the idea of asynchronous electronics? Looking at developments like DDR, execution units in processors that accept instructions on both the rising and falling edges of the clock, and whatnot, it seems as if strictly obeying a clock signal is becoming a bottleneck. Granted, it's a big jump to actual clockless operation, but it seems as if many of the big players in the processor market have already taken the first baby steps i
        • Re:Imminent (Score:3, Interesting)

          Most (all?) commodity motherboards are completely synchronous. In fact, even the buses running at different speeds are actually clocked at rational fractions of the One True System Clock. (Letting them run at different clocks would require extra latency for the synchronization stages, to keep metastability [wikipedia.org] from eating the data alive.)
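
          As a small illustration of those rational fractions (an illustrative sketch; the divider ratios are invented), bus clocks derived this way stay phase-related to the master clock, which is why no synchronizer stages are needed between them:

          # Illustrative sketch (ratios invented): deriving bus clocks as
          # rational fractions of one master clock keeps them phase-related.
          from fractions import Fraction

          master_hz = 200_000_000
          buses = {"cpu": Fraction(1, 1), "mem": Fraction(1, 2), "pci": Fraction(1, 6)}
          for name, ratio in buses.items():
              print(name, float(master_hz * ratio))  # Hz, exact fractions of master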
    • I totally agree that it was terrible. However, I knew what they meant, and didn't even notice how badly it was written...
    • If you're going to nitpick language, you should at least use the standard form of the word: "asynchronous". But this bit of language nazism is particularly lame: "asynchronous" and "clockless", in this context, mean exactly the same thing. "Asynchronous" simply means "not synchronized". How do you synchronize something? With a clock.
    • Re:Horrible summary (Score:3, Interesting)

      by NovaX ( 37364 )
      As others pointed out, you've made mistakes.

      The most glaring is that you assume that synchronous processors can only have one clock - that's incorrect. While the clock tick is of fixed length (by design), the global clock (as seen by external parties) may run at a different speed than internal clocks.

      If a path of logic takes 5 ns to complete, and its clock matches exactly, then you are perfectly optimized. You are hampered not by the clock, but by the transistors' switching speed. This path will have the
      • Re:Horrible summary (Score:5, Informative)

        by Raul654 ( 453029 ) on Saturday April 08, 2006 @10:27PM (#15093349) Homepage
        It's not just direct - power consumption is proportional to the *cube* of the frequency (according to the research paper I just peer-reviewed). But there are all kinds of ways to vastly reduce that, using voltage scaling, frequency scaling, and power-down-while-idle techniques.
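
        For the curious, the cube follows from the usual dynamic-power formula once voltage scales with frequency (an illustrative sketch; the capacitance and scaling assumptions are invented):

        # Illustrative sketch: dynamic power is roughly P = C * V**2 * f, and
        # sustaining a higher f requires a roughly proportional V, so P ~ f**3.
        def dynamic_power(c_farads, v_volts, f_hz):
            return c_farads * v_volts**2 * f_hz

        base = dynamic_power(1e-9, 1.0, 1e9)
        doubled = dynamic_power(1e-9, 2.0, 2e9)  # 2x frequency, 2x voltage
        print(doubled / base)  # 8.0, i.e. 2**3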
        • Yep, you're right about that - that was a mistype. However, it does not change the meat of my reply. It's just good nit-picking. :)
  • timing (Score:2, Interesting)

    by Teclis ( 772299 )
    This may create difficulties for software that needs precise timing. Developing for PICs, I found that timing based on the clock speed is easy and important.
    • Re:timing (Score:4, Informative)

      by ScrewMaster ( 602015 ) on Saturday April 08, 2006 @08:49PM (#15093080)
      The fact that the CPU itself has no master clock is absolutely irrelevant to timing applications. You can bet your bottom dollar that the processor will still take interrupts, and that there will be a timer/counter component on the chip. Timing won't be a problem.
    • Nothing stops you from having an oscillator with counter attached for timekeeping purposes. Dallas Semi has dozens of options, especially for microcontroller designs.
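
      As a sketch of that oscillator-plus-counter arrangement (illustrative; time.monotonic_ns() stands in for a free-running hardware counter, and emit() is a hypothetical DAC write), the pacing comes from the counter, not from the core's own execution rate:

      # Illustrative sketch: pacing audio off a free-running counter, the way
      # a clockless core would use a clocked timer peripheral.
      import time

      SAMPLE_PERIOD_NS = 22_675  # ~44.1 kHz, hypothetical sample rate

      def emit(sample):
          pass  # placeholder for the hypothetical DAC register write

      def play_samples(samples):
          deadline = time.monotonic_ns()
          for s in samples:
              while time.monotonic_ns() < deadline:
                  pass               # spin on the "counter" until it's time
              emit(s)
              deadline += SAMPLE_PERIOD_NS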
  • oh wait....
  • I worked for ARM... (Score:5, Interesting)

    by Toby The Economist ( 811138 ) on Saturday April 08, 2006 @08:50PM (#15093086)
    I worked for ARM for four years.

    Truly a wonderful and very special company for the first two of those years; then it slowly and surely went downhill - these days, it's just another company. ARM's culture didn't manage to survive its rapid growth in those few years from fewer than two hundred people to more than seven hundred.

  • ARM? (Score:5, Funny)

    by aapold ( 753705 ) * on Saturday April 08, 2006 @09:01PM (#15093118) Homepage Journal
    Those were the guys that fought the CORE, right?
  • Other Uses (Score:2, Interesting)

    by under_score ( 65824 )
    This seems obvious: laptops! The low power consumption makes them perfect. I'd love a multi-processor ARM9-core laptop running... oh, say, OS X :-) Just for the geekiness of it.
  • The summary (Score:5, Funny)

    by suv4x4 ( 956391 ) on Saturday April 08, 2006 @09:11PM (#15093151)
    So in short, your next smart clock may as well have a CPU without a clock.
    Those damn young'uns and their newfangled clockless clocks.
  • Not That Difficult (Score:5, Interesting)

    by Mateorabi ( 108522 ) on Saturday April 08, 2006 @09:23PM (#15093179) Homepage
    I took an undergrad class on asynchronous chip design back in 2000. The class project was to implement the ARM7 instruction set (well, most of it) in about 5 weeks. We split it up into teams doing the Fetch, Decode, Reg file, ALU, etc. The nice thing about asynch is that as long as you have well-defined four-phase handshaking between blocks, you don't have to worry about global timing (there is no global "time" reference!). We were able to get it mostly done in those 5 weeks. Nothing manufacturable, and not tuned for performance, but we could simulate execution.

    One of the neatest things about asynch processors is their ability to run across a large range of voltages. You don't have to worry that lowering the voltage will make you miss gate setup timing, since the thing just slows down. Increasing the voltage shortens rise and propagation times and speeds the thing up. The grad students had a great demo where they powered one of their CPUs using a potato with some nails in it (like from elementary school science class). They called it the 'potato chip'.
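
    A rough model of that voltage-to-speed relationship (an illustrative sketch using the alpha-power delay law; the threshold voltage and exponent are invented):

    # Illustrative sketch (constants invented): in the alpha-power model,
    # gate delay ~ V / (V - Vt)**alpha, so raising the supply voltage V
    # shortens delays and a self-timed chip simply runs faster.
    VT = 0.4      # hypothetical threshold voltage, volts
    ALPHA = 1.3   # hypothetical velocity-saturation exponent

    def relative_delay(v_volts):
        return v_volts / (v_volts - VT) ** ALPHA

    for v in (0.9, 1.2, 1.8):
        print(f"{v} V -> delay x{relative_delay(v) / relative_delay(1.2):.2f}")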

    • by Manchot ( 847225 ) on Saturday April 08, 2006 @10:01PM (#15093272)
      Another cool thing about asynchronous processors is that you can see the effect of temperature on the processor's speed. Wikipedia [wikipedia.org] describes a demonstration in which hot coffee placed on the processor caused it to visibly slow down, while liquid nitrogen caused its speed to shoot up.
    • Your project was really cool, but it's just a very simple in-order pipeline. Doing the same thing on a complex, ~20-stage out-of-order pipeline is very different. For instance, verifying such a design is considerably more difficult than verifying a clocked design. With verification accounting for about half the design cycle these days, I believe that asynchronicity won't make it into high-perf processors in the near future.

      The alternative proposed by the research community is GALS [boisestate.edu] - globally asynchronous, locally synchronous.

    • by tsotha ( 720379 )
      Almost 20 years ago I did some asynchronous stuff as a discrete-logic board designer. It was pretty seductive - we could save lots of power and use slower, cheaper parts without sacrificing the overall board speed.

      It didn't really work out. While we could easily get prototypes to work well over rated temperature ranges, getting the production version to work reliably was an order of magnitude more effort than the clocked version. As the complexity of the logic increases, the number of potential race con

  • It isn't the first, but it will be the most modern...

    cool. I want one. Or two...
  • Overdoing it (Score:2, Insightful)

    by log0 ( 714969 )
    Is there a chance these things will cook themselves?

    Current processors are clocked at whatever speed they can safely run at and many of them automatically underclock themselves if they overheat.

    Without a clock, what keeps the speed at a safe level?
    • Without a clock, what keeps the speed at a safe level?

      Interesting question. Maybe the instructions are clocked (slowed) locally within the chip.
    • Re:Overdoing it (Score:3, Insightful)

      by langelgjm ( 860756 )
      I know nothing about microprocessor design, but a simple answer would be to have a temperature sensor attached to a voltage regulator. When the temperature gets too high, reduce the voltage, and consequently, the speed (that is, assuming the other few posts I skimmed were correct - always a toss-up on /.).
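
      That feedback loop might look something like this (an illustrative sketch; all thresholds and step sizes are invented): step the supply voltage down when the die runs hot, and a self-timed core slows and cools on its own.

      # Illustrative sketch (constants invented): crude thermal regulation by
      # voltage. Lower voltage -> slower self-timed logic -> less heat.
      V_MIN, V_MAX, V_STEP = 0.8, 1.2, 0.05   # hypothetical volts
      T_HIGH, T_LOW = 85.0, 70.0              # hypothetical Celsius thresholds

      def regulate(temp_c, v_now):
          if temp_c > T_HIGH:
              return max(V_MIN, v_now - V_STEP)   # too hot: back off
          if temp_c < T_LOW:
              return min(V_MAX, v_now + V_STEP)   # headroom: speed up
          return v_now

      print(regulate(90.0, 1.2))  # 1.15: over temperature, reduce voltage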
  • by ollj ( 966671 ) on Saturday April 08, 2006 @09:38PM (#15093215)
    "What time is it?" "Shut! The! Fuck! Up! I'm saving energy here!"
  • This is certainly not the first commercial processor without a clock. The PDP/8 operated using a series of delay lines arranged in a loop, so that the end of one instruction triggered the next. One of the EE courses I took (back when EE majors still had to use real test equipment and soldering irons) involved designing a clocked version of a PDP/8 as a class project.

    Gads. Now that I'm "overqualified" to write software (i.e., employers don't seem to think experience is worth paying any extra for), the geek world has completely forgotten that it even has a history.

  • How fast is it? (Score:3, Interesting)

    by MOBE2001 ( 263700 ) on Saturday April 08, 2006 @09:43PM (#15093232) Homepage Journal
    Not to belittle the energy savings, but how fast is it compared to a clocked CPU with a similar instruction set? To me, speed is the most interesting quality of a new chip design, other than reliability. The problem with a clock is that the clock speed is dictated by the slowest instruction. Since a clockless CPU does not have to wait for a clock signal to begin processing the next instruction in a sequence, it should be significantly faster than a conventional CPU. Why is this not being touted as the most important feature of this processor?
  • by Charan ( 563851 ) on Saturday April 08, 2006 @09:48PM (#15093244)

    This seems to be a good overview of clockless chips. I can't vouch for its accuracy (not my area), but the source - IEEE Computer Magazine - should be good. The article was published March 2005.

    (warning: PDF)
    http://csdl2.computer.org/comp/mags/co/2005/03/r3018.pdf [computer.org]

  • by Vexar ( 664860 ) on Saturday April 08, 2006 @10:02PM (#15093276) Homepage Journal
    For those of us with short-term memories, we can go back in time and read historical articles about the Transmeta Crusoe [slashdot.org] processor, which was supposed to be clockless. Of course if you go to their Crusoe Page [transmeta.com] today, their pretty diagram sure has a clock.

    What did I miss? I remember the hype, the early diagrams of how it was all supposed to weave through without the need for a clock. Would someone care to elaborate on the post-mortem of what was supposed to be the first clockless processor, 4 years ago?

    • ISTR reading somewhere that Intel actually came up with an asynchronous 386, but it was shelved. Does anyone else recall this?
      • Take a look at this [transentric.com]

        "1997 - Intel develops an asynchronous, Pentium-compatible test chip that runs three times as fast, on half the power, as its synchronous equivalent. The device never makes it out of the lab."

        So why didn't Intel's chip make it out of the lab? "It didn't provide enough of an improvement to justify a shift to a radical technology," Tristram says. "An asynchronous chip in the lab might be years ahead of any synchronous design, but the design, testing and manufacturing systems that
  • Sweet! (Score:5, Funny)

    by ixtapa ( 903468 ) on Saturday April 08, 2006 @10:03PM (#15093278)
    I can't wait to get my hands on one of these and over-asynch the hell out of it. Imagine running it under dry ice - I bet it could run up to 50% more clockless over its default clocklessness.
  • Why is async good (Score:5, Informative)

    by quo_vadis ( 889902 ) on Saturday April 08, 2006 @10:05PM (#15093281) Journal
    I know typing this out will be useless and it will get overlooked by the mods, but I might as well say it. Asynchronous designs have several advantages:

    1. Power: low power consumption, not just because of the built-in power-down mode, but also because of the voltage the chips will be running at. By pulling the voltage lower than a synchronous equivalent's, it is simpler to achieve greater power savings. This becomes possible if you are willing to sacrifice speed, and in async devices the speed of switching adapts dynamically: each block waits until the previous one is done, not until some outside clock has ticked.

    2. Security: async designs give protection against side-channel power-analysis attacks. Since all gates must switch (standard async design usually uses dual-rail encoding, so gates along both the +ve and -ve rails switch), differential power attacks become much harder. Thus async designs are perfect for crypto chips (hardware AES, anyone?).

    3. Elegance: the world is generally async. Key presses are, memory accesses are, so why not the processor :) (Yes, I know busses are clocked, before you start, but if they were not....)

    But they have several points of disadvantage:

    1. They are hard to do, especially using the synchronous design flow that most of the world uses. Synchronous tools assume, especially at the RTL level, that the world is combinational and that sequential bits are simply registers that update once per clock cycle (not true for full-custom designs like Intel's and AMD's, but it holds at slightly lower levels, especially ASIC design).

    2. The tools that exist now can either produce good implementations of small functions (only a few gates), or bad implementations of larger functions that are in the worst case as slow as their synchronous equivalents. Tools like Petrify (http://www.lsi.upc.edu/~jordicf/petrify/ [upc.edu]) exist, but they become unusable for circuits with more than ~50 gates.

    3. Async designs are usually large. This is not always true, but standard async designs are usually implemented with dual-rail or 1-of-M encoding on the wires. The main overhead, though, comes from the handshaking circuitry. For really fine-grained pipelining, the output of each stage must be acknowledged to the previous stage. This adds a massive overhead, as it necessitates a device called the Muller C-element, which drives its output to match the inputs only when they agree, and retains its previous value otherwise (a small software model of it follows below). Many copies of this element are usually required, and it's this that adds area; for example, a simple 1-bit OR gate that would usually take 4 transistors needs 16 transistors in a dual-rail async implementation.
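
    Here is that C-element behavior as a tiny software model (an illustrative sketch, not production logic):

    # Illustrative sketch of a two-input Muller C-element: the output copies
    # the inputs only when they agree, and holds its previous value otherwise.
    class CElement:
        def __init__(self):
            self.out = 0

        def update(self, a: int, b: int) -> int:
            if a == b:
                self.out = a
            return self.out

    c = CElement()
    print(c.update(1, 0))  # 0: inputs disagree, hold the previous value
    print(c.update(1, 1))  # 1: inputs agree, output follows
    print(c.update(0, 1))  # 1: inputs disagree again, output holds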

    For the time being, I think they will find a lot of use in low-power applications - such as embedded microcontrollers/processors in things like wireless sensor networks - and in security processors. However, I believe that a full processor design is very far off.
    • Thanks for looking at this from a realistic perspective. There is a reason the article said these chips would be used in deeply embedded or automotive situations. In those situations, the low power consumption granted by an asynchronous design is great. Not so great, however, is the overall performance. Part of the reason for clocking something (for example, synchronous busses) is to avoid the excessive need for handshaking. Extending the handshaking methodology to multiple pipeline stages s
    • Re:Why is async good (Score:5, Interesting)

      by bigberk ( 547360 ) <bigberk@users.pc9.org> on Saturday April 08, 2006 @11:44PM (#15093556)
      > Security: Async designs give security against side channel power analysis attacks

      You're right about that. I research side-channel attacks on crypto hardware, and my first response to this was: well, this would make EM analysis more complicated. For those not familiar with the general approach: in side-channel attacks you don't try to do anything as complicated as breaking the underlying math of the crypto. Instead, you observe the hardware for emissions that can give some clues as to the instructions being carried out. If your observations give you any info about what the chip is processing, you might learn parts of keys or gain a statistical advantage in other attacks. So if it's harder to observe the signals emitted electromagnetically from the chip, then attacking the hardware is harder.
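
      A toy model of the power-analysis side of that (an illustrative sketch; the secret key, the Hamming-weight leakage model, and all constants are invented): data-dependent switching lets an attacker rank key guesses, which is exactly the signal dual-rail async logic tries to flatten.

      # Illustrative toy model of differential power analysis (all values
      # invented): simulated "power traces" leak the Hamming weight of a
      # secret-dependent value, and the correct key guess separates them best.
      import random

      SECRET = 0xA7

      def hw(x):  # Hamming weight: the classic single-rail leakage model
          return bin(x).count("1")

      plaintexts = [random.randrange(256) for _ in range(2000)]
      traces = [hw(p ^ SECRET) + random.gauss(0, 1) for p in plaintexts]

      def score(guess):  # mean trace where predicted leakage is high vs. low
          hi = [t for p, t in zip(plaintexts, traces) if hw(p ^ guess) > 4]
          lo = [t for p, t in zip(plaintexts, traces) if hw(p ^ guess) <= 4]
          return sum(hi) / len(hi) - sum(lo) / len(lo)

      print(hex(max(range(256), key=score)))  # usually 0xa7, the secret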
  • by Gertlex ( 722812 ) on Saturday April 08, 2006 @10:30PM (#15093362)
    So I take it you can't overclock it? :D
  • .. a digital watch?
  • by Praetor11 ( 512322 ) on Saturday April 08, 2006 @11:27PM (#15093503)
    ARM made a clockless chip in 1994 for cellphones. Couldn't find an amazing reference, but a quick Google search turned up http://www1.cs.columbia.edu/async/misc/technologyreview_oct_01_2001.html [columbia.edu] where they briefly mention it... The last time I heard of this stuff being used was in 2001 - I actually wrote an English paper about it purely to see if I could bore my professor :-p
  • Price (Score:2, Funny)

    I heard it would cost an ARM and a LEG...
  • Between the VCR, the microwave, etc., I changed 16 clocks when we went over to DST.

    I will be happy to have a CPU without one.
  • by Shadowlawn ( 903248 ) on Sunday April 09, 2006 @06:57AM (#15094367)
    ARM is actually building this chip with Handshake Solutions [handshakesolutions.com], a Philips incubator. The work stems from Philips Research as early as 1986 (yes, that's 20 years from research to product), and has matured a great deal over the years. We used to have courses at our university explaining the basics behind these asynchronous designs. All in all, I'm excited to see this technology finally in a product, and I hope it will make my PDA last a little bit longer.
