
 




ARM Offers First Clockless Processor Core 351

Sam Haine '95 writes "EETimes is reporting that ARM Holdings have developed an asynchronous processor based on the ARM9 core. The ARM996HS is thought to be the world's first commercial clockless processor. ARM announced they were developing the processor back in October 2004, along with an unnamed lead customer, which it appears could be Philips. The processor is especially suitable for automotive, medical and deeply embedded control applications. Although reduced power consumption, due to the lack of clock circuitry, is one benefit, the clockless design also produces a low electromagnetic signature because of the diffuse nature of digital transitions within the chip. Because clockless processors consume zero dynamic power when there is no activity, they can significantly extend battery life compared with clocked equivalents."
This discussion has been archived. No new comments can be posted.

  • Synchronisation? (Score:5, Interesting)

    by Poromenos1 ( 830658 ) on Saturday April 08, 2006 @08:44PM (#15093051) Homepage
    Can a processor like this do things like play sounds? If it doesn't have a clock, I don't think it could measure time accurately enough to reproduce the samples. What other drawbacks are there?
  • timing (Score:2, Interesting)

    by Teclis ( 772299 ) on Saturday April 08, 2006 @08:44PM (#15093053) Homepage
    This may create difficulties for software that needs precise timing. Developing for PICs, I found that timing based on the clock speed is easy and important.
  • The next palm pilot? (Score:1, Interesting)

    by Anonymous Coward on Saturday April 08, 2006 @08:48PM (#15093075)
    Would this be responsive enough to be used in PDA-like applications? Or even laptops?
  • I worked for ARM... (Score:5, Interesting)

    by Toby The Economist ( 811138 ) on Saturday April 08, 2006 @08:50PM (#15093086)
    I worked for ARM for four years.

    Truly wonderful and very special company for the first two of those years; then it slowly and surely went downhill - these days, it's just another company. ARM's culture didn't manage to survive its rapid growth in those few years from fewer than two hundred people to more than seven hundred.

  • VAX 8600 (Score:5, Interesting)

    by Tjp($)pjT ( 266360 ) on Saturday April 08, 2006 @09:01PM (#15093117)
    Maybe not the first commercial clockless processor. DEC's VAX-8600 [microsoft.com] was asynchronous. And it smoked for its day. I worked on some of the multi-variant, multi-source clock skew calculations for the simulator used to model the processor, among other duties. Very slick hardware for the time. External synchronous contexts are maintained, of course, for synchronous busses, but the internal processor speed is quicker in theory, and power is cheaper since you have fewer switching transitions. Think of the fun in ECL logic back then. :)
  • by OrangeTide ( 124937 ) on Saturday April 08, 2006 @09:08PM (#15093141) Homepage Journal
    It theoretically should make a good chip for PDAs and cellphones. I think initially it will be used as a controller for automobiles, though. Asynchronous chips are currently not that fast because the tools used to design them are incredibly new, but they are already very low power. I predict we'll have them all over the place in a couple of years. Intel and AMD might already be considering (or may already have used) asynchronous logic in parts of their processors or support chipsets.

    Basically, a good asynchronous chip would draw almost no power while it's waiting for something (like I/O events from the network, keyboard, timers, etc.). And it would instantly ramp up and handle the event as fast as it possibly could. The speed is generally a factor of voltage and temperature: it's how fast the gates can switch and perform interlocks under current conditions, rather than the rate at which a clock is driving everything.

    It's going to be interesting to see what performance metric is used on these "clockless" chips by the industry and by the marketing/sales types. MIPS? FLOPS? SPECmark? Not that MHz was ever a good benchmark, but figures like MIPS are a lot easier to manipulate to make your product appear faster than your competitors'.
  • Other Uses (Score:2, Interesting)

    by under_score ( 65824 ) <.mishkin. .at. .berteig.com.> on Saturday April 08, 2006 @09:09PM (#15093146) Homepage
    This seems obvious: laptops! The low power consumption makes them perfect. I'd love a multi-processor ARM9 core laptop running... oh, say, OS/X :-) Just for the geekiness of it.
  • by Mateorabi ( 108522 ) on Saturday April 08, 2006 @09:12PM (#15093154) Homepage
    Processors like this do not have a clock. Each piece of the processor is self-timing, with handshaking done between components to pass the data (compare this with clocked processors, where you can assume the data is at your input and valid just by counting cycles.) Asynchronous processors don't have global 'cycles' when all components must pass data.

    But your assertion about the critical path is slightly off. Async processors still have a critical path. If you imagine the components as a bucket brigade and the data as the buckets, they may not all be heaving the buckets at exactly the same time anymore, but they will still be slowed down by the slowest man in the line. The difference is that the critical path is now dynamic. You don't have to time everything to the static, worst-case component on your chip. If you consistently don't use the slowest components (say, the multiply unit), then you will get a faster IPT (instructions per time) on average.

    And yes, you don't have clock skew any more which is nice, but you now have to handshake data back-and-forth across the chip. Of course putting decoupling circuitry in can help.
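    The dynamic-critical-path point above can be sketched with a toy throughput model (the stage names and latencies are invented for illustration, not real ARM9 numbers): a clocked design pays for the worst-case stage on every cycle, while a self-timed design only pays for the stages an instruction actually exercises.

```python
import random

# Hypothetical per-stage latencies in ns; "mul" is the slow unit.
STAGE_NS = {"fetch": 2.0, "decode": 1.5, "alu": 3.0, "mul": 9.0}

def clocked_time(program):
    # A global clock must be slow enough for the worst stage anywhere.
    tick = max(STAGE_NS.values())
    return tick * len(program)

def async_time(program):
    # Self-timed: each instruction is paced by the slowest stage it uses.
    return sum(max(STAGE_NS[s] for s in used) for used in program)

random.seed(0)
# Toy program: only ~10% of instructions touch the multiplier.
program = [("fetch", "decode", "mul") if random.random() < 0.1
           else ("fetch", "decode", "alu") for _ in range(1000)]

print(clocked_time(program))  # every instruction pays the worst-case tick
print(async_time(program))    # the 9 ns cost is paid only when mul is used
```

    This ignores pipeline overlap and handshake overhead entirely; it is only meant to show why average-case timing beats worst-case timing when the slow unit is rarely used.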

  • Not That Difficult (Score:5, Interesting)

    by Mateorabi ( 108522 ) on Saturday April 08, 2006 @09:23PM (#15093179) Homepage
    I took an undergrad class on asynchronous chip design back in 2000. The class project was to implement the ARM7 instruction set (well, most of it) in about 5 weeks. We split it up into teams doing the Fetch, Decode, Reg file, ALU, etc. The nice thing about asynch is that as long as you have well-defined, four-phase handshaking between blocks you don't have to worry about global timing (there is no global "time" reference!). We were able to get it mostly done in those 5 weeks. Nothing manufacturable, and not tuned for performance, but we could simulate execution.

    One of the neatest things about asynch processors is their ability to run over a large range of voltages. You don't have to worry that lowering the voltage will make you miss gate setup timing, since the thing just slows down; increasing the voltage shortens rise and propagation times and speeds the thing up. The grad students had a great demo where they powered one of their CPUs using a potato with some nails in it (like from elementary school science class). They called it the 'potato chip'.
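    As a rough illustration of the four-phase (request/acknowledge, return-to-zero) handshake described above, here is a minimal sketch in Python with two threads standing in for two pipeline stages. The Channel class and the phase ordering are illustrative only, not the actual circuit-level protocol any real async core uses.

```python
import threading
import time

class Channel:
    def __init__(self):
        self.req = threading.Event()
        self.ack = threading.Event()
        self.data = None

    def send(self, value):
        self.data = value          # data must be valid before request rises
        self.req.set()             # phase 1: raise request
        self.ack.wait()            # phase 2: wait for acknowledge
        self.req.clear()           # phase 3: drop request
        while self.ack.is_set():   # phase 4: wait for acknowledge to drop
            time.sleep(0)

    def recv(self):
        self.req.wait()            # wait for request (data is valid now)
        value = self.data          # latch the data
        self.ack.set()             # raise acknowledge
        while self.req.is_set():   # wait for request to drop
            time.sleep(0)
        self.ack.clear()           # return to zero; cycle complete
        return value

ch = Channel()
received = []

producer = threading.Thread(target=lambda: [ch.send(v) for v in range(5)])
consumer = threading.Thread(
    target=lambda: [received.append(ch.recv()) for _ in range(5)])
producer.start(); consumer.start()
producer.join(); consumer.join()
print(received)
```

    The key property is the one the parent describes: neither side consults a global time reference; each stage moves exactly when its neighbor says the data is ready.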

  • How fast is it? (Score:3, Interesting)

    by MOBE2001 ( 263700 ) on Saturday April 08, 2006 @09:43PM (#15093232) Homepage Journal
    Not to belittle the energy savings, but how fast is it compared to a clocked CPU with a similar instruction set? To me, speed is the most interesting quality of a new chip design, other than reliability. The problem with a clock is that the clock speed is dictated by the slowest instruction. Since a clockless CPU does not have to wait for a clock signal to begin processing the next instruction in a sequence, it should be significantly faster than a conventional CPU. Why is this not being touted as the most important feature of this processor?
  • by Charan ( 563851 ) on Saturday April 08, 2006 @09:48PM (#15093244)

    This seems to be a good overview of clockless chips. I can't vouch for its accuracy (not my area), but the source - IEEE Computer Magazine - should be good. The article was published March 2005.

    (warning: PDF)
    http://csdl2.computer.org/comp/mags/co/2005/03/r3018.pdf [computer.org]

  • by Manchot ( 847225 ) on Saturday April 08, 2006 @10:01PM (#15093272)
    Another cool thing about asynchronous processors is that you can see the effect of temperature on the processor's speed. Wikipedia [wikipedia.org] describes a demonstration in which hot coffee placed on the processor caused it to visibly slow down, while liquid nitrogen caused its speed to shoot up.
  • by Vexar ( 664860 ) on Saturday April 08, 2006 @10:02PM (#15093276) Homepage Journal
    For those of us with short-term memories, we can go back in time and read historical articles about the Transmeta Crusoe [slashdot.org] processor, which was supposed to be clockless. Of course if you go to their Crusoe Page [transmeta.com] today, their pretty diagram sure has a clock.

    What did I miss? I remember the hype, the early diagrams of how it was all supposed to weave through without the need for a clock. Would someone care to elaborate on the post-mortem of what was supposed to be the first clockless processor, 4 years ago?

  • Re:Horrible summary (Score:3, Interesting)

    by NovaX ( 37364 ) on Saturday April 08, 2006 @10:18PM (#15093325)
    As others pointed out, you've made mistakes.

    The most glaring is that you assume that synchronous processors can only have one clock - that's incorrect. While the clock tick is of fixed length (by design), the global clock (as seen by external parties) may run at a different speed than internal clocks.

    If a path of logic takes 5ns to complete, and its clock matches exactly, then you are perfectly optimized. You are hampered not by the clock, but by the transistors' switching speed. This path will have the same delay regardless of whether it is driven by a clock.

    You might be getting confused because you are thinking about pipelining, where the longest stage dictates the stage length. If everything is driven by one clock, you create waste because some partitions will finish sooner than others and are therefore idle. However, modern designs now employ skew-tolerant clocking [amazon.com]. By using multiple clocks, the issues created by clock skew can be entirely avoided. The walls between pipeline stages are torn down and the skewing delay negated.

    Your issue with the propagation delay of the clock is also not of great concern, in most cases. Synchronous chips can employ distributed clocks and islands of asynchronous logic, and the Pentium 4 actually has pipeline stages dedicated to propagating the clock. However, most processors are unlike the speed-demon design of the P4, and clock speed is limited by issues other than clock propagation. Currently, that limiting factor is power. In dynamic logic, frequency has a direct relationship to power consumption.
  • Re:VAX 8600 (Score:3, Interesting)

    by isdnip ( 49656 ) on Saturday April 08, 2006 @10:45PM (#15093398)
    He didn't say it was a microprocessor. Actually, it was a small mainframe, in terms of size, or a high-end "supermini". It simply used asynchronous design concepts, when even the other minicomputers and mainframes of the day were synchronously clocked.

    The VAX 8600 was produced by a team at DEC that had a heritage doing large computers (PDP-10, DECSYSTEM-20). It was competing, internally, with a different group with a "midrange" (VAX) heritage, who produced the VAX 8800 and some other machines. There was no love lost between the groups. They had very different design philosophies, and the 8600 crowd was rather amazed that the 8800 actually worked.

    Intel has rival groups too, of course. ISTM that the ones who produced the NetBurst machines (Pentium IV) had the upper hand for a while, but the Israelis who put out Pentium M proved the value of the older Pentium III base, and that evolved into the new Intel Core. Both are clocked, of course, but Pentium IV was designed to have the highest advertised clock speed, as if it mattered. It was one hot chip, much too literally. Async processors move even farther away from that, of course.
  • You are confused (Score:5, Interesting)

    by comingstorm ( 807999 ) on Saturday April 08, 2006 @10:57PM (#15093427) Homepage
    I think the confusing part is that, in the terminology of conventional, "synchronous" design, "asynchronous logic" is used to mean "the combinatorial logic in a single stage". What conventional, clock-based design typically does is break the logic up into stages with clocked latches in between, thus limiting the depth of each "asynchronous" logic stage.

    Unfortunately, self-clocked design (like the reported ARM uses) is also sometimes called "asynchronous" logic design; however, this is a completely different kind of thing than the "asynchronous" combinatorial logic used in clock-based design. Self-clocked design also does combinatorial logic in latched stages, but uses a self-timed asynchronous protocol to run the latches instead of a synchronous clock. Basically, the combinatorial logic figures out when it's finished, and tells both the next stage ("data's ready, latch it") and the input latch from the previous stage ("I'm done; gimme some more data").

    To close the loop, each stage can wait until there's new data ready at its inputs, and space to put the output data. Thus, in absence of some bottleneck, your chip will simply run as fast as it can.

    To overclock a self-timed design, you simply increase the voltage. No need to screw around with clock multipliers; as long as your oxide holds up, your traces don't migrate, and the chip doesn't melt...
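    The voltage knob can be illustrated with a toy alpha-power-law delay model (the constants below are made up for illustration, not any real process corner): gate delay falls as the supply rises above the transistor threshold, so a self-timed chip simply completes its handshakes sooner.

```python
# Toy alpha-power-law delay model: delay ~ V / (V - Vt)^alpha.
# VT and ALPHA are illustrative values only.
VT, ALPHA = 0.4, 1.3

def gate_delay(v_supply, k=1.0):
    # Normalized gate delay at supply voltage v_supply (volts).
    return k * v_supply / (v_supply - VT) ** ALPHA

# Raising the supply shortens every gate delay, so the self-timed
# pipeline speeds up by itself -- no clock multiplier to adjust.
for v in (0.9, 1.0, 1.1, 1.2):
    print(v, round(gate_delay(v), 3))
```

    A clocked part at the same voltages would not get any faster: its clock generator, not its gates, sets the pace.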

  • Re:Why is async good (Score:2, Interesting)

    by TooManyNames ( 711346 ) on Saturday April 08, 2006 @11:06PM (#15093454)
    Thanks for looking at this with a realistic perspective. There is a reason that the article said these chips would be used in deeply embedded or automotive situations. In these situations, low power consumption granted by an asynchronous design is great. Not so great, however, is the overall performance. Part of the reason for clocking something (for example synchronous busses) is to avoid the excessive need for handshaking algorithms. Extending the handshaking methodology to multiple pipeline stages seems somewhat self-defeating. How might one effectively design an asynchronous pipeline with the same overall performance of a synchronous pipeline? How can you handle register bypassing or interlocks in a general case without some synchronization happening (as the different paths among different stages will inevitably introduce different time delays) yet also without adding a handshake to every stage? I guess my point is that I wouldn't expect this to find wide acceptance in scenarios in which reasonably high performance (where pipelining is a key factor) is needed. Seems great for embedded applications, poor for games.
  • Re:Why is async good (Score:5, Interesting)

    by bigberk ( 547360 ) <bigberk@users.pc9.org> on Saturday April 08, 2006 @11:44PM (#15093556)
    > Security: Async designs give security against side channel power analysis attacks

    You're right about that. I research side-channel attacks on crypto hardware, and my first response to this was --- well, this would make EM analysis more complicated. For those not familiar with the general approach: in side-channel attacks you don't try to do anything as complicated as breaking the underlying math of the crypto. Instead you observe the hardware for emissions that can give clues as to the instructions being carried out. If your observations give you any info about what the chip is processing, you might learn parts of keys or gain a statistical advantage in other attacks. So if it's harder to observe the signals emitted (electromagnetically) from the chip, then attacking the hardware is harder.
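    A toy model of the statistics involved (every number here is invented; real attacks use measured traces and proper correlation): pretend each power sample leaks the Hamming weight of plaintext XOR key byte, plus noise, and score all 256 key guesses against the observations.

```python
import random

# Hypothetical leakage model: trace = HW(plaintext ^ key) + noise.
SECRET = 0x3C
random.seed(1)

def hamming_weight(x):
    return bin(x).count("1")

plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [hamming_weight(p ^ SECRET) + random.gauss(0, 1.0)
          for p in plaintexts]

def score(guess):
    # Crude covariance between predicted leakage and observed traces.
    predicted = [hamming_weight(p ^ guess) for p in plaintexts]
    mp = sum(predicted) / len(predicted)
    mt = sum(traces) / len(traces)
    return sum((a - mp) * (b - mt) for a, b in zip(predicted, traces))

# The guess whose predicted leakage best matches the traces wins.
best_guess = max(range(256), key=score)
print(hex(best_guess))
```

    An async chip blurs when each transition happens, smearing the samples in time, which makes the trace-alignment step of a real attack much harder -- which is the parent's point.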
  • by hereticmessiah ( 416132 ) on Sunday April 09, 2006 @12:46AM (#15093711) Homepage
    ARM was never like that. Unlike their parent company, Acorn, it was both a company of brilliant engineers and was always highly profitable. In later days, Acorn's share in ARM was all that kept it from going under.
  • by sconeu ( 64226 ) on Sunday April 09, 2006 @01:21AM (#15093772) Homepage Journal
    ISTR reading somewhere that Intel actually came up with an asynchronous 386, but it was shelved. Does anyone else recall this?
  • by tsotha ( 720379 ) on Sunday April 09, 2006 @03:13AM (#15094011)
    Almost 20 years ago I did some asynchronous stuff as a discrete-logic board designer. It was pretty seductive - we could save lots of power and use slower, cheaper parts without sacrificing the overall board speed.

    It didn't really work out. While we could easily get prototypes to work well over rated temperature ranges, getting the production version to work reliably was an order of magnitude more effort than the clocked version. As the complexity of the logic increases, the number of potential race conditions increases exponentially. So every nth board had to be scrapped in the early production runs.

    It turns out, for TTL and its successors the same manufacturer can produce two copies of the same part that are an order of magnitude different in speed. We would get situations where a signal would propagate through five gates faster than a single gate on another path, so if you missed a path during the design phase you were sure to see a failure eventually. Also, there were no commercial async design tools available at the time, so simulation was definitely "roll-your-own".

    Another problem we didn't even consider early on was the inability of the repair technicians to understand the circuit, so getting a board repaired required the assistance of an engineer.

    We would have been fine for onesey-twosey production, but for large commercial runs? The benefits just didn't outweigh the extra hassle.

    I'm curious to see if they have any more trouble than normal (for a CPU) when they ramp up to production volumes.

  • Re:You are confused (Score:3, Interesting)

    by ultranova ( 717540 ) on Sunday April 09, 2006 @03:32AM (#15094035)

    Basically, the combinatorial logic figures out when it's finished, and tells both the next stage ("data's ready, latch it") and the input latch from the previous stage ("I'm done; gimme some more data").

    That sounds a bit like a dataflow language [wikipedia.org]. Maybe you could make a program that automatically converts a program written in such a language into a chip design? Then we'd only need desktop chip manufacturing to make true open-sourced computing a reality...

    But no, such chips would be illegal, since they wouldn't necessarily have DRM.

  • Re:Imminent (Score:3, Interesting)

    by (negative video) ( 792072 ) <meNO@SPAMteco-xaco.com> on Sunday April 09, 2006 @04:01AM (#15094090)
    Most (all?) commodity motherboards are completely synchronous. In fact, even the buses running at different speeds are actually clocked at rational fractions of the One True System Clock. (Letting them run at different clocks would require extra latency for the synchronization stages, to keep metastability [wikipedia.org] from eating the data alive.)
  • Re:only talk (Score:4, Interesting)

    by pedantic bore ( 740196 ) on Sunday April 09, 2006 @07:20AM (#15094404)
    This isn't entirely accurate.

    Sun has clockless chips up and running (real silicon, not sims) and they have done some interesting things, but they don't have a complete system that's ready to ship. And there are other components out there that use the clockless philosophy to do certain things, but they're not CPUs in any sense. To give credit where credit is due, as the parent post points out, ARM beat Sun out the door with a clockless CPU that is a drop-in replacement (to some degree, anyway -- it's not clear how much) for an existing, established architecture. But that wasn't/isn't Sun's goal (although perhaps it should be...). They're pushing in new directions, not using this to reimplement current architectures.

  • by TheRaven64 ( 641858 ) on Sunday April 09, 2006 @09:51AM (#15094623) Journal
    ARM have been working on asynchronous designs for a long, long time. I recall reading about their plans for an asynchronous core in Acorn User magazine, back when there actually were Acorn users. This was back when the ARM6 was new and shiny, and the asynchronous part was expected to be released as the ARM8.
  • by the_real_bto ( 908101 ) on Sunday April 09, 2006 @10:06AM (#15094648)
    Take a look at this [transentric.com]

    "1997 - Intel develops an asynchronous, Pentium-compatible test chip that runs three times as fast, on half the power, as its synchronous equivalent. The device never makes it out of the lab."

    So why didn't Intel's chip make it out of the lab? "It didn't provide enough of an improvement to justify a shift to a radical technology," Tristram says. "An asynchronous chip in the lab might be years ahead of any synchronous design, but the design, testing and manufacturing systems that support conventional microprocessor production still have about a 20-year head start."
  • certainly not (Score:3, Interesting)

    by r00t ( 33219 ) on Sunday April 09, 2006 @11:05AM (#15094833) Journal
    It's NetBSD that requires gcc and a paged MMU.

    Linux is more portable. Linux runs on the original 68000. Linux was just ported to the Blackfin DSP. There seem to be about a dozen crappy little no-MMU processors that can run Linux.

    Linux requires a gcc-like compiler, but not necessarily gcc. IBM and Intel have both produced non-gcc compilers that are able to compile Linux.

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde

Working...