Hardware Technology

Philips, ARM Collaborate On Asynchronous CPU

Sean D. Solle writes "While not an actual off-the-shelf chip, Philips and ARM have announced a clockless ARM core using what they call 'Handshake Technology.'" Read on for more about just what that means; according to this article, the asynchronous ARM chip has yet to be developed, but the same Philips subsidiary has applied similar technology to other microprocessors.

Sean D. Solle continues "Back in the early 1990s there was a lot of excitement (well, Acorn users got excited) about Prof. Steve Furber's asynchronous ARM research project, 'Amulet'. The idea is to let the CPU's component blocks run at their own rate, synchronising with each other only when needed. As in a normal RISC processor, one instruction typically takes one cycle; but in a clockless ARM, that cycle can take a different amount of time for different classes of instructions.

For example, a MOV instruction could finish before (and hence consume less power than) an ADD, even though they both execute in a single cycle. As well as improving energy efficiency, running at an effectively random frequency reduces a chip's RFI emissions - handy if it's living in a cellphone or other wireless device."
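To make the summary's point concrete, here is a toy Python sketch of why per-instruction completion times matter; every latency and energy figure in it is invented purely for illustration and has nothing to do with the actual Philips/ARM core.

```python
# Toy model: an asynchronous core lets each instruction class finish as soon
# as its own logic settles, while a clocked core makes every instruction wait
# for the worst-case clock period. All numbers are invented for illustration.

LATENCY_NS = {"MOV": 0.4, "ADD": 0.7, "LDR": 1.2}   # hypothetical settle times
ENERGY_PJ  = {"MOV": 2.0, "ADD": 3.5, "LDR": 6.0}   # hypothetical energy per op

def run_async(trace):
    """Total (time in ns, energy in pJ) when each op takes only its own time."""
    return (sum(LATENCY_NS[op] for op in trace),
            sum(ENERGY_PJ[op] for op in trace))

def run_clocked(trace, period_ns=1.2, clock_pj=1.0):
    """Same trace on a clocked core: every op occupies a full worst-case cycle,
    and the clock tree itself costs a little energy every cycle."""
    return (period_ns * len(trace),
            sum(ENERGY_PJ[op] + clock_pj for op in trace))

if __name__ == "__main__":
    trace = ["MOV", "ADD", "MOV", "LDR", "MOV", "ADD"]
    print("async  :", run_async(trace))
    print("clocked:", run_clocked(trace))
```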

  • 1997 - "Intel develops an asynchronous, Pentium-compatible test chip that runs three times as fast, on half the power, as its synchronous equivalent. The device never makes it out of the lab."
    • by Anonymous Coward
      The device never makes it out of the lab.

      Duh, everybody knows Intel is all about the clock speed. How can they sell a clockless CPU? How could they claim their processor was better than AMD's without some silly numbers to use?
    • by h0tblack ( 575548 ) on Tuesday November 02, 2004 @04:47AM (#10698054)
      Read the story... there were ARM-based asynchronous chips in the lab (AMULET) a long time before '97.
      • Actually, the first asynchronous microprocessor (in VLSI) was developed in 1989: A.J. Martin, S.M. Burns, T.K. Lee, D. Borkovic, and P.J. Hazewindus, "The First Asynchronous Microprocessor: The Test Results," Computer Architecture News, 17(4), 95-110, June 1989 [CS-TR-89-06]; and A.J. Martin, S.M. Burns, T.K. Lee, D. Borkovic, and P.J. Hazewindus, "The Design of an Asynchronous Microprocessor," ARVLSI: Decennial Caltech Conference on VLSI, ed. C.L. Seitz, 351-373, MIT Press, 1989 [CS-TR-89-02].
    • by kf6auf ( 719514 ) on Tuesday November 02, 2004 @04:58AM (#10698086)
      So the question is WHY didn't it make it out of the lab? Did it cost too much to produce? That's the only real possibility I can think of - I don't think Intel's Marketing Division had absolute power over the company in 1997 to push the MHz agenda.
      • The dark side is more powerful than you can ever imagine.
      • "So the question is WHY didn't it make it out of the lab? Did it cost too much to produce? That's the only real possibility I can think of -"

        I read about this once ages ago. The reason it didn't get out the gate was that it would still have taken time to produce. By the time it was done, it wouldn't be 3x faster anymore, it'd actually be slower than whatever was out at the time.

        Take with a grain of salt, I'm not claiming to have a strong grasp of this particular topic.
      • by Anonymous Coward
        The biggest problem with asynchronous chips lies in the fact that all the design tools, fabrication facilities, and testing methodologies are geared toward synchronous processors. If Intel or any chip maker were to release an asynchronous processor on a large scale, it would require a MASSIVE overhaul of the industry. It's an investment that requires both time and money on a risky departure from decades of acquired knowledge in designing synchronous chips.
    • by jimicus ( 737525 ) on Tuesday November 02, 2004 @05:10AM (#10698138)
      No they weren't. From TFA:

      The AMULET1 microprocessor is the first large scale asynchronous circuit produced by the APT group. It is an implementation of the ARM processor architecture using the Micropipeline design style. Work was begun at the end of 1990 and the design despatched for fabrication in February 1993. The primary intent was to demonstrate that an asynchronous microprocessor can offer a reduction in electrical power consumption over a synchronous design in the same role.
    • The Amiga's Zorro expansion bus was asynchronous back in the 1980s.

      OK, it wasn't a processor, and one of the chips that drove it was clocked, but it goes to show you how clever the designers of that system were.
    • So how long did they study the PDP-6 to learn how to do it? :-}
  • by philj ( 13777 ) on Tuesday November 02, 2004 @04:45AM (#10698046)
    See here [man.ac.uk]. Developed by Steve Furber [wikipedia.org] and his team at the University of Manchester [manchester.ac.uk].
    • by Anonymous Coward
      Yes, indeed they did. The article doesn't mention any collaboration between the teams, which seems strange because:

      1) ARM like to license CPU core design IP, as mentioned in a later thread.

      2) One of the major upsides of asynchronous CPU design (said Prof. Furber on the Manc. Uni course) is that because the subcomponents of the CPU aren't nearly so tied to temperature, voltage and clock speed requirements (which directly affect flip-flop "set up" and "hold" time), the intellectual property invested in cre
  • by luvirini ( 753157 ) on Tuesday November 02, 2004 @04:45AM (#10698047)
    If we see the same thing applied to non-ARM architectures, many strange things are going to happen, as quite a lot in current computers is based on the assumption that things run at specific clock rates. Obviously things might get very interesting...
    • Usually with these kinds of asynchronous CPUs the communication with the outside world is made synchronous again. Just the inside of the processor is asynchronous. This is relatively easy, since you only have to make sure that the asynchronous path is travelled faster than a clock cycle.
      The big advantage is that not every flip-flop has to be active at every clock pulse, which saves a lot of energy. Also the chip doesn't turn into a giant clock transmitter.

      Jeroen
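As an illustration of the "synchronous at the edges, asynchronous inside" arrangement described above, here is a minimal Python sketch of a classic two-flip-flop synchroniser on that boundary; the irregular "ready" waveform standing in for the async core's output is invented for the example.

```python
# Toy model of a synchronous wrapper around an asynchronous core: the outside
# world only ever samples the core's outputs through a two-stage synchroniser
# clocked by the bus clock. Purely illustrative; not any real Philips/ARM design.

class TwoFlopSynchroniser:
    """Double flip-flop synchroniser: the async signal is captured into ff1 on
    each clock edge, and the bus reads ff2, which is one edge behind and
    therefore (almost always) stable."""
    def __init__(self):
        self.ff1 = 0
        self.ff2 = 0

    def clock_edge(self, async_level):
        self.ff2 = self.ff1        # shift the previously captured value out
        self.ff1 = async_level     # capture the (possibly changing) async input
        return self.ff2

def async_ready(t_ns):
    """Stand-in for the async core: a 'result ready' line that toggles at
    irregular, data-dependent times."""
    return int((t_ns % 7.3) < 3.1)

if __name__ == "__main__":
    sync = TwoFlopSynchroniser()
    period_ns = 2.0                # bus clock period
    for edge in range(10):
        t = edge * period_ns
        print(f"t={t:4.1f} ns  bus sees ready={sync.clock_edge(async_ready(t))}")
```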
    • Provided the input/output interfaces of the CPU aren't changed, the rest of the system won't care if the internal workings of the CPU are clocked, clockless, or performed by thousands of pixies using pen and paper. Clockless input/output lines would certainly make for some interesting design needs across the entire device, but after a quick read of the article I suspect the changes relate only to the inner workings of the chip.
  • way more elegant (Score:5, Informative)

    by fizze ( 610734 ) on Tuesday November 02, 2004 @04:47AM (#10698057)
    The very first drafts of microprocessors were clockless.
    It was only later that higher clock speeds, and hence brute force, made performance easy to achieve.
    The problems that could not be solved back then were the obvious synchronisation issues; setting up a common clock seemed the only way to resolve them.

    The idea behind clockless designs is less a "back-to-the-roots" move than a step to gain the advantages of such a design, which include, amongst others:

    Reduced Power Consumption
    Higher Operation Speed

    Moreover, highly sophisticated compilers could tune program code to match a given performance/power ratio.

    Yet I would not bet on clockless cores becoming the new mainstream - far from it. Clockless cores will most likely be aimed at embedded appliances and low- and ultra-low-power applications.
    • Yet I would not bet on clockless cores becoming the new mainstream - far from it.

      Mainly because Intel's marketing has depended on clock speed for the last 20 years. I wouldn't be at all surprised to see some of the technology used in future generations of mainstream processors - low power consumption is a selling point when your electricity and air con bills are somewhere up in the stratosphere, particularly if it can still achieve reasonable performance. I don't see it replacing x86 or x86-64, but I co
    • Re:way more elegant (Score:5, Interesting)

      by renoX ( 11677 ) on Tuesday November 02, 2004 @05:27AM (#10698218)
      Agreed that clockless cores have little chance of becoming mainstream, but they still have a better chance of being used now than before.

      Let me explain: previously, the "easy" way to reduce power consumption was to use a process that created smaller transistors, but smaller doesn't mean 'reduced power consumption' anymore...
      So clockless CPUs become more interesting now.
    • Yet I would not bet on clockless cores becoming the new mainstream - far from it. Clockless cores will most likely be aimed at embedded appliances and low- and ultra-low-power applications.

      I think that "embedded appliances" are even more "mainstream" than anything else, since there are far more embedded systems around than general-purpose PC workstations, servers, laptops, etc. combined.
    • Ivan Sutherland discussed this topic in his Turing Award lecture (he called it Micropipelines) [acm.org] in 1989, using clock transitions to trigger state changes. One problem with high clock rates is that clocks are now so fast that they may not propagate to the entire chip in a single cycle. While I'm not sure that a purely clockless architecture is at hand (since handshaking is not entirely free of cost), clocking could be used within regions on the chips (to reduce gate count and propagation distance) and clockle
  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Tuesday November 02, 2004 @04:56AM (#10698081) Journal
    A benefit of today's high-functionality embedded operating systems like Linux, Symbian, iTron, and Windows CE is that they implement preemptive task switching. At any time, the clock interrupt may fire and the operating system will then queue up the next thread into the CPU.

    Nowadays, the whole CPU is not powered at any one time. If an instruction does not access certain parts of the chip, they are dark. Now this does not hold for some predictive processors which may be processing not-yet-accessed instructions, but in general if an instruction is not using some part of the chip, that part of the chip does not require juice.

    Taking out the clock and relying on the chip parts to fire and return means that each application in the system must return to the OS at some point to allow the OS a chance to queue up the next thread. Without the clock interrupt, the OS is at the mercy of the program, back to the bad old days of cooperative multitasking.

    The clock is what tells the OS that it is time to give a time slice to another thread. If we say "OK, well we'll just stick a clock in there to fire an interrupt every x microseconds," then what have we accomplished? We are back at square one with a CPU controlled by a clock. No gain.

    This kind of system would work in a dedicated embedded system which did not require a complex multitasking operating system. Industrial solutions for factories, car parts, HVACs, and other things that need reliability but don't really do that much feature-wise seem to be prime candidates for this technology. "Smart" devices? Not so much.
    • by fizze ( 610734 ) on Tuesday November 02, 2004 @05:06AM (#10698117)
      Preemption is a "dirty hack" to achieve nice behaviour in a timely manner.
      For embedded systems where interrupt latency is the primary concern, other approaches have to be found. Also, if the CPU checks after every x instructions whether there is an interrupt to process, you still get a bound on the timeliness of the behaviour.
      I am no embedded / safety critical developer, but I know that the fastest response times on interrupts and worst-case response times vary greatly depending solely on the (RT)OS used.
    • by Anonymous Coward on Tuesday November 02, 2004 @05:09AM (#10698135)
      I think you are getting clock confused with ticker interrupt. A CPU clock is typically measured in nanoseconds. A ticker interrupt is typically measured in milliseconds. A clockless core will still need to field interrupts (for I/O) and very well can still field a ticker interrupt. -cdh
    • by CaptainAlbert ( 162776 ) on Tuesday November 02, 2004 @05:14AM (#10698153) Homepage
      You appear to be confusing the CPU's clock with a real-time clock interrupt. They are fundamentally not the same thing.

      The clock being dispensed with is the one that causes the registers inside the CPU to latch the new values that have been computed for them. At 3GHz, this happens every 333ps. The reason this clock exists is basically because it makes everything in a digital system much, much easier to think about, design, simulate, manufacture, test and re-use. But, it's not an absolute requirement that it be present, if you're clever. (Too clever by half, in fact.)

      The other clock, which you were referring to, fires off an interrupt with a period on the order of milliseconds, to facilitate time-slicing. If your application requires such a feature, you can have one, regardless of whether your CPU is synchronous or asynchronous internally. It's a completely separate issue.
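To put a rough number on the separation the parent describes, here is a Python sketch that runs a stream of instructions with irregular, data-dependent durations past a fixed millisecond-scale ticker; all the timing figures are invented for illustration.

```python
# Toy illustration: pre-emption comes from an external timer interrupt measured
# in milliseconds, not from the CPU's internal clock (nanoseconds, or none at
# all in an asynchronous core). All numbers are invented for illustration.
import random

TIMER_PERIOD_US = 1000.0          # ticker interrupt every 1 ms
random.seed(0)

def async_cpu(n_instructions):
    """Yield completion times (in microseconds); each instruction takes a
    slightly different, data-dependent amount of time."""
    t = 0.0
    for _ in range(n_instructions):
        t += random.uniform(0.0008, 0.0025)   # i.e. 0.8-2.5 ns per instruction
        yield t

if __name__ == "__main__":
    next_tick, ticks, t = TIMER_PERIOD_US, 0, 0.0
    for t in async_cpu(2_000_000):
        if t >= next_tick:        # external timer fires: the OS may preempt here
            ticks += 1
            next_tick += TIMER_PERIOD_US
    print(f"2,000,000 instructions took {t / 1000:.2f} ms; "
          f"the scheduler got {ticks} chances to run, with no CPU clock involved")
```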
    • Not relevant (Score:5, Informative)

      by r6144 ( 544027 ) <r6k&sohu,com> on Tuesday November 02, 2004 @05:15AM (#10698160) Homepage Journal
      As far as I know, Linux and many other operating systems already use an external chip (the 8254 on the PC) for most timing tasks, including preemptive multitasking. For ultra-high-precision timing, the CPU clock (the time stamp counter on an IA32 CPU) is used, but it is not all that essential. Last I heard, since CPU frequencies can be changed by power management functions on some P4s, the counter is a bit tricky to use correctly for timing, so it is not used when not absolutely needed.

      As for the power problem, all parts of the CPU are powered, but gates that aren't switching consume less power (mostly leakage, which seems to be quite significant now). In synchronous circuits, at least the gates connected directly to the clock signal switch all the time, while in asynchronous circuits unused parts of the CPU can avoid switching altogether, so some power may be saved, but I don't know how much.

      • Re:Not relevant (Score:3, Informative)

        by Scarblac ( 122480 )

        You didn't understand what this is about. It's not about timing.

        You talk about "CPU frequencies". What is that? That's the frequency of the CPU clock signal. It runs everything inside the CPU - at every 'tick' of the clock, instructions move through the CPU, registers are updated, etc. This is about CPUs that don't use a clock signal at all; the different things happening inside aren't synchronous. These CPUs don't have a frequency.

        (Probably wrong also, I don't have the time to express myself more clearly - just wan

        • I mean that current applications on current synchronous chips use external clock chips for the most part, but the CPU's own 3GHz clock is also used occasionally. Since it is not used all that often (definitely not needed for preemptive multitasking), applications that really need it can still use a high-frequency external clock chip instead when they are ported to these async CPUs.
          • I seem to recall ntp docs pointing out that the software clock (i.e. increment the time counter by 0.0x seconds every tick) is orders of magnitude more accurate than the cheapo motherboard RTC. IIRC, the RTC loses several seconds per year.
    • We are back at square one with a CPU controlled by a clock. No gain.

      Except said "clock" is a million times slower if we're talking millisecond granularity (HZ in the kernel is, what, 1024 now?), and a lot of asynchronous processing can happen between task switches.
    • We're talking about two different types of clocks:

      • a timing source needed to preempt a long-running task
      • the heart-beat that dictates when the CPU is going to do the next instruction.

      These two are completely different things. The former can have a pretty low resolution -- and it is needed for other tasks as well. Any non-degenerate processor will need some kind of timing source, but there is no reason why it would be connected to the number of instructions executed.

      In a multitasking operating sy

  • Quite impressive... (Score:3, Interesting)

    by Goalie_Ca ( 584234 ) on Tuesday November 02, 2004 @05:07AM (#10698122)
    The complexity of such a core must be astounding. For all you non-EEs out there, a chip is full of little memory cells called flip-flops. At the end of each circuit rests a flip-flop; normally the rising edge of the clock stores the results of that circuit there, so it can pass that data on and start new work without losing it. Everything is synchronized to the clock. This is definitely over-simplified, but that's essentially why a circuit has a clock.

    To eliminate clocks you would need new circuitry, such as arbiters and some sort of completion logic which could be used to trigger a flip-flop. To break a Slashdot law, I haven't done any reading on modern techniques, so would someone enlighten me on some design issues involving simple tasks such as accessing a register file, or making a memory read? Surely a bus would still maintain a clock.
    • by Anonymous Coward
      For all you non-EEs out there, a chip is full of little memory cells called flip-flops.

      Hey, go Philips! Go ARM! I'd love to get John Kerry removed from my chips...

    • by Anonymous Coward
      When I was at university we studied standard ways to overcome these problems down to the gate and transistor level. The impression I was given is that it's not that hard. Some of the standard ways include doing a calculation twice and waiting for the results to be the same. Of course there are no tools that I am aware of to do it in an automated fashion, unlike synchronous design, where the tools are extensive, from VHDL to Handel-C implementations. Even dynamic logic can be synthesised automatically now. This means its
    • The team at Manchester have also developed [man.ac.uk] an async bus to link synchronous IP blocks with different timing constraints together on a single chip more easily. If you want to know more about this you can attend CS3212 Asynchronous System Design [man.ac.uk] at Manchester University. I did last year, and it was pretty hard :)
    • by rahard ( 624274 )

      To eliminate clocks you would need new circuitry, such as arbiters and some sort of completion logic which could be used to trigger a flip-flop.... enlighten me on some design issues involving simple tasks such as accessing a register file, or making a memory read.

      If you remember your digital design, there's the asynchronous counter. Basically, it involves handshaking, just like handshaking at the protocol level but lower down. Yes, there are arbiters, the Muller C-element (rendezvous), and other nifty components.
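For anyone who hasn't met it, here is a minimal Python model of the Muller C-element (rendezvous) mentioned in the comment above; it is only meant to show the behaviour, not any particular circuit implementation.

```python
# Muller C-element: the output copies the inputs only when they agree, and
# holds its previous value while they disagree. It is the basic "rendezvous"
# building block of many asynchronous handshake circuits.

class CElement:
    def __init__(self, initial=0):
        self.out = initial

    def update(self, a, b):
        if a == b:           # inputs agree: output follows them
            self.out = a
        return self.out      # inputs disagree: output holds its old value

if __name__ == "__main__":
    c = CElement()
    for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 1)]:
        print(f"a={a} b={b} -> c={c.update(a, b)}")
```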

    • Since you expressed a particular interest in register files, here is a recent publication:

      David Fang and Rajit Manohar. Non-Uniform Access Asynchronous Register Files. Proceedings of the 10th International Symposium on Asynchronous Circuits and Systems, April 2004.
      http://vlsi.cornell.edu/~rajit/ps/reg.pdf

      The fastest/lowest energy asynchronous circuits do not use clocks for anything. Moreover, very few arbiters are used in practice. The "completion logic" of course is always the hard part, but about 1
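One common flavour of the "completion logic" mentioned above is dual-rail encoding, sketched below in Python; the encoding and the four-bit example are generic textbook material, not the scheme of any particular design discussed in this thread.

```python
# Dual-rail completion detection: each data bit travels on two wires, where
# (0, 0) means "no data yet", (1, 0) encodes logical 0 and (0, 1) encodes
# logical 1. A whole word is complete once every bit pair has left (0, 0),
# so the receiver can tell the result is ready without any clock.

def encode_dual_rail(value, width):
    """Return a list of (rail0, rail1) pairs for an unsigned integer."""
    return [(0, 1) if (value >> i) & 1 else (1, 0) for i in range(width)]

def is_complete(word):
    """Completion detection: every bit pair must have become valid."""
    return all(pair != (0, 0) for pair in word)

if __name__ == "__main__":
    word = [(0, 0)] * 4                      # bits arrive one at a time
    for i, pair in enumerate(encode_dual_rail(0b1010, 4)):
        word[i] = pair
        print(f"bit {i} arrived, word complete: {is_complete(word)}")
```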
    • AFAIK, there are basically two ways to implement clockless designs. The simplest, in principle, is to keep track of min/max propagation delays and make sure that everything is guaranteed to settle out without any race hazards. It is a huge bookkeeping job for non-trivial designs, so this approach doesn't scale very well. That's why people started putting flip-flops and clocks in, to keep the combinatorial stages separated.

      It is also possible, though, to make fundamental mode logic. If you feed outputs b

  • To me it feels like Philips is turning into a major R&D company that could one day topple IBM. Looking at their recent expansion in Bangalore, no one would disagree. Maybe a good thing to keep the fire burning under IBM.
  • ARM Business Model (Score:3, Interesting)

    by joelethan ( 782993 ) on Tuesday November 02, 2004 @05:21AM (#10698194) Journal
    I'm interested because ARM's business model usually involves licensing their chip designs. ARM CPUs are widespread in cell phones etc. They have their own market and application area away from Wintel, PowerPC, etc.

    Also, anything that might boost my pitiful ARM shares value is most welcome! Why?... Why did I believe the hype?

    /joelethan

  • by chris_sawtell ( 10326 ) on Tuesday November 02, 2004 @05:24AM (#10698203) Journal
    I hope they don't try to patent this.
    Refer to 1944 for prior art.
    • They may have a legitimate patent if they have designed some gee-whiz new way to build asynchronous circuits that isn't hideously complicated. I did some circuit design while studying electrical engineering at uni, and synchronous is vastly easier to work with, even at the small scales we were working at, with only a dozen components.

    • I hope they don't try to patent this.

      You know, it's not a laughing matter.
      Some of the ideas in this area have been patented.
      Try Google with "micropipelines patent"; you'll find plenty of them.

    • I hope they don't try to patent this.

      I can guarantee that they have.

  • Well, a very cool (?) implication of this technology would be that chip performance would depend on the, well, performance of the die used and the environment. So increasing the voltage and decreasing the die temperature would make the chip faster automatically...
    (I can see the "the xxx chip would have won against the yyy chip if they had used a bigger HSF" flames coming...)
    • Then the new benchmark would need its process to run exactly the same way each time to give a good idea from one run; otherwise, to benchmark, you would need to run it 5+ times and average the score.
    • "Well, a very cool (?) implication of this technology would be that chip performance would be depending of the, well, performance of the die used and the enviroment. So increasing voltage and decreasing die-temperature would make the chip faster automatically..."

      AFAIK, that is already the case with all ICs (including analog) that I have ever come across... nothing new here. A fixed clock just limits operating speeds to 'known reliable' over specified voltage/temperature ranges.

      And for including voltage/

  • no way - (Score:1, Funny)

    by Anonymous Coward
    ARM? Handshaking? They're having a laugh...
  • by gtoomey ( 528943 ) on Tuesday November 02, 2004 @06:24AM (#10698409)
    These asynchronous computers are implementations of data flow computers [acm.org].
    The problem is that the first implementations were very slow.
  • I had an idea once (Score:5, Informative)

    by ajs318 ( 655362 ) <sd_resp2@@@earthshod...co...uk> on Tuesday November 02, 2004 @07:07AM (#10698522)
    The reason why a clock is commonly used in microprocessor circuits is to try to synchronise everything, because different logic elements take a different amount of time for the outputs to reach a stable state after the inputs change. This is known as "propagation delay" and is what ultimately limits the speed of a processor. With CMOS, you can actually reduce the propagation delay a little by increasing the supply voltage, but then your processor will be dissipating more power. {CMOS logic gates dissipate the most power when they are actually changing state, and almost no power at all while stable, whether they are sitting at 1 or 0. This is in contrast to TTL, which usually dissipates more power in a 0 state than in a 1 state, but there are some oddball devices that are the other way around}.

    The clock is run at a speed that allows for the slowest propagation, with data being transferred in or out of the processor only on the rising or falling edges. This allows time for everything to get stable. It's also horrendously inefficient because propagation delays are actually variable, not fixed.

    If you wire an odd number of NOT gates in series, you end up with an oscillator whose period is twice the sum of the propagation delays of all the gates. If you replace one of the NOT gates with a NAND or NOR gate, then you can stop or start the oscillator at will. Furthermore, by extra-cunning use of NAND/NOR and EOR gates, you can lengthen or shorten the delay in steps of a few gates. Obviously at least one of the gates should have a Schmitt trigger input to keep the edges nice and sharp; but that's just details.

    My idea was to scatter a bunch of NOT gates throughout the core of a processor, so as to get a propagation delay through the chain that is just longer than the slowest bit of logic. Any thermal effects that slow down or speed up the propagation will affect these gates as much as the processing logic. Now you use these NOT gates as the clock oscillator. If you want to try being clever, you could even include the ability to shorten the delay if you were not using certain "slow" sections such as the adder. This information would be available on an instruction-by-instruction basis, from the order field of the instruction word. The net result of all this fancy gatey trickery is that if the processor slows down, the clock slows down with it. It never gets too fast for the rest of the processor to keep up with. Most I/O operations can be buffered, using latches as a sort of electronic Oldham coupling; one end presents the data as it comes, the other takes it when it's ready to deal with it, and as long as the misalignment is not too great, it will work. For seriously time-critical I/O operations that can't be buffered, you can just stop the clock momentarily.

    The longer I think about this, the deeper I regret abandoning it.
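A rough numerical sketch of the ring-oscillator idea above, in Python; the inverter count, gate delays, and critical-path figure are all invented, and the only point is that the matched "clock" slows down by the same factor as the logic it shadows.

```python
# Ring oscillator built from an odd number of inverters: its period is twice
# the sum of the gate delays, so anything (heat, low voltage) that slows the
# gates down slows the generated clock down by the same factor. All numbers
# below are invented for illustration.

def ring_period_ns(gate_delays_ns):
    assert len(gate_delays_ns) % 2 == 1, "a ring oscillator needs an odd gate count"
    return 2.0 * sum(gate_delays_ns)

if __name__ == "__main__":
    nominal = [0.02] * 25                       # 25 inverters at 20 ps each
    critical_path_ns = 0.45                     # hypothetical slowest logic block

    for label, slowdown in [("nominal", 1.0), ("hot die", 1.3)]:
        delays = [d * slowdown for d in nominal]   # gates and logic slow together
        period = ring_period_ns(delays)
        path = critical_path_ns * slowdown
        print(f"{label:8s}: clock period {period:.3f} ns, "
              f"critical path {path:.3f} ns, margin x{period / path:.2f}")
```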
    • by Anonymous Coward
      two words: process variation.
      • by ajs318 ( 655362 )
        That's another reason to scatter the delaying gates throughout the core, and use enough of them. You have to hope that you don't get too many instances of a logic element and one of its associated delaying gates falling on the opposite sides of a process variation boundary. Especially where the effects favour faster propagation in the delaying gate. So, my intention was to aim for the clock delay being slightly but definitely longer than, and not exactly equal to, the logic delay. It would still respond
    • by chrysrobyn ( 106763 ) on Tuesday November 02, 2004 @08:26AM (#10698778)

      My idea was to scatter a bunch of NOT gates throughout the core of a processor, so as to get a propagation delay through the chain that is just longer than the slowest bit of logic.

      I assume that you hope to use your self-timed logic (as it's known in the industry) to avoid all the problems associated with clocked logic and provide an easy-to-use asynchronous solution. Please do not forget manufacturing tolerances, and that you have to be 99.99999% certain your self-timed delay is slower than the slowest asynchronous path. This means that you have to qualify your entire logic library with a specific technology, then guardband it to make sure that when manufacturing shifts for reasons you cannot explain, your chip still works. For this reason, in my experience, self-timed logic has been slower than clocked logic for nominal cases and much slower in fast cases (in special cases, better than breaking even in slow process conditions).

      Self-timed logic of the kind you describe would likely still end up with latches to capture the result / launch into the next self-timed logic block. In this case, you're still paying the latch cycle time penalty for clocking your pipeline. You're still burning the power associated with the clock tree (although you are gating your clocks to only the active logic, known as "clock gating", an accepted practice), and you're additionally burning the power for each oscillator, which I suggest would likely be more than the local clock buffers in a traditional centrally PLL clocked chip.

      An ideal asynchronous chip would be able to not use latches to launch / capture and still be able to keep multiple instructions in flight -- using race conditions for good and not evil. This would involve a great deal of work beyond simply using inverters and schmitt triggers. This is a larger architecture question requiring a team of PhDs and people with equivalent professional experience.

    • by TonyJohn ( 69266 )
      You stated that the clock period (and therefore the length of the ring oscillator) should be about the same length as the critical path through the design. This is likely to be significantly less than 50 gates, and therefore your oscillator will only have 25 inverters. In a design with a million gates or more, this is not really enough to monitor the process and temperature variation across the die (which is surprisingly significant). If you could get enough gates into the ring (use NAND gates?), then th
  • Way Back When (Score:5, Interesting)

    by opos ( 681974 ) on Tuesday November 02, 2004 @07:22AM (#10698567) Homepage
    A long long time ago (the 1970s) Charlie Molnar, designer of the Linc tape (the Linc computer was an NIH-funded (late 1960s) minicomputer that evolved into the PDP-8 and pushed DEC into the minicomputer business), explored asynchronous computing. Along the way they discovered synchronizer failure - i.e. the inability to reliably synchronize asynchronous subsystems - see Chaney, T.J. and Molnar, C.E. 1973. Anomalous behavior of synchronizer and arbiter circuits. IEEE Trans. Comp., pages 421-422. The bottom line is that it is physically impossible to guarantee that the data setup requirements of a flip-flop (the minimum time the data must be asserted before it can be reliably clocked into the flip-flop) can be met when the clock is asserted by one async component and the data are asserted by another async component. To my knowledge, this fundamental limitation has never been overcome.
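The way this limitation is handled in practice is to make failure exponentially unlikely rather than impossible, usually summarised by a mean-time-between-failures estimate; the sketch below uses the textbook synchroniser MTBF formula with parameter values that are purely hypothetical.

```python
# Synchroniser MTBF = exp(t_resolve / tau) / (t_window * f_clock * f_data):
# metastability can never be ruled out, but each extra bit of settling time
# multiplies the expected time between failures by e^(dt/tau). The parameter
# values below are invented for illustration.
import math

def synchroniser_mtbf_s(t_resolve_s, tau_s, t_window_s, f_clock_hz, f_data_hz):
    return math.exp(t_resolve_s / tau_s) / (t_window_s * f_clock_hz * f_data_hz)

if __name__ == "__main__":
    for t_resolve_ns in (0.5, 1.0, 2.0):
        mtbf = synchroniser_mtbf_s(
            t_resolve_s=t_resolve_ns * 1e-9,
            tau_s=50e-12,         # hypothetical regeneration time constant
            t_window_s=100e-12,   # hypothetical metastability window
            f_clock_hz=200e6,
            f_data_hz=10e6,
        )
        print(f"{t_resolve_ns} ns of settling time -> MTBF about {mtbf:.2e} s")
```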
    • Re:Way Back When (Score:3, Interesting)

      by BarryNorton ( 778694 )
      A good review, as well as the state of the art, afaik, in showing how much we can formally say about what can be achieved practically is Ian Mitchell's MSc thesis (1996, British Columbia) 'Proving Newtonian Arbiters Correct, Almost Surely' (which is an answer to Mendler and Stroup's 'Newtonian Arbiters Cannot Be Proven Correct', paper versions of both being available from the proceedings of Designing Correct Circuits, in 1992 and 1996)
    • Not sure if that applies, but isn't the whole internet essentially a very large asynchronous switched circuit? Data transfer is still reliable given the right protocols. Even if it is outside reasonable thought to implement a multiprocessor-interconnect over, say, TCP/IP because of latency and bandwidth issues, I think it's not fundamentally impossible to do. I would see SETI@home and United Devices as a kind of asynchronous processing system with very different timings. Bus throughput may be measured in ki
  • Interesting... (Score:5, Interesting)

    by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Tuesday November 02, 2004 @07:30AM (#10698598) Homepage
    It looks like Philips (through their tame spin-off Handshake Solutions) are letting the world see Tangram again (or something very like it.) Back in around 1994/1995 the Amulet team (already mentioned accurately by others) were looking into using the Tangram language to develop their asynchronous microprocessor technology - it was a fairly neat solution that did most of the things we wanted, though there were a few things it was crap at at the time - but then Philips decided to cut us off. It would be entirely fair to say that this was very annoying! Now it looks like they're letting the cat get its whiskers out of the bag again.

    FWIW, ARM have probably known (at least informally and at a level not much deeper than your average slashdot article) a large fraction of what Philips have been up to in this area for at least a decade.
  • Chip manufacturers have been so eager to take the focus off clock speed (AMD and Intel using new "model numbers") that this sort of thing adds that much more logic to their reasoning if they start to embrace the approach... things like cache size and performance benchmarks may eventually be significantly more important, even from a marketing perspective, than how many GHz you can cram into a chip without having it melt through the socket it's embedded in...
  • The WIZ Processor (Score:4, Interesting)

    by MarcoPon ( 689115 ) on Tuesday November 02, 2004 @08:08AM (#10698715) Homepage
    Take a look at The WIZ Processor [bush.org], by Steve Bush.
    It's a drastic departure from common CPUs. Definitely interesting.

    Bye!

  • by Anonymous Coward
    This was the purpose of the /DTACK pin. It was used to acknowledge a transfer when operating asynchronously, or you could just ground it and run things synchronously off the CPU's own clock. So how is this new again?



  • With chips that don't need a clock, there's no room for obsessive tweaking and ridiculous liquid nitrogen cooling systems.

    What will all the overclockers do with their time now?

    Congratulations Science, you've ruined another perfectly good hobby.

    • Well, I'm not sure. Maybe "overclocking" will just become "overvoltageing", i.e. increasing the core voltage in order to make the chip faster (and since higher voltage inevitably means more heat, you still can use your ridiculous liquid nitrogen cooling systems).
    • Allow me to change the other replier's "maybe overvoltage" comment to "absolutely, all it means is that you need to increase the voltage."

      More voltage => more heat => same "overclocking" game.

      Some numbers (sorry, old, from the '90s) for an asynchronous MIPS R3000:

          Vdd (V)   MIPS    Power (W)
          1.00        9.66     0.021
          1.51       66        0.29
          3.08      165        3.4
          3.30      177        4.2
          3.51      185        5.1
          4.95      233.6     13.7

      source: http://resolver.library.caltech.edu/caltechCSTR:2001.012
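Dividing the quoted power by the instruction rate gives the energy per instruction at each supply voltage, which captures the "more voltage => more heat" trade-off in one number; the short Python snippet below just does that arithmetic on the table above.

```python
# Energy per instruction implied by the figures quoted above:
# energy/op = power / (MIPS * 1e6). The data come from the comment; the
# calculation is plain arithmetic.
data = [   # (Vdd in volts, MIPS, power in watts)
    (1.00, 9.66, 0.021),
    (1.51, 66, 0.29),
    (3.08, 165, 3.4),
    (3.30, 177, 4.2),
    (3.51, 185, 5.1),
    (4.95, 233.6, 13.7),
]

for vdd, mips, watts in data:
    nj_per_op = watts / (mips * 1e6) * 1e9
    print(f"Vdd={vdd:4.2f} V: {mips:6.1f} MIPS, {nj_per_op:5.2f} nJ per instruction")
```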
  • by mdxi ( 3387 ) on Tuesday November 02, 2004 @10:21AM (#10699462) Homepage
    In 2001 Sun presented a paper [sun.com] on an async processor design called FLEETzero/FastSHIP [sun.com]. According to the patents list on this page [sun.com], they're still doing work on it (see also here [sun.com].)
  • NO NO NO!!! (Score:3, Funny)

    by kompiluj ( 677438 ) on Tuesday November 02, 2004 @03:07PM (#10702296)
    Give me back my gigahertz!!!
    I want all my precioussss... gigahertz!!!
  • When I was a student, I did my internship at Philips. They were working on this back then. In 1990. "Almost ready". "expect results on the market in a year or two". ... Yeah right.
