Announcements

Boosting Battery Life For RISC Processors 113

prostoalex writes "National Semiconductor and ARM Holdings will jointly develop a power management solution for RISC chips that they estimate will improve battery life by 25-400%. The target date for the first sample product is Q2 2003." My old Tadpole laptop sure could have used this. I counted myself lucky when I got a whole 45 minutes out of a battery.
This discussion has been archived. No new comments can be posted.

  • by gTsiros ( 205624 ) on Tuesday November 12, 2002 @08:43AM (#4650042)
    I once had a link to research done on CPUs designed from the ground up to be VERY low power. Consider this: they saved power at the *gate* level!
    • by e8johan ( 605347 ) on Tuesday November 12, 2002 @09:07AM (#4650114) Homepage Journal
      That is one method of doing it (turning off clock trees to shut down a set of gates). Another way is to adjust the supply voltage and clock frequency of the CPU core. As ARM already utilizes clock gating, the voltage/frequency technique is a very viable option for even more efficient CPUs. I'm usually not a big fan of Intel, but look at their XScale and the measures they've taken to conserve energy. I have to say that I'm impressed!
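      The voltage/frequency point follows from the first-order CMOS dynamic power model, P ≈ C·V²·f: since voltage enters squared, scaling voltage and frequency down together saves far more than frequency scaling alone. A minimal sketch (all numbers invented for illustration, not ARM or XScale figures):

```python
def dynamic_power(capacitance, voltage, frequency):
    """First-order CMOS dynamic power estimate: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Hypothetical core at full speed vs. scaled to half voltage and half clock.
full = dynamic_power(1e-9, 1.8, 400e6)   # 1.8 V, 400 MHz
slow = dynamic_power(1e-9, 0.9, 200e6)   # 0.9 V, 200 MHz

print(f"power ratio: {slow / full:.3f}")  # -> 0.125 (an 8x reduction)
```

Halving the clock alone would only halve the power; the quadratic voltage term is where the large savings come from.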
      • You're right not to be a big fan of Intel, as XScale, afaik, is simply another iteration of StrongARM, the rights to which Intel acquired from DEC as part of the settlement of a lawsuit (DEC won - Intel got rights to all of DEC Semiconductor's IC designs!). Digital did the work of designing StrongARM for low power.

        There's a Digital Tech Journal article on the StrongARM, archive copy at:

        [compaq.com]
        http://www.research.compaq.com/wrl/DECarchives/DTJ/DTJP05/DTJP05HM.HTM

    • No, actually, the energy to turn "on" a gate is returned to the "power supply" when said gate turns "off" :) Very l33t I might say...
  • 25-400%? why not just say 0 to aleph-0%...
    • Probably because they are talking about the power savings under different conditions. Like it's a 400% saving when the CPU is idle, and 25% while the CPU is working full tilt.
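      That reading can be made concrete with a toy battery-life model: the overall improvement depends on how much time the device spends idle, where the claimed savings are biggest. All numbers below are invented for illustration:

```python
def battery_life(capacity_wh, p_active, p_idle, duty):
    """Hours of battery life for a device active a `duty` fraction of the time."""
    avg_power = duty * p_active + (1 - duty) * p_idle
    return capacity_wh / avg_power

# Hypothetical device, 50% active: modest saving at full tilt, big saving idle.
old = battery_life(10, p_active=2.0, p_idle=1.0, duty=0.5)
new = battery_life(10, p_active=1.6, p_idle=0.2, duty=0.5)

print(f"battery life improvement: {new / old - 1:.0%}")
```

Shift the duty cycle towards idle and the improvement climbs towards the idle-state figure; run flat out and it falls towards the active-state one, which is one way a vendor arrives at a 25-400% range.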

      • You have to take the unknown value of x hours of battery consumption and apply it to the estimated average of the inverse expected increase in n.

        x = 1/n(25%/400% - y) where y = battery life now (say 4 hours).

        x = 1/n(16% - 4) [.16 - 4 = -3.84)

        x = 1/n(-3.84)

        1/3.84 = n .26ths of an hour increase in average expected battery life, or about 15 minutes. This is how you keep your job as an engineer... :)

        [this is a joke. this is only a joke. these numbers may be interpreted by completing the square in a quadratic equation]
  • .. remember when Palm was holding out from introducing colour devices because of worries about battery life? (or so they claim).

    Combine this with fuel-cell power packs, which are now approved [slashdot.org] by the DoT and are already in use on some airlines (BA [electric-fuel.com]), and this means....

    Pitching my PDA against the onboard computer in an Othello death match! YaY!
    • Combine this with fuel-cell power packs, which are now approved by the DoT and are already in use on some airlines (BA), and this means....

      You're confusing state-of-the-art hydrogen fuel cells with plain old zinc-air batteries.

      Those itsy-bitsy-run-for-days-on-alcohol fuel cells aren't available commercially yet. The Electric Fuel product is not what everyone else thinks of as a fuel cell, despite the marketing hype; it's a zinc-air battery, a technology that's been around for many years. Many hearing aids use them. You can buy their batteries in many electronic stores, together with adapters that connect them to cellphones and PDAs; I have one for use with my Treo.

      Once you open the package, the battery runs for a limited time. You can stick it back in the package to "suspend it" for a while, but it cannot be recharged, and the capacity isn't much higher than most conventional batteries. When it's dead, you throw it out, like an alkaline battery.

      Compare this with a micro fuel cell that uses hydrogen extracted from alcohol... they're expected to run many hours, and when they die, you "recharge" them by feeding them more methanol.

      • Thanks for the clarification :) I was looking for articles about the use of fuel cells and I guess I did not check that article properly before linking to it...

        The fuel cells for laptops *are* the real deal, though, once they come out that is.
  • Great move! (Score:5, Insightful)

    by e8johan ( 605347 ) on Tuesday November 12, 2002 @08:51AM (#4650058) Homepage Journal

    When developing portable devices, the most limiting factor today is not processing resources, memory or anything of the sort. It is simply the power source.

    Batteries today are either too weak or too heavy. How often does one have to choose between a slim-line battery and ultra-long life?

    There have been many suggestions for competing technologies such as fuel cells, harvesting of motion energy and solar cells, to mention a few. But still, they have proven to be too expensive, too large, or to have some other problem (such as not being ready for production use yet). Hopefully one of these, or some other, portable power source will make it possible to carry real computing power without having to carry a heavy battery pack.

    The solution today is to reduce power usage. This can be done by shutting down parts of the clock trees in the CPU, by using Intel's SpeedStep (i.e. two working speeds), or Transmeta's variable voltage and frequency technology, LongRun. As the article lacks technical details we can only guess at the techniques behind the PowerWise solution. Also, the quoted 25-75% efficiency gain is most probably measured under special conditions.

    But, in order to avoid sounding too negative, it seems like the industry has realized the problem and is working on a solution. I feel that most of today's solutions (power saving) are just a cure for the symptoms (bad battery time), not for the cause (bad battery technology).

    • Re:Great move! (Score:4, Insightful)

      by clickety6 ( 141178 ) on Tuesday November 12, 2002 @09:29AM (#4650207)
      Perhaps another approach would be to reduce the PC's power requirements by writing software that is less bloated, more efficient, and geared towards a portable solution. There really should be no need for my laptop to have a 1 GHz chip just to run some word processing and spreadsheet software. Nor should the computer need 250 megs of memory just to start up and run some windows. We should be using nutcrackers to crack our nuts, not a pile driver!

      • Re:Great move! (Score:4, Interesting)

        by e8johan ( 605347 ) on Tuesday November 12, 2002 @09:34AM (#4650223) Homepage Journal
        Very true. Look at this [microsoft.com] for a laugh. A minimal installation will take approx. 120MB of disk space and 40MB of RAM. I can't help wondering what they are doing over there (at M$).
      • Re:Great move! (Score:3, Insightful)

        by jonbrewer ( 11894 )
        by writing software that is less bloated and more efficient and is geared towards a portable solution.

        Last I checked, the older versions of WordPerfect still worked... And there's always vi. :-)
        • Yes, but they either run on modern machines (powerful CPUs and such, sitting idle and wasting power) or on older machines (inferior batteries, hardware not really engineered to conserve power).
          What we really need are modern cpus running at low clockrates and designed to use as little power as possible, coupled with modern batteries.
    • Re:Great move! (Score:1, Insightful)

      by wheany ( 460585 )
      But does the CPU really consume that much energy? Haven't the biggest power hogs always been backlit displays and disk drives?
      • Re:Great move! (Score:3, Interesting)

        by e8johan ( 605347 )

        How is it then that we do not see standard CPUs with custom portable devices?

        You are correct when you say that backlit displays, disk drives, CD players and such consume much power, but so do all the transistor switches (from on to off and back again). The sheer number of transistors and the incredible frequency of these switches make the CPU one of the power hogs in a portable system today.

        • Well of course the CPU consumes energy, but don't expect the battery life of a portable device to improve by 100% when the power the CPU consumes goes down by 50%.

          I believe this is why Transmeta processors do worse than they should after all the hype. Yes, they consume less energy, but when you take into account all the energy the device consumes, their battery life isn't that much better than a device with a mobile Pentium. They're just slower...
          • Re:Great move! (Score:3, Insightful)

            by e8johan ( 605347 )

            "don't expect the battery life of a portable device to improve by 100% when the power the CPU consumes goes down by 50%"

            I don't, but Amdahl's law applies here too: reduce the biggest factor. You probably get a better yield by attacking the CPU than any other device. These companies tend to evaluate the problems before attacking them (even though it doesn't seem so all the time).
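            The Amdahl's-law argument can be sketched numerically: the whole-system saving from improving one component is capped by that component's share of the total power budget. The shares below are made up for illustration:

```python
def system_power_after(shares, improved, factor):
    """Total power after one component's draw is scaled by `factor`."""
    return sum(s * (factor if name == improved else 1.0)
               for name, s in shares.items())

# Assumed power budget of a portable device (fractions of total).
shares = {"cpu": 0.35, "display": 0.40, "disk": 0.15, "other": 0.10}

remaining = system_power_after(shares, "cpu", 0.5)  # halve CPU power
print(f"system power remaining: {remaining:.3f} of original")
```

Even eliminating the CPU's draw entirely could never save more than its 35% share here, which is why attacking the biggest consumer first gives the best yield.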

    • Re:Great move! (Score:4, Insightful)

      by SmittyTheBold ( 14066 ) <[deth_bunny] [at] [yahoo.com]> on Tuesday November 12, 2002 @11:02AM (#4650679) Homepage Journal
      The solution today is to reduce the power usage. This can be done by shutting down parts of the clock trees in the CPUs, or by using Intel's SpeedStep (i.e. two working speeds), or Transmeta's variable voltage and frequency technology, LongRun.

      ...or by using an architecture that does not require as many useless (er... extra) transistors and is therefore more efficient to begin with. Witness the PowerPC, for example. In particular, the G3 is amazingly efficient in its desktop form.

      Compare, for example, the G4 at 11.5 million transistors (I am not sure about the current G4e) and the P4 at 42 million (once again, an old number - recent P4s may have a different count). Is it any mystery, then, why the G4 uses so little power in comparison?

      I'm not discounting your ideas totally here - I'm just saying there is more to saving energy than throttling the CPU.

      In response to your last line - "most of today's solutions (power saving) are just a cure for the symptoms (bad battery time), not for the cause (bad battery technology)" - I have to say that although batteries are a hindrance, they are not much of one at the moment. Portables currently dissipate quite a bit of heat. If you increase the power they use (and increase the power given to them) you increase heat output, which is bad. Current laptops are about like holding a lightbulb against your lap. Are you sure you want that to be increased?

      The real limits with laptops these days have more to do with dissipating the heat they already produce than powering that mess. Upgrading batteries is not a solution to this, while more efficient processors are.
      • Re:Great move! (Score:4, Interesting)

        by e8johan ( 605347 ) on Tuesday November 12, 2002 @11:19AM (#4650799) Homepage Journal
        I have to agree that the P4 is a monster when it comes to transistor count, and the PowerPC and its derivatives are amazing. However, there will always be idle parts of the CPU core that can be shut down during different periods (for example, FP ops). Just because you have a simple (as in beautiful, optimized, etc.) architecture does not mean that you should not further improve it with state-of-the-art optimization methods.
        • Re:Great move! (Score:3, Insightful)

          I'm not saying (or at least didn't intend to say) that the G4 is a perfect architecture, nor that it has no room for improvement. I'm just suggesting that performance tweaks like those used in the P4 are a bit like worrying about the aerodynamics of your '57 Chevy. Sure, you can add a bit of fuel efficiency, but there are far greater gains to be had.
  • Ugh. (Score:4, Funny)

    by archeopterix ( 594938 ) on Tuesday November 12, 2002 @08:51AM (#4650059) Journal
    Here's the marketspeak-filtered cache, in case it gets slashdotted:
    We are developing technology to optimize battery use in portable devices.
  • According to the article:

    Arm's Intelligent Energy Manager solution implements advanced algorithms to optimally balance processor workload and energy consumption, while maximizing system responsiveness to meet end-user performance expectations.

    Transmeta's only claim to fame for their chips was using software to reduce power consumption, and it worked -- obviously, the Intelligent Energy Manager is just a ripoff of Transmeta's design. Linus should sue.
    • by Anonymous Coward on Tuesday November 12, 2002 @08:57AM (#4650084)
      According to the article:

      Arm's Intelligent Energy Manager solution implements advanced algorithms to optimally balance processor workload and energy consumption, while maximizing system responsiveness to meet end-user performance expectations.

      Transmeta's only claim to fame for their chips was using software to reduce power consumption, and it worked -- obviously, the Intelligent Energy Manager is just a ripoff of Transmeta's design. Linus should sue.

      Umm, no? These aren't CPUs used in computers and laptops... these are used in handheld devices and embedded applications. I develop for ARM personally, and the "algorithms" (note: they do not say software) are simply silicon embedded within the processor, not software that runs on the processor itself.

      As the poster mentioned, I doubt this will affect any laptops. I don't know of any that run off ARM cores.
      • come on people

        you just turn off the part of the core when you don't need it

        not a really taxing idea (Transmeta, Intel, MOT and IBM all do the same in various ways)

        but putting anything to silicon is always hard so kudos

        john 'MIPS' jones

      • Transmeta's only claim to fame for their chips was using software to reduce power consumption, and it worked -- obviously, the Intelligent Energy Manager is just a ripoff of Transmeta's design. Linus should sue.

        I thought their claim to fame was using VLIW for low power at low clock speeds, then giving the chip the ability to emulate instruction sets via a translation layer above? That doesn't sound like much to do with the general energy saving techniques you think they may have a patent on. ARM chips are already very low power; the ARM in my last desktop PC ran at about 30mW, I think. Asynchronous designs such as the Amulet seem to be the future of _very_ low power devices (and, being ARM instruction set, should drop right in). That works at the gate level. There are PCs [castle.uk.co] that run on ARM, but there are very few ARM laptops currently (though as Linux becomes more of a desktop OS, I'm sure we'll see ARMLinux laptops appearing).

        Phillip.
  • by doug363 ( 256267 ) on Tuesday November 12, 2002 @09:02AM (#4650102)
    ARM's press release has some technical details in it:
    http://www.arm.com/news/powerwise1111 [arm.com]

    They're basically targeting mobile phones and similar embedded systems like PDAs, because that is where ARM's main market share is at the moment. They say that they're looking at a more system-wide approach than is currently used, and they want to standardize the embedded software/hardware interface as part of this.

    Also, note that "samples available Q2 2003" doesn't necessarily mean actual silicon. ARM doesn't make chips, they license their designs out to other companies which use them as a basis for an actual chip, so a "sample" quite likely means a software simulation. Actual devices which use this technology probably won't be around until 2004 at least.

    • by Anonymous Coward
      I think it's interesting to note that they use cellphones as a typical example of improvement.. however, the micros in cellphones use a fraction of the power. The vast majority of power is consumed by the RF transmitter, as evidenced by the amount of battery life you get while making a phone call as opposed to standby operation.

      25 to 75% baseband power savings probably amounts to no more than a 10% total improvement in battery life. Marketing fluff?
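      A back-of-envelope version of that estimate (the power split is an assumption, not a measured figure): if the CPU draws 20% of total phone power during a call and its draw is halved, the phone saves about 10% overall.

```python
cpu_share = 0.2    # assumed fraction of total phone power drawn by the CPU
cpu_saving = 0.5   # assumed reduction in CPU power

total_saving = cpu_share * cpu_saving       # fraction of total power saved
battery_gain = 1 / (1 - total_saving) - 1   # relative battery-life increase

print(f"total power saved: {total_saving:.1%}")
print(f"battery life gain: {battery_gain:.1%}")
```

With the RF transmitter dominating the budget during a call, the battery-life gain stays near 11% no matter how aggressive the baseband saving is.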
      • Certainly, even a huge reduction in processor power usage probably won't do anywhere near that much for overall power usage. However, they're anticipating a massive blowout in power usage by the phone's microprocessor in the next few years. As I'm sure you're aware, there's a move towards adding all sorts of significant enhancements: video conferencing, larger colour screens, interactive content, etc. They think that this could increase processor power usage by up to 10 times without some sophisticated power management. I don't know how accurate this is, but since this stuff is probably aimed at ARM's customers, i.e. mobile phone designers, I'd guess that they'd better provide some significant saving in overall power usage before anyone would use it.

        I'm not sure what the transmitter's power requirements are like with 3G phones, or (hypothetically) ultra wideband phones... Does anyone know how they compare to GSM phones? I know that the maximum allowed transmitter power of a GSM phone varies a lot between countries.

  • by VTg33k ( 605268 ) on Tuesday November 12, 2002 @09:06AM (#4650113)
    "...that they estimate will improve battery life by 25-400%."

    "Aw, people can come up with statistics to prove anything, Kent. Forty percent of all people know that."
    -- Homer Simpson
  • by EggplantMan ( 549708 ) on Tuesday November 12, 2002 @09:07AM (#4650118) Homepage
    I always thought RISC was inferior, that's why it lost out to CISC and went the way of the dodo. Who wants a reduced instruction set anyways? That's why it always lagged in the floating point benchmarks. I look forward to the day when our CISC processors are even better equipped - with an instruction for every conceivable operation.
    • Re:Merits of RISC (Score:3, Informative)

      by e8johan ( 605347 )
      "always lagged in the floating point benchmarks"

      This lagging is not due to flaws in the ISA (instruction set architecture). Today's CISC CPUs (at least the post-Pentiums and Athlons) are RISCs with a CISC shell.

      The expansion of instruction sets has two drawbacks: 1) bloated designs and 2) more and more complex compilers. That is why RISC is leading the way (in a CISC suite) and CISC has been degraded into keyboard controllers etc.
      • The expansion of instruction sets has two drawbacks: 1) bloated designs and 2) more and more complex compilers. That is why RISC is leading the way (in a CISC suite) and CISC has been degraded into keyboard controllers etc.
        I'm not sure what a "CISC suite" is, but I presume you're referring to Pentiums and such as being RISC internally with a CISC ISA? In that case, your point #2 doesn't apply. Compilers see only the ISA.
        • Re:Merits of RISC (Score:3, Interesting)

          by e8johan ( 605347 )

          The points were indicating the drawbacks of expanding instruction sets (further). I understand that it can be read as part of the conclusion, but I did not intend it that way.

          When discussing which ISA the compiler sees, I can't help wondering how efficient the code a compiler could emit would be if it gained access to the RISC cores of a P4 or a T-bird. Maybe it is time to introduce another mode (after protected mode on the 386): RISC mode (or, as Intel's marketeers would call it, PowerMode(tm) :])

          • I doubt it could do much better than the P4's own hardware. Internally, the CISC instructions are turned into RISC traces, based on dynamic information that the compiler doesn't have, and also architectural information that presumably changes with each version of the chip.

            The only advantage left to the compiler is complex dataflow analysis, etc., and that can be done just fine with CISC instructions. Compiler intermediate representations are RISC-like anyway.

            Thus, the combination of optimizing compilers plus P4's trace cache makes the ISA almost irrelevant. Even the lack of general-purpose registers can be largely made irrelevant by the trace cache.

            (Translation: the whole system is now so complex that it's nearly impossible to tell what effect a change may have. :-)

            • Re:Merits of RISC (Score:3, Informative)

              by e8johan ( 605347 )

              Each CISC instruction is transformed into a set of RISC instructions. These u-ops (micro-ops, in Intel lingo) are then dynamically rescheduled, and register renaming and all such techniques are applied. I don't know if the P4 can manage out-of-order committing, but the instructions are issued and executed in arbitrary order.

              This means that the internal state of the CPU will be complex and dynamic. It does not, however, mean that there are no optimizations that could be made by removing the CISC abstraction layer.

              For example, all FP operations on x86 CPUs emulate a stack-based maths co-processor, which is implemented with real registers. Direct access to these could improve the quality of the code tremendously. That said, one must remember that the P4 bashes most CPUs in FP benchmarks, and could, most likely, be even better with direct access.

              To sum things up: I do not understand how you can say that a CISC layer does not slow the system down and that the ISA is "almost irrelevant". I have to interpret that as pure ignorance.

              • Re:Merits of RISC (Score:2, Interesting)

                by p3d0 ( 42270 )
                I do not understand how you can say that a CISC layer does not slow the system down and that the ISA is "almost irrelevant". I have to interpret that as pure ignorance.
                Well, maybe it's ignorance, but I haven't yet seen anything in your post that explains why the CISC instruction set impedes performance.

                Your best example was the FP stack. However, does that not internally become traces that can access the FP registers in arbitrary ways? Can the traces not eliminate extra spills, dups, swaps, and other artifacts of stack-based computation? If it doesn't currently do so (which would surprise me) then I would expect that a future version of the Pentium certainly could.

                I say the ISA is almost irrelevant because the compiler's optimizations occur with RISC-like instructions, and then the actual execution (u-ops) occurs with RISC-like instructions. The CISC ISA doesn't actually do anything except communicate the former to the latter. Certainly there is overhead for translating the ISA to u-ops, but hot code is usually executed many times, and so the translation cost is amortized over a large number of iterations, making it negligible.

                AMD's whitepapers on x86-64 [amd.com] claim that the x86 ISA is a good one for their modern processors because they get the code density of CISC with the register usage and ABI models of RISC. Clearly they may be biased because they have a technology to promote, but I think their arguments have merit.

                Perhaps you could give an example of how the P4's internal u-op traces are sub-optimal because of the CISC ISA?

                • "Can the traces not eliminate extra spills, dups, swaps, and other artifacts of stack-based computation?"

                  No, since you cannot use the stack registers to hold information for a long time, but have to put it somewhere else (or recalculate). Direct access to a register-based FP unit would enable more efficient code, even though the implementation of the current FP unit(s) is nice and fast.

                  "...they get the code density of CISC with the register usage and ABI models of RISC."

                  I have to admit that the big advantage of CISC ISAs is code density. As memory bandwidth is growing into a problem, this is one way out. You can see this in the ARM Thumb instructions too.

                  "Perhaps you could give an example of how the P4's internal u-op traces are sub-optimal because of the CISC ISA?"

                  I don't have time to dig up a real example, but I can give you some hints: 1) each time the compiler is forced into using a more complex instruction than needed because the simpler instruction isn't available in the ISA, 2) stack handling imposed on the code by the lack of general purpose registers (the effect of this is somewhat reduced by caches), 3) mov instructions forced into the code by special purpose registers. Just to mention a few issues I have with CISC!
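                  Point 2) can be illustrated with a toy comparison: evaluating r = a*b + a*c on a machine with one visible top-of-stack needs an extra duplicate and swap, while three-address register code just reuses a. These are op counts for a made-up toy ISA, not real x87 u-op counts:

```python
# Stack-machine code for r = a*b + a*c (x87-flavoured, one visible top).
stack_program = [
    ("push", "a"), ("dup",),             # need `a` twice, so duplicate it
    ("push", "b"), ("mul",),             # stack: a, a*b
    ("swap",), ("push", "c"), ("mul",),  # stack: a*b, a*c
    ("add",),                            # stack: a*b + a*c
]

# Equivalent RISC-style three-address code: `a` is reused directly.
register_program = [
    ("mul", "t1", "a", "b"),
    ("mul", "t2", "a", "c"),
    ("add", "r", "t1", "t2"),
]

print(len(stack_program), "stack ops vs", len(register_program), "register ops")
```

The dup and swap exist only to shuffle the stack; with addressable registers they simply disappear.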

                  • Thanks for your patience...
                    No, since you cannot use the stack registers to hold information for a long time, but have to put it somewhere else (or recalculate).
                    That's spilling (and rematerialization). Now that I think about it, I guess the traces probably can't get rid of spills, and they probably can't do rematerialization. You may have a point there.

                    However, why can't I store something on the stack for a long time? Push...do a million things...pop. As long as I don't overflow the stack in the mean time, I can store things there as long as I want. Overflowing the stack is a matter of register pressure, which affects non-stack-based ISAs too.

                    1) Each time the compiler is forced into using a more complex instruction than needed because the simpler instruction isn't available in the ISA,
                    These should get translated into the appropriate u-ops. Bingo, no problem.
                    2) Stack handling imposed on the code by the lack of general purpose registers (the effect of this is somewhat reduced by caches),
                    This is spilling. I'll grant you that one. :-) Though, as you say, caches should make that relatively cheap. Spills inside a hot loop should always be cache hits, and spills outside a hot loop don't matter from a performance perspective.
                    3) mov instructions forced into the code by special purpose registers.
                    You mean movs from one register to another? Those would disappear in the traces, if they do any kind of register renaming.
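                      A minimal sketch of that mov elimination (a toy rename table, not the P4's actual mechanism): a reg-to-reg mov just records an alias, and later readers are redirected to the original source register.

```python
def rename(program):
    """Drop reg-to-reg movs by tracking aliases; ops are (op, dst, src)."""
    alias = {}   # architectural register -> the value's real home
    kept = []
    for op, dst, src in program:
        if op == "mov":
            alias[dst] = alias.get(src, src)  # record the alias, emit nothing
        else:
            kept.append((op, dst, alias.get(src, src)))
    return kept

prog = [("mov", "eax", "ebx"), ("add", "ecx", "eax")]
print(rename(prog))  # -> [('add', 'ecx', 'ebx')]  (the mov has vanished)
```

Real renamers work on physical register files rather than a dictionary, but the effect is the same: the mov never occupies an execution unit.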
                    • "why can't I store something on the stack for a long time?"

                      If you want to keep a result but then use it in further calculations too, this can be a problem. Perhaps you can pop it and push it twice, but it is still not a nice operation.

                      "Overflowing the stack is a matter of register pressure, which affects non-stack-based ISAs too."

                      Yes, it applies to non-stack-based ISAs too, but having 16, 32 or 64 (or any other number of) available general-purpose FP registers, compared to one top-of-the-stack register to work with, restricts the code.

                      "These should get translated into the appropriate u-ops."

                      They get translated into more u-ops than needed. I said "forced into using a more complex instruction than needed". This wastes CPU resources.

                      "as you say, caches should make that relatively cheap"

                      I did not say (or at least did not mean) relatively cheap. Cheaper than direct RAM access, but that is still *expensive*.

                      "Those would disappear in the traces, if they do any kind of register renaming."

                      Still, you'll lose at least one cycle doing the actual renaming, and this increases compiler complexity *a lot*.

                    • I think you have the impression that the traces are nothing but concatenated sequences of u-ops translated dumbly from the CISC instructions. I was under the impression that the hardware was capable of doing some straightforward optimizations on them.

                      If it is a simple concatenation, then I see your point, and I agree that the CISC instructions are problematic. However, a future rev of the Pentium could conceivably do things like register renaming once, and store the result in the trace cache, making that operation essentially free.

                      In effect, there's no reason the chip couldn't do more and more of the work of the compiler. That's essentially what Transmeta did.

                    • "I think you have the impression that the traces are nothing but concatenated sequences of u-ops translated dumbly from the CISC instructions. I was under the impression that the hardware was capable of doing some straightforward optimizations on them."

                      I think that u-ops are, as you say, translated dumbly from the CISC ISA into RISC-like ops. I've seen this at a computer architecture seminar, but you may be right. In that case, my points are less valid, but still not totally invalid.

                      "do things like register renaming once, and store the result in the trace cache, making that operation essentially free."

                      I'm not sure what you mean. I know that Transmeta uses a cache for the translated (or morphed) code, but (I believe that) P4s do it instruction by instruction.

                      "In effect, there's no reason the chip couldn't do more and more of the work of the compiler. That's essentially what Transmeta did."

                      The Transmeta chip (Crusoe) is not an intelligent chip. It is a (for the task, highly optimized) VLIW CPU. The code morphing is done in software (compiled for the VLIW). The translated instructions are stored in a cache to avoid recompilation. The most unique feature of the Crusoe is that it emulates the internal state of the x86 (or any other CPU that it tries to emulate), thus saving loads of instructions to build condition flags etc.

                    • The P4 does cache the u-ops. See this article on Ars Technica [arstechnica.com].

                      You are correct that Crusoe does the translation in software. Aside from that, it's the same idea, and it makes the ISA largely irrelevant for the same reasons.

                      Ok, I was wrong concerning the P4. But still, I cannot agree with you when you claim that the ISA is "largely irrelevant".

                      A compiler has more knowledge of the code than the CPU scheduler; thus a compiler, given access to the inner RISC-like core, would be able to produce better code. For example, the scheduler cannot skip the calculation of irrelevant results forced into the code by a limiting CISC ISA. Also, it would (probably) be easier to write a good compiler for the RISC-like core, since it is bound to be more symmetric (more GP regs, fewer restrictions on which op can be applied to which reg, etc.).

                    • A compiler has more knowledge of the code than the CPU scheduler, thus, a compiler, given access to the inner RISC-like core, would be able to produce better code.
                      The compiler doesn't really have more knowledge so much as different knowledge. The compiler's knowledge is static, unless it's a JIT, while the processor has dynamic knowledge. Most of the compiler's knowledge can be encoded into proper selection of the CISC instructions.
                      For example, the scheduler cannot skip the calculation of irrelevant results forced into the code by a limiting CISC ISA.
                      You mean like the div instruction computing both the quotient and the remainder? Ok, so that's one reason the ISA is not "completely" irrelevant, and is only "largely" irrelevant. :-)
                      Also, it would (probably) be easier to write a good compiler for the RISC-like core, since it is bound to be more symmetric (more gp regs, fewer restrictions on which op can be applied to which reg etc.).
                      True, but that benefit would be lost if the core changes with each version of the chip. For instance, anyone who wrote a code generator for a P1 already had a pretty good code generator for the P2, P3, and P4, not to mention AMD and Cyrix processors, despite the fact that the innards are quite different. (Granted, the P1 codegen was a hassle. :-)

                      Now, instead of targeting the RISC core, you could target a virtualized RISC ISA, while the chip does the same kind of translation internally into u-ops. In fact, an even better ISA for exposing a chip's internals to the compiler is VLIW (Very Large Instruction Word), and if you could write code in a virtualized VLIW ISA, that may be best of all.

                      And with that, we have arrived at the IA-64 ISA. It uses something called EPIC (Explicitly Parallel Instruction Computing) which is a kind of virtualized VLIW/RISC ISA.
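The div point above (a CISC instruction computing results the code may not need) can be mimicked in Python, where `divmod` likewise yields both values at once:

```python
# Like the x86 DIV instruction, divmod produces quotient and remainder
# in one operation, whether or not the surrounding code needs both.
q, r = divmod(17, 5)
print(q, r)  # 3 2
```

If only the quotient is needed, the remainder is computed anyway; that is the kind of "irrelevant result" a scheduler cannot skip once the ISA mandates it.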

                    • "Most of the compiler's knowledge can be encoded into proper selection of the CISC instructions."

                      Nope, not which results are worth keeping and which are not (as the core has more registers than are externally visible).

                      As for your next point ("largely irrelevant"), it depends. As pretty much all CPUs are the same today and can perform pretty much the same functions, the ISAs are largely irrelevant. But still, direct access to the RISC core is better than having to suffer a CISC ISA.

                      "...but that benefit would be lost if the core changes with each version of the chip."

                      That is correct, but if one were to skip the CISC ISA entirely and put that effort into extracting huge amounts of ILP for a RISC processor, one would get the best CPU.

                      " It uses something called EPIC (Explicitly Parallel Instruction Computing) which is a kind of virtualized VLIW/RISC ISA."

                      EPIC removes some of the problems when extracting the ILP, which is good. I would still say that a simple RISC with massive multiple issue and speculative execution combined with lots of gp regs and register renaming would be the best solution. This would give us simple compilers and less complex CPUs (compared to CISC -> RISC u-ops based x86 monsters :P ).

                    • Well, I'm not sure I agree with your conclusion (since there's still the CISC code-density issue), but it has been nice discussing it with you. :-)
                    • I too enjoyed this discussion. See you next time the subject comes up! :-)
    • I really hope this is a joke - otherwise I badly misunderstood what happened to the Dodo.
    • by jonr ( 1130 ) on Tuesday November 12, 2002 @09:24AM (#4650182) Homepage Journal
      It is a simple question of market laws. The x86 architecture is the ruling class, therefore it gets most of the research money, and as a result has the fastest-running processors.
      When the ARM came out, it blew the 386 (the top x86 at the time) and the 68020 out of the water. We were talking 3-4 times faster. And when the ARM3 came out with its cache, it really kicked 386 ass.
      And remember the Alpha? Another RISC design that was way ahead of the rest. The only one left is the PowerPC family, still holding its own against the x86 juggernaut.
      And programming the ARM was bliss. 13 general-purpose registers, the barrel shifter (do an arithmetic operation and a shift in the same instruction). Conditional branching... It was a real joy. x86 assembler is what programmers do in hell.
      J.
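What the barrel shifter folds into a single instruction can be modeled in a couple of lines of Python (the function name and values are invented for illustration; the real thing is one ARM instruction like `ADD r0, r1, r2, LSL #2`):

```python
# Model of an ARM shifted-operand add: the second operand passes
# through the barrel shifter (logical shift left) before the add,
# with no extra instruction or extra cycle for the shift.
def add_with_lsl(r1, r2, shift):
    return r1 + (r2 << shift)

print(add_with_lsl(10, 3, 2))  # 10 + (3 << 2) = 22
```

On a plain ISA this takes two instructions (shift, then add); the barrel shifter makes it one.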
      • The only one left is the PowerPC family

        And, hey, don't forget the IBM Power4, which is not strictly a PowerPC chip ....

        And programming the ARM was bliss... Conditional branching... It was a real jooy.
        Yeah, it sucks writing programs in x86 asm, with no if statements, and only infinite loops.
        Conditional branching... It was a real joy.

        Holy s&^*t, ARM has conditional branching?? That sounds way cooler than the x86 unconditional branches.

        Now that I've got the sarcasm out of me: a processor would be useless if there were no conditional branching. I believe you are mistaking "conditional branching" for "predicated execution", my friend.
    • by dcoetzee ( 625103 ) on Tuesday November 12, 2002 @09:37AM (#4650234)
      Yes, wouldn't it be wonderful? Instead of compiling down to LISTS of instructions, we compile every program down to a SINGLE instruction, designed in hardware to do whatever that program does. The speedup would be immense. Admittedly, there would be some pressure on hardware designers. Personally, I still think all hardware should be implemented in software...
      • ...compile every program down to a SINGLE instruction, designed in hardware to do whatever that program does.

        Q: "Why is your code full of bugs, Mr. Jones?"
        A: "Don't blame me, it's a hardware issue."

        For once developers could say that and still be telling the truth :-)
      • I believe you're referring to the mythical dostuff() function :)
  • Asynchronous logic? (Score:3, Interesting)

    by Koos Baster ( 625091 ) <ghostbusters AT xs4all DOT nl> on Tuesday November 12, 2002 @09:12AM (#4650138)
    In line with the low-power paradigm gaining momentum within CPU design, asynchronous design [slashdot.org] is often mentioned in the context of battery life. Apparently, the ARM processor is the (only) architecture used for innovative CPU designs.

    Is this really the case, and if so, why? (Obviously CISC architectures are far too complicated to fine-tune in a drastic manner - other than building a Crusoe-like RISC chip and emulating the whole thing.)

    Moreover, is power consumption (and not primarily performance), after all those years, going to be the criterion that decides the RISC-CISC issue in favour of RISC?
    • I'd say that SPARC VI and VII will be more innovative (article here [theregister.co.uk]). I must say that ARM is not the *only* architecture used for innovative CPU designs. In academia I've seen MIPS, SPARC and custom designs used to show/implement special innovations (vector co-processors, async logic, etc.). Usually small subsets of commercial CPUs are used for the truly innovative designs. (IMHO)
        • > Usually small subsets of commercial CPUs are used for the truly innovative designs. ...As well as in my humble opinion - could not agree with you more.

        And you are right about SPARC and MIPS as well. In addition to some interesting tech features, SPARC has the advantage of being an (almost) truly clear and open architecture, rather than a concrete chip design. If I remember correctly, MIPS is great in its context-insensitive structure (no condition bits). Then Crusoe (and PowerPC, for that matter) are great in that they were intended for emulation but allow native code, thus migrating away from the obvious eventual x86 dead end.

        However, a feature that only ARM and Transmeta incorporated in their designs (from the beginning) is the performance/transistor ratio. Being a low-cost 32-bit alternative in the 16-bit-dominated market of the late eighties, Acorn's choice of a RISC architecture was probably a pragmatic matter rather than a philosophical one. And inside an Archimedes desktop computer it did not primarily minimize power consumption, but rather maximize performance per research cost.

        Well anyway. Ever since Acorn let go of the ARM processor, it's been pretty popular in actual devices as well as in design experiments.
        • by e8johan ( 605347 ) on Tuesday November 12, 2002 @10:15AM (#4650394) Homepage Journal
          Another advantage of the ARM is the Thumb instruction set, which reduces traffic over the memory bus. We must remember that driving the memory bus is an expensive operation (power-wise) compared to finding data in the cache. Smaller code means more code in the cache. One problem is that multimedia applications (such as movies, music, etc.) fail to utilize the cache well (since the data isn't re-accessed). This is a problem area that needs more research.
    • I know that the University of Manchester was designing an asynchronous ARM-binary-compatible processor (here [man.ac.uk]). Now that it has been shown feasible to do RISC asynchronously, maybe big vendors will start to take notice.

      A bit of (greatly simplified) background for those who haven't looked into this. The huge decrease in async power consumption (at least using CMOS) is because the MOSFETs dissipate power during voltage level changes. Many transistors in synchronous chips change state every clock pulse, but changes are much more isolated in asynchronous chips. I predict (in my infinite wisdom, and using my crystal-ball-of-infinite-wisdom) that RISC-based async designs will get much more powerful and fast as companies like Intel, Motorola, etc. put more money into researching this previously forgotten/niche field.

      They'll probably only be used in portable systems at first, to conserve power, but if design techniques progress, maybe we'll see them in desktop PCs or even some heavy-metal systems within 10 years or so as other benefits become apparent... But even my crystal ball can't say for sure.

      -- below is opinion, don't read it if you're an easily offended Republican or supporter thereof, as you may be unintentionally offended - damn -1 flamebait modders... --
      I wouldn't count on much (US) government research money or grants going into it while Bush is president, though... I don't exactly think he's interested in energy conservation...
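The switching-power argument above boils down to the classic CMOS dynamic-power formula, P = a·C·V²·f, where a is the activity factor (the fraction of capacitance actually switching per cycle). A rough Python sketch; all numbers are purely illustrative, not measurements from any real chip:

```python
# Dynamic power in CMOS: P = a * C * V^2 * f. Asynchronous designs
# win mainly by lowering a, since only gates doing useful work toggle
# instead of everything wired to the clock tree.
def dynamic_power(activity, capacitance, voltage, frequency):
    return activity * capacitance * voltage ** 2 * frequency

sync_p  = dynamic_power(0.20, 1e-9, 1.8, 200e6)   # clocked design
async_p = dynamic_power(0.05, 1e-9, 1.8, 200e6)   # fewer transitions
print(sync_p, async_p)
```

With everything else equal, cutting the activity factor from 0.20 to 0.05 cuts dynamic power fourfold, which is the intuition behind both clock gating and fully asynchronous logic.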

  • Potential is there (Score:3, Informative)

    by NutMan ( 614868 ) on Tuesday November 12, 2002 @09:20AM (#4650166)
    I think there is a lot of room for improvement here. For example, TI has a family of RISC microcontrollers [ti.com] that use a tenth of a microamp in sleep mode, yet take only 6 microseconds to wake up on an interrupt.

    In typical usage, the CPU spends a lot of time doing nothing. Design one that can take snoozes as short as a millisecond with insignificant latency and you can save a lot of power.
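A back-of-the-envelope sketch of why those millisecond naps pay off; the active current and duty cycle below are illustrative guesses, not figures from any TI datasheet:

```python
# Average current for a duty-cycled CPU: spend most of the time at the
# sleep-mode current, waking briefly to do work at the active current.
def avg_current(i_active, i_sleep, duty):
    return duty * i_active + (1 - duty) * i_sleep

# Awake 1 ms out of every 100 ms (1% duty cycle), 2 mA active,
# 0.1 uA asleep: average draw lands around 20 uA.
i_avg = avg_current(2e-3, 0.1e-6, 0.01)
print(i_avg)
```

The average sits two orders of magnitude below the active current, which is why a fast wake-up (so the duty cycle can stay tiny) matters more than shaving the active figure.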

  • 25 - 400% ? (Score:5, Funny)

    by Roofus ( 15591 ) on Tuesday November 12, 2002 @09:26AM (#4650190) Homepage
    That's a nice range there.

    Me: Hey Jim, we're throwing a party at your house tonight.

    Jim: Great! How many people are gonna come? I need to know how much beer and hoes to pick up.

    Me: Oh, plan on somewhere between 25 and 400.

  • by Anonymous Coward
    My Z88 already gives me 20+ hours of use on 4 AA batteries, but I want something more powerful than a 3.3MHz Z80!
  • Super PowerBooks (Score:1, Redundant)

    by Malic ( 15038 )
    That would make PPC's and Power4's even more attractive!
  • by melonman ( 608440 ) on Tuesday November 12, 2002 @10:24AM (#4650439) Journal

    > My old Tadpole laptop sure could have used this.

    I think the type of RISC processor might have something to do with the power consumption. ARM has always concentrated on frugality at the expense of speed.

  • To think, one day I could get more than four hours of battery life out of my Pocket PC. . (Yeah, I bought one. I was young and stupid. Don't judge me....)
  • I really hope they won't do something like "ACPI for ARM". ACPI basically requires a big interpreter in the kernel that executes code from the BIOS. When you know how buggy BIOSes are, and when you see the current tendency to design hardware/software that deprives users of their rights, you want an open system where the software is a sort of "reference design" and is replaceable.
  • Risc Os 5, in a new true 32-bit version, will soon be available for XScale CPU desktop systems. It could mean a second life for an OS that Acorn developed at the same time as it started ARM development. It shows that ARM isn't just for embedded applications. The lean-and-mean approach of ARM has its equivalent in Risc Os. There's a trend towards desktop systems with less heat and noise, and energy saving isn't bad either.

    http://www.riscos.org/cgi-bin/news?days=

    http://www.iyonix.com/Launch/winapc1.html

  • ...Batteries don't care if your CPU is RISC, CISC, chicken feathers, or satin. Would a better title be, "Reducing power use in RISC, to save battery life"?
