Chipmakers Admit Your Power May Vary

Dylan Knight Rogers writes to mention a News.com story discussing the realities of chip power consumption. From the article: "Assessing only pure performance is passé. The debate these days is about performance-per-watt, which seems like it should be a simple miles-per-gallon type of calculation. However, miles are miles, and gallons are gallons. There's no one simple way to measure processor performance, and measuring the amount of power output by today's chips is proving just as difficult."
  • How is this news? (Score:5, Insightful)

    by AuMatar ( 183847 ) on Saturday June 10, 2006 @06:44PM (#15510702)
    Performance being difficult to measure is well known - you can't go by clock speed, or even clock speed × instructions per clock, since these will differ based on instruction mix. For power, a simple inverter will use different amounts of power depending on whether it's on or off - the exact power for a chip is impossible to guess. This is all old news.
    • by dsanfte ( 443781 )
      Uh, actually, power draw in watts has been very easy to measure for many decades now. There is no reason why the wattage a chip draws should be difficult to gauge in the slightest.
      • Uh, actually, power draw in watts has been very easy to measure for many decades now. There is no reason why the wattage a chip draws should be difficult to gauge in the slightest.

        Possible? Sure. Easy? Well, modern CPU power converters have several physically-distributed power outputs that don't share the load equally, they drive multiple load pins that don't share the load equally either, and they only tolerate a fraction of a milliohm of added resistance. There can also be a big question about

        • Re:How is this news? (Score:3, Informative)

          by Firethorn ( 177587 )
          they only tolerate a fraction of a milliohm of added resistance

          Say what? They're not *that* intolerant. Otherwise the overclockers wouldn't be playing around with increasing the voltage. Normal power supplies would have to be far better, and motherboard power compensators far more expensive. Besides, if your measurement device adds that much resistance, you simply increase the voltage of the rail a smidgen to compensate.

          Now, I am talking about doing all this in a lab, for best results.

          The true diff
          • they only tolerate a fraction of a milliohm of added resistance

            Say what? They're not *that* intolerant.

            Yes, they are. Consider a high-end CPU that draws 100 watts at 1.2 volts. That's 83 amps of current. If you add a one milliohm series resistor for measuring the current, you've dropped the CPU voltage by 83 millivolts. By comparison, I just looked up an Opteron and it was only specced for a +/- 50 mV change.

            Certainly the measurement is doable. I'm just saying that the cheap and dirty approach
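
            To make those numbers concrete, here's a quick sketch of the sense-resistor arithmetic (the wattage, voltage, and tolerance figures are the ones from this comment, not measurements):

              # Voltage drop across a 1-milliohm current-sense resistor at CPU-scale currents.
              power = 100.0    # CPU draw in watts (example above)
              voltage = 1.2    # core voltage in volts
              r_sense = 0.001  # 1 milliohm series sense resistor

              current = power / voltage  # I = P / V  ->  ~83 A
              drop = current * r_sense   # V = I * R  ->  ~0.083 V
              print(f"current: {current:.0f} A, drop: {drop * 1000:.0f} mV")
              # ~83 mV of droop -- well outside the +/- 50 mV Opteron spec quoted above.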

            • You don't have to isolate the CPU to figure out the power consumption. Just plug the mobo into a watt-meter and see what it says. Sure, the chipset and RAM are included in the measurements, but so what. A CPU that uses 10W with a chipset that uses 70W is pretty useless as a laptop.

              As for determining what sort of load to run under, how about do some research and see what people use the chips for? It's good for other aspects of the business too.

              All in all, I don't buy the argument that Intel doesn't know
              • True, but looking back up the thread, I was arguing against the claim that measuring the power used by individual chips is utterly trivial. It's fine to say "this has no place in a laptop", but a lot of people would like to know why.
        • Well, modern CPU power converters have several physically-distributed power outputs that don't share the load equally, they drive multiple load pins that don't share the load equally either, and they only tolerate a fraction of a milliohm of added resistance.

          So what? Use a known set of peripherals with known power load and a known PSU, then measure power usage at the wall. Simple.

          • So what? Use a known set of peripherals with known power load and a known PSU, then measure power usage at the wall. Simple.

            It's not as simple as you might think. How do you know how much power your "known" set of peripherals use? I'm sure that there are power specs for just about anything, but I'm sure they're just like the specs we see for CPUs at the moment - close but still an approximation. Trying to get an exact measurement of power used for something like a graphics card or a motherboard is going to
            • How do you know how much power your "known" set of peripherals use?

              Measurement. You can minimize error by testing in a known setting - a C800 with integrated video, for instance. Add a PCI card as a secondary display and measure the difference. Same goes for disks.

              Trying to get an exact measurement of power used for something like a graphics card or a motherboard is going to have the same problem of measuring CPU power usage

              Separating motherboard power from CPU power is problematic. If you can under

              • I've got another way: everyone is concentrating on measuring the power input, as electricity. The alternative is to measure the power on the output end, as it's dissipated as heat. Submerge the motherboard in a dielectric coolant (which is well-insulated from the outside) and watch as its temperature changes over time. You can't get a good instantaneous power measurement, but you can get a pretty good average over time. You'd calibrate the setup by watching the temperature change as a known amount of power i
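
                A hedged sketch of that calorimetry math (all values below are illustrative assumptions, and it ignores heat lost to the surroundings):

                  # Average power from the temperature rise of a dielectric coolant bath.
                  mass = 5.0       # kg of coolant (assumed)
                  c_p = 1100.0     # specific heat in J/(kg*K), typical for a dielectric fluid
                  delta_t = 4.0    # observed temperature rise in kelvin (assumed)
                  seconds = 300.0  # length of the run

                  energy = mass * c_p * delta_t  # Q = m * c * dT, in joules
                  print(f"average dissipation: {energy / seconds:.0f} W")  # ~73 W here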
    • by kfg ( 145172 )
      Performance being difficult to measure is well known. . .

      it should be a simple miles-per-gallon type of calculation. . .

      It is. That's the problem. Mr. Krazit seems to be utterly clueless. I defy him to predict the mileage I get the next time I go out for a drive.

      Hasn't he ever noticed, like most of the rest of us have, that the mileage he gets is not actually the same as the EPA test "prediction"?

      That's because the EPA test only gives valid results for. . .the EPA test, which is actually an average of multipl
  • by 0racle ( 667029 ) on Saturday June 10, 2006 @06:48PM (#15510714)
    So it's exactly like the miles-per-gallon on new cars.
  • benchmarks (Score:4, Insightful)

    by Anonymous Coward on Saturday June 10, 2006 @06:49PM (#15510717)
    This is what benchmarks are for. Compare the performance of two systems with other variables held as constant as possible. This has been going on for years, has it not? If I want a computer to play games, I see what different CPU configurations yield in, say, HL2 with the same ram and video card.

    Is this perfectly scientific? No.
    Is it practical? Hell yes!
    • Spoken like a true engineer. I salute you!
    • Re:benchmarks (Score:2, Insightful)

      by KermodeBear ( 738243 )
      This is exactly how I feel. Theoretical speed is nice and all, but just where do theory and practice meet? Usually never. It's the practical application that matters, which is why, in my opinion, benchmarks of common operations are important. Things such as frames per second in a video game, or how long it takes to encode a DVD, or how long it takes to open up a large PDF... Those are things that matter and, perhaps more importantly, things that the average joe can wrap his mind around. How many Joe Users
  • by EmbeddedJanitor ( 597831 ) on Saturday June 10, 2006 @06:51PM (#15510719)
    Especially with caching and pipelining, MIPS per watt gets very difficult to measure. If you can live in the cache, you don't need to go fetch from the outside world. If you stall the pipeline, you lose performance. Some operations (e.g. DIV) clock a lot of transistors, some (NOP) don't. It was a lot easier to measure MIPS/W when devices were synchronous. Now they're a group of asynchronous entities (core CPU, cache, ...).

    BTW, EPA mpg figures are measured without driving real miles on real roads.

    • You need a stats class, badly.

      No offense, but nothing is perfect. This is why we have a thing called "standard deviation".

      Me hitting the letter "e" will probably not take the same amount of energy to process twice. But I bet over 1000 e's the standard deviation could be found and would indicate that ~68% of the time it's "x J +/- y" and so on...

      So you sample something like "building the linux kernel to a ram drive" 100 times, find the deviation and use that. The tighter AND lower the better. The wider and higher the worse.

      Tom
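
      A minimal sketch of that sampling idea, assuming you already have per-run energy readings in joules (the numbers below are placeholders, not measurements):

        # Mean and standard deviation over repeated runs of one fixed workload.
        import statistics

        runs = [4120.0, 4080.5, 4151.2, 4097.8, 4133.4]  # hypothetical joules per run

        mean = statistics.mean(runs)
        sd = statistics.stdev(runs)  # sample standard deviation
        print(f"{mean:.0f} J +/- {sd:.0f} J (~68% of runs within one sigma)")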
      • He had a point though; if you constantly do video editing with your PC, your personal measurements of performance will be different from someone who uses only Word and Firefox all the time.

        There's a reason I ask people what they intend to do with their PC before selling them one -- do they need more ram, or more drive space or more drives or a bigger video card ... etc.

        Very rarely does CPU speed come into the equation; the slowest CPU available at retail is quite fast enough for most people, most of the tim
      • You need a stats class, badly.

        Why?

        Me hitting the letter "e" will probably not take the same amount of energy to process twice. But I bet over 1000 e's the standard deviation could be found and would indicate that ~68% of the time it's "x J +/- y" and so on...

        But with different usage patterns you *will* get consistent differences. So sure, you'd get data, and it'd be valid, but that doesn't mean GP needs a stats class. Regardless of how much people refuse to believe it, even in today's 'massage-the-da

        • I'm sorry, but under no conditions will "compile this C file" vary by an unmanageable amount. If you expect the power to vary by 500 Watts each time you compile something... you're sadly mistaken.

          Most likely, with the CPU/memory under full load, the power deviation is less than 10% of the mean. A typical desktop draws about 200-250 W at full load. If you see a variance of more than, say, +/- 20 W, something is wrong or the test isn't reproducible. If you think things like differing occurrences
        • Why?

          Because, as the parent post pointed out, describing the measurement of a varying quantity does not actually pose a problem.


          But with different usage patterns you *will* get consistent differences.

          I agree the "hitting E" example seems a bit odd, but I would guess the parent just needs a computer science course badly. ;-)

          Instead, do the same with a few typical real-world usage patterns (arranged into a repeatable suite). Then divide the performance value by the watts value,
  • Does anyone think those people care a lick about price per watt (as this is a green party thing)?

    Our nation is one of conveniences, not of caring whether our grandchildren have conveniences.

    • Two words: cooling and batteries. Even if power were free (it isn't), you still run into problems cooling your system properly or getting decent battery life (for a laptop).
    • Because they're stupid. Then they bitch at the bank for the $1.50 "service fee".

      Let's see... a processor running full steam instead of a low-power mode when idling is probably wasted more than 90% of the time (unless you work/live at the box).

      Opteron at full == 95W, at low == 35W, diff 60W. The price per kWh is about 7 to 10 cents; let's say 8.5 to be near the middle. 60 W × 24 h × 31 days = 44.6 kWh, which at $0.085/kWh is about $3.80 per month. Probably double that once you factor in power supply inefficiencies and cooling costs. So you sp
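
      The same arithmetic as a sketch (the wattages and rate are the example figures above, not measurements):

        # Monthly cost of the idle-power gap between full speed and low-power mode.
        watts_saved = 95.0 - 35.0  # example Opteron figures from above
        hours = 24 * 31            # one month, always on
        rate = 0.085               # dollars per kWh (example rate)

        kwh = watts_saved * hours / 1000.0
        print(f"{kwh:.1f} kWh, about ${kwh * rate:.2f} per month")
        # Roughly double it once PSU inefficiency and cooling are factored in.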
    • Does anyone think those people care a lick about price per watt (as this is a green party thing)?

      Our nation is one of conveniences, not of caring whether our grandchildren have conveniences.


      You are obviously not running a server farm composed of hundreds or thousands of CPUs running at full load 24x7. Not only do you have to power those procs, but you have to cool them as well. For almost any server, the energy cost over five years is going to exceed the price of the chip itself. Energy requirements are cal
    • Does anyone think those people care a lick about price per watt (as this is a green party thing)?

      The average desktop consumer certainly doesn't. However, performance per watt is very important to two segments of the population:

      1.) Laptop users. A high performance per watt, and more importantly a low wattage in general, means you can get more things done on a single charge. The average consumer doesn't care about money, but they do care about time.

      2.) Corporations, especially those with large serve
  • Knowledge is power, and power corrupts, so knowledge of power must be very dangerous.

    May Heisenberg protect us!

  • Well... (Score:4, Insightful)

    by radish ( 98371 ) on Saturday June 10, 2006 @06:55PM (#15510730) Homepage
    Miles per gallon are hardly constant either. Uphill? Downhill? 10mph or 100? Highway or city? Same difference.
    • Re:Well... (Score:3, Informative)

      by smoker2 ( 750216 )
      That's why manufacturers base their MPG figures on something called the Urban Cycle. [vcacarfueldata.org.uk]

      This takes in slow city traffic, faster freeway traffic and top-speed travel, approximating an average consumer's car usage.

  • by Rosco P. Coltrane ( 209368 ) on Saturday June 10, 2006 @06:57PM (#15510734)
    For most users (i.e. not power users doing heavy calculations for some scientific purpose, or high-quality video editing, or raytracing), most processors provide way more power than needed, and have done so for years. Or at least, they *would* provide all that power if the software running on top of them weren't bloated, unnecessarily complex, unoptimized and badly written. And no, I'm not just talking about Windows; I'm including Linux, MacOS and all the others in the bag.

    The best proof that modern software makes modern hardware suck is that, back in the mid-eighties, I used an Atari ST to do desktop publishing, and it wasn't all that different from what I can do now with a simple PC that would look like a supercomputer back then.
    • by Burning1 ( 204959 ) on Saturday June 10, 2006 @07:53PM (#15510877) Homepage
      As a resource becomes more plentiful, uses for that resource also increase. A similar example from automotive engineering comes to mind:

      Because of advances in engineering and design, engines are far more powerful and efficient now than they were in the early 90s. Cars have far better aerodynamics. However, gas mileage has not improved. In many cases it's gotten worse since the 80s. Likewise 0-60 times haven't improved much.

      So what happened? Instead of improving the performance of your average family sedan, auto makers have added better armor, more air vents, more lights, DVD players, and more plush materials. Everything is safer and more comfortable now than it was in the 80s and 90s.

      My 86 Camry will beat your 2007 Camry in a drag race and it will get better fuel mileage. But for a 500 mile trek across California or a bad accident? I know which one I'd prefer.

      Likewise, my Pentium 4 has 16,000 times more RAM than my first computer (a C64), and 256 times the RAM of my first 486 (side note: how long before someone informs me of the amount of RAM my 486 had?)

      My 486 could write a document just as easily and with as much style as my P4. But it couldn't write a document while I was watching a subtitled MP4 movie in another window, listening to music, burning a DVD, and downloading hot lesbian pr0n from bit torrent. And it certainly couldn't do all that on dual 20 inch widescreen flat panel displays.

      Sure, software is more bloated. But like the 2007 Camry (available wherever fine cars are sold), after a long day your ass is going to be a lot more comfortable.
    • You young people have it so easy,
      when I was young....
      TTL was hot and fast
      CMOS was cool and slow
      a fave joke was about the Russians designing the world's largest microchip
      these days it's not a joke, it's reality
      CMOS is hot enough to cook an egg
      hard drives shut down from overheating
      components accelerate their aging when running hot, which they always are
      in summer you have to be in air con or have multiple fans
      I now always have a 4" fan on my CPU

      V8's are becoming dinosaurs

      the next generation is 128bit p
    • The sibling post may have used a car analogy, but he's basically right. There is a similar situation in game development.

      Right now, you have machines that will do amazingly powerful things, especially with the Next-gen coming out. So what do you do with that power?

      Quite simply, one of the things you can do is optimize less.

      For example, early FPS games were written largely in assembly in an attempt to eke out every bit of power from the system. It worked, but it was really expensive financially and broke
    • Did the Atari do dynamic WYSIWYG editing? Were you able to have twenty programs running and half a dozen services at the same time? I know there are fans of command line and character mode software, but I usually avoid that whenever possible. The old method of running one program at a time was a drag, quitting one program just to run something else.
      • Did the Atari do dynamic WYSIWYG editing? Were you able to have twenty programs running and half a dozen services at the same time?

        Yep - here's the same software running on a more recent machine [nvg.org]. My old ST ran Thing, qed, Papyrus, CAB and so on just fine - although in 640x400 monochrome. Multitasking with Geneva worked very nicely, and there was always MiNT for all the UNIXy stuff. WYSIWYG was more than possible with NVDI, which let me use Truetype fonts in all GEM applications.

        I eventually saw sense and bo
  • News? (Score:5, Informative)

    by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Saturday June 10, 2006 @07:08PM (#15510768) Homepage
    I'd think both AMD and Intel are well aware of the MIPS/watt challenge. It's not new. The problem is that CUSTOMERS still want a bazillion GHz attached to the processor because they think it will make it faster or better or something.

    I've got two x85-class Opterons sitting here at 1GHz most of the time. That's ~35W vs. ~95W. AMD seems to care about power. Intel is no worse off with the Pentium M and "Core" series (NetBurst was a mistake).

    Tom
  • Dvorak admits he trolls, Chipmakers admit your power may vary, what's next?
    Looks like everyone's coming out of the closet today.
  • I don't want to have a moving target -- like the fluctuating prices of comparing miles per gallon of diesel versus miles per gallon of gasoline.

    I don't want it to be like buying industrial lighting where you have to compare 60W bulbs with 800 lumens to 75W bulbs with 1000 lumens.

    I'm not confused by simple linear ratios: We just don't need naming conventions and measurements whose only purpose is to obfuscate easy
    comparisons that would allow for simple commodity pricing of a consumer good (which is what y

  • by Garabito ( 720521 ) on Saturday June 10, 2006 @07:46PM (#15510857)
    Why, for a given chip, does power consumption rise with clock speed? I know there's a correlation, but I'd like to know the physical relation between the two variables.
    • It has to do with the capacitance of the traces and the inefficiencies of the transistors themselves.

      Raise the clock and the charge time on the traces goes down, means you need a higher voltage. Think of filling a bottle with a small hose. If you want to fill a single bottle faster you have to increase the pressure [voltage]. Also raises the current overall if you keep it up. This is why overclockers often have to raise the voltage of the part they are OC'ing.

      Raise the clock and more transistors are swi
      • Dude, that was the WORST answer I've ever seen. Either you never took an electronics class, or you failed.

        Here's the short answer to the GP:

        A fixed amount of energy is needed for any computation (dividing a number, or flipping the output of an inverter); the amount of energy depends on the architecture and process, but just pick any value for now.

        Power is energy per unit time.

        As the frequency increases, that same amount of energy mentioned before is needed in a shorter amount of time. Hence pow
        • You're actually wrong. Energy goes up with frequency, not duration. Twice the clock for half the period doesn't require the same energy. Things get less efficient as you scale them.

          You're right, I'm not an EE. But I have worked at enough hardware firms to know that raising the clock does more than "raise the work per time period".

          Tom
          • Jesus, you're both way over-analysing this. Let's go back to basics.

            Raise the clock speed without altering the chip: more work is done per unit time.

            Therefore energy requirements per unit time increase (by definition).
            Therefore power requirement increases (by definition).

            This extra capacitance crap, etc., explains non-linear increases. But the OP wasn't asking about rate of increase, he was asking about increase, period, which can be answered with grade 9 physics.

            • Nothing wrong with being a bit thorough. I mean, why then can't I still accomplish the same unit of work at twice the clock, half the time and the same amount of energy?

              Yeah, the simple answer is "more shit is happening for a constant unit of time". The more accurate answer is that the circuit is less efficient, requiring more energy to operate at a higher frequency.

              That also explains why you can't scale indefinitely without the chip melting. If it were just a matter of work you could duty-cycle it. Sure, your thr
          • Energy goes up with frequency, not duration.

            Well...

            Frequency = 1 / Period.

            Period is duration.

            Assuming a circuit could function at twice the frequency, the SAME amount of energy is required for each edge transition, assuming a static digital CMOS circuit: each capacitor must be charged to create a field to create a channel to charge the load cap, etc etc etc... Putting aside topology, or the entirety of electrical engineering for that matter, this is essentially Feynman's lecture on the thermodynamics of com
      • "Now, if you ran a circuit at twice the clock but did the same amount of work and shut it off at T=0.5 would you still use more power? [homework question]. "

        It depends on the specific values of the clocks; chips have an optimum point. If you take a chip with the optimum at 1GHz and run it at 500MHz, at the same voltage, it will need more energy, mostly because of leakage. Now, if you run the same chip at 2GHz, you'd still need more energy, mostly because of the increased resistance of the components

    • Take a look at this:

      http://forums.anandtech.com/messageview.aspx?catid=50&threadid=1867448&STARTPAGE=1 [anandtech.com]

      I remembered seeing it a few days ago.

    • Why, for a given chip, does power consumption rise with clock speed? I know there's a correlation, but I'd like to know the physical relation between the two variables.

      When not changing state, a CMOS device dissipates almost no power. But each CMOS gate has a tiny capacitance that must be charged or discharged each time it changes state. This requires energy. The energy dissipated for each transition is essentially constant, but the number of transitions in a given time can vary. Since power=energy/time, the mor
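
      To make the relation explicit, the textbook first-order expression for CMOS dynamic power (standard background, not from the article; \alpha is the activity factor, the fraction of gates that switch each cycle):

        P_{dyn} = \alpha \, C \, V_{dd}^{2} \, f

      where C is the total switched capacitance, V_{dd} the supply voltage, and f the clock frequency. Since raising f usually also requires raising V_{dd}, power grows faster than linearly with clock speed.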

    • by Anonymous Coward
      When a gate output switches from one level to the other, there is a brief period during which a low-resistance connection is made from the power supply directly to ground. Since ALL switching is based on the clock, this is when most of the power is dissipated.
      This happens because of the way CMOS logic works. Being Complementary Metal-Oxide Semiconductor logic, every gate has p-type and n-type transistors. n-type transistors can only drive a '0' and p-types can only drive a '1', so both ar
  • Because those people with portable notebooks seem to care the most, I have a simple test: Run a "pure" benchmark like Prime95 and see how many iterations you get before it craps out.

    It seems some want to eliminate the time component from speed measurements, so you'd only care that one machine got to 110,000 calculations versus another getting to 120,000 calculations.

    With desktop machines, just hook up each computer to a 1000VA battery backup UPS and see how FAR each gets ... not whether one got to 100,000
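
    One way to automate that counting (a sketch, not Prime95 itself - just a stand-in workload that checkpoints its iteration count so the final tally survives the power-off):

      # Count iterations until the battery dies; checkpoint so the total survives.
      import math
      import os

      done = 0
      log = open("iterations.log", "w")
      while True:                            # runs until the machine loses power
          for _ in range(100000):
              math.sqrt(float(done) + 2.0)   # stand-in for a Prime95-style workload
          done += 100000
          log.seek(0)
          log.write(str(done))
          log.flush()
          os.fsync(log.fileno())             # make the count survive sudden power-off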

  • At first glance, I thought the headline said "Chipmunks" instead of "Chipmakers." For a split second I thought maybe Alvin, Simon, and Theodore had a tech column going.
  • The only reason that people are arguing over power consumption is that power consumption has become a marketing point. Two years ago it was pure performance, and no one cared how much energy was needed, mostly because electricity was essentially free, and it did not matter that for each watt of power that was wasted, another 10 watts was necessary for cooling and other secondary effects. So the lusers got sucked up into clock speed, and the arguments raged over clocking and artificial benchmarks that hardware en
    • All K8 processors have had frequency scaling; most K7s had it too, it was just disabled in the desktop parts. The P4s had frequency scaling too.

      The problem beforehand was that the designs were just inefficient. It took your K6-2 or P2 running at full tilt to keep up with demand. Scaling didn't make too much sense. Now a 500MHz K8 can cope with most usage; playing mp3s takes less than 1% of the CPU time where it used to take more than 80% on a 486...

      I wouldn't call the power savings a "new scam" or trick
      • All K8 processors have had frequency scaling

        Semprons don't, which came as a bit of a surprise when I tried to configure clockspeed control and found that it wasn't working.

        • Maybe not the K7 semprons but I thought K8s did. Did your BIOS recognize the CPU?

          Tom
          • Maybe not the K7 semprons but I thought K8s did. Did your BIOS recognize the CPU?

            The BIOS recognizes it just fine. For whatever reason, AMD either disabled clockspeed control in Socket 754 Semprons or didn't include the necessary circuitry for it (depending on whether Semprons are Athlon 64s with certain functionality (such as half of the cache) turned off or whether they're a completely different design.)

      • playing mp3s takes less than 1% of the CPU time where it used to take more than 80% on a 486

        Only the server guys ever cared about CPU usage. That's why a SCSI operation utilizing 20% CPU over 60 seconds beats an IDE operation using 30% for 50 seconds (in this case, 3 CPU-seconds saved)

        • My point was that scaling a 200MHz chip doesn't usually make sense, since you needed the speed to use the damn box. Sure you could idle it, but the savings wouldn't be as important.

          Only when we started getting into designs like the K7 and Core processors did the speed become excessive. A 2GHz K7 core was way more than capable of playing mp3s or video files while not killing the box.

          Tom
    • Power is still costing me about the same as it did years ago. I think what happened is that

      A: Companies started caring more, since they have to pay for the electricity twice (once to run their server farm, and again to remove the resulting heat).
      B: Portable computing keeps gaining in popularity, including cellphones and PDA type devices. Less power demand increases battery life and reduces weight.
      C: CPU's just started getting so hot that more and more elaborate measures were needed to cool them. Reducing power d
      • Frequency scaling came to the desktop for two reasons:

        1. When the TDP of chips hit over about 80W, heat became a problem and frequency scaling helped abate that.
        2. The tech is needed in laptop chips. With the exception of the Pentium 4, all notebook chips are just modified desktop parts. So the manufacturers made one core and saw no reason to disable that feature, especially since #1 is true.

        So frequency scaling just piggybacked its way onto the desktop. What's good for the goos
  • The same exact processor can exhibit up to 50% variation in average power usage. Manufacturing variability.
  • However, miles are miles, and gallons are gallons. There's no one simple way to measure processor performance, and measuring the amount of power output by today's chips is proving just as difficult.

    Sure, both performance and power usage of a chip will vary depending on what you do with it, so any simple one-number power:performance measure won't tell you much useful. Of course, the same thing is true with cars, too; both gas mileage and other aspects of performance (including whether it will go where you

  • Then I want to know the next step. I want to know how many tens/hundreds/thousands/millions of instructions per cycle (I henceforth trademark the analogy GIPC/MIPC/KPC for any processor performance comparison!!!!) the processor can handle, plus the comparative cycle rate (i.e. speed) of the processor. Then I'd compare that per watt. I'd think that would be better - specialized instructions or not.
  • My solution, that is:

    A standardised code segment with broad instruction-type usage and a long time to complete (to minimize differences/errors). If everybody is using the same reference instruction sequence on all processors for the same amount of time, no debate ensues. Right?
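
    A minimal sketch of that in Python, assuming your platform exposes a cumulative energy counter (the sysfs path below is the Intel RAPL interface on recent Linux kernels - an assumption about your platform; substitute whatever meter you have, and the file may need root to read):

      # Run a fixed reference workload and report iterations per joule.
      import time

      RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # assumed energy counter

      def read_energy_uj():
          # Cumulative energy in microjoules (the counter wraps; ignored here).
          with open(RAPL) as f:
              return int(f.read())

      def reference_workload(iterations):
          # The same instruction mix on every processor under test.
          acc = 0.0
          for i in range(1, iterations + 1):
              acc += (i * 2654435761 % 4294967296) ** 0.5
          return acc

      n = 5000000
      e0, t0 = read_energy_uj(), time.time()
      reference_workload(n)
      e1, t1 = read_energy_uj(), time.time()

      joules = (e1 - e0) / 1e6
      print(f"{t1 - t0:.2f} s, {joules:.1f} J, {n / joules:.0f} iterations per joule")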
  • Why hasn't anyone combined the concepts of heat pipe, heat sink, and calorimeter? Anyone who's taken high school physics knows the concept.

    The foundation's already there, in water cooling systems. A rudimentary system could be built by dropping a thermometer in the reservoir, and turning off the radiator at the beginning of each test.

    Granted, you're only measuring waste heat, but how much power does a CPU pump through data buses?
  • The article glosses over the real problems.

    The first real problem is that blade servers are so small now, but require so much power, that companies can easily fit way more compute power in a server room than can be reasonably cooled. So they need more power-efficient servers to use their server space effectively.

    And the problem isn't that power can't be measured--it can be measured just as easily as performance. Which is the problem hinted at in the article--firms focusing on the positive results they hav
  • Charge up a new battery, see how many mAh it has, and use the computer. See how long it lasts with the way you use a computer. There, that's the only number that really matters.
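
    To turn that into an estimate, a quick sketch (the battery figures are illustrative, not from any particular machine):

      # Rough runtime from battery capacity and average system draw.
      capacity_mah = 4400.0  # assumed battery capacity
      pack_volts = 14.8      # assumed pack voltage
      avg_draw_w = 22.0      # average draw for *your* workload

      watt_hours = capacity_mah / 1000.0 * pack_volts  # ~65 Wh
      print(f"estimated runtime: {watt_hours / avg_draw_w:.1f} hours")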
