Sure, very little current flows through the transistor's gate. But, the transistors themselves are imperfect switches, and so you get some current flowing from Vdd to Vss all the time anyway. For the products I tend to work on, around half or more of the power consumption comes from leakage, amazingly.
For the uninitiated: CMOS gates consist of a pair of complementary switches. One set connects Vdd (the positive voltage indicating a logic '1') to the output node, and the other set connects Vss or GND (the zero voltage indicating a logic '0') to the output node. The way CMOS works, there should only be one path from either Vdd or Vss to the output node. All other paths must be open.
The simplest example is an inverter. It has two switches. The switch from Vdd to output opens when the input is 1 and closes when the input is 0. The switch from Vss to output does the opposite: it closes when the input is 1 and opens when the input is 0.
CMOS burns power two main ways. The first and most obvious way is through switching, also called dynamic power. When the output goes to '1', the gate outputs a high voltage. This voltage then charges all of the gates connected to that output. Even if the gates don't leak, they still end up taking on a certain amount of charge due to their capacitance. The total charge taken on is V*C, where V is the voltage and C is the total capacitance of all the inputs this gate drives. Later, when the gate's output switches to 0, all that charge flows back out to ground. The more often you switch an output from 1 to 0, the more charge you ratchet from Vdd to Vss. Furthermore, while you're switching, there's often a very brief period when the two switches are both slightly closed. You can get some current racing directly from Vdd to Vss at this time.
The second, perhaps less obvious way CMOS burns power is through leakage. Modern transistors are far from perfect switches. When they're closed, they conduct, and when they're open they also conduct, just not as well. This leads to a phenomenon known as leakage. That is, even when the gates aren't switching, there's a constant current from Vdd to Vss, because the transistors haven't completely cut off the current flow. You can sometimes address this by lowering the supply voltage or using transistors with a higher threshold voltage, but those options trade speed for lower leakage.
So, while the promise of CMOS is that no current flows when gates don't switch, the actuality is that tiny transistors in modern processes aren't as good at holding up to that ideal.
How about some data? 3Q11 saw around 750K units worldwide for plasma and LCD public displays according to this link, whereas the North American and Chinese LCD and plasma TV market for 3Q12 was closer to 54M units, according to this link from the same source.
And before you cry foul because I picked different years, please note I picked the same quarter, and the peak quarter of the year, for both years. You can also look at the Y/Y growth and extrapolate the 2011 numbers from 2012. The Y/Y growth numbers were negative, meaning the TV market shrank slightly, and yet TVs still outsell public displays by about 2 orders of magnitude.
So, yeah, I was off a bit. It's 2 orders of magnitude. Still, that drives a lot more economies of scale in the TV market.
That's the total market for public displays. Now what proportion of those public displays are actually appropriate for e-Ink? And how does that compare in volume to consumer uses, such as e-readers?
Seems more likely they're watching for taggers.
That's a great question. You start to need a concept of transactions and rollback in more places. Databases already have this. Journaling filesystems already do this to an extent. (Btrfs actually COWs, so you theoretically could roll back to an older version also.)
I'm not saying you can do this everywhere, but I think it's a strategy that can find a home many places.
Speaking of iOS: Are you saying that when the battery is low, the phone should shut off without warning (saving all data), rather than give a few warnings as the battery gets low? The no-error-alert paradigm is just stupid.
My car warns me when it detects a failure, and I think it's no failure of software designers if they also warn me when things are amiss. I'd hate it if my car just tried to "handle" low fuel, low oil pressure, low tire pressure, or what-have-you, as about the only thing it could do for any of those is just stop. iOS devices are in a similar circumstance with low battery.
Are you still of the opinion that there should never be an error alert unless it's the programmer admitting some sort of failure? "I failed to program an infinite capacity battery."
Make forking exceptionally cheap, and move to a checkpoint-and-commit paradigm. Fork just before the first open(), go acquire all your resources (open(), malloc(), etc.). Depending on whether all that succeeds or part of that fails, you know which thread to kill. Kill the thread that did the open, etc. if that path failed, otherwise kill the thread that's waiting at the last checkpoint.
If that sounds at all familiar, it should. Most modern CPUs already do this in hardware. It's called speculative execution, and they do one of these forks at nearly every branch.
In this example, what if the failing function just returned NaN, and you let the NaNs propagate? I guess in some cases you care which of factorial(), zeta() or geommean() failed, but more often you care whether the expression as a whole failed or not.
Hmmm... so if I ask a program to read a file that doesn't exist, should it just create an empty document of that name? Possibly the right answer for a word processor, but quite probably the wrong answer when specifying an attachment to an email.
I think that statement needs to be clarified: An internal error alert pop-up that could happen without a hardware failure is an admission of failure on the part of the programmer, no doubt. But, if the user truly is in error, there's nothing to admit on the programmer's part when the program tells the user they're wrong. The program still has to check the user's input, though.
...write code without error.
How do I find users that won't give it incorrect or inconsistent input, or hardware that won't fail unexpectedly? I don't program the users, and I've yet to find 100% reliable hardware that never wears out.
"Never test for an error condition you don't know how to handle." -- Steinbach's Guideline for Systems Programmers.
Ah, that's a bit different. It's advice as to what level you should place your error checking. For example, if you do "fd = open(
"my terminal is a lethal teaspoon." -- Patricia O Tuama