Anybody who considers her a hate-monger is out of touch with reality.
Sorry, dude, that's sarcasm, not satire.
Wires on silicon aren't a vacuum. The dominant effect is actually RC delay. As you make wires smaller, resistance goes up (it's inversely proportional to cross-sectional area). As you place wires closer together, capacitance goes up (it's inversely proportional to the distance between the conductors). So, as geometries shrink, propagation delays for real signals in real wires on real silicon go up.
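To put rough numbers on it, here's a toy Elmore-delay calculation in C. The resistivity is copper's; the wire geometry, coupling capacitance, and per-node shrink factors are made-up illustrative values, not any real process:

    #include <stdio.h>

    int main(void) {
        const double rho = 1.7e-8;     /* copper resistivity, ohm*m */
        const double len = 1e-3;       /* a 1 mm on-chip route */
        double w = 100e-9, t = 200e-9; /* wire width and thickness, m */
        double c_per_m = 2e-10;        /* assumed coupling capacitance, F/m */

        for (int node = 0; node < 4; node++) {
            double r_per_m = rho / (w * t);  /* resistance ~ 1/area */
            /* Elmore delay of a distributed RC line: ~0.5 * R * C */
            double delay = 0.5 * (r_per_m * len) * (c_per_m * len);
            printf("w = %3.0f nm: R = %6.0f ohm/mm, delay = %6.1f ps\n",
                   w * 1e9, r_per_m * 1e-3, delay * 1e12);
            w *= 0.7;  t *= 0.7;   /* ~0.7x linear shrink per node */
            c_per_m *= 1.2;        /* tighter spacing -> more coupling */
        }
        return 0;
    }

Even with the same 1 mm route, the delay climbs every "node" because R rises faster than anything else falls.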
I won't even get into the buffers required to recondition the signal on long routes... (Someone elsewhere in the thread already did.)
Can you propose a non-stochastic process that produces alpha particles, and a way of constructing a family of logic gates out of that process?
Sure, throughput is what matters most for operations you can parallelize. However, as Amdahl's Law cruelly reminds us, there are always parts of the problem that remain serial, and they put an upper bound on performance. You can't parallelize the traversal of a linked list, no matter how hard you try. You have to invent new algorithms and programming techniques. (In the specific case of linked lists, there are other options that trade space for efficiency, such as skip lists.)
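For the arithmetic behind that upper bound, here's a small C sketch of Amdahl's Law. The 5% serial fraction and the core counts are assumed, purely illustrative numbers:

    #include <stdio.h>

    int main(void) {
        const double s = 0.05;  /* assumed serial fraction */
        for (int n = 1; n <= 1024; n *= 4) {
            /* Amdahl's Law: speedup = 1 / (s + (1 - s) / n) */
            double speedup = 1.0 / (s + (1.0 - s) / n);
            printf("%5d cores -> %5.2fx speedup (hard limit: %.0fx)\n",
                   n, speedup, 1.0 / s);
        }
        return 0;
    }

With a 5% serial fraction, 1024 cores buy you less than 20x: the limit is 1/s no matter how much hardware you throw at it.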
Gustafson's Law does offer some hope: as we build more capable machines, we'll tackle bigger problems to utilize those machines. That's how, for example, battery-powered cell phones now get wireless data speeds that would make the wired-modem users of just 10-15 years ago jealous.
But Gustafson's Law only serves as a counterpoint to Amdahl's Law to the extent that you tackle bigger problems, as opposed to trying to make current problems take less time and energy.
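Here's the Gustafson side of the same arithmetic: if the workload grows with the machine, the scaled speedup S(N) = N - s(N - 1) keeps climbing instead of flattening at 1/s. Same assumed 5% serial fraction as above, illustrative only:

    #include <stdio.h>

    int main(void) {
        const double s = 0.05;  /* serial fraction of the scaled workload */
        for (int n = 1; n <= 1024; n *= 4) {
            /* Gustafson's Law: scaled speedup = n - s * (n - 1) */
            double scaled = n - s * (n - 1);
            printf("%5d cores -> %7.2fx scaled speedup\n", n, scaled);
        }
        return 0;
    }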
And you think printf() and strtol() are major bottlenecks worth dedicated silicon area why?
Modern CPUs already have many accelerators for high-end functions, such as numerical computation, cryptography, and the all-important memcpy. (Memory copies are a traditional bottleneck, and general enough that they can be easily offloaded.) These come in two forms: specialized SIMD/vector instruction sets, and dedicated blocks for high-level functions that take multiple microseconds. An example of the former is the SIMD-oriented AVX instruction set found on modern x86 chips. As for the latter, chips aimed at high-end signal processing often have discrete blocks such as FFT accelerators, and chips aimed at network tasks (especially DPI) often have regular-expression engines.
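As a concrete taste of the SIMD flavor, here's a minimal AVX sketch in C (x86 only; compile with -mavx). It's deliberately trivial; real code would also worry about alignment and loop remainders:

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        float out[8];

        __m256 va = _mm256_loadu_ps(a);     /* load 8 floats at once */
        __m256 vb = _mm256_loadu_ps(b);
        __m256 vc = _mm256_add_ps(va, vb);  /* 8 adds, one instruction */
        _mm256_storeu_ps(out, vc);

        for (int i = 0; i < 8; i++)
            printf("%.0f ", out[i]);        /* prints 9 eight times */
        printf("\n");
        return 0;
    }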
The problem with accelerator blocks is that they do take up area. And if they're powered up, they leak. Leakage current is a significant factor in modern designs. To get faster transistors, you need to drive their threshold voltage down, and as you lower the threshold voltage, leakage current goes up exponentially. So that circuit had better bring a lot of bang for the buck if it's going to be sitting there taking up space and leaking.
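To see how steep that exponential is, here's a back-of-envelope C sketch using the standard subthreshold model, I_leak proportional to exp(-Vth / (n * kT/q)). The slope factor n = 1.5 is an assumed textbook-ish value; only the relative scaling matters here:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double vt = 0.026;  /* thermal voltage kT/q at 300 K, volts */
        const double n  = 1.5;    /* assumed subthreshold slope factor */
        const double base = exp(-0.50 / (n * vt));  /* leakage at Vth = 0.5 V */
        for (int i = 0; i <= 5; i++) {
            double vth = 0.50 - 0.05 * i;
            /* I_leak is proportional to exp(-Vth / (n * kT/q)) */
            printf("Vth = %.2f V -> relative leakage %7.1fx\n",
                   vth, exp(-vth / (n * vt)) / base);
        }
        return 0;
    }

With these numbers, every ~90 mV you shave off Vth costs you roughly a decade of leakage.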
Another issue with dedicating area to fixed functions is the impact it has on the distance between functions on the die. In the Old Days, you could get anywhere on the die in a single clock cycle. With modern designs and modern clock rates, cross-die communication is slow, taking many, many cycles. So when you plop down your custom accelerator, you have to figure out where to put it. Do you put it right in the middle of the other computational units, slowing down the communication between them (either lowering the clock rate or increasing cycle counts), or do you put it on the other side of the cache, meaning it takes several cycles to send it a request and several more to see the result?
This is why many custom accelerator blocks out there today focus on meaty workloads. A large FFT still takes a good bit of time to execute, and there's usually other work the main CPU can do in the meantime, so the communication overhead doesn't tank your performance. printf(), on the other hand, generally shows up right in the middle of a bunch of other serial steps; you can't overlap it with anything. Hauling off to a printf() accelerator block would make zero sense. If you're really spending that much time in printf(), you're better off rewriting the code to use a less general facility.
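Here's a rough simulation of why the overlap matters, with a worker thread standing in for the accelerator (the "submit"/"collect" pattern and fake_fft are illustrative stand-ins, not any real offload API). Compile with -pthread:

    #include <pthread.h>
    #include <stdio.h>

    static double sink;  /* keeps the compiler from deleting the work */

    /* Stands in for a long-running accelerator job (e.g. a large FFT). */
    static void *fake_fft(void *arg) {
        (void)arg;
        double acc = 0;
        for (long i = 1; i < 50000000; i++) acc += 1.0 / i;
        sink = acc;
        return NULL;
    }

    int main(void) {
        pthread_t accel;
        pthread_create(&accel, NULL, fake_fft, NULL);  /* "submit" the job */

        double acc = 0;  /* meanwhile, do independent work on the CPU */
        for (long i = 1; i < 50000000; i++) acc += 1.0 / (i + 1);

        pthread_join(accel, NULL);  /* "collect" the result */
        printf("both done, overlapped: %.3f %.3f\n", sink, acc);
        return 0;
    }

The dispatch and return latency hides behind the independent work. If there's no independent work, as with a printf() wedged between serial steps, the round trip is pure loss.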
A final issue with dedicated hardware is that you can't patch it. Someone finds a bug in your printf() and you're back to using a library version. I could go on, but I think I've made my point.
That's true for active power, which scales with the square of the supply voltage (dynamic switching power goes as C*V^2*f). Leakage power is even worse: its dependence looks closer to exponential. I've seen chips for which leakage accounted for close to half the power budget.
I only enter mine into two, but both are crap.
The crappier of the two, amazingly, was not written in-house. We apparently bought that turd! (In case you're curious, it's a site-customized/bastardized version of SumTotal Unified Workforce Interface, assuming I found the link to the correct turd. It's non-modular!)
Maybe it's AT&T's network, or maybe my phone (BB10), but the videos often don't load quickly enough for me to notice them before it's too late. My only hint is that the browser gets strangely unresponsive, and then five seconds later it pops over to full-screen nonsense.
I've been known to just kill the browser app outright when that happens, as it's quicker than trying to get the video player to quit.
I freely admit that some of the trouble may be phone specific. Still, auto-play videos suck.
There are a couple of popular news sites that seem to have moved to HTML5 videos that don't need a Flash plugin. I don't know how to block their videos on my phone. Turning off Flash doesn't help, since Flash isn't involved.
The browser does have a switch between 'mobile' mode, which gives me a turn-of-the-century web browsing experience (not what I want), and 'desktop' mode, which is usually (but not always) much better.
Unfortunately, there isn't a way to determine that a site is sleazy before clicking a link to it.
Oh, and give me a way to say "Never play a video under any circumstances, unless I explicitly say 'play this video.'" KTHXBYE.