
Comment: Re:Lightfoot (Score 1) 168

by Mr Z (#47686457) Attached to: Processors and the Limits of Physics

Wires on silicon aren't a vacuum. The dominant effect is actually RC delay. As you make wires smaller, the resistance goes up (inversely proportional to cross-sectional area). As you make the wires closer together, capacitance goes up (inversely proportional to distance between the conductors). So, as geometries shrink, propagation delays for real signals in real wires on real silicon go up.
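To make the scaling concrete, here's a toy first-order model (all numbers and constants are made up for illustration; real parasitic extraction also models fringing capacitance, via resistance, and so on):

```python
# Toy model of wire RC delay scaling (first-order sketch, arbitrary units).
def wire_delay(length, width, height, spacing, rho=1.0, eps=1.0):
    """Relative RC delay for a wire of the given geometry."""
    resistance = rho * length / (width * height)   # R goes up as cross-section shrinks
    capacitance = eps * length * height / spacing  # C goes up as wires crowd together
    return resistance * capacitance                # delay ~ R * C

full = wire_delay(length=100, width=1.0, height=1.0, spacing=1.0)
shrunk = wire_delay(length=100, width=0.5, height=1.0, spacing=0.5)
print(shrunk / full)  # halving width and spacing quadruples the delay: 4.0
```

Shrink the geometry by 2x in both width and spacing and the same-length wire gets roughly 4x slower, which is the effect described above.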

I won't even get into buffers which are required to recondition the signal on long routes... (Someone elsewhere on the thread already did.)

Comment: Re:can't cross chip in one clock. big deal. (Score 1) 168

by Mr Z (#47686423) Attached to: Processors and the Limits of Physics

Sure, throughput is what matters most for operations you can parallelize. However, as Amdahl's Law cruelly reminds us, there are always parts of the problem that remain serial, and they put an upper bound on performance. You can't parallelize the traversal of a linked list, no matter how hard you try. You have to invent new algorithms and programming techniques. (In the specific case of linked lists, there are other options that trade space for efficiency, such as skiplists.)
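The bound is easy to see if you plug numbers into Amdahl's formula (a minimal sketch, standard textbook form):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's Law: overall speedup is capped by the serial fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even with 95% of the work parallelized, the ceiling is 1/0.05 = 20x:
print(amdahl_speedup(0.95, 16))    # ~9.1x
print(amdahl_speedup(0.95, 1024))  # ~19.6x, creeping toward the 20x cap
```

Going from 16 workers to 1024 barely doubles the speedup; the 5% serial part dominates.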

Gustafson's Law does offer some hope: As we build more capable machines, we'll tackle bigger problems to utilize those machines. That's how we're able to, for example, get wireless data speeds on our cell phones operating on batteries that would make wired modem users of just 10-15 years ago jealous.

But, Gustafson's Law only serves as a counterpoint to Amdahl's Law to the extent that you tackle bigger problems, as opposed to trying to reduce current problems to take less time and energy.
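For comparison, Gustafson's scaled speedup with the same 5% serial fraction looks far rosier, precisely because it assumes the problem grows with the machine (again a minimal sketch, standard form of the law):

```python
def gustafson_speedup(parallel_fraction, n_workers):
    """Gustafson's Law: scaled speedup when the problem grows with the machine."""
    serial = 1.0 - parallel_fraction
    return n_workers - serial * (n_workers - 1)

print(gustafson_speedup(0.95, 1024))  # ~973x: bigger machine, bigger problem
```

Same 95/5 split, but ~973x instead of Amdahl's hard 20x cap, because the serial part stays fixed-size while the parallel part scales up.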

Comment: Re:Why not integrate entire C-library functions? (Score 1) 168

by Mr Z (#47686335) Attached to: Processors and the Limits of Physics

And you think printf() and strtol() are major bottlenecks worth dedicated silicon area why?

Modern CPUs already have many accelerators for high-end functions, such as numerical computations, cryptography, and the all-important memcpy. (Memory copies are a traditional bottleneck, and general enough that they can be easily offloaded.) They come in two forms: specialized SIMD/vector instruction sets, and dedicated blocks for high-level functions that take multiple microseconds. An example of the former is the SIMD-oriented AVX instruction set found on modern x86 chips. As an example of the latter, chips aimed at high-end signal processing often have discrete blocks such as FFT accelerators. Others aimed at network tasks (especially DPI) have regular expression engines.
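As a rough analogy for the SIMD style of acceleration (this is NumPy standing in for AVX, not actual vector assembly): one call operates on a whole array at once inside a vectorized kernel, instead of a Python-level loop touching one element per iteration.

```python
import numpy as np

data = np.arange(100_000, dtype=np.float64)

def scalar_sum(xs):
    """One element per iteration, like scalar code on a CPU."""
    total = 0.0
    for x in xs:
        total += x
    return total

# Same result, but the work is done many elements at a time
# inside an optimized (SIMD-capable) kernel:
vector_sum = data.sum()
print(vector_sum)  # 4999950000.0
```

The two produce the same answer; the difference is how many elements each "instruction" touches, which is the whole point of vector units.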

The problem with accelerator blocks is that they do take up area. And if they're powered up, they leak. Leakage current is a significant factor in modern designs. To get faster transistors, you need to drive their threshold voltage down. As you lower the threshold voltage, their leakage current goes up exponentially. So, that circuit better be bringing a lot of bang for the buck if it's going to be sitting there taking up space and leaking.
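The exponential relationship is the standard subthreshold leakage behavior; here's a toy sketch of it (the constants are illustrative, not from any particular process):

```python
import math

def relative_leakage(vth, n=1.5, vt=0.026):
    """First-order subthreshold leakage: I_leak ~ exp(-Vth / (n * vT)).

    vth: threshold voltage in volts; n: subthreshold slope factor;
    vt: thermal voltage at room temperature (~26 mV). Illustrative only.
    """
    return math.exp(-vth / (n * vt))

# Dropping Vth from 0.4 V to 0.3 V to get faster transistors
# raises leakage by roughly an order of magnitude:
print(relative_leakage(0.3) / relative_leakage(0.4))  # ~13x
```

A modest 100 mV reduction in threshold voltage buys speed at the cost of roughly 13x the leakage in this toy model, which is why idle accelerator blocks are so expensive to leave powered.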

Another issue with dedicating area to fixed functions is the impact it has on distance between functions on the die. In the Old Days, you could get anywhere on the die in a single clock cycle. With modern designs and modern clock rates, cross-die communication is slow, taking many, many cycles. So, when you plop down your custom accelerator, you have to figure out where to put it. Do you put it right in the middle of the rest of the computational units, slowing down the communication between them (either lowering clock rate or increasing cycle counts), or do you put it on the other side of the cache, meaning it takes several cycles to send it a request and several cycles to see the result?

This is why many custom accelerator blocks out there today focus on meaty workloads. A large FFT still takes a good bit of time to execute, and there's usually other work the main CPU can do while it executes. Thus, the communication overhead doesn't tank your performance. printf(), on the other hand, generally shows up right in the middle of a bunch of other serial steps. You can't overlap that with anything. Hauling off to a printf() accelerator block generally would make zero sense. If you're really spending that much time in printf(), you're better off rewriting the code to use a less general facility.

A final issue with dedicated hardware is that you can't patch it. Someone finds a bug in your printf() and you're back to using a library version. I could go on, but I think I've made my point.

Comment: Re:Why? (Score 1) 278

by Mr Z (#47669053) Attached to: Ask Slashdot: Why Are Online Job Applications So Badly Designed?

I only enter mine into two, but both are crap.

The crappier of the two, amazingly, was not written in-house. We apparently bought that turd! (In case you're curious, it's a site-customized/bastardized version of SumTotal Unified Workforce Interface, assuming I found the link to the correct turd. It's non-modular!)


Idiot Leaves Driver's Seat In Self-Driving Infiniti, On the Highway 406

Posted by samzenpus
from the bad-idea dept.
cartechboy writes: Self-driving cars are coming; that's nothing new. People are somewhat nervous about this technology, and that's also not news. But it appears self-driving cars are already here, and one idiot was dumb enough to climb out of the driver's seat while his car cruised down the highway. The car in question is a new Infiniti Q50, which has Active Lane Control and adaptive cruise control, both of which essentially turn the Q50 into an autonomous vehicle at highway speeds. While impressive, taking yourself out of a position where you can quickly and safely regain control of the car if needed is simply dumb. After watching the video, it's abundantly clear why people should be nervous about autonomous vehicles. It's not the cars and tech we need to worry about; it's idiots like this guy.

Comment: Re:Kill auto-play videos (Score 1) 316

by Mr Z (#47614923) Attached to: Verizon Throttles Data To "Provide Incentive To Limit Usage"

Maybe it's AT&T's network, or maybe my phone (BB10), but the videos often don't load quickly enough for me to notice them until it's too late. My only hint is that the browser gets strangely unresponsive, and then 5 seconds later it pops over to full screen nonsense.

I've been known to just kill the browser app outright when that happens, as it's quicker than trying to get the video player to quit.

I freely admit that some of the trouble may be phone specific. Still, auto-play videos suck.

Comment: Re:Kill auto-play videos (Score 1) 316

by Mr Z (#47614223) Attached to: Verizon Throttles Data To "Provide Incentive To Limit Usage"

There are a couple of popular news sites that seem to have moved to HTML 5 videos that don't need a Flash plugin. I don't know how to block their videos on my phone. Turning off Flash doesn't help, since it isn't involved.

The browser does have a switch between 'mobile' mode, which gives me a turn-of-the-century web browsing experience (not what I want), and 'desktop' mode, which is usually (but not always) much better.

Comment: Kill auto-play videos (Score 1) 316

by Mr Z (#47612297) Attached to: Verizon Throttles Data To "Provide Incentive To Limit Usage"

I'm not on Verizon, nor am I on an unlimited plan. Still, I seem to hit my bandwidth cap more regularly these days. What seems to kill my utilization these days are websites with auto-play videos that I can't kill simply by blocking Flash.

What's really annoying is that the videos load in the background, and on a few occasions, have started playing after I've already locked the display and set my phone down. I only notice them because my phone starts making noise (when I don't have it set to 'silent'). It kills my battery and eats the bits I paid for on the assumption I'd be using them for things I actually wanted.

I honestly don't have a problem with throttling actual abusers. But, modern website design seems to make "abusers" out of more of us than there otherwise would be.

For the unlimited crowd, perhaps there should be tiers there, also. How about two levels? The lower tier would be "no overage fees" unlimited, meaning you don't get random dings for going over arbitrary caps, but you might get throttled occasionally. Rather than a hard cap, there's a soft limit. The upper tier would be "no limits, no throttling," meaning you could stream all the video and download all the torrents you want, but you pay a significantly higher fee for it. I'd happily sign up for the former service just to avoid the fees associated with the occasional data-heavy month. Folks who want to treat their phone as a cable-less cable modem can pay a few bucks more to avoid the throttle.

I think the problem currently is that 95%+ fall into the first group, and the remaining 5% or fewer really need a different class of service. The current "unlimited" label doesn't really make a sufficient distinction between the two.

Of course, the cynical would point out that such a tiering system would open itself to a whole new brand of marketing abuse...
