
Comment Web-skewed (Score 1) 241

Anyone can put up a web page, and JavaScript and PHP have a large footprint there. (I guess Java, on the enterprise server side?) It's not hard to imagine there are lots of folks who have to deal with these languages as part of their larger duties, but aren't really trained as programmers in any traditional sense. That could fuel a bunch of StackOverflow traffic for sure...

Whichever ranking you look at will be skewed by the methodology. It feels like web-oriented languages are overemphasized in this cut.

Of course, my own worldview is skewed, too. I deal more with low-level hardware, OS interactions, etc. You won't find a lick of JavaScript or PHP anywhere near any of the stuff I work on daily. Lots of C and C++, some Go and Python.

Comment Re:It does almost nothing very very fast (Score 1) 205

Ah, OK, so it is more or less the latest version of ASaP/ASaP2. I just made a post up-thread about my memory of ASaP. It looked interesting, but as you point out, it has some real practical issues.

At the time we spoke with them, it sounded like whenever you loaded an algorithm chain, you had to map it to the specific chip you were going to run it on, to account for bad cores, different core speeds, etc. Each core has its own local oscillator. Whee...

Comment Re:I guess this is great (Score 1) 205

I'm familiar with Dr. Baas' older work (ASaP and ASaP2). He presented his work to a team of processor architects I was a part of several years ago.

At least at that time (which, as I said, was several years ago), one class of algorithms they were looking at was signal processing chains, where the processing could be described as a directed graph of steps. The ASaP compiler would then decompose the computational kernels so that the compute / storage / bandwidth requirements were roughly equal in each subdivision, and then allocate nodes in the resulting, reduced graph to processors in the array.

(By roughly equal, I mean that each core would hit its bottleneck at roughly the same time as the others whenever possible, whether it be compute or bandwidth. For storage, you were limited to the tiny memory on each processor, unless you grabbed a neighbor and used it solely for its memory.)
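Purely to illustrate that balancing idea (my own toy model in C, not the ASaP compiler; all stage numbers and constants are invented): score each stage by whichever resource saturates first. A real mapper would then keep splitting the slowest stage across more cores until the scores even out; this sketch just computes the scores it would balance.

#include <stdio.h>

/* Toy model: a stage's throughput limit is whichever resource
   saturates first -- compute (cycles per sample) or I/O (bytes per
   sample over the link bandwidth). All numbers invented. */
typedef struct {
    const char *name;
    double cycles_per_sample;   /* compute demand */
    double bytes_per_sample;    /* bandwidth demand */
} Stage;

#define CORE_HZ      1.0e9      /* assumed core clock */
#define LINK_BYTES_S 4.0e8      /* assumed neighbor-link bandwidth */

int main(void)
{
    Stage chain[] = {
        { "filter",  400, 8 },
        { "fft",    1600, 8 },  /* slowest: the stage a mapper would split */
        { "demap",   300, 4 },
    };
    int n = (int)(sizeof chain / sizeof chain[0]);

    /* The pipeline runs at the speed of its slowest stage, so these
       per-stage times are what the decomposition tries to equalize. */
    for (int i = 0; i < n; i++) {
        double compute = chain[i].cycles_per_sample / CORE_HZ;
        double io      = chain[i].bytes_per_sample / LINK_BYTES_S;
        printf("%-8s %.2e s/sample (%s-bound)\n", chain[i].name,
               compute > io ? compute : io,
               compute > io ? "compute" : "bandwidth");
    }
    return 0;
}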

The actual array had a straightforward Manhattan routing scheme, where each node could talk to its neighbors, or bypass a neighbor and reach two nodes away (IIRC), with a small latency penalty. Communication was scoreboarded, so each processor ran when it had data and room in its output buffer, and would locally stall if it couldn't input or output. The graph mapping scheme was pretty flexible, and it could account for heterogeneous core mixes. For example, you could have a few cores with "more expensive" operations needed by only a few stages of the algorithm. Or, interestingly, it could avoid bad cores, routing around them.
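The run-when-ready part is easy to picture in code. A minimal sketch of the idea in C (my own toy model of one core and its FIFOs, not ASaP's actual hardware interface):

#include <stdbool.h>
#include <stdio.h>

/* Toy single-core model of scoreboarded dataflow execution: the core
   fires only when it has an input and room for the result; otherwise
   it stalls locally. Types and sizes are invented. */
typedef struct { int buf[16]; int head, tail, count; } Fifo;

static bool fifo_empty(const Fifo *f) { return f->count == 0; }
static bool fifo_full(const Fifo *f)  { return f->count == 16; }
static int  fifo_pop(Fifo *f)
{ int v = f->buf[f->head]; f->head = (f->head + 1) % 16; f->count--; return v; }
static void fifo_push(Fifo *f, int v)
{ f->buf[f->tail] = v; f->tail = (f->tail + 1) % 16; f->count++; }

static int kernel(int x) { return x * x; }  /* this core's loop kernel */

/* One "cycle": fire if input is available and output has room;
   otherwise stall locally. No global clock or barrier involved. */
static void core_step(Fifo *in, Fifo *out)
{
    if (!fifo_empty(in) && !fifo_full(out))
        fifo_push(out, kernel(fifo_pop(in)));
}

int main(void)
{
    Fifo in = {0}, out = {0};
    for (int i = 1; i <= 4; i++) fifo_push(&in, i);
    for (int i = 0; i < 8; i++)  core_step(&in, &out); /* extra steps just stall */
    while (!fifo_empty(&out)) printf("%d ", fifo_pop(&out)); /* 1 4 9 16 */
    printf("\n");
    return 0;
}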

It was a GALS design (Globally Asynchronous, Locally Synchronous), meaning that each of the cores ran at a slightly different frequency. That alone makes the cores slightly heterogeneous. IIRC, the mapping algorithm could take that into account as well. In fact, as I recall, you pretty much needed to remap your algorithm to the specific chip you had in-hand to ensure best operation.
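Again as a toy illustration (invented numbers and a much simpler policy than theirs): a per-chip mapping pass only needs each core's measured clock and a bad-core mask, and can put the heaviest kernels on the fastest good cores.

#include <stdio.h>

/* Toy per-chip mapping: core clocks differ chip to chip under GALS,
   and a 0 entry marks a bad core to route around. One kernel per
   core; heaviest kernels listed first. All numbers invented. */
#define NCORES 8

int main(void)
{
    double core_hz[NCORES] = { 1.02e9, 0.98e9, 1.05e9, 0,   /* 0 = bad core */
                               1.00e9, 0.97e9, 1.03e9, 0.99e9 };
    double kernel_cycles[] = { 1600, 400, 300 };
    int nk = 3;

    for (int k = 0; k < nk; k++) {
        int best = -1;
        for (int c = 0; c < NCORES; c++)          /* fastest unused good core */
            if (core_hz[c] > 0 && (best < 0 || core_hz[c] > core_hz[best]))
                best = c;
        printf("kernel %d (%g cycles) -> core %d (%.2f GHz)\n",
               k, kernel_cycles[k], best, core_hz[best] / 1e9);
        core_hz[best] = -core_hz[best];           /* mark core as used */
    }
    return 0;
}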

The examples we saw were familiar to the business I was in (DSP): WiFi router stacks, various kinds of modem processing pipelines, and I believe some video processing pipelines. The processors themselves had very little memory, and in fact some algorithms would borrow a neighboring core just for its RAM, if they needed it for intermediate results or lookup tables. I think FFT was one example, where the sine tables ended up stored in the neighbor.

That mapping technology reminds me quite a lot of synthesis technologies for FPGAs, or maybe the mapping technologies used to compile a large design for simulation on a box like Cadence's Palladium. The big difference is granularity. Instead of lookup-table (LUT) cells and gate-level mapping, you're operating at the level of a simple loop kernel.

Lots of interesting workloads could run on such a device, particularly if they have heterogeneous compute stages. Large matrix computations aren't as interesting: they need to touch a lot of data, and they're doing the same basic operations across all the elements. So it doesn't serve the lower levels of the machine learning / machine vision stacks well. But the middle layer, which focuses on decision-guided computation, may benefit from large numbers of nimble cores that can dynamically load balance a little better across the whole net.

I haven't read the KiloCore paper yet, but I suspect it draws on the ASaP/ASaP2 legacy. The blurb certainly reminds me of that work.

And what's funny is, about two days before they announced KiloCore, I was describing Dr. Baas' work to someone else. I shouldn't have been surprised he was working on something interesting.

Comment Re:Yes. (Score 1) 143

Came here to say the same thing. The nice thing about a compact proof is that it may generalize to other situations or offer greater insights. This is certainly not a compact proof. But to say it's not a proof at all is ludicrous. It's a very explicit and detailed proof.

It's the difference between adding up the numbers 1 through 100 sequentially (perhaps by counting on your fingers, even) and using Gauss' insight to take a shortcut. The computer didn't take any insight-yielding shortcuts, but it still got the answer.
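To make the analogy concrete in trivial C: both of these print 5050, but only one of them embodies any insight.

#include <stdio.h>

int main(void)
{
    int n = 100;

    /* The brute-force way: visit every number, like the SAT solver
       grinding through its enormous case split. */
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;

    /* Gauss' shortcut: pair 1+100, 2+99, ... giving n*(n+1)/2. */
    int closed = n * (n + 1) / 2;

    printf("%d %d\n", sum, closed);   /* both are 5050 */
    return 0;
}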

________

(And yes, the Gauss story is probably apocryphal; but the difference between the approaches is still what I'm getting at.)

(I say "insight-yielding shortcut" to distinguish it from the many heuristics that modern SAT solvers use, including the one used here.)

Comment Re:Solar Roadway Bull$it (Score 1) 407

Dave's argument starts with real-world numbers for solar insolation and PV conversion efficiency to establish a baseline. The exact details of a specific implementation won't change the broad conclusion: the energy balance alone doesn't make sense, even if you take out the gee-whiz features of the Solar Freakin' Roadways design such as the LEDs and networking.

When you add all the other stuff on top, it only gets worse.

Fundamental issues: Only so much sun hits the earth, and PV cells only convert a certain fraction to usable energy. When you mount them flat on the ground, you reduce their efficiency further because they're not perpendicular to the incoming light. When you put them under thick enough glass to support real physical loads such as cars and trucks, you lose even more. And when you distribute them over a large area, transmission losses become a Big Deal.
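To put rough numbers on that stacking of losses (my own round figures, not Dave's exact ones; every multiplier here is an assumption):

#include <stdio.h>

/* Back-of-envelope energy balance. Each factor is a loss the roadway
   design stacks on top of an ordinary panel; all values are rough
   assumptions for illustration. */
int main(void)
{
    double insolation   = 1000.0; /* W/m^2, roughly, clear-sky peak */
    double pv_eff       = 0.20;   /* decent panel efficiency */
    double flat_mount   = 0.70;   /* not tilted toward the sun (guess) */
    double thick_glass  = 0.80;   /* load-bearing textured glass (guess) */
    double transmission = 0.90;   /* hauling power off a long, thin array (guess) */

    double w_per_m2 = insolation * pv_eff * flat_mount
                    * thick_glass * transmission;
    printf("~%.0f W/m^2 at peak, before soiling, shading, traffic...\n",
           w_per_m2);  /* ~101 W/m^2, vs. ~200 for the same panel tilted */
    return 0;
}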

I'm personally skeptical you could build solar panels that would withstand actual vehicle traffic, at least the way we build roads here in the US. Real world roads aren't flat, and they change shape over time as they wear and as the road bed settles and degrades. But real world glass isn't very plastic, and won't conform to a changing surface. It's more likely to crack and break into many pieces. Likewise for the PV cells under it. You'd have to put some beefy steel plates under these to guarantee a sufficiently flat mounting surface to support the load-bearing glass.

Comment Doesn't this just affect Chrome? (Score 1) 136

Seems like this should just affect Chrome / Chromium and anything derived from them, as it's an implementation issue in the V8 JavaScript engine. (V8 is the JavaScript engine in Chrome.)

That is, it's not a bug in the JavaScript / ECMAScript standard itself (as the headline implies), but rather a bug in one company's implementation.

Compare/contrast with the comically bad PRNG shown as the sample implementation in the C standard:

static unsigned long int next = 1;

int rand(void)   // RAND_MAX assumed to be 32767
{
    next = next * 1103515245 + 12345;
    return (unsigned int)(next/65536) % 32768;
}

Thankfully, though, this is just an example, and not required by the standard; but many simple C libraries have used essentially that implementation. It's got plenty of problems. The low-order bits of the state are nearly worthless: the lowest bit simply alternates every step, which is why the sample discards the bottom 16 bits before returning. Implementations that return the low bits directly (next % 32768) always alternate between even and odd values: if the last value was odd, the next value is even....
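You can see the even/odd problem directly if you build the variant that returns the low bits (a quick demo I wrote, not any particular libc):

#include <stdio.h>

/* Demo of why the low bits of this LCG are bad: the lowest bit of the
   state flips every step, so a rand() that returns next % 32768
   strictly alternates between even and odd results. */
static unsigned long next = 1;

static int bad_rand(void)   /* returns the LOW bits: broken */
{
    next = next * 1103515245 + 12345;
    return (int)(next % 32768);
}

int main(void)
{
    for (int i = 0; i < 10; i++)
        printf("%d", bad_rand() % 2);   /* prints 0101010101 */
    printf("\n");
    return 0;
}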

Comment Set *top* box? (Score 1) 153

So... does anyone actually put a set top box on top of their TV set these days? Once upon a time, TVs were deep enough front-to-back to support this; these days, most aren't.

Or is this a term that was once accurate, but will never be accurate again, like "dialing" a phone? It's been a long time since phones had dials, unless they're being purposefully retro.
