Comment Web-skewed (Score 1) 241

Anyone can put up a web page, and Javascript and PHP have a large footprint there. (I guess Java, on the enterprise server side?) It's not hard to imagine there are lots of folks who have to deal with these languages as part of their larger duties, but who aren't really trained as programmers in any traditional sense. That could fuel a bunch of StackOverflow traffic for sure...

Whichever ranking you look at will be skewed by the methodology. It feels like web-oriented languages are overemphasized in this cut.

Of course, my own worldview is skewed, too. I deal more with low-level hardware, OS interactions, etc. You won't find a lick of Javascript or PHP anywhere near any of the stuff I work on daily. Lots of C, C++, some Go and Python.

Comment Re:It does almost nothing very very fast (Score 1) 205

Ah, OK, so it is more or less the latest version of ASaP/ASaP2. I just made a post up-thread about my memory of ASaP. It looked interesting, but as you point out, it has some real practical issues.

At the time we spoke with them, it sounded like whenever you loaded an algorithm chain, you had to map it to the specific chip you were going to run it on, to account for bad cores, different core speeds, etc. Each core has a local oscillator. Whee...

Comment Re:I guess this is great (Score 1) 205

I'm familiar with Dr. Baas' older work (ASaP and ASaP2). He presented his work to a team of processor architects I was a part of several years ago.

At least at that time (which, as I said, was several years ago), one class of algorithms they were looking at was signal processing chains, where the processing could be described as a directed graph of steps. The ASaP compiler would then decompose the computational kernels so that the compute / storage / bandwidth requirements were roughly equal in each subdivision, and then allocate nodes in the resulting reduced graph to processors in the array.

(By roughly equal, I mean that each core would hit its bottleneck at roughly the same time as the others whenever possible, whether it be compute or bandwidth. For storage, you were limited to the tiny memory on each processor, unless you grabbed a neighbor and used it solely for its memory.)
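For illustration only, the balancing idea is something like this greedy partitioning sketch. The stage names and costs are made up, and this toy only balances compute cost, not the storage or bandwidth dimensions the real compiler also juggled:

```python
# Hypothetical sketch of the balancing idea: split a chain of kernels across
# cores so each core hits its bottleneck at about the same time. This is NOT
# the ASaP compiler; stage names and costs are invented for the demo.

def assign_stages(stage_costs, n_cores):
    """Greedily pack pipeline stages onto cores, keeping per-core load even."""
    cores = [[] for _ in range(n_cores)]
    loads = [0.0] * n_cores
    # Longest-processing-time first: place each heavy stage on the lightest core.
    for stage, cost in sorted(stage_costs.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))
        cores[i].append(stage)
        loads[i] += cost
    return cores, loads

stages = {"fir": 4.0, "fft": 8.0, "demap": 2.0, "viterbi": 6.0, "crc": 1.0}
cores, loads = assign_stages(stages, 3)
print(loads)  # roughly even per-core work
```

The real mapper also had to respect the chip's topology and per-core quirks, which is where it starts looking like FPGA place-and-route rather than simple bin packing.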

The actual array had a straightforward Manhattan routing scheme, where each node could talk to its neighbors, or bypass a neighbor and reach two nodes away (IIRC), with a small latency penalty. Communication was scoreboarded, so each processor ran when it had data and room in its output buffer, and would locally stall if it couldn't input or output. The graph mapping scheme was pretty flexible, and it could account for heterogeneous core mixes. For example, you could have a few cores with "more expensive" operations needed by only a few stages of the algorithm. Or, interestingly, it could avoid bad cores, routing around them.
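A toy sketch of that scoreboarded, stall-on-empty/stall-on-full behavior. To be clear, this is not the actual ASaP hardware; the buffer sizes, the two-stage pipeline, and the naive scheduler loop are all invented for the demo:

```python
# Toy illustration of scoreboarded execution: each core fires only when it has
# input data and room in its output buffer, and stalls locally otherwise.

from collections import deque

class Core:
    def __init__(self, fn, out_capacity=2):
        self.fn = fn
        self.inbox = deque()
        self.outbox = deque()
        self.out_capacity = out_capacity

    def step(self):
        # Fire only if there is input and the output buffer has room.
        if self.inbox and len(self.outbox) < self.out_capacity:
            self.outbox.append(self.fn(self.inbox.popleft()))
            return True
        return False  # stalled this cycle

# Two-stage pipeline: square, then add one.
a = Core(lambda x: x * x)
b = Core(lambda x: x + 1)
a.inbox.extend(range(4))

results = []
for _ in range(20):                        # naive global scheduler for the demo
    a.step()
    while a.outbox and len(b.inbox) < 2:   # neighbor link with a small buffer
        b.inbox.append(a.outbox.popleft())
    b.step()
    results.extend(b.outbox)
    b.outbox.clear()

print(results)  # [1, 2, 5, 10]
```

The point is that no global synchronization is needed: each stage just waits on its local buffers, which is what lets a GALS design get away with every core on its own oscillator.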

It was a GALS design (Globally Asynchronous, Locally Synchronous), meaning that each of the cores ran at a slightly different frequency. That alone makes the cores slightly heterogeneous. IIRC, the mapping algorithm could take that into account as well. In fact, as I recall, you pretty much needed to remap your algorithm to the specific chip you had in-hand to ensure best operation.

The examples we saw came from the business I was in—DSP—and included WiFi router stacks, various kinds of modem processing pipelines, and I believe some video processing pipelines. The processors themselves had very little memory, and in fact some algorithms would borrow a neighboring core just for its RAM, if they needed it for intermediate results or lookup tables. I think FFT was one example, where the sine tables ended up stored in the neighbor.

That mapping technology reminds me quite a lot of synthesis technologies for FPGAs, or maybe the mapping technologies they use to compile a large design for simulation on a box like Cadence's Palladium. The big difference is granularity. Instead of lookup-table (LUT) cells, and gate-level mapping, you're operating at the level of a simple loop kernel.

Lots of interesting workloads could run on such a device, particularly if they have heterogeneous compute stages. Large matrix computations aren't as interesting: they need to touch a lot of data, and they're doing the same basic operations across all the elements. So it doesn't serve the lower levels of the machine learning/machine vision stacks well. But the middle layer, which focuses on decision-guided computation, may benefit from large numbers of nimble cores that can dynamically load balance a little better across the whole net.

I haven't read the KiloCore paper yet, but I suspect it draws on the ASaP/ASaP2 legacy. The blurb certainly reminds me of that work.

And what's funny is, about two days before they announced KiloCore, I was describing Dr. Baas' work to someone else. I shouldn't have been surprised he was working on something interesting.

Comment Re:Yes. (Score 1) 143

Came here to say the same thing. The nice thing about a compact proof is that it may generalize to other situations or offer greater insights. This is certainly not a compact proof. But to say it's not a proof at all is ludicrous. It's a very explicit and detailed proof.

It's the difference between adding up the numbers 1 through 100 sequentially (perhaps by counting on your fingers even), and using Gauss' insight to take a short cut. The computer didn't take any insight-yielding shortcuts, but still got the answer.


(And yes, Gauss' story is probably apocryphal; but still the difference between the approaches is what I'm getting at.)

(I say "insight-yielding shortcut" to distinguish it from the many heuristics that modern SAT solvers use, including the one used here.)
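For the curious, the difference between the two approaches fits in a few lines of Python:

```python
# Brute-force summation vs. Gauss' closed form n*(n+1)/2.
# Both get the same answer; only one carries insight.

def sum_sequential(n):
    total = 0
    for i in range(1, n + 1):   # "counting on your fingers"
        total += i
    return total

def sum_gauss(n):
    return n * (n + 1) // 2     # the insight-yielding shortcut

print(sum_sequential(100), sum_gauss(100))  # 5050 5050
```

The brute-force loop is the SAT solver's approach writ small: grind through the cases, arrive at a correct answer, learn nothing transferable along the way.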

Comment Re:Well no kidding (Score 1) 94

I was about to mention NYC. I wouldn't say it works fine, but it does work better than most places. The subway stations also only hold a fairly restricted number of people at a time, though, and that is where it works best for me. Also, it's not like every business and local is piling onto it 24/7. I wonder what the peak usage is? I guarantee it is much lower than in other places.

Comment Re:"free" never fails to disapoint (Score 1) 94

His point is that the government will never have enough oversight of itself -- it shouldn't have had this bad a failure for so long -- to fix these problems. Saying it would all be better if the government just did something it has historically never been able to do well is a fool's dream. Lacking a profit motive, governments have very little natural force correcting them, especially when it comes to bureaucrats paid according to union standards and protected by them. They really don't care if anything works out, as long as they can show they put forth even the smallest effort.

Your asking for more regulation after giving tax money to a corporation reminds me of the Ronald Reagan quote: "If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it."

The problem with government-run anything (from democratic socialism to Marxism) is that it never works out like it's planned, and those that support it just keep up the same head-against-the-wall mantra: "But that wasn't what was supposed to happen; that wasn't real [insert personal economic philosophy]." They never seem to learn that there will always be a huge gulf between theory and practice when it comes to the political economy. Capitalism self-corrects around it and uses our worst side -- greed -- to make the world better.

Submission + - Clean out Distros

wnfJv8eC writes: There needs to be a user site to survey distro packages. I just went to remove xfsprogs, a hangover from SGI from the very early 2000s. Why is Gnome dependent on this package? Remove it, and you remove Gnome? Really? The dependency tree is all screwed up. Never mind XFS, which by now I can't imagine anyone using; why aren't such addons a plugin? Why are they still supported? Who uses them? Linux once dropped support for minix because no one used it.
It's time for a house cleaning. That starts with a good vote on what is and isn't being used. Then dependency trees can be corrected, not just grandfathered in.
There are many examples of stupid dependencies. For example, Rhythmbox requires gvfs-afc, which rpm -qi describes as "This package provides support for reading files on mobile devices including phones and music players to applications using gvfs."
So if I never plug my phone or other mobile device into my computer to play music, I must have this thing loaded and running? But remove gvfs-afc, and you pull Rhythmbox with it. The dependency is all wrong.

Comment Re:Alternative headline: (Score 1) 954

Probably the most economically astute comment I've ever read on Slashdot.
more capital = more valuable labor. (See that period. It means end of statement. No exceptions.)

The guy with a shovel makes more than the guy with a spoon, and the guy who can run a backhoe makes more than ten of both of them combined.

Comment Re:Solar Roadway Bull$it (Score 1) 407

Dave's argument starts with real-world numbers regarding solar insolation and PV conversion efficiency to establish a baseline. The exact details of a specific implementation won't change the broad conclusion that the energy balance alone, even if you take out the gee-whiz features of the Solar Freakin' Roadways design such as LEDs and networking, doesn't make sense.

When you add all the other stuff on top, it only gets worse.

Fundamental issues: Only so much sun hits the earth, and PV cells only convert a certain fraction to usable energy. When you mount them flat on the ground, you reduce their efficiency further because they're not perpendicular to the incoming light. When you put them under thick enough glass to support real physical loads such as cars and trucks, you lose even more. And when you distribute them over a large area, transmission losses become a Big Deal.
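A back-of-envelope version of that loss chain, with every number an assumed round figure for illustration rather than a measurement:

```python
# Rough sketch of the compounding losses described above. All factors here are
# assumed round figures, not measurements of any actual product.

peak_insolation = 1000.0   # W/m^2, clear-sky peak at the surface (assumed)
pv_efficiency   = 0.20     # decent commodity panel (assumed)
flat_mount_loss = 0.75     # panel lying flat vs. tilted toward the sun (assumed)
thick_glass     = 0.80     # transmission through load-bearing glass (assumed)
transmission    = 0.93     # grid losses over a distributed collector (assumed)

usable = (peak_insolation * pv_efficiency * flat_mount_loss
          * thick_glass * transmission)
print(f"{usable:.0f} W/m^2 usable at peak")  # ~112 W/m^2
```

Even with generous assumptions, each factor multiplies against you, and that's at peak sun, before weather, dirt, tire wear on the glass, or averaging over the day.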

I'm personally skeptical you could build solar panels that would withstand actual vehicle traffic, at least the way we build roads here in the US. Real-world roads aren't flat, and they change shape over time as they wear and as the road bed settles and degrades. Real-world glass, though, isn't very plastic, and won't conform to a changing surface. It's more likely to crack and break into many pieces. Likewise for the PV cells under it. You'd have to put some beefy steel plates under these to guarantee a sufficiently flat mounting surface to support the load-bearing glass.

Comment In high school (Score 2) 320

I was a senior in high school that day. My civics class was interrupted by the principal coming on the PA system to tell everyone that Challenger had exploded and crashed into the ocean. We were all rather stunned by that news.

After that class was our morning break period. I immediately went to my next class, which was physics. In the back of the classroom, many of my classmates were huddled around a portable radio, listening to the news. No one said much. (I didn't actually see the video footage of the explosion until I got home that day.)

Yet the gods do not give lightly of the powers they have made,
And with
Challenger and seven, once again the price is paid,
Though a nation watched her falling, yet a world could only cry,
As they passed from us to glory, riding fire in the sky!

- From "Fire In The Sky," written by Jordin Kare

Comment Blatant ripoffs (Score 4, Insightful) 285

So, the health care industry, not content with sucking down one dollar in every five in the U.S. economy, wants to grab a few extra billion?

When you take your car in to be serviced, the law requires that you be given a binding estimate of the costs involved before any work is done, and the mechanic is forbidden to exceed that estimate (within a small margin, like 10%) without getting your permission first. Mechanics who violate that law go to jail. Why do we not have those same kinds of consumer protections in the health care industry?

Pharmaceutical companies routinely charge people in the U.S. more for their products than in other countries, such that a drug which costs $100,000 for a full course of treatment in the U.S. costs only $5,000 in India, or scorpion antivenom that is billed at $40,000 a vial in the U.S. is available for $100 a vial in Mexico. Yet, if you were to go outside the country, buy those drugs, and bring them back to the U.S., you would go to prison, thanks to a law bought and paid for by the pharmaceutical industry, a blatant infringement on the Doctrine of First Sale (which is that, once you buy something, it is yours to do with as you wish). The Supreme Court recently ruled (Kirtsaeng vs. John Wiley & Sons, Inc., 568 U.S. ___ (2013), Docket No. 11-697) that this practice was impermissible in the textbook industry. Why, then, should it be permissible in the pharmaceutical industry?

If we were to get rid of all the special exemptions that the health care industry has under law, and force it to abide by the existing law of the land (such as the Sherman, Clayton, and Robinson-Patman Acts), including prison time for health care and insurance executives where applicable, the cost of health care would drop by 80% or more. Most people could then pay cash for their health care needs for about the same as they pay in a deductible today...meaning "health insurance" would no longer be necessary (except for "catastrophic care" policies for unforeseen circumstances, which would cost about the same as your car insurance). Some form of Medicare and Medicaid would still be required for the truly less fortunate, but would cost a lot less. Obamacare would no longer be needed and could be trivially repealed. The economy would experience a massive boost because health care would no longer be draining it, and every government budget deficit problem, Federal, state, and local, would be instantly solved. (Leading to secondary effects such as stopping the erosion of your purchasing power because the government keeps "printing money" to fund its deficit spending.)

Comment "Custom OS" (Score 5, Informative) 277

Some sources claim that Roddenberry's computers ran a "custom OS." However, in those days, CP/M was often customized for different brands of computers, which used different disk formats and layouts (for whatever reason). Roddenberry's machine may have used a particularly obscure layout.

They do mention that the disks had about a 160 KB capacity, which was fairly standard for Shugart 5-1/4" floppy drives of the time.
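For example, one common single-sided 5-1/4" geometry that lands at exactly 160 KB is the IBM PC's original SSDD layout. CP/M machines used many different layouts, so whether Roddenberry's drives matched this particular one is anyone's guess:

```python
# One illustrative geometry that yields 160 KB (the IBM PC's original SSDD
# format). Offered as an example only; CP/M disk layouts varied widely.

tracks, sectors_per_track, bytes_per_sector = 40, 8, 512
capacity = tracks * sectors_per_track * bytes_per_sector
print(capacity, capacity // 1024)  # 163840 bytes = 160 KB
```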
