


Comment Platform lock-in is not DRM; let's not confuse them (Score 4, Informative) 260

DRM is a means of limiting how a purchased (or licensed) digital file can be distributed by its owner (or licensee). Exclusively locking a subscription service to a platform is not DRM; rather, it is a means of boosting sales of the platform by offering platform-only services. We can discuss the harm and inconvenience that platform lock-in may cause, but we should not confuse the issue with DRM. Conflating the two just inflames old passions and keeps people from approaching this distinct issue with a fresh perspective.

No doubt many people against DRM will also be against platform lock-in; others may not be. For instance, I am generally against DRM: if I purchased a digital file, I would like to be free to make copies of it for my own use. With platform-based subscriptions, though, I just can't get all that upset. I don't own an Android device, so I won't subscribe to Google Play. Besides, there is a wealth of quality subscription services out there that run on all of the popular platforms. So what's the big deal?

The Military

Navy's New Laser Weapon: Hype Or Reality? 185

Lasrick writes: MIT's Subrata Ghoshroy deconstructs the Navy's recent claim of successful testing of the Laser Weapon System. It seems the test videos released to the press in December were nothing more than a dog-and-pony show with scaled-down expectations so as to appear successful: "When they couldn't get a laser lightweight enough to fit on a ship while still being powerful enough to burn through the metal skin of an incoming nuclear missile, they simply changed their goal to something akin to puncturing the side of an Iranian rubber dinghy." Ghoshroy is an entertaining writer and an old hand in the laser research industry. He gives an explanation here of the history of laser weapons, and how the search for combat-ready tech continues: 'At the end of the day, good beam quality and good SWAP—size, weight and power—still determine the success or failure of a given laser weapon, and we're just not anywhere near meeting all those requirements simultaneously.'

Comment Re:Future of GPU is Open Standards (Score 2) 110

I greatly prefer open standards as well. However, CUDA is considerably less painful to work in than OpenCL, and NVIDIA has demonstrated more commitment to capturing GPGPU business than AMD. For example, the highest-ranked supercomputer on top500.org with AMD GPUs comes in at 94th, whereas NVIDIA GPUs are used in the 2nd-ranked machine. Xeon Phi is gaining in popularity, but Intel wants you to work in Cilk Plus, not OpenCL.

That said, I believe the future is tight integration (i.e., cache coherence) between the GPU/accelerator and main memory. AMD's HSA is a step in the right direction. CUDA has some catching up to do in this regard.

Comment Re:Cell (Score 1) 338

I really appreciated the Cell BE too. I do hope that architecture, with cache coherence added (local stores are a pain to manage), becomes more common. Have you taken a look at Texas Instruments' KeyStone II? It's ARM + crazy DSPs, though it doesn't seem that anyone has really noticed it. http://www.ti.com/dsp/docs/dsp...

Comment Maybe Perl is just "complete," not dying. (Score 4, Informative) 547

"Perl is an excellent candidate, especially considering how work on Perl6, framed as a complete revamp of the language, began work in 2000 and is still inching along in development."

This does not imply that Perl is on its way out. I don't use the language myself (I despise it, personally), but I know many who use it on a daily basis. It is still a go-to language for many programmers (albeit ones who may no longer be in their 20s) who need to quickly hack together a test harness for a larger system. It could merely be that Perl is "complete" for the applications where it is useful, so further revision is no longer necessary.

Also, I'd hardly say that C++ is on its way out, even though C++11 took so long to be ratified.

Submission + - This Wearable Robot Will Give Your Hand 2 Extra Fingers

rtoz writes: Researchers at MIT have developed a robot that enhances the grasping motion of the human hand. This wrist-worn robot adds two extra fingers to the hand.

The robotic fingers sit at either side of the hand — one outside the thumb, and the other outside the little finger.

A control algorithm enables it to move in sync with the wearer's fingers to grasp objects of various shapes and sizes.

With the assistance of these extra fingers, the wearer can grasp objects that would usually be too difficult to hold with a single hand.

Comment Re:Good scholarship - tenure (Score 4, Interesting) 325

There is a new problem that comes with reliance on adjuncts. Departments rarely monitor the quality of instruction themselves; instead, they make decisions on re-hiring or firing an adjunct based upon student reviews and evaluations. Left without recourse, adjuncts are perversely incentivized to teach easy classes and give out high marks---this helps ensure good reviews. (It also continues the trend of grade inflation.) Adjunct professors cannot challenge their students without risking being fired.

Comment You can come back with half the pay and no benefit (Score 4, Insightful) 325

My girlfriend recently graduated with a PhD in history from a department ranked 11th by US News. She's won a number of nationally recognized awards. She still can't find a tenure-track job. She was hired as a visiting professor at a university for this past year. Pay was around $40k with benefits. She got great reviews from her students, so the university offered to re-hire her as an adjunct with the same workload (teaching four classes a semester)... but at *half* the pay and *without* benefits. Her pay and benefits were better as a graduate student! She politely declined the offer. Being valued so little by the same world that qualified you is hard to endure.

Comment Beef already high and dairy is climbing (Score 2) 397

Recent CNN report on the prices of beef and dairy: http://money.cnn.com/2014/04/1...

This will increase costs for farmers too, and that gets passed on to consumers. But perhaps we're all just commenting on the obvious: when the production cost of X increases, the production cost of any product Y directly (or transitively) dependent upon X will also increase (or the value/quality of Y may decrease to compensate).
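To make the pass-through arithmetic concrete, here's a toy sketch (all numbers are made up, not taken from the CNN report): only the share of a product's cost that comes from the pricier input scales with the increase.

```python
def passed_through(cost, input_share, input_increase):
    """New production cost after one input's price rises.

    Only the input's share of the cost scales with the increase;
    the rest (labor, transport, etc.) is held fixed here.
    """
    unaffected = cost * (1 - input_share)
    affected = cost * input_share * (1 + input_increase)
    return unaffected + affected

# Hypothetical: feed is 60% of beef production cost and rises 20%
new_cost = passed_through(100.0, 0.60, 0.20)  # about 112.0
```

The same function applies transitively: feed the output cost of one stage in as the input cost of the next.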

Comment Re:Unobtainium (Score 1) 208

What made Cell a nightmare to program for was the SPU's local store. The local store is great for performance but a pain to program, since the programmer has to explicitly move data back and forth between main memory and the local store (hardware designers back then all thought compilers could solve their problems for them--see Itanium). MIC, by contrast, is cache coherent: all memory references are snooped on the bus(es), so MIC programmers don't have to worry about what data is resident and what is not. An instruction merely has to dereference a memory address, and the MIC hardware will happily fetch the needed data for you, automagically. It was not so with Cell.
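A rough Python analogy of the two models (not real SPU or MIC code; the buffer sizes and data are invented for illustration):

```python
# "Main memory" visible to both styles of worker.
main_memory = list(range(1024))

def cell_style_sum(start, n, local_store_size=64):
    """Cell-style: the worker can only touch a small local store,
    so the programmer explicitly stages chunks in before computing."""
    total = 0
    for off in range(start, start + n, local_store_size):
        chunk = min(local_store_size, start + n - off)
        local_store = main_memory[off:off + chunk]  # explicit "DMA" copy in
        total += sum(local_store)                   # compute only on local data
    return total

def mic_style_sum(start, n):
    """MIC-style: just dereference memory; coherent caches fetch it."""
    return sum(main_memory[i] for i in range(start, start + n))
```

Both compute the same answer; the difference is who is responsible for the data movement, the programmer or the hardware.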

Comment Re:Programmability? (Score 2) 208

It's not entirely syntactical. Shared memory (fast, per-block local memory) is exposed to the CUDA programmer (e.g., via __shared__ and __syncthreads()). CUDA programmers also have to be mindful of register pressure and the L1 cache. These issues directly affect the algorithms CUDA programmers choose. CUDA programmers have control over very fast local memory---I believe that this level of control is missing from MIC's available programming models. Being closer to the metal usually means a harder time programming but higher performance potential. However, I believe NVIDIA has made CUDA pretty programmer friendly, given the architectural constraints. I'd like to hear the opinions of MIC programmers, since I have no direct experience with MIC.
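As a rough illustration of why register pressure matters, here's a toy occupancy-style calculation. The per-SM limits below are invented (loosely Kepler-like), not taken from any real datasheet: the point is only that the more registers or shared memory each thread block uses, the fewer threads the hardware can keep resident.

```python
def max_resident_threads(regs_per_thread, smem_per_block, threads_per_block,
                         regs_per_sm=65536, smem_per_sm=49152,
                         max_threads_per_sm=2048):
    """Toy occupancy estimate: resident blocks are capped by whichever
    resource (registers, shared memory, thread slots) runs out first."""
    blocks_by_regs = regs_per_sm // (regs_per_thread * threads_per_block)
    blocks_by_smem = (smem_per_sm // smem_per_block) if smem_per_block else blocks_by_regs
    blocks_by_threads = max_threads_per_sm // threads_per_block
    blocks = min(blocks_by_regs, blocks_by_smem, blocks_by_threads)
    return blocks * threads_per_block

# Same kernel shape, but more registers per thread -> fewer resident threads
light = max_resident_threads(32, 12288, 256)  # 1024 resident threads
heavy = max_resident_threads(80, 12288, 256)  # 768 resident threads
```

Fewer resident threads means less latency hiding, which is why register pressure shapes algorithm choice in the first place.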

"Free markets select for winning solutions." -- Eric S. Raymond