The Military

Navy's New Laser Weapon: Hype Or Reality? 185

Posted by Soulskill
from the why-phasers-are-where-it's-at dept.
Lasrick writes: MIT's Subrata Ghoshroy deconstructs the Navy's recent claim of successful testing with the Laser Weapon System. It seems the test videos released to the press in December were nothing more than a dog-and-pony show with scaled-down expectations so as to appear successful: "When they couldn't get a laser lightweight enough to fit on a ship while still being powerful enough to burn through the metal skin of an incoming nuclear missile, they simply changed their goal to something akin to puncturing the side of an Iranian rubber dinghy." Ghoshroy is an entertaining writer and an old hand in the laser research industry. He gives an explanation here of the history of laser weapons, and how the search for combat-ready tech continues: 'At the end of the day, good beam quality and good SWAP—size, weight and power—still determine the success or failure of a given laser weapon, and we're just not anywhere near meeting all those requirements simultaneously.'

Comment: Re:Future of GPU is Open Standards (Score 2) 110

by Arakageeta (#48898681) Attached to: Ask Slashdot: GPU of Choice For OpenCL On Linux?

I greatly prefer open standards as well. However, CUDA is considerably less painful to work in than OpenCL. NVIDIA has also demonstrated more commitment to capturing GPGPU business than AMD. For example, the highest-ranked supercomputer with AMD GPUs comes in at 94th, while NVIDIA GPUs power the 2nd-ranked machine. Xeon Phi is gaining in popularity, but Intel wants you to work in Cilk Plus, not OpenCL.

That said, I believe the future is tight integration (i.e., cache coherence) between the GPU/accelerator and main memory. AMD's HSA is a step in the right direction. CUDA has some catching up to do in this regard.
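The ergonomic gap between the two APIs is easiest to see at the launch path: a CUDA kernel is a few lines of annotated C launched directly from host code, whereas the OpenCL equivalent first needs platform, device, context, queue, and program setup. A minimal vector-add sketch (device pointers `d_a`, `d_b`, `d_c` are assumed to have been allocated with `cudaMalloc`):

```cuda
#include <cuda_runtime.h>

// Each thread adds one element; the grid covers the whole array.
__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// Launching is a single statement; no separate kernel-source compilation,
// context creation, or command-queue management as on the OpenCL host side:
//   vadd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```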

Comment: Re:Cell (Score 1) 338

I really appreciated the Cell BE too. I hope that architecture, but with cache coherence (local stores are a pain to manage), becomes more common. Have you taken a look at Texas Instruments' KeyStone II? It's ARM + crazy DSPs. It doesn't seem that anyone has really noticed it, though.

Comment: Maybe Perl is just "complete," not dying. (Score 4, Informative) 547

by Arakageeta (#48102729) Attached to: Goodbye, World? 5 Languages That Might Not Be Long For This World

"Perl is an excellent candidate, especially considering how work on Perl6, framed as a complete revamp of the language, began work in 2000 and is still inching along in development."

This does not imply that Perl is on its way out. I don't use the language myself (I despise it, personally), but I know many who use it on a daily basis. It is still a go-to language for many programmers (albeit ones who may no longer be in their 20s) who need to quickly hack together a test harness for a larger system. It could merely be that Perl is "complete" for the applications where it is useful, and further revision is no longer necessary.

Also, I'd hardly say that C++ is on its way out, even though C++11 took so long to be ratified.

+ - This Wearable Robot Will Give Your Hand 2 Extra Fingers

Submitted by rtoz
rtoz writes: Researchers at MIT have developed a robot that enhances the grasping motion of the human hand. This wrist-wearable robot adds two extra fingers to the wearer's hand.

The robotic fingers sit on either side of the hand — one outside the thumb, and the other outside the little finger.

A control algorithm enables it to move in sync with the wearer's fingers to grasp objects of various shapes and sizes.

With the assistance of these extra fingers, the wearer can grasp objects that would normally be too difficult to hold with a single hand.

Comment: Re:Good scholarship - tenure (Score 4, Interesting) 325

by Arakageeta (#47181245) Attached to: Fixing the Humanities Ph.D.

There is a new problem that comes with reliance on adjuncts. Departments rarely monitor the performance of instruction themselves. Departments make decisions on re-hiring or firing an adjunct based upon student reviews and evaluations. Left without recourse, adjuncts are perversely incentivized to teach easy classes and give out high marks---this helps ensure good reviews. (It also continues the trend in grade inflation.) Adjunct professors cannot challenge their students without risking being fired.

Comment: You can come back with half the pay and no benefit (Score 4, Insightful) 325

by Arakageeta (#47181167) Attached to: Fixing the Humanities Ph.D.

My girlfriend recently graduated with a PhD in history from a department ranked 11th by US News. She's won a number of nationally recognized awards. She still can't find a tenure-track job. She was hired as a visiting professor at a university for this past year. Pay was around $40k with benefits. She got great reviews from her students, so the university offered to re-hire her as an adjunct with the same workload (teaching four classes a semester)... but at *half* the pay and *without* benefits. Her pay and benefits were better as a graduate student! She politely declined the offer. Being valued so little by the same world that qualified you is hard to endure.

Comment: Beef already high and dairy is climbing (Score 2) 397

by Arakageeta (#46795877) Attached to: Beer Price Crisis On the Horizon

A recent CNN report covered rising prices for beef and dairy.

This will increase the cost to farmers too. That gets passed on to consumers. But perhaps we're all just commenting on the obvious: Production cost of X increases. The production cost of any product Y directly (or transitively) dependent upon X will also increase (or the value/quality of Y may decrease to compensate).

Comment: Re:Unobtainium (Score 1) 208

by Arakageeta (#45875567) Attached to: Intel's Knights Landing — 72 Cores, 3 Teraflops

What made Cell a nightmare to program for was the SPU's local store. The local store is great for performance, but a pain to program since the programmer had to explicitly move data back and forth between main memory and the local store (hardware designers back then all thought compilers could solve their problems for them--see Itanium). MIC is cache coherent. All memory references are snooped on the bus(es). MIC programmers don't have to worry about what's loaded in memory and what is not. An instruction merely has to dereference a memory address, and the MIC hardware will be happy to go fetch the needed data for you, automagically. It was not so with Cell.
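The two models the parent describes can be sketched in CUDA, which happens to support both: explicit copies into device memory (analogous to staging data into Cell's local store by DMA, though on Cell this would be `mfc_get` calls rather than `cudaMemcpy`) and managed memory, where a plain dereference suffices and the runtime migrates data, roughly the "just dereference it" experience described for MIC. A hedged sketch, by analogy only:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;   // plain dereference; the kernel never stages data
}

int main() {
    const int n = 1024;

    // Explicit model (analogous to Cell's local store): the host must move
    // data into device memory by hand before the kernel may touch it.
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) host[i] = 1.0f;
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // the "DMA"
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Transparent model (analogous to coherent MIC loads): host and device
    // dereference the same pointer; the runtime fetches data automagically.
    float *managed;
    cudaMallocManaged(&managed, n * sizeof(float));
    for (int i = 0; i < n; ++i) managed[i] = 1.0f;
    scale<<<(n + 255) / 256, 256>>>(managed, n);
    cudaDeviceSynchronize();   // make the kernel's writes visible to the host
    printf("%f %f\n", host[0], managed[0]);   // both paths yield 2.0

    cudaFree(dev); cudaFree(managed); free(host);
    return 0;
}
```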

Comment: Re:Programmability? (Score 2) 208

by Arakageeta (#45868103) Attached to: Intel's Knights Landing — 72 Cores, 3 Teraflops

It's not entirely syntactical. Local shared memory is exposed to the CUDA programmer (e.g., __shared__ declarations synchronized with __syncthreads()). CUDA programmers also have to be mindful of register pressure and the L1 cache. These issues directly affect the algorithms used by CUDA programmers. CUDA programmers have control over very fast local memory---I believe that this level of control is missing from MIC's available programming models. Being closer to the metal usually means a harder time programming, but higher performance potential. However, I believe NVIDIA has made CUDA pretty programmer friendly, given the architectural constraints. I'd like to hear the opinions of MIC programmers, since I have no direct experience with MIC.
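A minimal sketch of the kind of control being described: the programmer explicitly stages data in on-chip __shared__ memory and inserts barriers by hand, here in a standard per-block tree reduction.

```cuda
#include <cuda_runtime.h>

// Each block sums 256 input elements using fast on-chip shared memory.
__global__ void block_sum(const float *in, float *out) {
    __shared__ float buf[256];          // explicit, programmer-managed local memory
    int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                    // all loads complete before anyone reads buf

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) buf[tid] += buf[tid + stride];
        __syncthreads();                // no thread races ahead to the next step
    }
    if (tid == 0) out[blockIdx.x] = buf[0];
}
```

Getting the barrier placement or the buffer size wrong here is a correctness bug, not just a performance one, which is exactly the burden a purely cache-based model hides from the programmer.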

Comment: Continued PREEMPT_RT development & NVIDIA support (Score 2) 201

by Arakageeta (#45693129) Attached to: Under the Hood of SteamOS

I'm glad to see SteamOS has picked up PREEMPT_RT. I hope they stick with it. The PREEMPT_RT developers recently reported that they lacked the manpower to continue development. Maybe Valve can contribute money or manpower?

Also, since NVIDIA is keen to support SteamOS, this means that NVIDIA must officially support PREEMPT_RT. NVIDIA's driver support for PREEMPT_RT has always been spotty. At best, hacks to the driver's GPL layer were required to make it work. I hope those days are over. NVIDIA has really improved their Linux driver over the years in order to better serve the Android and HPC markets. PREEMPT_RT support should make it even better (PREEMPT_RT can often uncover pre-existing bugs).
