
Comment Undesired Side-Effects (Score 0) 559

Normally I would agree, but I must disagree in this case. The vast majority of people in the U.S. are scientifically illiterate and easily swayed by sensational headlines. (For example, last week Slashdot ran a story showing that background radiation in Fukushima is lower than in Denver, yet people panic over radiation exposure in Japan but not in Colorado.) I worry that a similar backlash against GM crops could hurt the world's food supply.

While we can disparage crops engineered to withstand copious amounts of herbicide, please keep in mind that there are 7 billion people on the planet, and all of them need to be fed. Much of the world depends on the United States' agricultural output, and GM helps boost that output. While the American consumer can absorb a few cents' increase in cost from a reduced food supply, the same increase can trigger food riots in less fortunate countries. If GM enhances the United States' agricultural output, then I'm all for it. I worry that shunning GM food in the US could discourage further investment and development.

Comment Whoa there. You're plainly wrong. (Score 1) 158

CUDA was released, with support in NVIDIA GPUs, in early 2007. The first OpenCL specification was not released until late 2008, so OpenCL has not been around for four years as you claim. As for which is more popular, I'm afraid you have that backwards too. The dominant market force for GPU computing is supercomputing. How many of the top 5 supercomputers use AMD GPUs? Zero. How many use NVIDIA GPUs? Three. And they're all using CUDA because it's more feature-rich: it can do fancy things like direct memory copies between InfiniBand adapters and GPU memory.

FYI: OpenCL on NVIDIA is implemented on top of CUDA, so you're still using CUDA if you're using OpenCL on NVIDIA.

Comment Poor software, not poor GPUs (Score 1) 158

Surprise, surprise, I have the feeling that most of you haven't actually read the article. The article is not arguing that GPUs are inherently flawed. Also, the article is not an NVIDIA-vs-AMD competition. Rather, the author tests software on each platform. It's the software that is bad, not the GPUs themselves. For instance, the NVIDIA GPU does quite well with Arcsoft and Xilisoft; this wouldn't be possible if GPUs were somehow broken for transcoding. After all, as others have pointed out here, floating point support is actually quite good on modern GPUs.

Still, poor software shouldn't come as too much of a surprise. While CUDA and OpenCL certainly make GPU-based computing easier, it is still a relatively new technology that only a few programmers know how to use efficiently. I'm also not sure there is market pressure from consumers yet for efficient GPU-based applications (how many of them even know what a GPU is?).

Comment We built a ~9.1 TFLOPS system for $10k last year. (Score 4, Interesting) 205

What does SLI give you in CUDA? The newer GeForce cards support direct GPU-to-GPU memory copies, assuming they are on the same PCIe bus (NUMA systems might have multiple PCIe buses).

My research group built a 12-core/8-GPU system last year for about $10k.

The system has a theoretical peak of ~9.1 TFLOPS, single precision (maxing out all CPUs and GPUs simultaneously). I wish the GPUs had more individual memory (~1.25GB each), but we would have quickly blown our budget had we gone for Tesla-grade cards.
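For the curious, here is how a peak figure in that ballpark falls out of the spec sheets. The exact parts aren't listed above, so the numbers below are illustrative guesses (Fermi-era GTX 470-class cards, which happen to carry ~1.25GB each, and Westmere-era hex-core CPUs), not the actual build:

```python
# Back-of-envelope single-precision peak for an 8-GPU, 12-core box.
# All hardware figures below are assumptions for illustration.

# GTX 470-class GPU: 448 CUDA cores at a 1.215 GHz shader clock,
# 2 FLOPs/cycle/core (fused multiply-add).
gpus = 8
gflops_per_gpu = 448 * 1.215 * 2          # ~1089 GFLOPS each

# Westmere-era core: 4-wide SSE with separate add and multiply ports,
# i.e. 8 single-precision FLOPs/cycle, at an assumed 2.66 GHz.
cpu_cores = 12
gflops_per_core = 2.66 * 8                # ~21 GFLOPS each

total_gflops = gpus * gflops_per_gpu + cpu_cores * gflops_per_core
print(f"GPUs: {gpus * gflops_per_gpu:.0f} GFLOPS, "
      f"CPUs: {cpu_cores * gflops_per_core:.0f} GFLOPS, "
      f"total: {total_gflops / 1000:.1f} TFLOPS")
```

With these assumed clocks the GPUs contribute roughly 8.7 TFLOPS and the CPUs a few hundred GFLOPS, landing near the quoted ~9.1 TFLOPS; slightly different card or clock choices shift the total by a few percent.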

Comment Geology Geeking: Carlsbad Caverns (Score 1) 363

If you are already going to be in New Mexico to see the Very Large Array, try to swing by Carlsbad Caverns.

Sure, it's not tech-oriented, but I'm sure you can get your geology geeking on. It's not often one is in the area (BFE New Mexico), so take the opportunity. The caverns are not to be missed!
