Comment Re:It's still natural selection (Score 1) 313

This research is not about the trait itself, but about the topology of transfer. Darwinian evolution admits two major processes: random mutation and natural selection through survival of offspring. In other words, genes are transferred only vertically, from parent to offspring. The authors showed through simulation that this Darwinian mechanism alone cannot explain the universality of the genetic code or its error-correcting properties. Once horizontal gene transfer, i.e. direct exchange of genetic material between members of the same generation, is taken into account, these properties emerge as a natural product of evolution. They further assert that in the early stages of evolution, horizontal transfer was dominant over vertical. As complexity grew and biological pathways became more sensitive, horizontal transfer became more of a hindrance than a help, and vertical transfer became dominant. The article is worth a read and states this much more clearly than my terse summary.

Comment Re:This definitely (Score 1) 447

Matthew 16:17-19:
Jesus said to him in reply, "Blessed are you, Simon son of Jonah. For flesh and blood has not revealed this to you, but my heavenly Father. And so I say to you, you are Peter, and upon this rock I will build my church, and the gates of the netherworld shall not prevail against it. I will give you the keys to the kingdom of heaven. Whatever you bind on earth shall be bound in heaven; and whatever you loose on earth shall be loosed in heaven."

Jesus did found his church upon Peter, as He said. And that same church, with the authority that Jesus gave it, and led by the Holy Spirit promised to guide it, believed that the office was intended to be propagated. After all, if Jesus believed that His church needed a head, a preeminent shepherd (see John 21:15-17), would it only need one for the couple of decades left to Peter? Once Peter was martyred, did Christ intend to leave His church again "...like sheep without a shepherd"?

Comment Re:GPU accuracy (Score 2, Interesting) 127

I have not tried it for two reasons. First, to my knowledge there are no large public machines in the US being planned using AMD GPUs, so there is relatively little incentive to port the code to OpenCL. We run on large clusters, and it appears for the moment that NVIDIA has the HPC cluster market tied up. Second, while OpenCL is quite similar to CUDA in many respects, it's also significantly less convenient from a coding perspective. NVIDIA added a few language extensions that make launching kernels nearly as simple as a function call. As a pure C library, OpenCL requires much more setup code for each kernel invocation. If there were a strong incentive, such as the construction of a large NSF or DOE machine with AMD GPUs, I'd probably port it anyway, but without such a machine, it's not worth the time and effort. It's important to note that on GPUs, peak performance figures often don't translate into actual performance numbers. The 4870 had a higher peak floating-point rate than the G200, but in graphics and some other benchmarks, the G200 usually came out ahead. I don't know if this will also be the case with Fermi vs. the 5870. Finally, another large consideration is that AMD is pretty far behind on the software end. Besides mature compilers for both CUDA and OpenCL, NVIDIA provides profilers and debuggers that can debug GPU execution in hardware, and there is a growing ecosystem of CUDA libraries. For the sake of competition, I hope AMD adoption grows, but I've gotten the impression they are just not investing that much in general-purpose GPU computing.
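To make the convenience gap concrete, here is a hedged sketch (a hypothetical saxpy kernel, error checking elided): the CUDA side is real launch syntax, while the OpenCL host-side sequence is summarized in comments using the real clXxx API names.

```cuda
__global__ void saxpy(float a, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

void launch_cuda(float a, const float *d_x, float *d_y, int n) {
    // CUDA: one line, thanks to the <<<grid, block>>> language extension.
    saxpy<<<(n + 255) / 256, 256>>>(a, d_x, d_y, n);
}

// The OpenCL equivalent, in pure C, looks roughly like:
//   clGetPlatformIDs(...); clGetDeviceIDs(...);
//   ctx   = clCreateContext(...);
//   queue = clCreateCommandQueue(ctx, dev, 0, &err);
//   prog  = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
//   clBuildProgram(prog, ...);                  // runtime compilation
//   krn   = clCreateKernel(prog, "saxpy", &err);
//   clSetKernelArg(krn, 0, sizeof(float), &a);  // one call per argument,
//   /* ...three more clSetKernelArg calls... */ // repeated per invocation
//   clEnqueueNDRangeKernel(queue, krn, 1, NULL, &global, &local,
//                          0, NULL, NULL);
```

Much of the OpenCL setup can be amortized across launches, but the per-kernel argument marshalling and runtime compilation are what make each invocation heavier than CUDA's.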

Comment Re:GPU accuracy (Score 5, Informative) 127

Presently the G200 GPUs in this machine support double precision, but at about 1/8 the peak rate of single precision. In practice, since most codes tend to be bandwidth-limited, and pointer arithmetic is the same for single and double precision, double-precision performance is usually closer to 1/2 that of single precision, but not always. With the Fermi GPUs to be released early next year, double-precision peak FLOPS will be 1/2 of single-precision peak, just like on present x86 processors. Also note that many scientific research groups, such as my own, have found that contrary to dogma, single precision is good enough for most of the computation, and that a judicious mix of single- and double-precision arithmetic gives high performance with sufficient accuracy. This is true for some, but not all, computational methods.
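A minimal sketch (hypothetical kernel, assumed bandwidth and sizes) of why memory traffic, not peak FLOPS, often sets the single- vs. double-precision gap. A streaming kernel like this does one multiply-add per element moved, so its runtime is dominated by bytes transferred:

```cuda
__global__ void scale_add_f(const float *x, float *y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // reads 8 B/elem, writes 4 B/elem
}

__global__ void scale_add_d(const double *x, double *y, double a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // reads 16 B/elem, writes 8 B/elem
}

// Back-of-envelope, assuming ~100 GB/s sustained bandwidth and n = 1e8:
//   float : (8 + 4)  bytes * 1e8 / 1e11 B/s = 12 ms
//   double: (16 + 8) bytes * 1e8 / 1e11 B/s = 24 ms
// i.e. ~1/2 the throughput, regardless of whether double-precision peak
// FLOPS is 1/8 or 1/2 of single-precision peak.
```

Only kernels with high arithmetic intensity (many FLOPS per byte moved) actually feel the 1/8 penalty on G200-class hardware.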

Comment Re:But how can you trust the results? (Score 1) 260

I agree with this in principle, but, in practice, it doesn't seem to come up as often as one might think. I frequently use NCSA's Lincoln cluster with 384 Teslas. Early on, I discovered some "hard" memory errors (repeatable bad bits or rows). These were very early boards, which apparently hadn't been fully tested. This prompted the admins at NCSA to write the GPU equivalent of memtest86, which they ran for about a month if I recall. After removing the boards with bad memory (about 3-4), they didn't encounter any "soft" errors (i.e. random bit flips). NVIDIA's Fermi will have ECC, which is reassuring, but I have found the present generation, without ECC, to be quite reliable. I should also note that the hard errors I found always resulted in NaNs, Infs, etc., which are very obvious. I'd be more concerned with "silent" errors that subtly change the results.
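For the curious, here is a minimal sketch of a memtest86-style GPU check (not NCSA's actual tool; buffer size and patterns are illustrative): fill device memory with a known pattern, read it back on the GPU, and count mismatches. A repeatable nonzero count at the same location indicates a hard error; real testers cycle many patterns (walking bits, inversions) over long runs to catch soft errors too.

```cuda
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

__global__ void fill(uint32_t *buf, size_t n, uint32_t pattern) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) buf[i] = pattern;
}

__global__ void check(const uint32_t *buf, size_t n, uint32_t pattern,
                      unsigned long long *errors) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n && buf[i] != pattern)
        atomicAdd(errors, 1ULL);  // count words that came back corrupted
}

int main() {
    const size_t n = 256u * 1024 * 1024 / sizeof(uint32_t);  // test 256 MB
    uint32_t *buf;
    unsigned long long *errs, h_errs;
    cudaMalloc(&buf, n * sizeof(uint32_t));
    cudaMalloc(&errs, sizeof(*errs));

    const uint32_t patterns[] = {0x00000000u, 0xFFFFFFFFu,
                                 0xAAAAAAAAu, 0x55555555u};
    for (int p = 0; p < 4; ++p) {
        cudaMemset(errs, 0, sizeof(*errs));
        fill<<<(n + 255) / 256, 256>>>(buf, n, patterns[p]);
        check<<<(n + 255) / 256, 256>>>(buf, n, patterns[p], errs);
        cudaMemcpy(&h_errs, errs, sizeof(h_errs), cudaMemcpyDeviceToHost);
        printf("pattern 0x%08X: %llu bad words\n", patterns[p], h_errs);
    }
    cudaFree(buf);
    cudaFree(errs);
    return 0;
}
```

Note that a test like this only exercises memory cells, not the silent computational errors mentioned above; those require redundant computation or checksumming of results to detect.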
