
Comment Re:Relational stuff scales (Score 1) 222

Other comments, I think, are discussing only throughput, but in case anyone means actually processing the data, I would like to note the following: if you think that PostgreSQL does not provide the processing performance you need to scale, then you have not been paying attention. Check out my Field Forge announcement (currently the top news item on the web site) and then Amazon's EC2 GPU cloud announcement. Put them together, and I am not sure anyone yet knows how high you can scale. You probably now have more distributed, high-performance parallel processing power available than you can imagine.

Woman Wins Libel Suit By Suing Wrong Website 323

An anonymous reader writes "It appears that Cincinnati Bengals cheerleader Sarah Jones and her lawyer were so upset by a comment on the site that they missed the 'y' at the end of the name. Instead, they sued the owner of the similarly named site, whose owner didn't respond to the lawsuit. The end result was a judge awarding $11 million, in part because of the failure to respond. Now the owners of both sites are complaining that they are being wrongfully written about in the press: one for never having had any content about Sarah Jones yet being told it must pay $11 million, and the other for having the content and having the press say it lost a lawsuit, even though no lawsuit was ever actually filed against it."

Comment Re:No (Score 1) 187

Really modern GPUs (Fermi) can do OK on branches if the divergence occurs at warp granularity, or the number of divergent paths stays small (fewer than 16), or the divergent sections are short, partly because of support for concurrent kernels. (Check out if you think that concurrent kernels cannot be used.) It is incorrect, or at least imprecise, to say there is a problem whenever execution "splits in half": if a single warp splits in half, there is less performance, but if half of the warps take one side of a branch and the other half take the other side (i.e., the split falls on warp boundaries), there is no performance loss. There is no "penalty" in these cases.
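The warp-boundary point above can be sketched with a toy cost model. This is an illustration only (it assumes the classic warp size of 32 and a simple two-path branch); a real GPU scheduler is more involved, but the accounting is the same: a warp whose lanes all agree runs the branch in one pass, while a warp whose lanes disagree serializes both paths.

```python
# Toy cost model for SIMT branch divergence (illustrative, not a real
# GPU simulator). A warp of 32 threads executes in lockstep: if all
# lanes of a warp take the same side of a branch, the warp executes one
# path; if lanes within a warp disagree, both paths are serialized.

WARP_SIZE = 32

def passes_per_warp(predicates):
    """Given a per-thread boolean branch predicate list (length a
    multiple of WARP_SIZE), return the number of serialized passes
    each warp must make through the branch."""
    passes = []
    for i in range(0, len(predicates), WARP_SIZE):
        warp = predicates[i:i + WARP_SIZE]
        # 1 pass if the warp is uniform, 2 if it diverges internally
        passes.append(1 if len(set(warp)) == 1 else 2)
    return passes

# Branch that splits *between* warps: warp 0 all true, warp 1 all false.
between = [True] * 32 + [False] * 32
# Branch that splits *within* each warp: even lanes true, odd lanes false.
within = [lane % 2 == 0 for lane in range(64)]

print(passes_per_warp(between))  # [1, 1]: no divergence penalty
print(passes_per_warp(within))   # [2, 2]: both paths serialized
```

Both inputs split the 64 threads exactly in half, yet only the second pays for it, which is the distinction the comment is making.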

It is wrong to say: "a relational database, they fall over flat. A normal CPU creams them performance wise." It depends on the workload. As one example, for SQL window functions over partitions that involve any real calculation (not just lag and lead, thank you), the GPU can cream the CPU; check out: for some throughput numbers. The relational database, not the GPU, is the bottleneck there. Other relational-database tasks that could benefit from GPU parallelism include calculating the optimal query plan for large numbers of joins, sorting, and merging keys (joins), provided the database is written to process blocks of tuples instead of one tuple at a time. (A relational database written to process blocks of tuples would also be more efficient on the CPU, because of call overhead and cache behavior.)
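For readers unfamiliar with window functions: here is a minimal Python sketch of LAG/LEAD over partitions, the workload mentioned above. The column names are invented for the example. The point to notice is that every partition (and, for lag/lead, every row within it) can be computed independently, which is exactly the shape of work a GPU handles well.

```python
# Sketch of the SQL window functions LAG(...) and LEAD(...) computed
# per partition in plain Python. Each partition is independent of the
# others, which is what makes this embarrassingly parallel.

from collections import defaultdict

def lag_lead(rows, partition_key, value_key, offset=1):
    """rows: list of dicts. Annotates each row with 'lag' and 'lead'
    of value_key within its partition (None at partition edges),
    preserving the original row order."""
    parts = defaultdict(list)
    for row in rows:
        parts[row[partition_key]].append(row)
    for group in parts.values():
        for i, row in enumerate(group):
            row["lag"] = group[i - offset][value_key] if i >= offset else None
            row["lead"] = (group[i + offset][value_key]
                           if i + offset < len(group) else None)
    return rows

trades = [
    {"sym": "A", "price": 10},
    {"sym": "A", "price": 11},
    {"sym": "B", "price": 5},
    {"sym": "A", "price": 12},
    {"sym": "B", "price": 6},
]
lag_lead(trades, "sym", "price")
print([(t["sym"], t["lag"], t["lead"]) for t in trades])
# [('A', None, 11), ('A', 10, 12), ('B', None, 6), ('A', 11, None), ('B', 5, None)]
```

Once partitions carry a real calculation (a moving average, a regression per partition) rather than a plain shift, the arithmetic per row grows while the structure stays data-parallel, which is where a GPU pulls ahead.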

Comment Parallel computation libraries (Score 1) 137

If you want your computations to be parallel at a level higher than individual algorithm steps (i.e., so you can build libraries upon libraries that remain efficiently parallel throughout the layers), then neither the CUDA driver API nor the CUDA runtime API (nor OpenCL or DirectCompute) is very good. For example, with the CUDA APIs alone it is generally not possible to use the Fermi concurrent-kernel-execution feature across all (or even very many) of the kernels in a program.

MPI (Message Passing Interface) gives you parallel computation at the cluster level; the Kappa Library gives it to you at the library-component level. If anybody knows of something other than MPI or Kappa that does this and is available for general use, I would be interested to hear about it.

Comment Re:CUDA (Score 2, Informative) 137

Indeed. With CUDA, DirectCompute, and OpenCL, nearly 100% of your code is boilerplate interfacing to the API. There needs to be a language where this stuff is a first-class citizen and not just something provided by an API.

If you use CUDA, OpenCL, or DirectCompute, there is: try the Kappa Library. It has its own scheduling language that makes this much easier. The next version, which is about to come out, goes much further still.
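To make the idea of a scheduling layer concrete, here is a small Python sketch of what such a layer does underneath: you declare tasks and their data dependencies, and a scheduler orders them, with independent tasks being the candidates a real GPU runtime could launch as concurrent kernels. The API below is hypothetical; it is not Kappa's actual scheduling language, just an illustration of the concept.

```python
# Minimal dependency-driven task scheduler (hypothetical API, for
# illustration only). Tasks declare what they depend on; the scheduler
# derives the execution order. Tasks that become ready together are
# independent and could, on a GPU runtime, run as concurrent kernels.

from graphlib import TopologicalSorter

def run(tasks, deps):
    """tasks: name -> callable; deps: name -> set of prerequisite names.
    Executes tasks in dependency order and returns the order used."""
    order = []
    ts = TopologicalSorter(deps)
    ts.prepare()
    while ts.is_active():
        ready = list(ts.get_ready())  # mutually independent tasks
        for name in ready:
            tasks[name]()
            order.append(name)
        ts.done(*ready)
    return order

results = {}
tasks = {
    "load":   lambda: results.setdefault("data", [1, 2, 3]),
    "square": lambda: results.setdefault("sq", [x * x for x in results["data"]]),
    "sum":    lambda: results.setdefault("total", sum(results["data"])),
    "report": lambda: print(results["sq"], results["total"]),
}
deps = {"square": {"load"}, "sum": {"load"}, "report": {"square", "sum"}}
print(run(tasks, deps))
```

Here "square" and "sum" both become ready once "load" finishes, so a scheduler is free to overlap them; expressing that in raw CUDA streams for every kernel in a large program is exactly the boilerplate the comment above is complaining about.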

Comment Re:A whole new level of parallelism (Score 3, Interesting) 137

The article and everybody else are ignoring one large, valid use of GPUs in the data center: whether you call it business intelligence or OLAP, it needs to live in the data center and it needs serious number crunching. There is less difference between this and scientific number crunching than most people might think. I have crunched numbers for financials at a major multinational, and I had the privilege of being the first to process a full genome (a complete genetic sequence, terabytes of data) for a single individual; the genomic analysis was actually much more integer-based than the financial work. Based on my experience with both, I created the Kappa Library for doing CUDA or OpenMP analysis in a data center, whether for business or scientific work.
