High performance FFT on GPUs
A reader writes: "The UNC GAMMA group has recently released a high performance FFT library which can handle large 1-D FFTs. According to their webpage, the FFT library is able to achieve 4x higher computational performance on a $500 NVIDIA 7900 GPU than optimized Intel Math Kernel FFT routines running on high-end Intel and AMD CPUs costing $1500-$2000. The library is supported for both Linux and Windows platforms and is tested to work on many programmable GPUs. There is also a link to download the library freely for non-commercial use."
It's nice... (Score:5, Informative)
Re:Uhh.. (Score:5, Informative)
FFT; (Score:2, Informative)
Re:Uhh.. (Score:3, Informative)
One useful way to think of the FFT is as a transform of signal data from the time domain (raw samples) to the frequency domain (the constituent sine waves). This is useful for all sorts of purposes, such as being the first step in speech recognition or the basis of JPEG/MPEG compression.
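As a toy illustration of that time-to-frequency move, here's a minimal NumPy sketch (the sampling rate and test frequency are made up for the example):

```python
import numpy as np

# Sample a 5 Hz sine wave for 1 second at 100 Hz.
fs = 100                     # sampling rate (Hz)
t = np.arange(fs) / fs       # 100 time-domain samples
signal = np.sin(2 * np.pi * 5 * t)

# Transform to the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1/fs)

# The dominant frequency bin sits right at 5 Hz.
peak = freqs[np.argmax(np.abs(spectrum))]
print(peak)  # → 5.0
```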
Re:Math library for sale? (Score:2, Informative)
If this is true, being able to do the same computations in a fourth of the time is a pretty nice thing, and using a bit more of the GPU is likely to be acceptable in a decent number of applications.
Re:$1500-$2000? (Score:5, Informative)
They're running against dual-processor systems (Opteron and Xeon).
Re:$1500-$2000? (Score:3, Informative)
Re:Uhh.. (Score:5, Informative)
FFTW is the 'Fastest Fourier Transform in the West', a cute name for the work of a number of graduate students who use several techniques to turn the FFT from 'Numerical Recipes in C' into a freaking speed demon.
GPUFFTW is much the same thing, but ported to your video card's GPU, which is generally more optimized for doing the 'apply a floating point matrix to an array' thing, thus speeding the FFT up even more while relieving the main processor from doing the work.
If you don't have a high-powered video card, this means nothing for you. If you do, it means the above operations (compression, spectrum analysis, etc.) can be done faster and without eating up processor time.
That's where PCIe is useful (Score:4, Informative)
Re:FFT=Fast Fourier Transforms (Score:3, Informative)
Re:Any 64 bit GPU's? (Score:3, Informative)
Implementing 'big numbers', or numbers larger than the processor's spec, is actually quite computationally heavy when compared to the operations you're replacing. As such, a 4x increase in the speed of computation can translate to a (to pull a number from my arse) 0.25x loss of performance when dealing with larger floats.
However...
With CPU/GPU cooperation, the precision gap can be handled by using the CPU to generate a lookup table of high-precision trig values as, say, a texture, and treating the numbers as mere indices into that array. Addition is relatively light bignum math, and with the FFT you could implement the addition and lookup math quite speedily on the GPU side.
Of course, from reading in, I'm pretty sure that's what's in-process for higher precision.
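A toy Python sketch of that lookup idea; the table size is made up, and this stands in for the GPU texture path rather than reproducing it:

```python
import numpy as np

# "CPU side": precompute a high-precision sine table (the hypothetical
# texture from the comment above), 4096 entries over [0, 2*pi).
N = 4096
table = np.sin(2 * np.pi * np.arange(N) / N)   # computed in float64

def sin_lookup(x):
    """'GPU side': range-reduce x into [0, 2*pi) and read the nearest entry."""
    idx = int(round((x % (2 * np.pi)) / (2 * np.pi) * N)) % N
    return table[idx]

# Accuracy is limited by the table spacing, roughly 2*pi/N ~ 1.5e-3.
err = abs(sin_lookup(1.0) - np.sin(1.0))
```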
Precision limit. (Score:3, Informative)
Most GPUs can do at most 32-bit floating point operations (depending on the brand and the model), whereas most scientific applications use 64-bit and higher (the old FP unit could do 80-bit FP math; SSE registers in recent processors are 128 bits wide, packing two 64-bit floats).
So some users will be happy, e.g. for sound processing (GPUs have already been used to process reverberation to create realistic sound environments; too lazy to do the search for the Slashdot reference).
But other applications (cryptography, maybe) will probably need more FP precision.
Not to mention that most scientific applications run mostly under *nix like Linux or BSD, for which GFX driver support isn't always incredible, especially for recent models (the website mentions a performance hit).
(And also remember that soon Vista will have an interface that'll completely clog the GPU and leave fewer free cycles for general purpose calculation.)
Bytes/bits? (Score:3, Informative)
The Video RAM will determine the maximum array length that can be sorted on the GPU. A rough guideline for performing FFT on 32-bit floats is: Maximum array length in millions = Video RAM in MB / 32
Max array length equals video RAM in megabytes divided by 32... bits? Correct me if I'm dumb, but shouldn't it rather be "Video RAM in MB / 4"?
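For what it's worth, the arithmetic behind the two figures looks like this; the factor-of-32 interpretation at the end is a guess, not something the GPUFFTW page spells out:

```python
vram_mb = 512                       # example card size
bytes_per_float = 4                 # one 32-bit float

# Naive count: raw floats that fit in VRAM (the "/4" reading above).
raw_floats = vram_mb * 1024 * 1024 // bytes_per_float
print(raw_floats)          # → 134217728, i.e. ~134 million

# The library's stated guideline: millions of elements = MB / 32.
guideline_millions = vram_mb // 32
print(guideline_millions)  # → 16

# The 8x gap between the two plausibly covers complex intermediates
# (2 floats per sample) plus ping-pong/scratch buffers on the card.
```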
What's an FFT (Score:5, Informative)
The Fast Fourier Transform is an algorithm to turn a set of data (as amplitude vs. time) into a set of waves (as amplitude vs. frequency). Say that I have a recording of a piano playing an A at 440 Hz. If I plot the actual data that the sound card records, it'll come out something like this picture [pianoeducation.org]. There's a large fading-out, then the 440 Hz wave, then a couple of overtones at multiples of 440 Hz. The Fourier series will have a strong spike at 440 Hz, then smaller spikes at higher frequencies: something like this plot [virtualcomposer2000.com]. (Of course, that's not at 440, but you get the idea.)
The reason we like Fourier transforms is that once you have that second plot, it's extremely easy to tell what the frequency of the wave is, for example - just look for the biggest spike. It's a much more efficient way to store musical data, and it allows for, e.g., pitch transformations (compute the FFT, add your pitch change to the result, and compute the inverse FFT which uses almost the same formula). It's good for data compression because it can tell us which frequencies are important and which are imperceptible - and it's much smaller to say "Play 440 Hz, plus half an 880 Hz, plus..." than to specify each height at each sampling interval.
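Here's a crude NumPy sketch of that FFT / shift / inverse-FFT pitch change; a naive bin shift, not a production pitch shifter:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
note = np.sin(2 * np.pi * 440 * t)   # an A at 440 Hz, 1 second

spectrum = np.fft.rfft(note)
shifted = np.roll(spectrum, 440)     # crude shift: +440 Hz (1 Hz per bin here)
shifted[:440] = 0                    # clear the bins that wrapped around
octave_up = np.fft.irfft(shifted)    # inverse FFT back to samples

# The dominant frequency has moved from 440 Hz to 880 Hz.
freqs = np.fft.rfftfreq(fs, d=1/fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(octave_up)))]
print(peak)  # → 880.0
```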
The FFT is a very mathematics-heavy algorithm, which makes it well suited for a GPU (a math-oriented device, because it performs a lot of vector and floating-point calculations for graphics rendering) as opposed to a general-purpose CPU (which is more suited for data transfer and processing, memory access, logic structures, integer calculations, etc.) We're starting to see a lot of use of the GPU as the modern equivalent of the old math coprocessor.
If you're looking for more information, Wikipedia's FFT article is a good technical description of the algorithm itself. This article [bu.edu] has some good diagrams and examples, but its explanation is a little non-traditional.
Re:Any 64 bit GPU's? (Score:4, Informative)
Re:It's nice... (Score:5, Informative)
Most highly parallel GPU-type chips lack support for gradual underflow, for example, one of those "ill-defined corners of the number space" where 754 has been a tremendous boon. Flush-to-zero is fine if you're decoding MP3s or unpacking texture maps, but it causes a lot of problems when you start trying to do more general scientific computations. Sometimes those low-order bits matter a whole lot; sometimes they're the difference between getting an answer accurate to 4 digits and an answer with *no* correct digits.
"Simple trig functions" have their own problems on these architectures; try writing an acceptable range-reduction algorithm for sin or cos without having correctly rounded arithmetic ops. Sin and cos are, in fact, two of the hardest operations in the math lib on which to get acceptable accuracy.
Admittedly, none of these objections is an issue with FFTs. But the reason that FFTs will perform acceptably on such an architecture is that the operations are (usually) constrained to the domain in which you don't encounter the problems I mention, not because the operations themselves are inherently safe. The lack of support for gradual underflow will cause you to lose many, many bits in frequency components that have nearly zero magnitude, but you usually don't care about those components when you're doing FFTs anyway.
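You can see what gradual underflow buys you even from Python, since IEEE 754 float32 keeps subnormals; a flush-to-zero GPU would turn these values into 0:

```python
import numpy as np

# 1e-45 rounds to the smallest positive float32 subnormal (~1.4e-45).
tiny = np.float32(1e-45)
assert tiny > 0                  # survives, thanks to gradual underflow

# With subnormals, x != y guarantees x - y != 0 even near the
# underflow threshold; flush-to-zero breaks that guarantee.
x = np.float32(2) * tiny
assert x - tiny == tiny          # exact subnormal arithmetic
```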
Re:Any 64 bit GPU's? (Score:3, Informative)
Re:If you need gigaflops... (Score:2, Informative)
SRC Computers puts out FPGA-based systems that already have a nice 32-bit floating point FFT library in their development environment. Most customers use the FFT for radar image processing, where the best PC solution is 50 times slower than the FPGA-based solution. Think UAVs with smart tracking off their radar.
http://www.srccomputers.com/ [srccomputers.com]
8xx versus 2xx (Score:2, Informative)
Re:It's nice... (Score:5, Informative)
Why? Because the x86 isn't a DSP?
The x86 is a general-purpose CPU. It isn't brain dead; historically it's almost always been at least half as fast as the latest expensive processor fad du jour, and sometimes it has actually been the fastest available general purpose processor. As these fads have come and gone, the x86 has quietly kept improving by incorporating many of their best ideas.
The Cell processor is basically a POWER processor core packaged with a few DSPs tacked onto the die. That sounds like a kludge to me, but if it turns out to be a success, there's nothing stopping people from tacking DSPs onto an x86 die.
All a DSP is good at is fast number crunching. It usually has little in the way of an MMU, along with a memory architecture tuned mainly for vector-like operations, branch prediction tuned only for matrix math, etc. DSPs would make a bad choice for running general purpose programs, especially with cache and branch issues becoming the dominant performance bottleneck in recent times. DSPs would be a horrible choice for running an OS with any kind of security enforcement. Using a GPU as a poor-man's DSP is interesting, but it suffers even more from these same limitations. If DSPs really offered a better solution for general-purpose problems, they would have replaced other CPU architectures decades ago.
Re:Correction on usages (Score:1, Informative)
Re:Rush hour math. (Score:5, Informative)
Re:Cryptography? (Score:1, Informative)
The basic trick there is that integer multiplication is essentially convolution on the pieces of the numbers, and convolution is something that is trivial to compute in frequency space, which is where the FFT comes in. Check out the "Fourier Transform Methods" section of the wikipedia article on multiplication algorithms [wikipedia.org].
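A toy version of that trick in Python/NumPy; float FFTs limit it to modest sizes, and real bignum code uses number-theoretic transforms or careful error bounds instead:

```python
import numpy as np

def fft_multiply(a, b):
    """Multiply two non-negative ints by FFT convolution of their
    base-10 digits. Illustrative only: float rounding limits size."""
    da = [int(d) for d in str(a)][::-1]   # least-significant digit first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):          # pad to a power of two
        n *= 2
    # Pointwise product in frequency space == convolution of digits.
    conv = np.rint(np.fft.ifft(np.fft.fft(da, n) * np.fft.fft(db, n)).real)
    carry, digits = 0, []
    for c in conv:                        # propagate decimal carries
        carry += int(c)
        digits.append(carry % 10)
        carry //= 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int("".join(map(str, digits[::-1])))

print(fft_multiply(12345, 6789) == 12345 * 6789)  # → True
```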
Re:What's an FFT (Score:2, Informative)
FFT is to naive FT as quicksort is to insertion sort. Both quicksort and the FFT are considered to be among the top ten algorithms [colorado.edu] of the 20th century.
Re:What's an FFT (Score:3, Informative)
Be careful with the terminology; you correctly referred to "naive FT algorithm" above, but this sentence might give the impression that the Fourier transform itself is an algorithm. FT is a function whereas the FFT is an algorithm that computes the function. It would be more appropriate to say that FFT is to the Fourier transform what quicksort is to sorting.
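To make the analogy concrete, here's the naive O(N^2) evaluation next to NumPy's FFT; same function computed, very different cost:

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the discrete Fourier transform."""
    N = len(x)
    n = np.arange(N)
    # Full DFT matrix: W[k, j] = exp(-2*pi*i*k*j / N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

x = np.random.default_rng(0).standard_normal(256)
# The FFT computes exactly the same transform, just in O(N log N).
assert np.allclose(naive_dft(x), np.fft.fft(x))
```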
Re:Great for audio! (Score:2, Informative)
Nope! (Score:4, Informative)
For the latter, you need a PSD (power spectral density) plot, which is obtained by finding the square of the magnitude of the freq-domain FFT (complex) outputs.
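For example, a minimal PSD computation in NumPy; the frequencies and amplitudes here are made up:

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
# A 50 Hz tone plus a weaker 120 Hz tone.
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(sig)
psd = np.abs(spectrum) ** 2          # power = squared magnitude

freqs = np.fft.rfftfreq(fs, d=1/fs)
print(freqs[np.argmax(psd)])  # → 50.0 (the stronger tone wins)
```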
And the term "FFT" usually describes a specific class of algorithms that finds a Discrete Fourier Transform of a signal in much less than O(N^2) time, where N is the number of elements/samples considered.
However, the FFT is also useful to perform fast polynomial multiplication (and even fast multiplication of very very very long numbers). This application has nothing to do with power or frequencies in a signal.
Re:stating the obviou... (Score:2, Informative)
Go read a tutorial on CPU architecture; where it talks about pipelining, replace the pipeline that takes 5 cycles end to end with one that takes thousands of cycles, and imagine how you could write basic algorithms. Worse yet, imagine how you would handle I/O or interrupts.
Re:Errr, I don't want to sound skeptical... (Score:3, Informative)
Re:Interesting question... (Score:5, Informative)
Our tests on nVidia 5600 series AGP cards (this was several years ago) showed that the net SETI@home throughput using the GPU was at best 1/5 of what we could obtain with the CPU. This was primarily due to transfers out of graphics memory and into main memory.
PCI Express allows for symmetric bandwidth to graphics memory and graphics memories are now typically larger than the size of our working set. The difficulty will be in benchmarking to see which is faster for a specific GPU/CPU combination.
At any rate it's a fairly simple job to swap FFT routines in SETI@home. The source is available [berkeley.edu]. Someone may have done it by now...
Your error analysis is totally wrong (Score:5, Informative)
In floating-point arithmetic, the algorithm was proved in 1966 to have an upper bound for the error that grows only as O(log N), and the mean (rms) error grows only as O(sqrt(log N)). (See this page [fftw.org] for more info.) (Errors in fixed-point arithmetic are worse, growing as O(sqrt(N)).)
Even in single precision, the errors for their FFT sizes are probably quite reasonable, assuming they haven't done something silly like use an unstable trigonometric recurrence.
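A quick empirical check of that claim: single-precision NumPy FFTs against a double-precision reference. Exact numbers will vary by machine and NumPy version, but the relative error stays tiny and grows only slowly with N:

```python
import numpy as np

rng = np.random.default_rng(42)
errs = []
for N in (2**10, 2**14, 2**18):
    x = rng.standard_normal(N)
    ref = np.fft.fft(x)                          # float64 reference
    single = np.fft.fft(x.astype(np.float32))    # single-precision run
    rel_err = np.linalg.norm(single - ref) / np.linalg.norm(ref)
    errs.append(rel_err)
    print(N, rel_err)                            # stays around 1e-7
```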
erratum (Score:4, Informative)
Re:Based on FFTW? (Score:2, Informative)
We are not porting FFTW to GPUs and our project is not related to FFTW. FFTW is a more general library designed mainly for CPUs. GPUFFTW uses some cache optimizations to obtain maximum memory performance on GPUs.