Most of the time, when people talk about taking the FFT of a signal, they really mean the STFT, which breaks the signal into small blocks and then takes the FFT of each block independently; the results are known as the analysis frames of the signal. Either way, it fundamentally comes down to this: you have a discrete (sampled) signal of finite length N (N could be really large for a full FFT, or something like 1024 for an STFT), and the amount of information (bandwidth) in that signal is fundamentally limited by the time interval between the samples (see the Shannon-Nyquist sampling theorem).
Forget Fourier for a second; linear algebra alone says that you are free to transform that signal into another representation by taking its inner product with each of a set of basis functions, and it will always take just N coefficients, so long as your basis functions are linearly independent. The most useful linearly independent bases are also orthogonal, because then each of your N coefficients represents an independent part of your original signal.
As it turns out, the set of sinusoids that are harmonics of the sinusoid with period N (up to the sinusoid with a period of just two samples) is linearly independent and orthogonal, and is therefore completely sufficient to describe a signal of length N. Fourier was one of the first to use sinusoidal bases, hence his eponymous transform. Mind you, once again, N could be the length of the original signal, or just the length of the STFT blocks of your signal. We typically use the STFT because, for most real-world signals, meaningful ideas of frequency content are time-local. When you listen to music, for example, frequency only really has perceptual meaning over windows of the sound on the order of 1/20 of a second. So taking the FFT of an entire audio file all at once will give you the frequency content of the entire file, but that has little perceptual or musical meaning.
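As a quick sanity check, here's a small NumPy sketch (the 8-sample length and the random test signal are arbitrary choices of mine) verifying that the complex sinusoidal harmonics are mutually orthogonal, and that N of them exactly reconstruct any length-N signal:

```python
import numpy as np

N = 8
n = np.arange(N)
# Basis matrix: column k is the complex sinusoid completing k cycles over N samples
basis = np.exp(2j * np.pi * np.outer(n, n) / N)

# Orthogonality: inner products of distinct basis vectors vanish
# (the Gram matrix is N on the diagonal, zero elsewhere)
gram = basis.conj().T @ basis
assert np.allclose(gram, N * np.eye(N))

# Completeness: analyze any length-N signal, then resynthesize it exactly
x = np.random.default_rng(0).standard_normal(N)
coeffs = basis.conj().T @ x / N   # analysis (this is the DFT, up to scaling)
x_rec = (basis @ coeffs).real     # synthesis
assert np.allclose(x, x_rec)
```

The analysis step here is exactly the DFT, just written as an explicit matrix product instead of the fast algorithm.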
But there is a problem with the STFT. Using exactly N sinusoids to represent a block of data doesn't work correctly if your data contains frequencies that don't fit a whole number of cycles into N samples. In other words, the FFT only produces meaningful measurements of frequency if the signal is periodic in the window length. In reality, it almost never is, because the STFT process of chopping the signal into blocks is not at all careful about avoiding snipping the signal at awkward points. The result of taking the FFT of a block with awkwardly clipped edges is known as "spectral leakage": frequency content appears in sidebands close to the real frequency, instead of the ideal single spike at the measured frequency. To combat this, we use windowing functions; they don't totally fix the problem, but they do make it much, much better.
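Here's a small NumPy sketch of the effect (the 1024-sample window, the deliberately off-bin test frequency, and the choice of a Hann window are mine, for illustration): a sinusoid that doesn't complete a whole number of cycles in the window leaks energy across the spectrum, and windowing concentrates that energy back near the true frequency:

```python
import numpy as np

N = 1024
fs = 1024.0
n = np.arange(N)
# A sinusoid whose frequency does NOT land on a DFT bin
# (100.5 cycles per window, so the block clips it mid-cycle)
f = 100.5
x = np.sin(2 * np.pi * f * n / fs)

raw = np.abs(np.fft.rfft(x))                  # rectangular window (no window)
hann = np.abs(np.fft.rfft(x * np.hanning(N))) # Hann-windowed

def leakage(spec):
    # Fraction of spectral energy landing far (>10 bins) from the peak
    peak = np.argmax(spec)
    far = np.abs(np.arange(len(spec)) - peak) > 10
    return spec[far].sum() / spec.sum()

# The Hann window drastically reduces far-from-peak leakage
assert leakage(hann) < leakage(raw)
```

The windowed spectrum has a slightly wider main lobe, but its sidebands fall off far faster, which is the usual trade the windowing function buys you.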
Hopefully that sheds a little light on the topic.
For numbers small enough to fit in your processor's registers, multiplication is a constant-time operation. But when dealing with numbers much larger than your processor's word length, you have to resort to the long-multiplication algorithm. As it turns out, multiplying polynomials is exactly the same as convolving their coefficient sequences, which naively takes O(n^2) operations, and long multiplication is just a special case of polynomial multiplication where the variable is fixed:
525 * 343 = (5x^2 + 2x + 5)(3x^2 + 4x + 3), for x = 10
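You can check that identity numerically with a quick NumPy sketch (`np.convolve` performs the coefficient convolution, i.e. the polynomial multiplication):

```python
import numpy as np

# Digits of 525 and 343 as polynomial coefficients, highest power first
a = [5, 2, 5]   # 5x^2 + 2x + 5
b = [3, 4, 3]   # 3x^2 + 4x + 3

# Polynomial multiplication IS convolution of the coefficient sequences
c = np.convolve(a, b)   # -> [15, 26, 38, 26, 15]

# Evaluating the product polynomial at x = 10 recovers the integer product
value = sum(coef * 10**k for k, coef in enumerate(reversed(c)))
assert value == 525 * 343   # 180075
```

Note the convolved coefficients can exceed 9 (e.g. 38); evaluating at x = 10 is what implicitly performs the carries.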
So for two large numbers, rather than going through the long process of convolving the digits (long-multiplying), you do the following process:
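The process in question is presumably the convolution-theorem shortcut: FFT both digit sequences, multiply pointwise, inverse-FFT, and propagate the carries. A rough Python sketch (the function name is mine; a floating-point FFT is fine at this scale, though real bignum libraries use exact number-theoretic transforms to avoid rounding error):

```python
import numpy as np

def fft_multiply(a: int, b: int) -> int:
    """Multiply two integers by convolving their digits via FFT -- a sketch,
    not production bignum code."""
    da = [int(d) for d in str(a)][::-1]   # digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    size = len(da) + len(db) - 1
    # Zero-pad to a power of two so circular convolution equals linear convolution
    n = 1
    while n < size:
        n *= 2
    fa = np.fft.rfft(da, n)
    fb = np.fft.rfft(db, n)
    # Pointwise product in the frequency domain <=> convolution of the digits
    conv = np.rint(np.fft.irfft(fa * fb, n)).astype(int)[:size]
    # Propagate carries to turn coefficient sums back into base-10 digits
    result, carry = 0, 0
    for k, coef in enumerate(conv):
        total = coef + carry
        result += (total % 10) * 10**k
        carry = total // 10
    return result + carry * 10**size

assert fft_multiply(525, 343) == 525 * 343
```

Since the FFT and inverse FFT are O(n log n), this beats the O(n^2) digit convolution once the numbers get large enough, which is the whole point of the trick.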
I agree 100%. To throw in my 2 cents, I was very much interested in upgrading from my Canon G10 to a MILC. What I wanted was a similarly compact camera, but with interchangeable lenses and a larger sensor. I had been holding my breath for Canon to jump into the MILC sector. I had it in my mind that they were completely missing the boat on the revolution. But when I started doing my research, I realized that I shouldn't be holding my breath.
MILCs right now are terribly expensive compared with more capable DSLRs. What you gain in portability, you lose in lens selection, sensor size, and upgradability. The biggest problem, though, is that manufacturers are apparently terribly confused about who the target audience for MILCs is, because the design decisions that make a MILC a step up from a bridge camera versus an alternative to a DSLR are often contradictory. I think Nikon's 1-mount is clear evidence of this: a comically small sensor, a comically odd mix of features and limitations, and a comically high price.
Some people are already writing the obituary for DSLRs, but they aren't going anywhere any time soon. Maybe in 10 years MILCs will dominate the market, but if you want a camera today with great image quality, smooth upgrade paths, and low cost, you can't go wrong with the high-end consumer DSLRs. I ended up with the Nikon D5100, which is basically equivalent to the Canon T3i. I bought mine with an 18-55mm kit lens and a 55-200mm telephoto zoom for $760. I am absolutely loving it.
Getting the job done is no excuse for not following the rules. Corollary: Following the rules will not get the job done.