...and yet, it does. It's become so routine, so reliable, so well-understood and well-controlled, that doctors and researchers know they can rely on it as a matter of course. They still have to be aware of the errors and distortions that can arise, but that's true of every imaging or monitoring system, all the way down to the stethoscope and the fever thermometer.
The problem with the activation maps is precisely that one is NOT looking at an image, so there is no ground truth against which to fine-tune the algorithms. Therefore, fMRI is NOT well understood in the way that CT or MRI are.
Consider that in imaging, you have the luxury of comparing the output of a brain scan to the known physical structure of the brain. Is there a hippocampus? No? Well then it didn't work, go back and fiddle until you can show me a hippocampus.
In fMRI, apart from low-level sensory cortices (where visual field mapping techniques can reproduce broad retinotopic maps), researchers are operating in a vacuum in which there is no hard-and-fast error signal with which to fine-tune the methods.
Science has to proceed very cautiously in such a situation. This is particularly true when one has hundreds of thousands of voxels to sift through, because it's easy to find a pattern in pure noise if you search through enough of it.
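To make that concrete, here is a minimal simulation sketch (not from the original comment; voxel and scan counts are illustrative) showing why mass testing of voxels demands correction: if you run an uncorrected t-test at p < 0.001 on 100,000 voxels of pure noise, you expect on the order of a hundred "significant" voxels even though no real signal exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative sizes: a pure-noise "brain" with no real signal anywhere.
n_voxels, n_scans = 100_000, 20
data = rng.standard_normal((n_voxels, n_scans))

# One-sample t-test per voxel against zero activation.
t_vals, p_vals = stats.ttest_1samp(data, 0.0, axis=1)

# Uncorrected threshold: ~0.001 * 100,000 = ~100 false positives expected.
hits = int((p_vals < 0.001).sum())

# Bonferroni-corrected threshold: false positives all but vanish.
hits_bonf = int((p_vals < 0.05 / n_voxels).sum())

print(f"uncorrected 'activations': {hits}")
print(f"Bonferroni-corrected:      {hits_bonf}")
```

The point is not that Bonferroni is the right correction for fMRI (field practice favors cluster-based and false-discovery-rate methods), but that without *some* principled control, spurious "activations" are guaranteed.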
So I would argue that fMRI presents a very different set of challenges from MRI and CT, and it is therefore very important to keep a sharp, critical eye on the statistics used, as these authors are doing.
To illustrate this point further, here is a link to a poster in which someone put a dead salmon into a magnet and found that (in the absence of proper statistical controls) its decomposing brain was apparently reacting to the emotional content of pictures: