As a fellow researcher in fMRI with a doctorate myself, I have to take exception to how easily you think you can "classify" people with terrorist (or even simply dishonest) thoughts.
1. Your study most likely stimulated the subject several times, of which an average was taken and then classified. This would not work in the real world: repeating the same question over and over ("Are you a terrorist? Are you a terrorist? ...") is not analogous to having a subject push a button repeatedly. For brain imaging technology to work in this setting, you have to be able to image individual thoughts, not averages over many similar thoughts. So-called time-resolved fMRI tries to do this, but makes several compromises to do so.
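To illustrate why trial averaging matters so much, here is a rough simulation with entirely hypothetical numbers (the signal and noise amplitudes are not from any actual scan): a weak "activation" buried in noise is unrecoverable from a single trial but emerges cleanly after averaging.

```python
import random
import statistics

# Hypothetical illustration: a weak activation signal buried in noise,
# as in a single fMRI trial, versus the average of many repeated trials.
random.seed(0)

signal = 1.0        # true activation amplitude (arbitrary units)
noise_sd = 5.0      # single-trial noise, much larger than the signal
n_trials = 100

def trial():
    """One simulated trial: signal plus Gaussian noise."""
    return signal + random.gauss(0, noise_sd)

single = trial()
averaged = statistics.mean(trial() for _ in range(n_trials))

# Averaging n trials shrinks the noise standard deviation by sqrt(n):
# here from 5.0 down to ~0.5, so the averaged estimate sits near the
# true signal while any single trial is dominated by noise.
print(f"single trial:     {single:.2f}")
print(f"{n_trials}-trial average: {averaged:.2f}")
```

The sqrt(n) noise reduction is exactly what a classifier operating on averaged data gets for free, and what a real-world single-question screen would lack.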
2. Motor tasks activating the primary motor cortex are very simple from a cognitive processing standpoint. That's why it is so easy to classify motor tasks. Other primary sensory areas, such as the visual and auditory cortices, are also relatively easy to classify. In the real world, you have to classify abstract thinking, which is much more diffuse and complex in nature, involving several areas of the brain at significantly lower activation strength. Yes, there have been fMRI studies that demonstrated some success at discerning people who lie from those who tell the truth, or even Democrat from Republican, but the success was significantly worse than your theoretical 96%, and additionally required a battery of several questions, averaged together, on highly cooperative subjects.
3. You listed several reasons why fMRI is impractical, but you left out the most pressing one: motion. There are motion correction algorithms, but they sacrifice significant spatial resolution to work, and they often fail for large movements. There is no way to ensure that a subject is cooperative enough to stay still throughout the scan. If I were a terrorist, I would simply keep moving my head during the scan, rendering any result inconclusive.
4. As was hinted at in a previous post, the statistics of the situation make brain imaging an even bleaker prospect. In the general population (say, those who fly on planes), terrorists represent an extremely low prevalence, or prior probability. You would need a test with incredibly high specificity to be of any practical use in this situation. For example, if the prevalence were 1:100,000, a test with 96% sensitivity and 96% specificity has a positive predictive value of about 0.02%--that is, you would see roughly 4,000 falsely positive "terrorists" for every true terrorist found.
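The arithmetic above is easy to check directly with Bayes' rule; a quick sketch (assuming, for lack of a quoted figure, that specificity matches the 96% sensitivity):

```python
# Positive predictive value at low prevalence (Bayes' rule).
# Assumption: specificity is taken equal to the quoted 96% sensitivity.
sensitivity = 0.96
specificity = 0.96
prevalence = 1 / 100_000   # terrorists among the flying public

true_pos = sensitivity * prevalence            # P(positive AND terrorist)
false_pos = (1 - specificity) * (1 - prevalence)  # P(positive AND innocent)
ppv = true_pos / (true_pos + false_pos)

print(f"PPV: {ppv:.4%}")                                    # → 0.0240%
print(f"false positives per true terrorist: {false_pos / true_pos:.0f}")  # → 4167
```

The dominant term is the 4% false-positive rate applied to the 99.999% of innocent flyers, which is why even a seemingly accurate test is useless as a mass screen.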