powrogers - We are indeed submitting it as a letter/commentary at one of the major neuroimaging journals. We feel that is the proper way to address the topic, not as though we have discovered something new. The poster was a little more sarcastic in that regard, but the paper/commentary is very straightforward.
I would prefer not to name journal names at this time, since we are just now finishing up our complete review of all 2008 articles in seven major journals. Suffice it to say that if you are in the field of neuroimaging you have probably read a paper from these sources. You are right that the trend has been very good in terms of requiring new papers to have correction. Our end goal is to make it required unless there is a justifiable reason not to.
venicebeach - Again, good points. The trouble is that multiple comparisons correction is not the de facto standard in any neuroimaging journal. Some journals, like NeuroImage and HBM, have become quite good about requiring correction in the results. Still, even they are not 100%. Other journals with a lower impact factor are quite a bit worse, with uncorrected statistics used in almost 50% of the studies. So, either people know about the problem and are willingly choosing to ignore it when they publish, or they are unaware of the seriousness of the problem and need a salient reason to begin correcting. We believe it is the latter, which is why we published the Salmon.
As for the argument about it being counter-productive, I fully agree. We presented the poster at the Organization for Human Brain Mapping meeting last June, which was our target audience. I then uploaded the poster to my website so those researchers could grab a copy. The poster got picked up by a few weblogs and eventually spiraled into what you see on Slashdot. We were quite content to publish the paper in a sleepy corner of neuroimaging and wanted it to remain as a discussion piece among scientists.
Yeah, it is making the job of scanning through the comments a bit difficult. It is what it is though - doesn't make me love Slashdot any less...
powrogers - You are right that the conclusions were made many years ago. So, why does a sizable percentage (up to 50% in some journals) of imaging results still report only uncorrected statistics? That is our motivation with the Salmon poster - to get all fMRI researchers on board in using multiple comparisons correction in their work. I would agree that the poster has little in terms of scientific novelty, but its significance to the field lies in helping to set proper standards for publishing fMRI results. Correction should be mandatory, unless you have a seriously good reason not to.
Also, no, we didn't cover autocorrelation. We thought we would take it one statistical-issue-that-people-don't-seem-to-correct-for at a time.
AC - You are incorrect when you state that nobody in fMRI would publish without FDR or FWER correction. The percentage of articles published using uncorrected statistics is still quite high, which is the entire reason we published the Salmon results. The big fMRI journals like NeuroImage and HBM are pretty good these days, but I would still challenge you to look through one issue and not see some uncorrected statistics. The problem is worse in some journals than in others. The whole point we are trying to convey is that uncorrected thresholds and minimum cluster sizes are an inappropriate control for the multiple comparisons problem, and that all researchers should be applying proper correction to their data.
AC - Our poster/paper is not about proving definitively the necessity of multiple comparisons correction. You are correct that this has already been done by folks like Benjamini, Hochberg, Friston, and Worsley (to name a few) - they all tackled this issue back in the 90s. Our commentary is targeted at the sizable fraction of individuals who do not use multiple comparisons correction for their fMRI results. You are right that we don't add a lot that is new to the technical discussion of why correction is necessary. However, we are of the opinion that the Salmon poster adds a great deal to the debate regarding why everyone should be using correction on their own results. Hopefully you see the distinction.
daenris - Great clarification between the multiple comparisons problem and the non-independence error. The only thing I would add is that while the majority of published fMRI papers do correction there is still a sizable minority that do not. That is why we wrote up the salmon results as a poster.
kozar - That is almost exactly how we prepared the salmon that we scanned. It was delicious.
ardeaem - At face value you are absolutely right. The majority of cognitive neuroscientists do use multiple comparisons correction in their research. Our commentary is targeted at the remainder of researchers who continue to use uncorrected statistics. The percentage is larger than you might believe, and my co-authors and I are of the opinion that we need to get our statistical house in order for the field to mature.
venicebeach - It is good to see some other imagers commenting on the poster. The entire point of our commentary is that you should be using FDR or FWER correction in your research. These methods address the multiple comparisons problem in fMRI and let you state the probability of a false positive across the whole brain. Simply using a strict statistical threshold (a low p-value) and a minimum cluster size (e.g., 8 voxels) is an unknown control for multiple comparisons that may, or may not, be appropriate for your data.
Your point that 'when you do your thresholding wrong you get meaningless results' is spot on. A sizable fraction of published studies do not use multiple comparisons correction. This poster, and our forthcoming paper, argue that they should.
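To make the whole-brain false positive problem concrete, here is a minimal sketch (in Python with NumPy, purely for illustration - real fMRI packages handle correction for you) of the simplest form of FWER control, Bonferroni, and of why an uncorrected voxelwise threshold fails at fMRI scale. The figure of 130,000 in-brain voxels is a ballpark assumption for illustration, not a number from any particular study:

```python
import numpy as np

def bonferroni_mask(p_values, alpha=0.05):
    """Simplest familywise error rate (FWER) control: Bonferroni.
    Testing m voxels each at alpha/m keeps the probability of even
    one false positive anywhere in the family at or below alpha."""
    p = np.asarray(p_values, dtype=float)
    return p <= alpha / p.size

# Why an uncorrected threshold is not enough: at p < 0.001 across a
# ballpark 130,000 in-brain voxels, pure noise still produces about
# 0.001 * 130000 = 130 "active" voxels on average.
print(0.001 * 130_000)   # expected false positives, uncorrected

# Bonferroni instead demands a far stricter per-voxel threshold:
print(0.05 / 130_000)
```

Bonferroni is conservative for smooth fMRI data, which is why the field uses Gaussian random field theory or permutation methods for FWER in practice, but the arithmetic above is the heart of the argument.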
grcumb - I am the author of the Salmon poster, and I wish I had some mod points for your comment. Awesome.
owlstead - I hear you - I have been a fellow
powrogers - Thanks for stopping by our poster last June. I like your comment quite a bit but would add one point. While multiple comparisons correction in fMRI has been well understood for quite some time (you mention 15 years), the current problem is that not everyone applies it when conducting their research. Using a strict statistical threshold (a low p-value) and a minimum cluster size is an unknown, soft control for the problem. Our argument is that true correction methods that control the FDR or FWER should be employed in standard fMRI experiments.
joepa - You make a lot of very good points. Most neuroscientists are aware of the multiple comparisons problem and, at minimum, try to control for it using stricter statistical thresholds (lower p-values) and minimum cluster sizes (requiring several contiguous voxels). The trouble with this approach is that it is a soft control of the multiple comparisons problem. You still have no idea what the false positive rate will be across the whole brain, only on a quasi voxel-by-voxel basis. Using techniques like false discovery rate (FDR) correction or Gaussian random field familywise error rate (FWER) correction, you can make a much stronger statement about what fraction of your results are likely to be true or false.
You are also correct that a majority of neuroscience results are corrected using FDR, FWER, or another method such as permutation testing. The trouble is that a sizable fraction of articles still report uncorrected values. The Salmon paper is our argument that most, if not all, fMRI research needs strong multiple comparisons correction.
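For anyone curious what FDR control actually looks like, here is a minimal sketch of the Benjamini-Hochberg step-up procedure in Python with NumPy. This is illustrative only - the toy p-values are invented, and the standard fMRI packages implement FDR correction for you:

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg FDR: return a boolean mask (in the input
    order) of tests declared significant while controlling the
    expected false discovery rate at level q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest rank k with p_(k) <= (k/m) * q; reject all
    # hypotheses up to and including that rank.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    passed = np.zeros(m, dtype=bool)
    if below.any():
        k = below.nonzero()[0].max()
        passed[order[:k + 1]] = True
    return passed

# Toy example: ten voxel-level p-values (already sorted for clarity)
pvals = [0.001, 0.008, 0.039, 0.041, 0.042,
         0.060, 0.074, 0.205, 0.212, 0.360]
print(benjamini_hochberg(pvals, q=0.05))
```

Note the step-up logic: a p-value can survive even if it misses its own rank's cutoff, as long as some larger rank passes. That adaptivity is what makes FDR less conservative than Bonferroni-style FWER control, and a reasonable default for whole-brain maps.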