The Real 'Stuff White People Like' 286

Here's an interesting and funny look at 526,000 OkCupid users, divided into groups by race and gender, and all the things each group says it likes or is interested in. While it is far from definitive, the groupings give a glimpse of what makes each culture unique. According to the results, white men like nothing better than Tom Clancy, Van Halen, and golfing.

Comment Re:Peer review (Score 1) 287

powrogers - We are indeed submitting it as a letter/commentary at one of the major neuroimaging journals. We feel that is the proper way to address the topic, not as though we have discovered something new. The poster was a little more sarcastic in that regard, but the paper/commentary is very straightforward.

I would prefer not to name journal names at this time, since we are just now finishing up our complete review of all 2008 articles in seven major journals. Suffice it to say that if you are in the field of neuroimaging you have probably read a paper from these sources. You are right that the trend toward requiring correction in new papers has been very good. Our end goal is to make it required unless there is a justifiable reason not to.

Comment Re:Straw man (Score 2, Informative) 287

venicebeach - Again, good points. The trouble is that multiple comparisons correction is not the de facto standard in any neuroimaging journal. Some journals, like NeuroImage and HBM, have become quite good about requiring correction in the results. Still, even they are not at 100%. Other journals with a lower impact factor are quite a bit worse, with uncorrected statistics used in almost 50% of the studies. So, either people know about the problem and are willingly choosing to ignore it when they publish, or they are unaware of the seriousness of the problem and need a salient reason to begin correcting. We believe it is the latter, which is why we published the Salmon.

As for the argument about it being counter-productive, I fully agree. We presented the poster at the Organization for Human Brain Mapping meeting last June, which was our target audience. I then uploaded the poster to my website so those researchers could grab a copy. The poster got picked up by a few weblogs and eventually spiraled into what you see on Slashdot. We were quite content to publish the paper in a sleepy corner of neuroimaging and wanted it to remain as a discussion piece among scientists.

Comment Re:Peer review (Score 1) 287

powrogers - You are right that the conclusions were made many years ago. So why does a sizable percentage (up to 50% in some journals) of imaging results still report only uncorrected statistics? That is our motivation with the Salmon poster - to get all fMRI researchers on board in using multiple comparisons correction in their work. I would agree that the poster has little in the way of scientific novelty, but its significance to the field lies in helping to set proper standards for publishing fMRI results. Correction should be mandatory, unless you have a seriously good reason not to.

Also, no, we didn't cover autocorrelation. We thought we would take it one statistical-issue-that-people-don't-seem-to-correct-for at a time. :)

Comment Re:Discussion (Score 1) 287

AC - You are incorrect when you state that nobody in fMRI would publish without FDR or FWER correction. The percentage of articles published using uncorrected statistics is still quite high, which is the entire reason we published the Salmon results. The big fMRI journals like NeuroImage and HBM are pretty good these days, but I would still challenge you to look through one issue and not find some uncorrected statistics. The problem is worse depending on which journal you read. The whole point we are trying to convey is that uncorrected thresholds and minimum cluster sizes are an inappropriate control for the multiple comparisons problem and that all researchers should be applying proper correction to their data.
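
For anyone who wants to see why an uncorrected threshold plus a cluster rule is such a soft control, here is a quick Python sketch. Every number in it (the grid size, the smoothing, the 8-voxel rule) is an illustrative assumption rather than a value from any real dataset:

    import numpy as np
    from scipy import ndimage
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    noise = rng.standard_normal((64, 64, 30))        # pure noise, no activation anywhere (illustrative grid)
    field = ndimage.gaussian_filter(noise, sigma=2)  # mimic the usual spatial smoothing step
    z = (field - field.mean()) / field.std()         # re-standardize the smoothed field to unit variance

    mask = z > norm.isf(0.001)                       # uncorrected p < 0.001, one-tailed
    print("suprathreshold voxels under the null:", int(mask.sum()))

    labels, n_clusters = ndimage.label(mask)         # group suprathreshold voxels into contiguous clusters
    sizes = np.bincount(labels.ravel())[1:]          # voxel count for each cluster
    print("clusters passing an 8-voxel extent rule:", int((sizes >= 8).sum()))

Depending on the smoothing, clusters of pure noise can survive that kind of extent heuristic, and nothing in the procedure tells you how likely that is for your particular data.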

Comment Re:Discussion (Score 1) 287

AC - Our poster/paper is not about proving definitively the necessity of multiple comparisons correction. You are correct that this has already been done by folks like Benjamini, Hochberg, Friston, and Worsley (to name a few) - they all tackled this issue back in the 90s. Our commentary is targeted at the sizable fraction of individuals who do not use multiple comparisons correction for their fMRI results. You are right that we don't add a lot that is new to the technical discussion of why correction is necessary. However, we are of the opinion that the Salmon poster adds a great deal to the debate regarding why everyone should be using correction on their own results. Hopefully you see the distinction.

Comment Re:Of course its been turned down for publication. (Score 2, Informative) 287

ardeaem - At face value you are absolutely right. The majority of cognitive neuroscientists do use multiple comparisons correction in their research. Our commentary is targeted at the remainder of researchers who continue to use uncorrected statistics. The percentage is larger than you might believe, and my co-authors and I are of the opinion that we need to get our statistical house in order for the field to mature.

Comment Re:Straw man (Score 1) 287

venicebeach - It is good to see some other imagers commenting on the poster. The entire point of our commentary is that you should be using FDR or FWER correction in your research. These methods address the multiple comparisons problem in fMRI and let you state the expected rate of false positives across the whole brain. Simply using a stringent voxelwise threshold (an uncorrected p-value) and a minimum cluster size (e.g. 8 voxels) is an unknown control for multiple comparisons that may, or may not, be appropriate for your data.

Your point about 'when you do your thresholding wrong you get meaningless results' is spot on. A sizable fraction of published studies do not use multiple comparisons correction. This poster, and our forthcoming paper, argue that they should.
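
For readers who have not seen it spelled out, controlling the false discovery rate is only a few lines of code. This is a minimal sketch of the Benjamini-Hochberg step-up procedure in Python, assuming you already have one p-value per voxel from your statistical map:

    import numpy as np

    def bh_threshold(p_values, q=0.05):
        """Benjamini-Hochberg step-up: return the p-value cutoff that keeps
        the expected proportion of false discoveries at or below q."""
        p = np.sort(np.asarray(p_values))
        m = p.size
        passes = p <= q * np.arange(1, m + 1) / m    # BH criterion at each rank
        if not passes.any():
            return 0.0                               # nothing survives correction
        return p[np.nonzero(passes)[0].max()]        # largest p meeting the criterion

Every voxel at or below the returned cutoff is declared active, and you can say up front what fraction of those declared voxels you expect to be false positives.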

Comment Re:Any questions? (Score 2, Informative) 287

owlstead - I hear you - I have been a fellow /. reader for years and have observed firsthand the waxing and waning of articles. The above post was mostly a courtesy in case anyone was genuinely curious about some aspect of the poster. That, and I felt somewhat compelled to post a comment - as a longtime reader it is quite an honor to see some aspect of your own work on the Slashdot main page, even if it was for a dead fish.

Thanks.

Comment Re:Discussion (Score 1) 287

powrogers - Thanks for stopping by our poster last June. I like your comment quite a bit but would add one point. While multiple comparisons correction in fMRI has been well understood for quite some time (you mention 15 years), the current problem is that not everyone applies it when conducting their research. Using a stringent uncorrected p-value threshold and a minimum cluster size is an unknown, soft control of the problem. Our argument is that true correction methods that control the FDR or FWER should be employed in standard fMRI experiments.
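
To put rough numbers on why this matters at the whole-brain level, here is a short back-of-the-envelope calculation in Python. The 60,000-voxel count is an illustrative assumption, and it treats voxels as independent, which real smoothed data are not:

    n_voxels = 60000   # illustrative whole-brain voxel count (assumption)
    alpha = 0.001      # a common uncorrected voxelwise threshold

    # chance of at least one false positive somewhere in the brain, assuming independence
    fwer = 1 - (1 - alpha) ** n_voxels
    print(f"familywise error rate at uncorrected p < 0.001: {fwer:.4f}")   # essentially 1.0

    # Bonferroni: split the desired familywise alpha across every test
    print(f"per-voxel threshold for FWER <= 0.05: {0.05 / n_voxels:.1e}")

Methods like Gaussian random field theory are less conservative than Bonferroni because they account for the spatial smoothness of the data, but the goal is the same: a known, whole-brain false positive rate.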

Comment Re:well known problem, almost always corrected for (Score 2, Informative) 287

joepa - You have a lot of very good points. Most neuroscientists are aware of the multiple comparisons problem and, at minimum, try to control for it using more stringent statistical thresholds (e.g. uncorrected p < 0.001) and minimum cluster sizes (requiring several contiguous voxels). The trouble with this approach is that it is a soft control of the multiple comparisons problem. You still have no idea what the false positive rate will be across the whole brain, only on a quasi voxel-by-voxel basis. Using techniques like false discovery rate (FDR) control or Gaussian random field familywise error rate (FWER) correction, you can make a much stronger statement about how many of your results are likely to be false positives.

You are also correct that a majority of neuroscience results are corrected using FDR, FWER, or another correction method like permutation testing. The trouble is that a sizable fraction of articles still report uncorrected values. The Salmon paper is our argument that most, if not all, fMRI research needs strong multiple comparisons correction.
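
If you want to see the practical difference between the two families of correction, here is a small simulated comparison in Python. It assumes statsmodels is installed, and the voxel counts and effect strengths are invented purely for illustration:

    import numpy as np
    from statsmodels.stats.multitest import multipletests  # assumes statsmodels is available

    rng = np.random.default_rng(1)
    p_null = rng.uniform(size=59900)             # voxels with no real effect: uniform p-values
    p_active = rng.uniform(size=100) * 1e-5      # voxels with a genuine, strong effect
    p = np.concatenate([p_null, p_active])

    print("uncorrected p < 0.001:  ", int((p < 0.001).sum()))
    print("Bonferroni (FWER 0.05): ", int(multipletests(p, 0.05, 'bonferroni')[0].sum()))
    print("Benjamini-Hochberg FDR: ", int(multipletests(p, 0.05, 'fdr_bh')[0].sum()))

Roughly speaking, the uncorrected map mixes around sixty noise voxels in with the hundred real ones, Bonferroni detects only a handful of the genuine effects, and FDR recovers essentially all of them while keeping the expected proportion of false discoveries near the stated 5%. That middle ground is a big part of why FDR has become so popular for whole-brain maps.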
