Comment You're all misunderstanding this (Score 1) 408
Discussing the public perspective on science is fine. It's waning, why? That's a good question. But that's not the point here.
Talking about how scientists get "good" data is also a great topic: statistics, error bars, how we come by those numbers. Great points. But also not what's important here.
What is being highlighted, and has been since Ioannidis' publication, is that the overall CONCLUSIONS of these works are just wrong. The data is good. We collect great data, we put it out there, it's within reasonable error and is a true and observed phenomenon. The major problem is that we tend to do the WRONG experiments, or we direct our research toward answering a specific question, instead of gathering data comprehensively and looking to that data exclusively to find appropriate questions.
This bad rigor is prevalent, at least in biological science publications. I can't speak with any authority to any other discipline. Data taken out of context, and worse still, ignoring "inconvenient" observations that debunk proposed models is where a lot of this "bad" science comes from. Scientists will proclaim until they are blue in the face that they'd "never do this" - but I see it happen every single day, from major researchers. I fight with myself every day to not act this way, and it's still very hard, because I'd like to move forward in my career - and some crappy piece of data that shoots holes in my hypothesis is staring me in the face. The easy route is to just ignore that. "Oh, yes, this is a caveat we don't want to get into as it muddies the publication. We'll approach this problem in the next article". That's crap. You can't observe something and then ignore it and publish to the contrary of what that data says just to have a neat little story.
What I've outlined above is the underlying problem. Most of what is published, the conclusions being made, is flat out WRONG. The data is right. But what researchers say that data means, driven by myopia and the "I need to publish this" mindset, leads to tunnel vision and bad conclusions.
I have no idea what the answer is, but I think it's important we recognize what the problem is and what the question is, if we have any hope of getting to an answer.