The question is, when do you know the science is "finished"? I swear every year there is a new study saying something is good for you, followed by more studies saying it's bad for you, then more saying it's good for you.
My guess is you don't read the studies, but just go by what you read in the press.
Look, learning about new discoveries from the newspaper/radio/TV/Internet isn't a bad thing -- but it's only a first step. Unless you're reading specialized research magazines or journals, you're probably not getting more than a glimmer of useful information.
As an example (not to be confused with a real-life example; I'm too tired to start researching references for a /. post...), a researcher may do a study and publish a paper saying "X dose of compound Y, commonly found in alcoholic beverage B, lessens human cell type Z mortality by K% in a Petri dish". The paper will also include a whole pile of caveats, probably the statement "more research is needed" somewhere towards the end, and a section on some future research directions.
Mr. Science Desk Writer hears about it from somewhere, looks up the average human lifespan L, and publishes an article with the headline "New research shows drinking alcoholic beverage B will increase your lifespan by K%!". You read it, and decide to go hit your liquor cabinet for a drink.
Of course, the column doesn't properly represent the research. It takes data out of context. In this case:

a) it doesn't relate dose X of compound Y to how much of beverage B you would need to drink to get that dose;
b) it ignores whether your liver converts compound Y into something else before it gets where it needs to go;
c) it ignores whether compound Y is even absorbed into the bloodstream so it can reach cell type Z;
d) it assumes that what was found to be true in a Petri dish will also be true inside a human body;
e) it assumes that what is true of cell type Z holds for other types of cells;
f) it ignores whether any benefit to cell type Z is negated by negative effects on other types of human cells, or any other part of human physiology; and
g) it pretends you can scale a reduction in cell death up to a whole body -- i.e.: you can't just take two numbers you read in studies (here, human lifespan and some cell death reduction percentage) and multiply them together (a toy sketch of that bogus arithmetic follows).
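To make (g) concrete, here is a toy sketch of the headline's arithmetic in Python. Every number in it is made up for illustration; there is no real L, K, compound Y, or beverage B:

    # Toy numbers, purely illustrative -- not taken from any real study.
    LIFESPAN_YEARS = 80.0        # L: the average lifespan the writer looked up
    CELL_MORTALITY_DROP = 0.20   # K: 20% lower mortality for cell type Z, in a dish

    # The headline's (invalid) step: treat a dish-level number as if it
    # applied directly to whole-body lifespan.
    headline_lifespan = LIFESPAN_YEARS * (1 + CELL_MORTALITY_DROP)
    print(f"Headline: drink beverage B, live to {headline_lifespan:.0f}!")  # prints 96

    # What the paper actually supports about lifespan: nothing. Dose,
    # absorption, metabolism, other cell types, and whole-body physiology
    # all sit between those two numbers, so multiplying them proves nothing.

The multiplication runs fine; it's the model behind it that's wrong, and no amount of arithmetic will flag that for you.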
On top of this, as things turn out, a year later some other researcher (Researcher B) gets some research funding and decides to replicate the experiments from the original paper. Maybe they do this because they find the statistical methods used by the original researcher don't make for a particularly strong case, or maybe they just want to verify the results. Researcher B sets up the experiment, but does it on a much larger scale (perhaps they have better funding) in order to get more data -- the toy simulation below shows why scale matters. After a detailed analysis of the data, they find that the effect a) isn't anywhere near as strong as the original paper described, b) doesn't exist at all, or c) came from an error in the experimental setup (e.g.: to keep compound Y from breaking down too quickly during the experiment, it had been refrigerated to 5°C before being applied to the cell culture, and the cold itself slowed the cells down and extended their lifespan -- compound Y had nothing to do with the measured results). They publish their own paper, again with the "more research is needed" caveat and a list of possible future directions.
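On the "much larger scale" point, here is a quick toy simulation in plain Python (no real data; every measurement is pure noise with a true effect of exactly zero) showing how much further small samples wander from the truth:

    import random

    random.seed(0)

    def apparent_effect(n):
        # Mean of n pure-noise measurements; the TRUE effect is zero.
        return sum(random.gauss(0, 1) for _ in range(n)) / n

    # Run 100 small studies and 100 big ones.
    small = [apparent_effect(10) for _ in range(100)]
    large = [apparent_effect(10000) for _ in range(100)]
    print("largest spurious effect at n=10:   ", max(abs(x) for x in small))
    print("largest spurious effect at n=10000:", max(abs(x) for x in large))

    # The standard error of a mean shrinks like 1/sqrt(n), so the n=10
    # studies land roughly 30x further from zero than the n=10000 ones.
    # That's one way a "real" effect can evaporate under replication.

None of this says Researcher B is automatically right, of course -- only that more data makes noise easier to tell apart from signal.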
Mr. Science Desk Writer, of course, may never see this paper, and so may continue to report on the original research. Or they may have moved on to some other area, like whatever the latest NASA probe photographed. Or perhaps they even wind up reporting on the latest research -- and you read it, and suddenly decide "those crazy scientists don't know what they're talking about! They keep changing their minds!".
The disconnect, however, comes from a) the bad science reporting in the first place, which overstates what the research actually says, and b) people who get their science from mass media and take it "As What Science Says About The Subject", as if it were written in the Holy Bible of Science.
If you really want to know what science has to say, read the papers. I'll admit that can be difficult if you don't have access to a university library's online databases and don't subscribe to journals in every area of science -- but that's the only way you'll actually know what the science says. Typically you'll find the research so filled with caveats, especially with new discoveries that haven't been replicated, that you won't find the discovery quite as interesting anymore. That doesn't make it bad research -- such papers are starting points. Many will lead down blind alleys. Some will lead to places we don't currently find useful. Some may lead to interesting discoveries. A precious few might give some other researcher a spark of an idea that leads down a completely different avenue of discovery. That is what makes such papers important to scientists.
So go to the source when you can. Don't trust the science reporting you read in the papers, especially if it's based on a single paper or a single researcher (or even a single research group). Read the papers when you can, even if just the abstracts and the limitations sections (and I'll admit -- I'm not conversant in every research area of science; some have math and language that are hard to penetrate if you come from a completely different background. Still, you can usually get some idea of what was actually claimed by reading the abstract, and of where the researchers feel the limits of their work lie by reading the appropriate section of the paper(s)).
(Given all of the above, it amazes me how quick people are to believe that red wine extends your life, or that vaccines cause autism, on the strength of a single published paper -- yet doubt human-caused climate change, which has thousands of papers backing it.)