That's also been my experience, for the most part. In the past when a Slashdot post has revealed enough information for me to dig through the edit history on Wikipedia to see what happened, I've sided with the Wikipedia editors.
One time I put quite a bit of effort into cleaning up an article about a fellow who set a dubious record long ago (less stupid than winning the World Sauna Championships, but still inadvisable). A great deal of misinformation was spread in the aftermath of this stunt. It was a tricky business to thread correct logical assertions through the minefield of popular misinformation that ensued. A doctor who supervised the stunt did eventually publish enough of a factual synopsis in a peer-reviewed journal to sort out which stories were candyfloss bullshit and which weren't.
A week later another editor came along and "simplified" my careful prose into the language of careless, naked assertions. I chalked this up to a lesson learned.
The vast majority of my edits have fared better than that. These days I mainly restrict myself to adding isolated statements.
If anyone digs into the article's history, there's a version of the page with carefully worded prose. My contribution wasn't erased; it was merely buried. I sometimes wonder how many pages on Wikipedia have far superior text buried in the deep substrata of their page histories.
The real problem with the model is that there's no underlying arrow of progress. Given Wikipedia's editorial guidelines, credible sources ought to be the foundational object. But sources are not first-class objects on Wikipedia. Pastiches of credible sources (the actual articles) are the primary first-class object. For the highly inculcated, formal dispute resolutions might also be considered first-class objects (in many cases, rather fuzzy ones).
Until there's some method, even a semi-automatic one en route to the semantic web, to enforce the use of a good source over a bad source at the level of individual assertions, nothing much is going to change. Editors become possessive of pages because volunteer effort is the only force preserving any of an article's historical quality against death by a thousand well-intentioned word changes.
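To make that concrete, here's a minimal sketch, in Python, of what assertion-level sourcing might look like: the assertion, not the article, is the first-class object, and each assertion carries its sources ranked by a reliability tier, so a stronger source automatically displaces a weaker one. The tier names, scoring, and class shapes are all my invention, purely to illustrate the idea; nothing here corresponds to any real Wikipedia mechanism.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical reliability tiers -- the names and ordering are my own
# invention, not anything Wikipedia defines.
RELIABILITY = {"peer_reviewed": 3, "reputable_press": 2, "blog": 1}

@dataclass(frozen=True)
class Source:
    citation: str
    tier: str  # key into RELIABILITY

@dataclass
class Assertion:
    text: str
    sources: list = field(default_factory=list)

    def cite(self, source: Source) -> None:
        # Attach a source and keep the list ordered strongest-first,
        # so a better source displaces a worse one at the top.
        self.sources.append(source)
        self.sources.sort(key=lambda s: RELIABILITY[s.tier], reverse=True)

    def best_source(self) -> Optional[Source]:
        return self.sources[0] if self.sources else None

# A peer-reviewed synopsis outranks the tabloid write-up it contradicts.
claim = Assertion("The record attempt lasted four days.")
claim.cite(Source("Tabloid write-up, 1981", "blog"))
claim.cite(Source("Peer-reviewed synopsis, 1983", "peer_reviewed"))
assert claim.best_source().tier == "peer_reviewed"
```

Nothing this crude would survive contact with reality, but even a scheme this simple gives edits a direction: replacing a weak source with a strong one is progress the software can recognise, rather than just another diff.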
The least reliable articles are often the ones apparently riddled with careful references. In many cases I've dug into the cited source and found that it doesn't support the claim in any fashion whatsoever, or outright contradicts it in some larger frame of consideration.
One could define software engineering, if one wished to, as the art of pushing entropy uphill with quality control. The opposite of software engineering is politics. Politics can work for a while, until your free labour quits in disgust.
The other remark I'll make is that most people vastly underestimate the utility of mediocre information conveniently packaged. On just about any subject, fifteen minutes at Wikipedia is all I need to put together a mental game plan about what I need to pursue and how, and what is likely to be the most productive place to begin. Underneath the curling, worm-eaten, multi-coloured leaves of factual assertion, there's a pretty decent semantic graph lurking in the page structure, even if sometimes it's closer to the lyrics of Dem Bones than Gray's Anatomy.
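As a toy illustration of that lurking graph (Python again; the wikitext snippet is invented), a serviceable parent-child structure can be recovered from nothing but the heading nesting, without understanding a word of the prose:

```python
import re

# Invented wikitext, standing in for a real page's section markup.
WIKITEXT = """\
== History ==
=== Early attempts ===
=== The 1981 record ===
== Reception ==
=== Press coverage ===
"""

def heading_graph(wikitext: str) -> list:
    """Return (parent, child) edges derived from heading nesting levels."""
    edges, stack = [], []  # stack holds (level, title) of open sections
    for eqs, title in re.findall(r"^(=+)\s*(.*?)\s*=+\s*$", wikitext, re.M):
        level = len(eqs)
        # Close any sections at the same or deeper nesting level.
        while stack and stack[-1][0] >= level:
            stack.pop()
        parent = stack[-1][1] if stack else "ROOT"
        edges.append((parent, title))
        stack.append((level, title))
    return edges

print(heading_graph(WIKITEXT))
# [('ROOT', 'History'), ('History', 'Early attempts'),
#  ('History', 'The 1981 record'), ('ROOT', 'Reception'),
#  ('Reception', 'Press coverage')]
```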
The social graph is full of shit, too, lest we forget. One can glean a lot from a social graph full of shit, and many companies do.