That's some pretty convoluted logic there, at least by my reckoning. If the user hasn't purchased the item in question, how exactly are you assuming he/she knows the product sufficiently that they're in a suitable position to review it, judging its strengths and weaknesses?
I don't, but with the exception of books and movies, you also can't assume that people who bought the product know the product well enough to review it. The majority of reviews are posted within a couple of weeks after buying a product, for better or worse. For anything more complicated than a toaster, by the time people really know the product well enough to give it a thorough review, they've owned it for at least six months.
Worse, if you assume a typical one-year product cycle, that means half the purchasers won't understand the product well enough to give it a good review until after the next product is on the market and nobody cares about the one they bought.
When I'm deciding whether to buy a product, I take a different approach for each type of information:
- Product failures: Analyze first in aggregate based on the number of people reporting failures in the product, then historically based on the number of people reporting failures in previous similar products by that manufacturer, under the assumption that most failures will occur after the next model comes out.
- Product support by the manufacturer (e.g. firmware upgrades): Analyze historically based on similar products in previous years.
- Comparison of features and usability: Seek out people who mention other products in their reviews, either because they chose to buy those other products instead or because they chose to buy this product over the others. Ignore all other reviews, because they rarely contain enough objective data to be of value.
Now that last one isn't precisely true; sometimes other posts do contain objective data, though they are a lot less likely to do so. I usually skim a few 5/5 and 1/5 reviews to see if I spot patterns, and if so, I then decide whether those patterns are indicative of device malfunction or user malfunction... but that's the last step of analysis for products that I didn't rule out in the previous, easy steps. :-)
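For what it's worth, that triage could be sketched roughly like the following. This is just an illustration of the idea, not something I actually run; the field names, keyword list, and `mentions_other_products` flag are all invented for the example.

```python
# Rough sketch of the review-triage heuristic described above.
# All field names and the failure-keyword list are made up for illustration.

def triage_reviews(reviews):
    """Split reviews into the buckets described above: failure reports,
    comparison reviews (ones that mention other products), and the rest,
    which mostly get ignored or skimmed for patterns as a last step."""
    failures, comparisons, other = [], [], []
    for review in reviews:
        text = review["text"].lower()
        # Bucket 1: failure reports, counted in aggregate (and then compared
        # against the same manufacturer's previous similar products).
        if any(w in text for w in ("broke", "stopped working", "died", "failure")):
            failures.append(review)
        # Bucket 3: reviews that compare against other products are the only
        # ones likely to contain enough objective data to be worth reading.
        elif review.get("mentions_other_products"):
            comparisons.append(review)
        else:
            other.append(review)
    return failures, comparisons, other
```

The failure rate is then just `len(failures) / len(reviews)`, compared historically against the previous model's rate; the `comparisons` bucket is what actually gets read.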
Also, if the product sucks (assuming it isn't so bad that folks return it), people who own it are more likely to feel the need to give it better reviews to justify the money they spent.
Wow... again. I'd bet that people who have purchased a product and are unhappy with it are actually *more* likely to review it harshly in an effort to punish the company for their poor product, and at least warn others against a crappy purchase. There are some old marketing saws that say similar things, I believe.
It's not my theory. We even have a term for people who do that frequently: fanboys. Worse, those rare people who understand a product well enough to give it a thorough review in the first few weeks of ownership are much more likely to be fanboys, because that usually only happens if they've already owned a similar product from that same manufacturer. So the least accurate reviews are likely to be the positive reviews that look the most accurate....
At the very least, that holds true for me. I've purchased a couple of stinkers, and I made damn sure to leave a one or two star review, and explain in detail *why* it was such a terrible product.
Me, too. I've also often posted reviews on products with obvious design flaws that I chose not to buy, in which I explained in detail why it was a terrible product. And invariably when I do, I get a bunch of whining idiots asking me how I can possibly know how well something will work without buying it. And my answer is something like "because I know what a fulcrum is". It is as though people magically think that a design flaw only exists if someone was foolish enough to pay for the product before discovering it, or that an obvious design flaw will magically go away if you wish hard enough, either of which just boggles my mind.