#3 Who has more time to manipulate an open source web engine index: a do-gooder looking to correct for bad SEO in a search result, or an SEO looking to pump their own numbers by floating their own crap to the top (through whatever carrot/stick measures get implemented)?
Google had some extremely bad query results some years back due to every SEO on earth trying to game the system. The only reasonable solutions off the top of my head are:
1. Some sort of real-ID-type verification system that requires actual investment in an account before it's considered to have weight (even then, it makes compromised accounts a lot more valuable to hackers)
2. Devise systemic pattern detection that specifically targets and down-rates manipulated results (for instance, results that only get linked through blog comments, or that reuse heavily from the pages they reference) -- Google seems to do some form of this (rough sketch of what I mean after the list)
3. Individualized / group-based blacklists -- Pain in the butt to curate, causes false positives to be buried (forever?), relies on people with associations comparable to your own (eh, ban all GLBT / alternative religion / pre-xyz sites / etc.), and of course individualized search curation is a butt ton of extra data that needs to be floating around on servers waiting for your specific user ID to hit said server. I think that may have been one reason blocked sites died in Google. It's just a pain in the ass to distribute the user's search preferences to every possible hosting node (or to accept slower responses because only a limited number of nodes can respond to them). The filtering itself is trivial (second sketch below); the distribution is the real pain.
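To make #2 a bit more concrete, here's a minimal sketch of the kind of heuristic down-ranking I'm picturing. The signal names (comment_link_ratio, copied_content_ratio) and the thresholds/penalties are made up for illustration, not anything Google actually documents:

```python
from dataclasses import dataclass

@dataclass
class PageSignals:
    base_score: float            # relevance score from the normal ranker
    comment_link_ratio: float    # fraction of inbound links coming from blog comments
    copied_content_ratio: float  # fraction of the page duplicated from pages it references

def adjusted_score(sig: PageSignals) -> float:
    """Down-rate results that match known manipulation patterns.

    Thresholds and penalty weights here are illustrative guesses, not real values.
    """
    score = sig.base_score
    # Pages whose inbound links mostly come from blog-comment spam get penalized.
    if sig.comment_link_ratio > 0.8:
        score *= 0.5
    # Pages that mostly rehash their referenced sources get penalized.
    if sig.copied_content_ratio > 0.7:
        score *= 0.6
    return score

# Example: a spammy page loses most of its rank.
spammy = PageSignals(base_score=1.0, comment_link_ratio=0.95, copied_content_ratio=0.9)
print(adjusted_score(spammy))  # 0.3
```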
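And for #3, the per-user filtering itself is a one-liner; the hypothetical in-memory USER_BLOCKLISTS dict below stands in for whatever real per-user preference storage would look like, which is exactly the data that has to be available on (or fetched by) every node serving that user's queries:

```python
# Hypothetical per-user blocklist store; in a real deployment this data would have
# to be replicated to every serving node (or fetched on demand), which is the
# distribution headache described above.
USER_BLOCKLISTS: dict[str, set[str]] = {
    "user123": {"spamfarm.example", "scraper.example"},
}

def filter_results(user_id: str, results: list[dict]) -> list[dict]:
    """Drop results whose domain is on the requesting user's blocklist."""
    blocked = USER_BLOCKLISTS.get(user_id, set())
    return [r for r in results if r["domain"] not in blocked]

results = [
    {"url": "https://spamfarm.example/page", "domain": "spamfarm.example"},
    {"url": "https://useful.example/article", "domain": "useful.example"},
]
print(filter_results("user123", results))
# Only the useful.example result survives.
```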