Could Open Source Lead to a Meritocratic Search Engine?

Slashdot contributor Bennett Haselton writes "When Jimmy Wales recently announced the Search Wikia project, an attempt to build an open-source search engine around the user-driven model that gave birth to Wikipedia, he said his goal was to create 'the search engine that changes everything', as he underscored in a February 5 talk at New York University. I think it could, although not for the same main reasons that Wales has put forth -- I think that for a search engine to be truly meritocratic would be more of a revolution than for a search engine to be open-source, although both would be large steps forward. Indeed, if a search engine could be built that really returned results in order of average desirability to users, and resisted efforts by companies to 'game' the system (even if everyone knew precisely how the ranking algorithm worked), it's hard to overstate how much that would change things for both businesses and consumers. The key question is whether such an algorithm could be created that wouldn't be vulnerable to non-merit-based manipulation. Regardless of what algorithms may currently be under consideration by thinkers within the Wikia company, I want to argue logically for some necessary properties that such an algorithm should have in order to be effective, because if their search engine becomes popular, they will face such huge efforts from companies trying to manipulate the search results that it will make Wikipedia vandalism look like a cakewalk." The rest of his essay follows.

This will be a trip into theory-land, so it may be frustrating to users who dislike talk about "vaporware" and want to see how something works in practice. I understand where you're coming from, but I submit it's valuable to raise these questions early. This is in any case not intended to supplant discussion about how things are currently progressing.

First, though, consider the benefits that such a search engine could bring, both to content consumers and content providers, if it really did return results sorted according to average community preferences. Suppose you wanted to find out if you had a knack for publishing recipes online and getting some AdSense revenue on the side. You take a recipe that you know, like apple pie, and check out the current results for "apple pie". There are some pretty straightforward recipes online, but you believe you can create a more complete and user-friendly one. So you write up your own recipe, complete with photographs of the process showing how ingredients should be chopped and what the crust mixture should look like, so that the steps are easier to follow. (Don't you hate it when a recipe says "cut into cubes" and you want to throttle the author and shout, "HOW BIG??" It drove me crazy until I found CookingForEngineers.com.) Anyway, you submit your recipe to the search engine to be included in the results for "apple pie", and if the sorting process is truly meritocratic, your recipe page rises to the top. Until, that is, someone decides to surpass you, and publishes an even more user-friendly recipe, perhaps with a link to a YouTube video of them showing how to make the pie, which they shot with a tripod video camera and a clip-on mike in their well-lit kitchen. In a world of perfect competition, content providers would be constantly leapfrogging each other with better and better content within each category (even a highly specific one like apple pie recipes), until further efforts would no longer pay for themselves with increased traffic revenue. (The more popular search terms, of course, would bring greater rewards for those listed at the top, and the sites competing for them could justify greater efforts to improve their content.) But this constant leapfrogging of better and better content requires efficient and speedy sorting of search results in order to work. It doesn't work if the search results can be gamed by someone willing to spend effort and money (not worth it for the author of a single apple pie recipe, but worth it for a big money-making recipe site), and it doesn't work if it's impossible for new entrants to get hits when the established players already dominate search results.

Efficient competition benefits consumers even more for results that are sorted by price (assuming that among comparable goods and services, the community promotes the cheapest-selling ones to the top of the search results, as "most desirable"). If you were a company selling dedicated Web hosting, for example, you would submit your site to the engine to be included in results for "dedicated hosting". If you could demonstrate to the community that your prices and services were superior to your competitors', and if the ranking algorithm really did rank sites according to the preferences of the average user, your site could quickly rise to the top, and you'd make a bundle on new sales -- until, of course, someone else had the same idea and knocked you out of the top spot by lowering their prices or improving their services. The more efficient the marketplace, the faster prices fall and service levels rise, until prices just cover the cost of providing the service and compensating the business owner for their time. It would be a pure buyer's market.

It's important to precisely answer the question: Why would this system be better than a system like Google's search algorithm, which can be "gamed" by enterprising businesses and which doesn't always put the results the user would like most at the top? You might be tempted to answer that in an inefficient marketplace created by an inefficient search result sorting algorithm, a user sometimes ends up paying $79/month for hosting, instead of the $29/month that they might pay if the marketplace were perfectly efficient. But this by itself is not necessarily wasteful. The extra $50 that the user pays is the user's loss, but it's also the hosting company's gain. If we consider costs and benefits across all parties, the two cancel out. The world as a whole is not poorer because someone overpaid for hosting.

The real losses caused by an inefficient search algorithm are the efforts spent by companies to game the search results (e.g. paying search engine optimization firms to try to get them to the top Google spot), and the reluctance of new players to enter that market if they don't have the resources to play those games. If two companies each spend $5,000 trying to knock each other off of the top spot for a search like "weddings", that's $5,000 worth of effort that gets burned up with no offsetting amount of goods and services added to the world. This is what economists call a deadweight loss, with no corresponding benefit to any party. The two wedding planners might as well have smashed their pastel cars into each other. Even if a single company spends the effort and money to move from position #50 to position #1, that gain to them is offset by the loss to the other 49 companies that each moved down by one position, so the net benefit across all parties is zero, and the effort that the company spent to raise their position would still be a deadweight loss.

On the other hand, if search engine results were sorted according to a true meritocracy, then companies that wanted to raise their rankings would have to spend effort improving their services instead. This is not a deadweight loss, since these efforts result in benefits or savings to the consumer.

I've been a member of several online entrepreneur communities, and I'd conservatively estimate that members spend less than 10% of the time talking about actually improving products and services, and more than 90% of the time talking about how to "game" the various systems that people use to find them, such as search engines and the media. I don't blame them, of course; they're just doing what's best for their company, in the inefficient marketplace that we live in. But I feel almost lethargic thinking of that 90% of effort that gets spent on activities that produce no new goods and services. What if the information marketplace really were efficient, and business owners spent nearly 100% of their efforts improving goods and services, so that every ounce of effort added new value to the world?

Think of how differently we'd approach the problem of creating a new Web site and driving traffic to it. A good programmer with a good idea could literally become an overnight success. If you had more modest goals, you could shoot a video of yourself preparing a recipe or teaching a magic trick, and just throw it out there and watch it bubble its way up the meritocracy to see if it was any good. You wouldn't have to spend any time networking or trying to rig the results; you could just create good stuff and put it out there. No, despite whatever cheer-leading you may have heard, it doesn't quite work that way yet -- good online businessmen still talk about the importance of networking, advertising, and all the other components of gaming the system that don't relate to actually improving products and services. But there is no reason, in principle, why a perfectly meritocratic content-sorting engine couldn't be built. Would it revolutionize content on the Internet? And could Search Wikia be the project to do it, or play a part in it?

Whatever search engine the Wikia company produced, it would probably have such a large following among the built-in open-source and Wikipedia fan base that traffic wouldn't be a problem -- companies at the top of popular search results would definitely benefit. The question is whether the system can be designed so that it cannot be gamed. I agree with Jimmy Wales's stated intention to make the algorithm completely open, since this makes it easier for helpful third parties to find weaknesses and get them fixed, but of course it also makes it easier for attackers to find those weaknesses and exploit them. If you think Microsoft paying a blogger to edit Wikipedia is a problem, imagine what companies will do to try to manipulate the search results for a term like "mortgage". So what can be done?

The basic problem with any community that makes important decisions by "consensus" is that it can be manipulated by someone who creates multiple phantom accounts all under their control. If a decision is influenced by voting -- for example, the relative position of a given site in a list of search results -- the attacker can have the phantom accounts all vote for one preferred site. You can look for large numbers of accounts created from the same IP address, but the attacker could use Tor and similar systems to appear to be coming from different IPs. You could attempt to verify the unique identity of each account holder, by phone for example, but this requires a lot of effort and would alienate privacy-conscious users. You could require a Turing test for each new account, but all this means is that an attacker couldn't use a script to create their 1,000 accounts -- an attacker could still create the accounts if they had enough time, or if they paid some kid in India to create the accounts. You could give users voting power in proportion to some kind of "karma" that they had built up over time by using the site, but this gives new users little influence and little incentive to participate; it also does nothing to stop influential users from "selling out" their votes (either because they became disillusioned, or because they signed up with that as their intent from the beginning!).

So, any algorithm designed to protect the integrity of the Search Wikia results would have to deal with this type of attack. In a recent article about Citizendium, a proposed Wikipedia alternative, I argued that you could deal with conventional wiki vandalism by having identity-verified experts sign off on the accuracy of an article at different stages. That's practical for a subject like biology, where you could have a group of experts whose collective knowledge covers the subject at the depth expected in an encyclopedia, but probably not for a topic like "dedicated hosting" where the task is to sift through tens of thousands of potential matches and find the best ones to list first. You need a new algorithm to harness the power of the community. I don't know how many possible solutions there are, but here is one way in which it could be done.

Suppose a user submits a requested change to the search results -- the addition of their new Site A, or the proposal that Site A should be ranked higher. This decision could be reviewed by a small subset of registered users, selected at random from the entire user population. If a majority of the users rate the new site highly enough as a relevant result for a particular term, then the site gets a high ranking. If not, then the site is given a low ranking, possibly with feedback being sent to the submitter as to why the site was not rated highly. The key is that the users who vote on the site have to be selected at random from among all users, instead of letting users self-select to vote on a particular decision.
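
To make the mechanics concrete, here is a minimal Python sketch of the random-reviewer scheme described above. It is only an illustration of the idea, not anything Wikia has proposed or published; the function and constant names and the 50% cutoff are assumptions invented for the example.

    import random

    SAMPLE_SIZE = 100          # reviewers drawn per submitted change (assumed)
    APPROVAL_THRESHOLD = 0.5   # fraction of the sample that must rate the site highly (assumed)

    def review_submission(submission, all_users, rate_fn):
        # Crucial property: reviewers are chosen by the system at random, not
        # self-selected, so phantom accounts only appear in the sample in
        # proportion to their share of the whole registered user base.
        reviewers = random.sample(all_users, min(SAMPLE_SIZE, len(all_users)))
        votes = [rate_fn(user, submission) for user in reviewers]  # True = "relevant, rank it highly"
        approval = sum(votes) / len(votes)
        return "high rank" if approval >= APPROVAL_THRESHOLD else "low rank"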

The nice property of this system is that an attacker can't manipulate the voting simply by having a large number of accounts at their control -- they would have to control a significant proportion of accounts across the entire user population, in order to ensure that when the voters were selected randomly from the user population, the attacker controlled enough of those accounts to influence the outcome. (If an attacker ever really did spend the resources to reach that threshold point, and it became apparent that they were manipulating the votes, those votes could be challenged and overridden by a vote of users whose identities were known to the system. This would allow the verified-identity users to be used as an appeal of last resort to block abuse by a very dedicated adversary, while not requiring most users to verify their identity. This is basically what Jimmy Wales does when he steps in and arbitrates a Wikipedia dispute, acting as his own "user whose identity is known".)
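
To put a number on "significant proportion" (my own back-of-the-envelope arithmetic, with made-up population sizes, not figures from the essay): the chance that an attacker's phantom accounts capture a majority of a randomly drawn review panel follows the hypergeometric distribution, and it collapses unless the attacker controls a large share of all registered users.

    from math import comb

    def prob_attacker_majority(total_users, phantom_accounts, sample_size=100):
        # Probability that a random sample of reviewers contains a strict
        # majority of attacker-controlled accounts (hypergeometric tail).
        honest = total_users - phantom_accounts
        need = sample_size // 2 + 1
        total_ways = comb(total_users, sample_size)
        return sum(comb(phantom_accounts, k) * comb(honest, sample_size - k)
                   for k in range(need, sample_size + 1)) / total_ways

    # With 1,000,000 registered users, 10,000 phantom accounts (1%) are hopeless:
    print(prob_attacker_majority(1_000_000, 10_000))   # effectively zero
    # Even 400,000 phantom accounts (40% of all users) rarely swing a panel:
    print(prob_attacker_majority(1_000_000, 400_000))  # on the order of a couple percent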

This algorithm for an "automated meritocracy" (automeritocracy? still not very catchy at 7 syllables) could be extended to other types of user-built content sites as well. Musicians could submit songs to a peer review site, and the songs would be pushed out to a random subset of users interested in that genre, who would then vote on the songs. (If most users were too apathetic to vote, the site could tabulate the number of people who heard the song and then proceeded to buy or download it, and count those as "votes" in favor.) If the votes for the song are high enough, it gets pushed out to all users interested in that genre; if not, then the song doesn't make it past the first stage. If there are 100,000 users subscribed to a particular genre, but it only takes ratings from 100 users to determine whether or not a song is worth pushing out to everybody, that means that when "good" content is sent out to all 100,000 people but "bad" content only wastes the time of 100 users, the average user gets 1,000 pieces of "good" content for every 1 piece of "bad" content. New musicians wouldn't have to spend any time networking, promoting, recruiting friends to vote for them -- all of which have nothing to do with making the music better, and which fall into the category of deadweight losses described above.
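
One implicit assumption behind that 1,000-to-1 figure is that good and bad songs arrive in roughly equal numbers; given that, the ratio is just how far each kind of submission travels. A quick check with the paragraph's own numbers:

    subscribers = 100_000   # users subscribed to the genre
    screeners   = 100       # random users who hear a song in the screening stage

    exposures_per_good_song = subscribers   # passes screening, pushed to everyone
    exposures_per_bad_song  = screeners     # stops at the screening sample

    print(exposures_per_good_song / exposures_per_bad_song)   # 1000.0 -- the 1,000:1 ratio above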

An automeritocracy-like system could even be used as a spam filter for a large e-mail site. Suppose you want to send your newsletter to 100,000 Hotmail users (who really have signed up to receive it). Hotmail could allow your IP to send mail to 100,000 users the first time, and then if they receive too many spam complaints, block your future mailings as junk mail. But if that's their practice, there's nothing to stop you from moving to a new, unblocked IP and repeating the process from there. So instead, suppose that Hotmail stores your 100,000 messages temporarily in users' "Junk Mail" folders, but selectively releases a randomly selected subset of 100 messages into users' inboxes. Suppose for argument's sake that when a message is spam, 20% of users click the "This is spam" button, but if not, then only 1% of users click it. Out of the 100 users who see the message, if the number who click "This is spam" looks close to 1%, then since those 100 users were selected as a representative sample of the whole population, Hotmail concludes that the rest of the 100,000 messages are not spam, and moves them retroactively to users' inboxes. If the percentage of those 100 users who click "This is spam" is closer to 20%, then the rest of the 100,000 messages stay in Junk Mail. A spammer could only rig this system if they controlled a significant proportion of the 100,000 addresses on their list -- not impossible, but difficult, since you have to pass a Turing test to create each new Hotmail account.
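
Here is a sketch of the sampled-release decision that the paragraph above describes, using the essay's 1% and 20% complaint rates. This is not a claim about how Hotmail actually works; the midpoint cutoff and the function name are assumptions made for illustration.

    def classify_bulk_mailing(sample_complaints, sample_size=100,
                              ham_rate=0.01, spam_rate=0.20):
        # With 100 sampled deliveries, the expected complaint counts for a
        # legitimate newsletter (~1) and for spam (~20) are far enough apart
        # that a simple cutoff halfway between them separates the two cases.
        cutoff = sample_size * (ham_rate + spam_rate) / 2   # 10.5 complaints
        if sample_complaints < cutoff:
            return "release the remaining messages to inboxes"
        return "leave the remaining messages in Junk Mail"

    print(classify_bulk_mailing(2))    # looks like the ~1% case: release
    print(classify_bulk_mailing(18))   # looks like the ~20% case: keep junked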

The problem is, there's a huge difference between systems that implement this algorithm, and systems that implement something that looks superficially like this algorithm but actually isn't. Specifically, any site like HotOrNot, Digg, or Gather that lets users decide what to vote on, is vulnerable to the attack of using friends or phantom users to vote yourself up (or to vote someone else down). In a recent thread on Gather about a new contest that relied on peer ratings, many users lamented the fact that it was essentially rigged in favor of people with lots of friends who could give them a high score (or that ratings could be offset unfairly in the other direction by "revenge raters" giving you a 1 as payback for some low rating you gave them). I assume that the reason such sites were designed that way is that it just seemed natural that if your site is driven by user ratings, and if people can see a specific piece of content by visiting a URL, they should have the option on that page to vote on that content. But this unfortunately makes the system vulnerable to the phantom-users attack.

(Spam filters on sites like Hotmail also probably have the same problem. We don't know for sure what happens when the user clicks "This is spam" on a piece of mail, but it's likely that if a high enough percentage of users click "This is spam" for mail coming from a particular IP address, then future mails from that IP are blocked as spam. This means you could get your arch-rival Joe's newsletter blacklisted, by creating multiple accounts, signing them up for Joe's newsletter, and clicking "This is spam" when his newsletters come in. This is an example of the same basic flaw -- letting users choose what they want to vote on.)

So if the Wikia search site uses something like this "automeritocracy" algorithm to guard the integrity of its results, it's imperative not to use an algorithm vulnerable to the hordes-of-phantom-users attack. Some variation of selecting random voters from a large population of users would be one way to handle that.

Finally, there is a reason why it's important to pay attention to getting the algorithm right, rather than hoping that the best algorithm will just naturally "emerge" from the "marketplace of ideas" that results from different wiki-driven search sites competing with each other. The problem is that competition between such sites is itself highly inefficient -- a given user may take a long time to discover which site provides better search results on average, and in any case, it may be that Wiki-Search Site "B" has a better design but Wiki-Search Site "A" had first-mover advantage and got a larger number of registered users. When I wrote earlier about why I thought the Citizendium model was better than Wikipedia, several users pointed out that it may be a moot point, for two main reasons. First, most users will not switch to a better alternative if it never occurs to them. Second, for sites that are powered by a user community, it's very hard for a new competitor to gain ground, even with a superior design, if the success of your community depends on lots of people starting to use it all at once. You could write a better eBay or a better Match.com, but who would use it? Your target market will go to the others because that's where everybody else is. Citizendium is, I think, a special case, since they can fork articles that started life on Wikipedia, so Wikipedia doesn't have as huge an advantage over them as it would if Citizendium had to start from scratch. But the general rule about imperfect competition still applies.

It's a chicken-and-egg problem: You can have Site A that works as a pure meritocracy, and Site B that works as an almost-meritocracy but can be gamed with some effort. But Site B may still win because the larger environment in which they compete with each other is not itself a meritocracy. So we just have to cross our fingers and hope that Search Wikia gets it right, because if they don't, there's no guarantee that a better alternative will rise to take its place. But if they get it right, I can hardly wait to see what changes it would bring about.

Comments:
  • Seriously: What could an OSS-based user-submitted search algorithm do that Pigeon Rank - http://www.google.com/technology/pigeonrank.html - couldn't? If a team of highly trained pigeons can build an empire like Google, then I seriously doubt that user-based indexing would work.

    Am I wrong?
    • I think so. The problem with Google for me is the crap results I get on my searches that wind up near the top of the results. This comes from the focus on ad revenue, which is never discussed. Certainly I am not alone when I find that I've wasted time visiting a page that has NOTHING to do with what I'm looking for... but it's got a lot of ads for it!
  • by UbuntuDupe ( 970646 ) * on Wednesday February 14, 2007 @01:09PM (#18013422) Journal
    I like the essay except for this:

    "The real losses caused by an inefficient search algorithm, are the efforts spent by companies to game the search results (e.g. paying search engine optimization firms to try and get them to the top Google spot), and the reluctance of new players to enter that market if they don't have the resources to play those games. If two companies each spend $5,000 trying to knock each other off of the top spot for a search like "weddings", that's $5,000 worth of effort that gets burned up with no offsetting amount of goods and services added to the world. This is what economists call a deadweight loss, with no corresponding benefit to any party."


    This issue has long bugged me and it's hard to get answers about it. I don't understand how this is a deadweight loss (DWL) by his definition. Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party. How is this DWL different from the "non-DWL" example directly preceding, in which someone overpaid for hosting, but that was the hosting company's gain?

    Does anyone have a rigorous DWL definition that can be backed up by a valid example?
    • Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party.

      The SEO expert? I don't really know about deadweight loss, but it does seem that nothing was gained by the exercise that was described, except somebody got to leech money off of the companies paying for SEO.

    • Re: (Score:2, Insightful)

      by maxume ( 22995 )
      The company that loses puts money into the advertising system; that money can very likely be re-purposed for other keywords or whatever. The time their employees spent gaming the system (to no benefit for the company) could have been spent on activities that were beneficial to the company. The employee doesn't care, but the employer would have been better off sending him for donuts or whatever.

      The amounts seem unlikely (a month of employee time with no realized benefit? bah.), but the concept is sound.
    • Re: (Score:3, Informative)

      by pkulak ( 815640 )
      Because the first example is equivalent to someone just handing the hosting company 50 bucks a month as a free gift. Money is exchanged, but nothing happens. In the second example, money is exchanged AND people work very hard for a long time to earn it and yet produce nothing. It would be like me paying you to dig a hole and then fill it in. The time you spend doing that is time you can't spend curing cancer.
    • Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party.

      Yes, but it's just a transfer of money from one party to another; it's a zero-sum game. No wealth has been produced in the sense of some useful work being done. With respect to the hosting company example, the hosting company received the market price for a useful service, a positive benefit to both parties. (As far as I can see, the company did not overpay for hosting in the example).

      Tha
      • Well, no, that's not the theory, hence the problem. The definition of the DWL given in the essay (and in treatments of the topic) is a loss "with no corresponding benefit for another party". Whoever got the $5000 benefited; hence it cannot be a DWL. The loss of the search-engine-gamers was the gain of whoever they paid. It doesn't matter if wealth/useful-work has been produced or hasn't. Even in a zero-sum transfer, someone benefits. For it to be a true DWL (by the definition), it must be that no one
        • Whoever got the $5000 benefited; hence it cannot be a DWL. The loss of the search-engine-gamers was the gain of whoever they paid. It doesn't matter if wealth/useful-work has been produced or hasn't.

          I think the dead loss is the time and effort expended by the workers of the SEO company and the administration of the paying company. All that work for a *net* benefit of no wealth.
          • I understand there's no *net* wealth generated, but that's not what DWL refers to. For it to be a DWL, it must be the case that *no one* benefited. For example, stealing $X from me and giving it to you would have no net benefit, but it would not be called a DWL since my loss was your gain. In the example given in the original essay, the loss of the search engine manipulators and search engine users was the gain of the workers, who benefited from the money they got. That's a non-DWL in the same sense tha
            • In short, the flaw in trying to call this (or anything, IMHO) a DWL is that you have to ignore the person who was paid as a result of the futile competition.

              As you define it, I agree. I can't think of anything that would count as a DWL (except maybe literally throwing money away -- but then I suppose you've increased the value of everyone else's cash haven't you?)

              Anyway, Wales's point makes sense, even if his definition of a DWL doesn't agree with the literature (I've no idea if it does or not) and there is
              • First, I don't think it was Jim Wales that made the DWL point, but the contributor who quoted him. Otherwise, we're in agreement. I understand how there can be an efficiency loss as a result of the effort expended with no net benefit. But it can't be explained through the mechanism of a deadweight loss, which I consider theoretically unsound. If there were something that truly benefited no one (*as they perceive it*) and had a high cost, it would not be done to begin with, rendering the point moot.

                Small
    • The Search Engine Optimisation expert gets the money. One of the things he will likely do is make the site compliant with the W3C's accessibility guidelines, as this will likely improve search ranking. That does benefit society as a whole. But other techniques such as url cloaking and keyword stuffing do not benefit society as a whole, so having scarce resources devoted to these tasks is suboptimal as far as the economy is concerned.
    • This issue has long bugged me and it's hard to get answers about it. I don't understand how this is a deadweight loss (DWL) by his definition. Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party. How is this DWL different from the "non-DWL" example directly preceding, in which someone overpaid for hosting, but that was the hosting company's gain?

      The search-engine received all the benefits of the efforts, but those benefits cancelled each other out.

      • The search-engine received all the benefits of the efforts, but those benefits cancelled each other out.

        No, the SE didn't benefit (or at least not primarily). Rather, the workers they paid to game it, benefited. Thus it can't be a DWL by the definition -- the loss of some corresponded to the gain of those workers. It doesn't matter that there was a net loss after summing over all agents; that's not what DWL refers to. Hence my confusion with the concept.
        • No, the SE didn't benefit (or at least not primarily). Rather, the workers they paid to game it, benefited. Thus it can't be a DWL by the definition -- the loss of some corresponded to the gain of those workers. It doesn't matter that there was a net loss after summing over all agents; that's not what DWL refers to. Hence my confusion with the concept.

          Total social wealth is decreased when people employ others to perform useless tasks, such as battling over a search-engine slot. The SEO industry, like the

    • by shmlco ( 594907 )
      The example is bad because it's defining the wrong outcome. If both spend five grand to have the "top spot", and the SEO can actually effect this type of outcome, then one of them will actually have the top spot and one the second. So using his "logic" the first one "won" and the second one "lost", so the second one's money was wasted, and a good portion of the first one's money was spent simply competing with number two.

      However, since there are other results in the search the net result of "winning" and "lo
    • The problem with "deadweight loss" is that it assumes that there is some sort of value to economic activity, and that some activities have more value (or "benefit") than others. So if I plant some seeds in the ground, and later sell the harvested grain on the open market, my planting activity is considered to be beneficial. Whereas if I spit those seeds at my bothersome neighbor, who has been ripping up my turnips in the middle of the night because I scoffed at his explanation of economic theory, I (and my

      • by dave1g ( 680091 )
        That's only a fair analogy if there were a limit on the number of semis available for transporting vaccine and one of them was used for a monster truck show instead.

        In that case I'm certain if you asked the attendees if they would be willing to take their money back so that the semi could go save lives you would get a near 100% approval.
        • I'm certain if you asked...(etc)

          Certainly, the first time you ask the attendees, they will agree wholeheartedly. And perhaps even the second or third times. But in a purely utilitarian world, there could never be any monster truck rallies until everyone had their vaccine, and all the little boys who had fallen down wells had been dug out, and all the lonely puppies and kittens in the animal shelters got to go to nice homes. After four or five of these episodes of re-routing semi's, people would start to

  • Google (Score:1, Insightful)

    by Anonymous Coward
    This is what Google already does - using linking as a proxy for the average desirability others have to see the content at the link's end. As with all systems, it can be gamed. But it sure does a good job of returning results. It is so good, in fact, Google has not had to update its search syntax available to the general public in order to stay ahead of the competition. I wish Google would. Maybe someone else coming up with another way to have a meritocratic search engine will be the impetus for Google
    • by GoCanes ( 953477 )
      Actually, Google is highly regressive in how it displays search results. Companies that can afford SEO tricks, renting links from other high-PR sites, hiring staff to write useless content on blogs with links, etc., will get the best results. The small company with a better mousetrap gets very little attention from Google. The chicken-and-egg problem will exist in the meritocracy too -- there's no way to rise unless people can find you.
      • I don't think Google is highly regressive in the way you describe, but I suppose it certainly depends on your definition of regressive.

        Google is definitely regressive from the point of view that it tries to represent the average total mindshare about search terms - NOT the average CURRENT mindshare. So if you want to find the up and coming site that's ABOUT to be the new hotness but hasn't reached critical mass yet, you need something like the derivative of Google's PageRank.

        But this is definitely NOT what
  • Wikia search site uses something like this "automeritocracy" algorithm to guard the integrity of its results, it's imperative not to use an algorithm vulnerable to the hordes-of-phantom-users attack

    That right there is a billion-dollar idea that I'm sure more than a small horde of devs are working on for themselves or for vulture capitalists.

    Will Mr. Wales own the magic algorithm to use as he sees fit or what?
  • All you have to do to substantially reduce "gaming" the system is to not make it worthwhile.

    Since you can pay Google to have your site link placed right at the top of the search results, for less than what you'd pay someone to game the system to reach a similar position, it wouldn't make sense for large companies to try to "game" Google at all.

    If it weren't for the advertising, we'd probably see a lot more of this on Google.

    Maybe this project could implement something similar.
  • by SirGarlon ( 845873 ) on Wednesday February 14, 2007 @01:15PM (#18013512)
    I seriously doubt this will turn into anything useful because it relies on a collective definition of "merit." When you and I search for information on the same topic, your needs and my needs may be totally different (I may be looking for a little bit of general background and you may be looking to compare and contrast the opinions of two recognized experts in the field). Even if all the hurdles against manipulation can be overcome, I don't see how "merit" rankings will amount to anything more than a popularity contest.
    • The main reason Google was so much better than AltaVista was that it sorted the results according to a "popularity contest" based on how many other pages referred to it. This was way more useful than sorting according to how often your search term occurs in it.

      Don't dismiss popularity contests; the popular choice will, almost by definition, usually be the most interesting choice for most people. You may not feel you belong to "most people" -- most people don't -- but if you leave your feeling of elitism and/or
      • by ivan256 ( 17499 )
        This doesn't preclude the need for a good baseline though, something that would put roses higher than dog poo in a "things that smell great" list.

        That's exactly what this kind of system *doesn't* need. (Well, it needs it because if we don't use the same definition of "merit" for all users, or at least limit the number of definitions of "merit" that are available, this will become a computationally infeasible project... But let's talk theoretically).

        This theoretical system should learn through user feedback e
    • by nine-times ( 778537 ) <nine.times@gmail.com> on Wednesday February 14, 2007 @01:47PM (#18013914) Homepage

      In fairness, I don't think that "merit" is relative with respect to search-engine results. In a simplified example, if I search for "sony", I'm probably looking for one of three things:

      1. The Sony website
      2. A website that sells Sony products
      3. A website that gives reviews of Sony products

      Therefore, the top results should reflect that. Most likely, I'm not looking for porn. I remember the days where search engines would return porn for any and all searches. The fact that Google was able to avoid this is part of what brought about its rise to power.

      Of course, not every example is so simple, but clearly there are results that are or are not correct for a given search.

    • With a few script changes, that whole spambot army out there could easily be rejigged as meritbots.

      Companies could very easily request/encourage/force employees to do a merit update every morning.

      Any system is open to abuse. At least the Google model is pretty easy to understand.

    • Re: (Score:3, Interesting)

      by timeOday ( 582209 )

      I seriously doubt this will turn into anything useful because it relies on a collective definition of "merit."

      Good point. But furthermore, I can guarantee you this won't work, simply because web page rankings and spam filtering are essentially the same thing, and the spam issue has not been solved. That is, even when we don't have the problem of multiple conflicting opinions and all we're trying to do is model the preferences of a single recipient, we still can't do it!

  • Let's see, when does Google Patents run out?
  • ...then I think the benefits could be tremendous, but whenever I hear the term "meritocracy" or its derivatives, I start to get skeptical and/or nervous. One person's eyesore of a website could be someone else's lovingly tended but badly coded page that is popular with all their friends. Also, by definition, those who are willing to spend time in a "modified wiki" project such as this will likely be more technically oriented and likely have a bias against poor design and/or poor coding. Bear in mind that o
  • I especially like his point on the economic inefficiencies that result from Google's vulnerability to results manipulation or 'tweaking'. In a certain unnamed, small internet company I worked for, fully 10% of our staff were SEM/SEO people, and a good chunk of our development time was spent on projects led by them trying to optimize our page rankings. I'm sure we're not the only ones.

    If a theoretical "merit-based" search engine existed, those non-trivial resources would be spent building a better mouset

    • by knewter ( 62953 )
      I think, realistically, if a merit-based search engine existed then the same amount of resources would be spent trying and failing. Did you see a marked improvement for the effort spent on SEO?
  • StumbleUpon (Score:4, Informative)

    by EricBoyd ( 532608 ) <(moc.oohay) (ta) (dyobcirerm)> on Wednesday February 14, 2007 @01:22PM (#18013618) Homepage
    It's not a "search engine" per-say but a lot of your talk of "automated meritocracy" sounds exactly like what StumbleUpon [stumbleupon.com] does in order to recommend content to users. People vote on a page, those votes are passed through an automated collaborative filtering system, and then the page is shown to more users who are predicted to like it, rinse lather and repeat. Good content rises to the top of the recommendation queue, so that new users (or people who just joined a category) are shown the things which the vast majority of people liked, in order to build up a rating history to personalize that person's recommendations.
    • With the data they have they can probably build a very personalized search engine. With everything at the top having very positive votes from many users I imagine it would be less susceptible to gaming.
    • You, and many others, are missing the point, and failed even to RTFS (as long as it may be). It is worth reading if you believe that this is an extension of Google's PageRank, or of any other voting site out there. The primary concept is this: users cannot select what to vote for, preventing a subset of users from overwhelming the voting. Instead, random users are chosen (like meta-moderating on Slashdot) to vote yes or no. In order for SEO to be possible, the SEO company would have to own more than a majority
  • by currivan ( 654314 ) on Wednesday February 14, 2007 @01:23PM (#18013632)
    There are two main directions where search can improve. One is better understanding of natural language, to disambiguate query terms and provide results where the wording on pages is different from the wording of the query.

    The other, which this approach can address, is to improve the term relevance scores and overall page quality metrics that mainstream search engines are based on. Google had its initial success because of two features of this type: one was PageRank, a measure of overall topic-independent site popularity, and the other was better use of anchor text, the words people write when linking to other pages.

    In both cases, they mined the link structure of the web, which was essentially aggregate community generated information about site quality that wasn't being spammed at the time. As they succeeded, regular people put less effort into writing their own link text, and spammers took over.

    The next source of this type of community generated content will probably be something incidental instead of deliberately created. If you build a central repository of reviews of web sites, you both make it easy for people to game your results, and you open yourself up to lawsuits from interested parties.

    However, untapped information already exists on what people find useful on the web in the form of their browsing histories, a special case of this being their bookmarks. Someone who could aggregate this information on what millions of people ended up looking at after they ran a particular search query would be in an excellent position to improve the traditional search engine scoring algorithm beyond link data.

    • Re: (Score:3, Interesting)

      by russellh ( 547685 )

      There are two main directions where search can improve. One is better understanding of natural language, to disambiguate query terms and provide results where the wording on pages is different from the wording of the query.

      I'm highly skeptical about this path because NL works best in a specified (narrow) context. So if you can specify the context, then you must have already put web pages into context - driven by what? the semantic web? If you've done that, then NL is almost redundant. Like, maybe I want

    • That will never work. Understanding natural language is hugely difficult for people, and mind-bogglingly difficult for computers. You have to account for the fact that meaning is contextual, meaning is not fixed, and that people make mistakes in their use of language.

      There is a whole branch of philosophy dedicated to theory of language, and I'd recommend books, but they're by and large so hopelessly abstruse that it would be little more than intellectual hazing if you don't already have pretty solid knowled
    • You know, if what they are looking for is "what the average community prefers", why don't they just implement a decent search engine -- even better if they can just use Google's -- and then record clicks? If a result gets more clicks, put it farther toward the top. Only problem with this is it puts a barrier to new guys getting in. So maybe add some randomization to put non-favorites toward the top, but make sure the top 8 or so get on the front page no matter what.
      • We tried using clickthroughs, but the data is very noisy. Users often don't know if a page is useful until they go to it, and they often open many pages from the same list of results. The best application turned out to be "how often is this link the last one people click on", but that assumes they're using the back button rather than opening several links in tabs.

        You also don't know if the user finds what they really want linked off of a result page, or if they give up. The skewing of clicks toward the t
        • Hmm, good point. If only there were a way to force a user to say, "Yeah, this is what I wanted"... But alas. If you implemented search as a sidebar or something that was more integrated into the browser, this would probably be easy.
  • Bootvis' Theorem states:
    It is not possible to create an algorithm that takes as input any dataset and a search query and outputs the results 'best' matching the query.

    I have a truly marvellous proof but ...
    • Arrow's Theorem (Score:4, Informative)

      by attonitus ( 533238 ) on Wednesday February 14, 2007 @04:23PM (#18015830)
      Such a theorem does exist and is proven! Arrow's Theorem [wikipedia.org] states that it's impossible to design a voting system that satisfies three really basic conditions:

      a) The removal of one candidate from the race would not affect the rank of the others;

      b) If everyone prefers candidate A to candidate B then the algorithm should rank A above B;

      c) There is no dictator (i.e. there's more than one person voting).

      The same criteria should also apply to a perfect search engine: the removal of one page from the web should not affect the relative ranking of the others; if everyone thinks page A is better than page B, page A should come first; and, to be practical, the engine should take as input the priorities of more than just one person (it's not feasible to build a customized search engine that knows exactly the priorities of each individual user).

      Therefore, a perfect search algorithm does not exist

  • I like how the post talks about making search an efficient market, but completely discounts another important market that is already a lot closer to efficient: labour. If you're good enough to write an ungameable search engine, you're going to have substantial job offers from at least Google, Yahoo, and Microsoft.
    • I think any search algorithm is going to be as un-gameable as DRM is uncrackable. The goal would be to convince the big corporations that the algorithm is un-gameable long enough to collect the money.
  • An open source search engine would be a good idea, except that the index would have to be hosted somewhere and indexed somehow.

    I'd gladly donate some spare processor cycles, hard drive space, and bandwidth to an open source search engine like a BOINC project.

    • >I'd gladly donate some spare processor cycles, hard drive space, and bandwidth

      If it's along the lines of P2P apps, DHT [wikipedia.org]s etc., this could really work.
      Kad [wikipedia.org] already does a pretty good job of searching. Use something like it to point to Internet content, and use swarming for downloads... There's a Firefox extension waiting to happen.
  • by Bananatree3 ( 872975 ) * on Wednesday February 14, 2007 @01:29PM (#18013722)
    There already exists a distributed, open source engine which has been around a while, called Majestic 12 [majestic12.co.uk]. It uses a client-based search engine, which crawls the web for hundreds of millions of URLs, and then sends the data back to central servers. The servers then compile the data and use user-based searching algorithms to perform the search. While the algorithms are still very much in alpha, it is still a very noteworthy project. Also, its URL base is currently around 30-35 billion URLs.
  • by Anonymous Coward
    In general I see the term "gamed" as subjective. When outcomes are matched to an individual's expectations, they see the system as working; when they disagree with the outcome, they call it gaming.

    As long as people are the engine behind this "pure meritocracy," the system will be gamed. I find the google results to be good enough that I am not looking for an alternative. Google provides the basis for research. If you want the best deals, you still have to shop around and do the due diligence. If you want
    • by Kelson ( 129150 ) * on Wednesday February 14, 2007 @01:43PM (#18013864) Homepage Journal

      In general I see the term "gamed" as subjective. When outcomes are matched to an individual's expectations, they see the system as working; when they disagree with the outcome, they call it gaming.

      Very true. For an example, look no further than the subset of SEO that sees no difference between setting up hundreds of automatically-generated pages linking to a site for the sole purpose of increasing search rankings and hundreds of individual people independently writing about (and linking to) a site. I've actually seen people in the linkfarm business claim that they're not doing anything different from bloggers.

      This is basically equivalent to saying that there's no difference between one person writing 10 letters to a politician under assumed names, and 10 people writing their own letters.

  • by Anonymous Coward

    The use of a ranking system (even a fair and un-gamable one) is biased against a true meritocracy. If I'm looking for apple pie recipes, I (and likely anyone else looking for apple pie recipes) will pluck one from the top-ranked choices.

    This "top-10-cherry-picking" makes it highly unlikely that the possibly-superior newcomers will be seen. You have to be seen in order to be ranked up.

    It's only through "outside" mention (blogs, word-of-mouth, etc.) that newcomers have much of a chance of being looked

  • Which Community? (Score:3, Insightful)

    by RAMMS+EIN ( 578166 ) on Wednesday February 14, 2007 @01:42PM (#18013852) Homepage Journal
    ``First, though, consider the benefits that such a search engine could bring, both to content consumers and content providers, if it really did return results sorted according to average community preferences.''

    It's also interesting to ask "which community?" There is a small number of categories of things that define some high percentage of the things I search for. I am pretty sure there is a very small intersection of those categories with the categories of things the world's population as a whole searches for. There are also differences based on location and language. In short, my preferences are almost certainly very different from the average of all searchers.

    On the other hand, there are definitely groups of searchers whose preferences coincide with mine. For example, people who are involved in open source development, *nix users, computer scientists, environmentalists, English speakers, and people in the Netherlands probably have preferences that largely overlap with mine.

    This suggests to me that some sort of machine learning might be used, where the system guesses your search preferences based on what links you have followed in the past, and what links other people have followed in the past. In other words, the system (implicitly) tries to determine which communities you are part of, and gives you results that are preferred by members of these communities.
  • Sounds like the algorithm he really wants to talk about is the one Highlander names "peer ranking system" on his page at Everything2.com: http://www.everything2.com/index.pl?node_id=1521712 [everything2.com]

    I somehow believe that Google is quite aware of this algorithm and has already implemented it.
  • If you know which Indiana Jones movie this scene is from, tell me. I remember Jones facing off
    against some huge Samurai with swords in the middle of a market place. The Samurai twirls his swords
    and delivers one hell of an impressive martial arts show before challenging Jones to attack.
    Jones instead just shrugs, draws his colt and shoots the Samurai point blank.

    With this analogy in mind, it's easy for me to draw my colt and shoot this long missive down with
    one single argument: A Wikipedia-like process for a search
    • I think you're confusing it with Kill Bill. In Raiders of the Lost Ark, Indy's first encounter was with an Arab Egyptian and a scimitar. In Temple of Doom, it was a Sikh with a khunda.
    • Was he actually a Samurai?

      Anyway... I do remember hearing that the scene was an accident. Basically, Harrison Ford had diarrhea at the time. It was actually supposed to be a nice long fight, swords vs Indy's whip, but when you gotta go...

      Ah, the useless facts you pick up watching movies... After Morpheus' fully-VR PowerPoint-like talk about the Real World, and Neo gets unplugged and staggers around saying "I don't believe it..." Then Cypher goes "He's gonna pop" and Neo pukes... That was real. Apparently t
  • by DysenteryInTheRanks ( 902824 ) on Wednesday February 14, 2007 @01:51PM (#18013980) Homepage
    He's thinking about this all wrong.

    A true open source search engine would let anyone roll their own algorithm. Each algorithm would be a sort of "plug in."

    The index would be the shared, open source part, collaboratively crawled (via PC software or browser plugin) by everyone who elects to participate.

    Algorithms would either work on the index after the fact, or, if they need access to the indexing process itself, would be part of a series of plugins run on the full HTML of each page.

    The index itself would have an open API, so people could build their own front end search websites.

    Trying to design the right algorithm up front is a premature optimization. I have no interest in helping Jimmy Wales become the next Sergey Brin. But I *would* participate in something that gives _me_ a shot, however distant, at founding the next Google, minus the massive spider farm.
    • The index would be the shared, open source part, collaboratively crawled (via PC software or browser plugin) by everyone who elects to participate.

      The real trick is making this truly open in the Freenet kind of way -- no centralized servers at all (other than existing DNS and such).

      Think for a moment: Suppose Google allowed anyone to write a plugin of sorts to allow specialized kinds of searches, and extended their API to support any kind of frontend accessing these plugins. So, anyone could use Google's

  • Such an auto meritocracy could truly work if the self-pruning clustering algorithm created semantically-bound transactions in a feedback system that was designed at the outset to rival capitalism. I know that Google could be tweaked to do this, were it not for capitalist noses being unable to pick up on the scent of profit.
  • During the talk, Jimmy acknowledged that the beta of the engine is gonna suck and the media is gonna shit all over it. When the beta is released, they're gonna type in bold letters "We know this sucks" to curb some of that negative karma from the press. At least he's realistic about the project. Check out the video of Jimmy's NYU talk here: http://video.google.com/videoplay?docid=-7416968092951113589 [google.com] or download the MP3 here: http://homepages.nyu.edu/~gd586/Jimmy%20Wales%20-%20NYU%20-%201-31-07.mp3 [nyu.edu]
  • by Animats ( 122034 ) on Wednesday February 14, 2007 @01:59PM (#18014102) Homepage

    Rating by asking random users has been tried. At IBM. See United States Patent 7,080,064, Sundaresan July 18, 2006, "System and method for integrating on-line user ratings of businesses with search engines". Sundaresan has several patents related to schemes for asking users for ratings and using that info to adjust search rankings.

    The basic trouble with this approach is that, if you ask random users to rate random sites, they don't have enough time, energy, or effort to do a good job of it. If you ask self-selected users of the sites, the system can be gamed.

    This sort of thing only works where the set of things to rate is small compared to the interested user population. So it's great for movies, marginal for restaurants, and poor for websites generally.

  • I sometimes think that we already know the way to do searching - and Google has a patent on it.
  • by Bluesman ( 104513 ) on Wednesday February 14, 2007 @02:01PM (#18014120) Homepage
    >The extra $50 that the user pays is the user's loss, but it's also the hosting company's gain.
    >If we consider costs and benefits across all parties, the two cancel out.
    >The world as a whole is not poorer because someone overpaid for hosting.

    And thus the broken window fallacy continues...

    Wealth is created through increased efficiency. A decrease in efficiency is a decrease in wealth, regardless of who benefits.

    By the "world is not poorer" logic, we might as well all ride horses, since we'd be paying oat producers and horseshoe manufacturers instead of the auto industry, so the world as a whole wouldn't be poorer.

    Paying more for inefficient hosting takes money away from more efficient uses.
  • by logicnazi ( 169418 ) <gerdesNO@SPAMinvariant.org> on Wednesday February 14, 2007 @02:29PM (#18014442) Homepage
    The author of this piece talks about meritocratic search as if it were some real, fixed ordering of the search results that we just have to be smart enough to uncover. This is anything but the case. For instance, is a recipe that makes a better-tasting apple pie but is too complicated for an inexperienced chef better or worse than one that is extremely easy to follow but doesn't taste as good? When talking about pie this sort of issue might not be a big deal, but what happens when we start talking about things like climate science? Is the best result some environmental activist's site, a mass media story, a global warming skeptic's site, or the actual scientific results that are too technical for most of the public to understand?

    Sure, Wikipedia makes these compromises quite well, but the idea of content-neutral encyclopedia entries provides a well-defined goal. The second we get to a search engine we can no longer cling to content neutrality, because we must choose how to rank the advocacy sites on both sides of the spectrum. Unlike Wikipedia, where one can neutrally remark that some people believe X and others Y, in a search engine the community has to decide whether "unwanted pregnancy" is going to take someone to the Planned Parenthood site, an abortion clinic, or an anti-abortion site.

    In short there is no notion of the meritocratic search order, there are just tradeoffs between different sorts of searchers. Google is already navigating this maze of tradeoffs, including looking at what users like, so I fail to see the argument that a community search will obviously make better tradeoffs than Google.

    In fact anyone who has spent much time on the Internet realizes that every community tends to develop its own prejudices and biases pushing away those who disagree and attracting those who agree. Slashdot attracts open source zealots and repels the technically inept. Whatever community develops this search engine will have its own biases which will discourage participation by those who don't agree. This is just human nature.

    I might well enjoy the results returned by such a search, since I suspect the participants will mostly be technically sophisticated nerds and others who share my views. However, it seems doubtful that they will provide results that people very different from those who run the search engine will appreciate.

    Besides, this whole project just smells hokey to me. It sounds like Wales is drunk on his success with wikipedia and advocating it as THE solution to any problem. Problems are pragmatic things and they shouldn't be solved by ideologies.
    • This is where a tagging system would make a lot of sense. Websites should be able to be meta-filtered by content type...

      Professional Grade Article - if it contains an abnormal number of jargon/professional terms, or a number of equations beyond a threshold level, tag it as a professional article (use standard algorithms to determine its popularity and popular authority... leave it up to those who know to determine its accuracy)

      Consumer Grade Article - does it contain few if any jargon/professional te
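
      A crude sketch of the tagging heuristic proposed in this comment; the jargon list, thresholds, and equation detection below are invented purely for illustration:

          import re

          # Hypothetical jargon list and thresholds -- a real tagger would need far better ones.
          JARGON = {"eigenvalue", "stochastic", "polymerase", "amortized", "covariance"}
          JARGON_DENSITY_THRESHOLD = 0.02   # 2% of words are jargon terms
          EQUATION_THRESHOLD = 3            # three or more "x = ..." style lines

          def grade(text: str) -> str:
              """Tag an article as professional or consumer grade."""
              words = re.findall(r"[a-z]+", text.lower())
              if not words:
                  return "consumer"
              jargon_density = sum(1 for w in words if w in JARGON) / len(words)
              equations = len(re.findall(r"^\s*\S+\s*=\s*\S+", text, re.MULTILINE))
              if jargon_density >= JARGON_DENSITY_THRESHOLD or equations >= EQUATION_THRESHOLD:
                  return "professional"
              return "consumer"
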
  • The world as a whole is not poorer because someone overpaid for hosting.

    Erm, yes it is. That difference in price could have been used to produce value. If you believe that the world is in fact not poorer, then you believe that the point of an economy is just to shuffle money around.

    See the broken window fallacy: http://en.wikipedia.org/wiki/Broken_window_fallacy [wikipedia.org]

  • by Animats ( 122034 ) on Wednesday February 14, 2007 @02:38PM (#18014554) Homepage

    We hadn't planned to announce this quite yet, but this is a good opportunity.

    We have a new answer to search - SiteTruth. [sitetruth.com] It's working, but not yet open to the public.

    Other search engines rate businesses based on some measure of popularity - incoming links or user ratings. SiteTruth rates businesses for legitimacy.

    What determines legitimacy? The sources anti-fraud investigators tell you to check, but nobody ever does. Corporate registrations. Business licenses. Better Business Bureau reports. The contents of SSL certificates. Business addresses. Business credit ratings. Credit card processors. All that information is available. It's a data-mining problem, and we've solved it. The process is entirely automated.

    Most of the phony web sites, doorway pages, and other junk on the web have no identifiable business behind them. Try to find out who really owns them, and you can't. When we can't, we downgrade their ranking. With SiteTruth, you can create all the phony web sites you want, but they'll be nowhere near the beginning of any search result.

    Creating a phony company, or stealing the identity of another company, is possible, but it's difficult, expensive and involves committing felonies. Thus, SiteTruth cannot be "gamed" without committing a felony. This weeds out most of the phonies.

    SiteTruth only rates "commercial" sites. If you're not selling anything or advertising anything, SiteTruth gives you a neutral or blank rating. If you're engaged in commerce, you can't be anonymous. In many jurisdictions, it's a criminal offense to run a business without disclosing who's behind it. That's the key to SiteTruth.

    Our tag line: "SiteTruth - Know who you're dealing with."

    The site will open to the public in a few months. Meanwhile, we're starting outreach to the search engine optimization community to get them ready for SiteTruth. We want all legitimate sites to get the highest rating to which they're entitled. An expired corporate registration or seal of trust hurts your SiteTruth ranking, so we want to remind people to get their paperwork up to date.

    The patent is pending.
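
    SiteTruth hasn't published its actual algorithm, so the following is only a guess at the general shape this comment describes -- combine public legitimacy signals and downgrade commercial sites that have none. All field names and weights are invented:

        from dataclasses import dataclass

        @dataclass
        class BusinessSignals:
            # Invented stand-ins for the public records listed above.
            corporate_registration_found: bool = False
            ssl_cert_names_business: bool = False
            bbb_report_found: bool = False
            street_address_verified: bool = False
            is_commercial: bool = True  # non-commercial sites get a neutral rating

        WEIGHTS = {"corporate_registration_found": 0.4,
                   "ssl_cert_names_business": 0.2,
                   "bbb_report_found": 0.2,
                   "street_address_verified": 0.2}

        def rank_multiplier(s: BusinessSignals) -> float:
            """Multiplier applied to a site's base search rank (1.0 = unchanged)."""
            if not s.is_commercial:
                return 1.0  # neutral: never punish non-commercial sites
            score = sum(w for field, w in WEIGHTS.items() if getattr(s, field))
            # Commercial sites with no identifiable business sink toward the bottom.
            return 0.1 + 0.9 * score

    The point of the multiplier form is that legitimacy only re-weights whatever base relevance ranking already exists; it does not try to judge content quality.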

    • The crappy front page makes it look like a scam.

      • by Animats ( 122034 )

        The crappy front page makes it look like a scam.

        To some extent, that page was made to discourage unwanted attention during the early phases. But it's all real.

  • You have a major problem with the scale of providing search results. What you are proposing here is that individual users *rate* websites in some manner according to their merit. Leaving aside the fact that users are not inherently qualified to rate websites, the fact that a given website may have great merit for a given subject but not others, and the fact that people will actively find a way to "game" this system just as they have all the others, how big exactly is the Internet? Let's assume there are 10m
  • Sounds like they're reinventing Open Directory [dmoz.org], which has been doing just fine for many years. I believe Google actually uses Open Directory as one of its seeds for the PageRank algorithm. The Wikimedia Foundation keeps on starting up projects, few of which ever become very successful. Wikibooks, for instance, has never achieved its original, grandiose goals, and it's been struggling for years now without making much headway. Its only big area of success was gaming guides (not the college textbooks it was
    • DMOZ has been slow and incomplete for ages. It's also an example of hierarchies gone bad. I never really know where I should submit my site (edified.org). Shouldn't /California/Camping/RV and /Camping/RV/California be the same? Errr
  • I read this essay (long on words, short on content.) The summary:

    "I have a new idea for a search engine. You should be allowed to suggest a modification to the search results. Your modification will be anonymously reviewed, Slashdot-moderation style, by a small, random subset of search engine users. It's nice to learn that the algorithm solves a problem that does not exist with contemporary link-network algorithms, but does with a hypothetical bad idea (the sockpuppetry issue.)"

    Now can we talk about the ide
  • The big problem with this proposal is that it assumes that there is only one definition of "good". For instance, look at the example of searching for a web hosting firm. Am I interested in the same criteria as you? Yes, cheap is nice, but maybe I want to pay a bit more to survive a slash-dotting. Maybe I want "five nines" reliability. Maybe I want to run CGI scripts written in Haskell instead of PHP or Python. Or maybe I just want to run a generic Wordpress blog. Different firms provide different cap
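
    The "no single definition of good" point can be made concrete with per-user criterion weights. The hosting firms, attributes, and weights below are all made up:

        # Made-up hosting firms scored against two different users' priorities.
        HOSTS = {
            "CheapHost":  {"price": 0.9, "uptime": 0.5, "haskell_cgi": 0.0, "wordpress": 1.0},
            "FiveNines":  {"price": 0.3, "uptime": 1.0, "haskell_cgi": 0.0, "wordpress": 1.0},
            "HackerHost": {"price": 0.6, "uptime": 0.7, "haskell_cgi": 1.0, "wordpress": 0.5},
        }

        def rank(weights):
            score = lambda attrs: sum(weights.get(k, 0) * v for k, v in attrs.items())
            return sorted(HOSTS, key=lambda name: score(HOSTS[name]), reverse=True)

        print(rank({"price": 0.7, "wordpress": 0.3}))      # blogger: CheapHost first
        print(rank({"haskell_cgi": 0.8, "uptime": 0.2}))   # Haskell hacker: HackerHost first

    Any single community-agreed ordering necessarily bakes one set of weights in, which is exactly the objection this comment is making.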

  • A search for Ford should yield Ford Motor Co., as the correct first answer, Wales said.

    Why are companies more important than people?

    Why not a page [whitehouse.gov] about former U.S. President Gerald Ford as the 'correct first answer'?

    Or a page [wikipedia.org] about actor Glenn Ford as the 'correct first answer'?

    Aren't people more important than companies, which are nothing more than legal constructs created by people to facilitate commerce amongst themselves?

    Anyway, I believe a 'fair' search engine would not use linking to determine populari
  • Ultimately, at first you'd need users to tell you what was spam and what was not, to help develop an anti-gaming algorithm; you could also offer financial rewards for outing people who game the search engine.

    Ideally you'd have users help you fight the constant battle with those trying to game the search results, but users would need some kind of incentive or payment to keep the search engine running smoothly. Maybe you could select random samples of people and pay them to filter out garbage?
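
    A minimal sketch of the "pay a random sample of people to filter out garbage" idea; the reviewer counts and the majority-vote rule are arbitrary choices for illustration:

        import random
        from collections import defaultdict

        def assign_reviews(urls, raters, reviews_per_url=3, seed=None):
            """Randomly assign each suspect URL to a few paid raters (needs len(raters) >= reviews_per_url)."""
            rng = random.Random(seed)
            assignments = defaultdict(list)   # rater -> list of URLs to review
            for url in urls:
                for rater in rng.sample(raters, k=reviews_per_url):
                    assignments[rater].append(url)
            return assignments

        def is_spam(votes):
            """Majority vote over the True/False verdicts the sampled raters return."""
            return sum(votes) > len(votes) / 2
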
  • One potential weakness is that attackers could perennially throw up searches on certain topics for re-examination. The problem lies not in the fact that I have to vote _once_ that bank X provides the best mortgage, but that I might have to vote twenty times ('yes', 'I said yes', 'You know I already said yes') to establish this, because some bozo wanted a vote on it every day for years on end. After all, if I, as a user, press 'I do not agree with this order, take X to the top' twenty times a second (which I
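
    One obvious counter to the repeated-vote attack described above is to count each user at most once per question and to put a cooldown on how often a ranking can be reopened. A toy version, with an arbitrary cooldown period:

        import time

        RE_EXAMINE_COOLDOWN = 30 * 24 * 3600   # arbitrary: reopen a query at most monthly

        votes = {}          # (user_id, query) -> choice; repeats overwrite, never add weight
        last_reopened = {}  # query -> timestamp of the last community re-examination

        def cast_vote(user_id, query, choice):
            votes[(user_id, query)] = choice   # pressing the button 20 times counts once

        def request_reexamination(query, now=None):
            now = time.time() if now is None else now
            if now - last_reopened.get(query, 0.0) < RE_EXAMINE_COOLDOWN:
                return False   # the bozo asking every day gets ignored
            last_reopened[query] = now
            return True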
