
Could Open Source Lead to a Meritocratic Search Engine?
This will be a trip into theory-land, so it may be frustrating to users who dislike talk about "vaporware" and want to see how something works in practice. I understand where you're coming from, but I submit it's valuable to raise these questions early. This is in any case not intended to supplant discussion about how things are currently progressing.
First, though, consider the benefits that such a search engine could bring, both to content consumers and content providers, if it really did return results sorted according to average community preferences.

Suppose you wanted to find out if you had a knack for publishing recipes online and getting some AdSense revenue on the side. You take a recipe that you know, like apple pie, and check out the current results for "apple pie". There are some pretty straightforward recipes online, but you believe you can create a more complete and user-friendly one. So you write up your own recipe, complete with photographs of the process showing how ingredients should be chopped and what the crust mixture should look like, so that the steps are easier to follow. (Don't you hate it when a recipe says "cut into cubes" and you want to throttle the author and shout, "HOW BIG??" It drove me crazy until I found CookingForEngineers.com.) Anyway, you submit your recipe to the search engine to be included in the results for "apple pie", and if the sorting process is truly meritocratic, your recipe page rises to the top. Until, that is, someone decides to surpass you, and publishes an even more user-friendly recipe, perhaps with a link to a YouTube video of them showing how to make the pie, which they shot with a tripod video camera and a clip-on mike in their well-lit kitchen.

In a world of perfect competition, content providers would be constantly leapfrogging each other with better and better content within each category (even a highly specific one like apple pie recipes), until further efforts would no longer pay for themselves with increased traffic revenue. (The more popular search terms, of course, would bring greater rewards for those listed at the top, and would be able to pay for greater efforts to improve the content within that category.) But this constant leapfrogging of better and better content requires efficient and speedy sorting of search results in order to work. It doesn't work if the search results can be gamed by someone willing to spend effort and money (not worth it for the author of a single apple pie recipe, but worth it for a big money-making recipe site), and it doesn't work if it's impossible for new entrants to get hits when the established players already dominate search results.
Efficient competition benefits consumers even more for results that are sorted by price (assuming that among comparable goods and services, the community promotes the cheapest ones to the top of the search results, as "most desirable"). If you were a company selling dedicated Web hosting, for example, you would submit your site to the engine to be included in results for "dedicated hosting". If you could demonstrate to the community that your prices and services were superior to your competitors', and if the ranking algorithm really did rank sites according to the preferences of the average user, your site could quickly rise to the top, and you'd make a bundle on new sales -- until, of course, someone else had the same idea and knocked you out of the top spot by lowering their prices or improving their services. The more efficient the marketplace, the faster prices fall and service levels rise, until prices just cover the cost of providing the service and compensating the business owner for their time. It would be a pure buyer's market.
It's important to answer the question precisely: Why would this system be better than a system like Google's search algorithm, which can be "gamed" by enterprising businesses and which doesn't always return first the results that the user would like most? You might be tempted to answer that in an inefficient marketplace created by an inefficient search result sorting algorithm, a user sometimes ends up paying $79/month for hosting, instead of the $29/month that they might pay if the marketplace were perfectly efficient. But this by itself is not necessarily wasteful. The extra $50 that the user pays is the user's loss, but it's also the hosting company's gain. If we consider costs and benefits across all parties, the two cancel out. The world as a whole is not poorer because someone overpaid for hosting.
The real losses caused by an inefficient search algorithm are the efforts spent by companies to game the search results (e.g. paying search engine optimization firms to try to get them to the top Google spot), and the reluctance of new players to enter that market if they don't have the resources to play those games. If two companies each spend $5,000 trying to knock each other off the top spot for a search like "weddings", that's $5,000 worth of effort that gets burned up with no offsetting amount of goods and services added to the world. This is what economists call a deadweight loss, with no corresponding benefit to any party. The two wedding planners might as well have smashed their pastel cars into each other. Even if a single company spends the effort and money to move from position #50 to position #1, that gain to them is offset by the loss to the other 49 companies that each moved down by one position, so the net benefit across all parties is zero, and the effort that the company spent to raise their position would still be a deadweight loss.
On the other hand, if search engine results were sorted according to a true meritocracy, then companies that wanted to raise their rankings would have to spend effort improving their services instead. This is not a deadweight loss, since these efforts result in benefits or savings to the consumer.
I've been a member of several online entrepreneur communities, and I'd conservatively estimate that members spend less than 10% of the time talking about actually improving products and services, and more than 90% of the time talking about how to "game" the various systems that people use to find them, such as search engines and the media. I don't blame them, of course; they're just doing what's best for their company, in the inefficient marketplace that we live in. But I feel almost lethargic thinking of that 90% of effort that gets spent on activities that produce no new goods and services. What if the information marketplace really were efficient, and business owners spent nearly 100% of their efforts improving goods and services, so that every ounce of effort added new value to the world?
Think of how differently we'd approach the problem of creating a new Web site and driving traffic to it. A good programmer with a good idea could literally become an overnight success. If you had more modest goals, you could shoot a video of yourself preparing a recipe or teaching a magic trick, and just throw it out there and watch it bubble its way up the meritocracy to see if it was any good. You wouldn't have to spend any time networking or trying to rig the results; you'd just create good stuff and put it out there. No, despite whatever cheer-leading you may have heard, it doesn't quite work that way yet -- good online businessmen still talk about the importance of networking, advertising, and all the other components of gaming the system that don't relate to actually improving products and services. But there is no reason, in principle, why a perfectly meritocratic content-sorting engine couldn't be built. Would it revolutionize content on the Internet? And could Search Wikia be the project to do it, or play a part in it?
Whatever search engine the Wikia company produced, it would probably have such a large following among the built-in open-source and Wikipedia fan base that traffic wouldn't be a problem -- companies at the top of popular search results would definitely benefit. The question is whether the system can be designed so that it cannot be gamed. I agree with Jimmy Wales's stated intention to make the algorithm completely open, since this makes it easier for helpful third parties to find weaknesses and get them fixed, but of course it also makes it easier for attackers to find those weaknesses and exploit them. If you think Microsoft paying a blogger to edit Wikipedia is a problem, imagine what companies will do to try to manipulate the search results for a term like "mortgage". So what can be done?
The basic problem with any community that makes important decisions by "consensus" is that it can be manipulated by someone who creates multiple phantom accounts all under their control. Then, if a decision is influenced by voting -- for example, the relative position of a given site in a list of search results -- the attacker can have the phantom accounts all vote for one preferred site. You can look for large numbers of accounts created from the same IP address, but the attacker could use Tor and similar systems to appear to be coming from different IPs. You could attempt to verify the unique identity of each account holder, by phone for example, but this requires a lot of effort and would alienate privacy-conscious users. You could require a Turing test for each new account, but all this means is that an attacker couldn't use a script to create their 1,000 accounts -- an attacker could still create the accounts if they had enough time, or if they paid some kid in India to create them. You could give users voting power in proportion to some kind of "karma" that they had built up over time by using the site, but this gives new users little influence and little incentive to participate; it also does nothing to stop influential users from "selling out" their votes (either because they became disillusioned, or because they signed up with that as their intent from the beginning!).
So, any algorithm designed to protect the integrity of the Search Wikia results would have to deal with this type of attack. In a recent article about Citizendium, a proposed Wikipedia alternative, I argued that you could deal with conventional wiki vandalism by having identity-verified experts sign off on the accuracy of an article at different stages. That's practical for a subject like biology, where you could have a group of experts whose collective knowledge covers the subject at the depth expected in an encyclopedia, but probably not for a topic like "dedicated hosting" where the task is to sift through tens of thousands of potential matches and find the best ones to list first. You need a new algorithm to harness the power of the community. I don't know how many possible solutions there are, but here is one way in which it could be done.
Suppose a user submits a requested change to the search results -- the addition of their new Site A, or the proposal that Site A should be ranked higher. This decision could be reviewed by a small subset of registered users, selected at random from the entire user population. If a majority of the users rate the new site highly enough as a relevant result for a particular term, then the site gets a high ranking. If not, then the site is given a low ranking, possibly with feedback being sent to the submitter as to why the site was not rated highly. The key is that the users who vote on the site have to be selected at random from among all users, instead of letting users self-select to vote on a particular decision.
The nice property of this system is that an attacker can't manipulate the voting simply by having a large number of accounts at their control -- they would have to control a significant proportion of accounts across the entire user population, in order to ensure that when the voters were selected randomly from the user population, the attacker controlled enough of those accounts to influence the outcome. (If an attacker ever really did spend the resources to reach that threshold point, and it became apparent that they were manipulating the votes, those votes could be challenged and overridden by a vote of users whose identities were known to the system. This would allow the verified-identity users to be used as an appeal of last resort to block abuse by a very dedicated adversary, while not requiring most users to verify their identity. This is basically what Jimmy Wales does when he steps in and arbitrates a Wikipedia dispute, acting as his own "user whose identity is known".)
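Here's a minimal sketch of that selection rule, with invented names and thresholds (nothing here is from an actual Search Wikia design):

    import random

    def review_submission(submission, all_users, get_vote,
                          jury_size=25, threshold=0.5):
        # Draw reviewers at random from the WHOLE registered population --
        # never from self-selected volunteers. An attacker controlling k of
        # N accounts can expect to hold only about k/N of the jury's seats.
        jury = random.sample(all_users, jury_size)
        votes_for = sum(1 for user in jury if get_vote(user, submission))
        return votes_for / jury_size >= threshold

The get_vote callback stands in for however a reviewer's verdict is actually collected; the essential design choice is only that the jury is sampled, not volunteered.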
This algorithm for an "automated meritocracy" (automeritocracy? still not very catchy at 7 syllables) could be extended to other types of user-built content sites as well. Musicians could submit songs to a peer review site, and the songs would be pushed out to a random subset of users interested in that genre, who would then vote on the songs. (If most users were too apathetic to vote, the site could tabulate the number of people who heard the song and then proceeded to buy or download it, and count those as "votes" in favor.) If the votes for the song are high enough, it gets pushed out to all users interested in that genre; if not, then the song doesn't make it past the first stage. If there are 100,000 users subscribed to a particular genre, but it only takes ratings from 100 users to determine whether or not a song is worth pushing out to everybody, that means that when "good" content is sent out to all 100,000 people but "bad" content only wastes the time of 100 users, the average user gets 1,000 pieces of "good" content for every 1 piece of "bad" content. New musicians wouldn't have to spend any time networking, promoting, recruiting friends to vote for them -- all of which have nothing to do with making the music better, and which fall into the category of deadweight losses described above.
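A quick sanity check on that arithmetic, assuming good and bad submissions arrive in equal numbers:

    # Sanity check on the 1,000:1 claim (equal good/bad submission rates assumed)
    subscribers = 100_000                 # users subscribed to the genre
    sample_size = 100                     # random raters who screen each song
    impressions_per_good = subscribers    # approved songs reach everyone
    impressions_per_bad = sample_size     # rejected songs reach only the sample
    print(impressions_per_good // impressions_per_bad)   # -> 1000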
An automeritocracy-like system could even be used as a spam filter for a large e-mail site. Suppose you want to send your newsletter to 100,000 Hotmail users (who really have signed up to receive it). Hotmail could allow your IP to send mail to 100,000 users the first time, and then if they receive too many spam complaints, block your future mailings as junk mail. But if that's their practice, there's nothing to stop you from moving to a new, unblocked IP and repeating the process from there. So instead, suppose that Hotmail temporarily stores your 100,000 messages in users' "Junk Mail" folders, but releases a randomly selected subset of 100 messages into users' inboxes. Suppose for argument's sake that when a message is spam, 20% of users click the "This is spam" button, but if not, then only 1% of users click it. Out of the 100 users who see the message, if the number who click "This is spam" looks close to 1%, then since those 100 users were selected as a representative sample of the whole population, Hotmail concludes that the rest of the 100,000 messages are not spam, and moves them retroactively to users' inboxes. If the percentage of those 100 users who click "This is spam" is closer to 20%, then the rest of the 100,000 messages stay in Junk Mail. A spammer could only rig this system if they controlled a significant proportion of the 100,000 addresses on their list -- not impossible, but difficult, since you have to pass a Turing test to create each new Hotmail account.
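A minimal sketch of the statistical decision implied here, using the 1% and 20% rates from the example (an illustration only, not Hotmail's actual filter):

    from math import comb

    def looks_like_spam(clicks, sample_size=100, p_ham=0.01, p_spam=0.20):
        # Compare how likely the observed number of "This is spam" clicks is
        # under the two hypothesized rates (1% for legitimate mail, 20% for
        # spam), and junk or release the remaining 99,900 copies accordingly.
        def pmf(k, n, p):
            return comb(n, k) * p**k * (1 - p) ** (n - k)
        return pmf(clicks, sample_size, p_spam) > pmf(clicks, sample_size, p_ham)

    # With these numbers the verdict flips between 6 and 7 clicks:
    print(looks_like_spam(2))    # False -> deliver the rest to inboxes
    print(looks_like_spam(10))   # True  -> keep the rest in Junk Mail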
The problem is, there's a huge difference between systems that implement this algorithm and systems that implement something that looks superficially like this algorithm but actually isn't. Specifically, any site like HotOrNot, Digg, or Gather that lets users decide what to vote on is vulnerable to the attack of using friends or phantom users to vote yourself up (or to vote someone else down). In a recent thread on Gather about a new contest that relied on peer ratings, many users lamented the fact that it was essentially rigged in favor of people with lots of friends who could give them a high score (or that ratings could be offset unfairly in the other direction by "revenge raters" giving you a 1 as payback for some low rating you gave them). I assume that the reason such sites were designed that way is that it just seemed natural that if your site is driven by user ratings, and if people can see a specific piece of content by visiting a URL, they should have the option on that page to vote on that content. But this unfortunately makes the system vulnerable to the phantom-users attack.
(Spam filters on sites like Hotmail also probably have the same problem. We don't know for sure what happens when the user clicks "This is spam" on a piece of mail, but it's likely that if a high enough percentage of users click "This is spam" for mail coming from a particular IP address, then future mails from that IP are blocked as spam. This means you could get your arch-rival Joe's newsletter blacklisted, by creating multiple accounts, signing them up for Joe's newsletter, and clicking "This is spam" when his newsletters come in. This is an example of the same basic flaw -- letting users choose what they want to vote on.)
So if the Wikia search site uses something like this "automeritocracy" algorithm to guard the integrity of its results, it's imperative not to use an algorithm vulnerable to the hordes-of-phantom-users attack. Some variation of selecting random voters from a large population of users would be one way to handle that.
Finally, there is a reason why it's important to pay attention to getting the algorithm right, rather than hoping that the best algorithm will just naturally "emerge" from the "marketplace of ideas" that results from different wiki-driven search sites competing with each other. The problem is that competition between such sites is itself highly inefficient -- a given user may take a long time to discover which site provides better search results on average, and in any case, it may be that Wiki-Search Site "B" has a better design but Wiki-Search Site "A" had first-mover advantage and got a larger number of registered users. When I wrote earlier about why I thought the Citizendium model was better than Wikipedia, several users pointed out that it may be a moot point, for two main reasons. First, most users will not switch to a better alternative if it never occurs to them. Second, for sites that are powered by a user community, it's very hard for a new competitor to gain ground, even with a superior design, if the success of your community depends on lots of people starting to use it all at once. You could write a better eBay or a better Match.com, but who would use it? Your target market will go to the others because that's where everybody else is. Citizendium is, I think, a special case, since they can fork articles that started life on Wikipedia, so Wikipedia doesn't have as huge an advantage over them as it would if Citizendium had to start from scratch. But the general rule about imperfect competition still applies.
It's a chicken-and-egg problem: You can have Site A that works as a pure meritocracy, and Site B that works as an almost-meritocracy but can be gamed with some effort. But Site B may still win, because the larger environment in which they compete with each other is not itself a meritocracy. So we just have to cross our fingers and hope that Search Wikia gets it right, because if they don't, there's no guarantee that a better alternative will rise to take its place. But if they get it right, I can hardly wait to see what changes it would bring about.
I don't think it will beat pigeon ranking... (Score:1, Funny)
Am I wrong?
Re: (Score:1)
relative ranking units (Score:2)
I'll take the Off-topic hit for this (Score:5, Interesting)
This issue has long bugged me and it's hard to get answers about it. I don't understand how this is a deadweight loss (DWL) by his definition. Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party. How is this DWL different from the "non-DWL" example directly preceding, in which someone overpaid for hosting, but that was the hosting company's gain?
Does anyone have a rigorous DWL definition that can be backed up by a valid example?
Re: (Score:2)
Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party.
The SEO expert? I don't really know about deadweight loss, but it does seem that nothing was gained by the exercise that was described, except somebody got to leech money off of the companies paying for SEO.
Re: (Score:2, Insightful)
The amounts seem unlikely (a month of employee time with no realized benefit? bah.), but the concept is sound.
Re: (Score:3, Informative)
Re: (Score:3, Funny)
Re:I'll take the Off-topic hit for this (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
Yes, but it's just a transfer of money from one party to another; it's a zero-sum game. No wealth has been produced in the sense of some useful work being done. With respect to the hosting company example, the hosting company received the market price for a useful service, a positive benefit to both parties. (As far as I can see, the company did not overpay for hosting in the example).
Tha
Re: (Score:2)
Re: (Score:2)
I think the dead loss is the time and effort expended by the workers of the SEO company and the administration of the paying company. All that work for a *net* benefit of no wealth.
Re: (Score:2)
Re: (Score:2)
As you define it, I agree. I can't think of anything that would count as a DWL (except maybe literally throwing money away -- but then I suppose you've increased the value of everyone else's cash haven't you?)
Anyway, Wales's point makes sense, even if his definition of a DWL doesn't agree with the literature (I've no idea if it does or not) and there is
Re: (Score:2)
Small
Re: (Score:2)
Re: (Score:2)
The search-engine received all the benefits of the efforts, but those benefits cancelled each
Re: (Score:2)
No, the SE didn't benefit (or at least not primarily). Rather, the workers they paid to game it, benefited. Thus it can't be a DWL by the definition -- the loss of some corresponded to the gain of those workers. It doesn't matter that there was a net loss after summing over all agents; that's not what DWL refers to. Hence my confusion with the concept.
Re: (Score:2)
Total social wealth is decreased when people employ others to perform useless tasks, such as battling over a search-engine slot. The SEO industry, like the
Re: (Score:2)
However, since there are other results in the search the net result of "winning" and "lo
Re: (Score:2)
The problem with "deadweight loss" is that it assumes that there is some sort of value to economic activity, and that some activities have more value (or "benefit") than others. So if I plant some seeds in the ground, and later sell the harvested grain on the open market, my planting activity is considered to be beneficial. Whereas if I spit those seeds at my bothersome neighbor, who has been ripping up my turnips in the middle of the night because I scoffed at his explanation of economic theory, I (and my
Re: (Score:2)
In that case, I'm certain that if you asked the attendees whether they would be willing to take their money back so that the semi could go save lives, you would get near-100% approval.
Re: (Score:2)
I'm certain if you asked...(etc)
Certainly, the first time you ask the attendees, they will agree wholeheartedly. And perhaps even the second or third times. But in a purely utilitarian world, there could never be any monster truck rallies until everyone had their vaccine, and all the little boys who had fallen down wells had been dug out, and all the lonely puppies and kittens in the animal shelters got to go to nice homes. After four or five of these episodes of re-routing semis, people would start to
Re: (Score:2)
In strict economic terms, deadweight loss is the reduction in overall utility caused by any transaction that is not at the efficient price level or the efficient quantity level.
Okay, but what does that *mean*? The problem here is that the jargon is obscuring understanding of the concept of a DWL. What does it mean for one price or quantity level to be efficient? I think when you unravel the terms, you see it's basically circular. Try if you disagree.
the paying $5000 (
I couldn't resist :-) (Score:2)
http://en.wikipedia.org/wiki/Dead_weight_loss [wikipedia.org]
Re: (Score:2)
I agree you can use a more rigorous conception of the DWL, but like with the other responders, that wasn't the definition the original author used. In that definition, what makes it a DWL was that there was a loss *not corresponding to any gain*. While the *net* gain (across all people) may be zero, or even negative, the people they paid to (futilely) improve the search engine ranking certainly did gain a
Re: (Score:2)
I'm kind of with you on this.
That's not to say I really care about the finer points of definition of DWL, but I'm baffled by the author's purpose in discussing this.
The article suggests that a dead-weight-loss is bad, because it's money for labour which is in theoretical terms valueless. Fine -- I understand this. However, a non-dead-weight-loss is not a bad thing in the author's eyes. But if that non-DWL is pure profit (as with the hosting in his example), how is it any better?
At the risk of sounding so
Google (Score:1, Insightful)
Re: (Score:1)
is Google REALLY highly regressive (Score:3, Informative)
Google is definitely regressive from the point of view that it tries to represent the average total mindshare about search terms - NOT the average CURRENT mindshare. So if you want to find the up and coming site that's ABOUT to be the new hotness but hasn't reached critical mass yet, you need something like the derivative of Google's PageRank.
But this is definitely NOT what
Get Back To Me On This One (Score:2)
That right there is a billion-dollar idea that I'm sure more than a small horde of devs are working on for themselves or for vulture capitalists.
Will Mr. Wales own the magic algorithm to use as he sees fit or what?
Gaming Google (Score:2)
Since you can pay Google to have your site link placed right at the top of the search results, for less than what you'd pay someone to game the system to reach a similar position, it wouldn't make sense for large companies to try to "game" Google at all.
If it weren't for the advertising, we'd probably see a lot more of this on Google.
Maybe this project could implement something similar.
Re: (Score:2)
This project could do something similar.
Re: (Score:1)
Merit is in the eye of the beholder (Score:5, Insightful)
A popularity contest would be great (Score:2)
Don't dismiss popularity contests; the popular choice will, almost by definition, usually be the most interesting choice for most people. You may not feel you belong to "most people", most people don't, but if you leave your feeling of elitism and/or
Re: (Score:2)
That's exactly what this kind of system *doesn't* need. (Well, it needs it, because if we don't use the same definition of "merit" for all users, or at least limit the number of definitions of "merit" that are available, this will become a computationally infeasible project... But let's talk theoretically).
This theoretical system should learn through user feedback e
Re:Merit is in the eye of the beholder (Score:4, Insightful)
In fairness, I don't think that "merit" is relative with respect to search-engine results. In a simplified example, if I search for "sony", I'm probably looking for one of three things:
Therefore, the top results should reflect that. Most likely, I'm not looking for porn. I remember the days when search engines would return porn for any and all searches. The fact that Google was able to avoid this is part of what brought about its rise to power.
Of course, not every example is so simple, but clearly there are results that are or are not correct for a given search.
PSSST: Merit for sale!!! (Score:2)
Companies could very easily request/encourage/force employees to do a merit update every morning.
Any system is open to abuse. At least the Google model is pretty easy to understand.
Re: (Score:3, Interesting)
Good point. But furthermore, I can guarantee you this won't work, simply because web page rankings and spam filtering are essentially the same thing, and the spam issue has not been solved. That is, even when we don't have the problem of multiple conflicting opinions and all we're trying to do is model the preferences of a single recipient, we still can't do it!
Patents (Score:1)
If they get this right... (Score:2)
Economic inefficiencies (Score:2, Informative)
If a theoretical "merit-based" search engine existed, those non-trivial resources would be spent building a better mousetrap
Re: (Score:2)
StumbleUpon (Score:4, Informative)
Re: (Score:2)
Re: (Score:1)
Two approaches to the search problem (Score:3, Interesting)
The other, which this approach can address, is to improve the term relevance scores and overall page quality metrics that mainstream search engines are based on. Google had its initial success because of two features of this type: one was PageRank, a measure of overall topic-independent site popularity, and the other was better use of anchor text, the words people write when linking to other pages.
In both cases, they mined the link structure of the web, which was essentially aggregate community generated information about site quality that wasn't being spammed at the time. As they succeeded, regular people put less effort into writing their own link text, and spammers took over.
The next source of this type of community generated content will probably be something incidental instead of deliberately created. If you build a central repository of reviews of web sites, you both make it easy for people to game your results, and you open yourself up to lawsuits from interested parties.
However, untapped information already exists on what people find useful on the web in the form of their browsing histories, a special case of this being their bookmarks. Someone who could aggregate this information on what millions of people ended up looking at after they ran a particular search query would be in an excellent position to improve the traditional search engine scoring algorithm beyond link data.
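As a toy illustration of that kind of aggregation (the data structures are invented, and a real system would need heavy anti-spam weighting):

    from collections import defaultdict

    # query -> url -> number of sessions that ended up settling on that url
    landing_counts = defaultdict(lambda: defaultdict(int))

    def record_session(query, final_url):
        # Called (with the user's consent) when a browsing session that began
        # with `query` ends up dwelling on `final_url`.
        landing_counts[query][final_url] += 1

    def adjusted_score(query, url, base_score):
        # Blend the traditional relevance score with where searchers
        # actually ended up after issuing this query.
        total = sum(landing_counts[query].values()) or 1
        return base_score * (1 + landing_counts[query][url] / total)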
Re: (Score:3, Interesting)
I'm highly skeptical about this path because NL works best in a specified (narrow) context. So if you can specify the context, then you must have already put web pages into context - driven by what? the semantic web? If you've done that, then NL is almost redundant. Like, maybe I want
Re: (Score:2)
There is a whole branch of philosophy dedicated to theory of language, and I'd recommend books, but they're by and large so hopelessly abstruse that it would be little more than intellectual hazing if you don't already have pretty solid knowledge
Re: (Score:2)
Re: (Score:2)
You also don't know if the user finds what they really want linked off of a result page, or if they give up. The skewing of clicks toward the top
Re: (Score:2)
Bootvis' Theorem (Score:2)
It is not possible to create an algorithm that takes as input any dataset and a search query and outputs the results 'best' matching the query.
I have a truly marvellous proof but
Arrow's Theorem (Score:4, Informative)
a) The removal of one candidate from the race would not affect the rank of the others;
b) If everyone prefers candidate A to candidate B then the algorithm should rank A above B;
c) There is no dictator (i.e. no single voter's preferences always determine the outcome).
The same criteria should also apply to a perfect search engine - the removal of one page from the web should not affect the relative ranking of the others, if everyone thinks page A is better than page B, page A should come first and, to be practical, the engine should take as input the priorities of more than just one person (it's not feasible to build a customized search engine that knows exactly the priorities of each individual user).
Therefore, a perfect search algorithm does not exist
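For reference, the standard formal statement being invoked, paraphrased in the usual textbook form:

    Let $A$ be a set of alternatives with $|A| \ge 3$, and let each voter
    $i \in \{1, \dots, n\}$ hold a total order $\succ_i$ on $A$. Arrow's
    theorem: any social welfare function $F(\succ_1, \dots, \succ_n)$
    satisfying (i) unrestricted domain, (ii) Pareto efficiency
    ($a \succ_i b$ for every $i$ implies $a \succ b$ in the output), and
    (iii) independence of irrelevant alternatives must be dictatorial:
    there is a voter $d$ whose ranking $\succ_d$ is reproduced by $F$ on
    every profile.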
Efficient Labour markets (Score:2)
Re: (Score:2)
I had this idea a while ago (Score:2)
An open source search engine would be a good idea, except that the index would have to be hosted somewhere and built somehow.
I'd gladly donate some spare processor cycles, hard drive space, and bandwidth to an open source search engine like a BOINC project.
Re: (Score:2)
If it's along the lines of P2P apps, DHT [wikipedia.org]s etc., this could really work.
Kad [wikipedia.org] already does a pretty good job of searching. Use something like it to point to Internet content, and use swarming for downloads... There's a Firefox extension waiting to happen.
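A rough sketch of how a shared index could map onto a DHT, with invented function names standing in for Kademlia-style store/find_value operations:

    import hashlib

    def dht_key(term):
        # Each search term hashes to a DHT key; whichever nodes are
        # responsible for that key store the posting list for the term.
        return hashlib.sha1(term.lower().encode()).hexdigest()

    def publish(url, terms, store_in_dht):
        # store_in_dht stands in for a Kademlia-style STORE operation.
        for term in terms:
            store_in_dht(dht_key(term), url)

    def search(query_terms, lookup_in_dht):
        # lookup_in_dht stands in for FIND_VALUE; intersect the posting
        # lists fetched from the nodes owning each term's key.
        postings = [set(lookup_in_dht(dht_key(t))) for t in query_terms]
        return set.intersection(*postings) if postings else set()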
Already-existing grassroots google (Score:4, Informative)
Re: (Score:2)
Systems by their nature are always "gamed" (Score:1, Insightful)
As long as people are the engine behind this "pure meritocracy," the system will be gamed. I find the Google results to be good enough that I am not looking for an alternative. Google provides the basis for research. If you want the best deals, you still have to shop around and do the due diligence. If you want
Re:Systems by their nature are always "gamed" (Score:5, Interesting)
Very true. For an example, look no further than the subset of SEO that sees no difference between setting up hundreds of automatically-generated pages linking to a site for the sole purpose of increasing search rankings and hundreds of individual people independently writing about (and linking to) a site. I've actually seen people in the linkfarm business claim that they're not doing anything different from bloggers.
This is basically equivalent to saying that there's no difference between one person writing 10 letters to a politician under assumed names, and 10 people writing their own letters.
fair and un-gamable rankings <> meritocracy (Score:2, Insightful)
The use of a ranking system (even a fair and un-gamable one) is biased against a true meritocracy. If I'm looking for apple pie recipes, I (and likely anyone else looking for apple pie recipes) will pluck one from the top-ranked choices.
This "top-10-cherry-picking" makes it highly unlikely that the possibly-superior newcomers will be seen. You have to be seen in order to be ranked up.
It's only through "outside" mention (blogs, word-of-mouth, etc.) that newcomers have much of a chance of being looked at.
Re:fair and un-gamable rankings <> meritocracy (Score:2)
Which Community? (Score:3, Insightful)
It's also interesting to ask "which community?" There is a small number of categories of things that define some high percentage of the things I search for. I am pretty sure there is a very small intersection of those categories with the categories of things the world's population as a whole searches for. There are also differences based on location and language. In short, my preferences are almost certainly very different from the average of all searchers.
On the other hand, there are definitely groups of searchers whose preferences coincide with mine. For example, people who are involved in open source development, *nix users, computer scientists, environmentalists, English speakers, and people in the Netherlands probably have preferences that largely overlap with mine.
This suggests to me that some sort of machine learning might be used, where the system guesses your search preferences based on what links you have followed in the past, and what links other people have followed in the past. In other words, the system (implicitly) tries to determine which communities you are part of, and gives you results that are preferred by members of these communities.
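One primitive version of that guess, sketched under the assumption that we log the set of links each user has followed (all names here are illustrative):

    def overlap(a, b):
        # Jaccard similarity between two users' sets of followed links.
        return len(a & b) / len(a | b) if a | b else 0.0

    def personalized_score(url, me, others, clicks):
        # Weight each other user's implicit vote for this result by how
        # much their click history overlaps with mine -- a crude guess at
        # "which communities is this searcher part of".
        return sum(overlap(clicks[me], clicks[u])
                   for u in others if url in clicks[u])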
Sounds really like peer ranking .. (Score:1)
I somehow believe that Google is quite aware of this algorithm and has already implemented it.
Reminds me of Indiana Jones... (Score:1)
some huge Samurai with swords in the middle of a marketplace. The Samurai twirls his swords
and delivers one hell of an impressive martial arts show before challenging Jones to attack.
Jones instead just shrugs, draws his colt and shoots the Samurai point blank.
With this analogy in mind, it's easy for me to draw my colt and shoot this long missive down with
one single argument: A Wikipedia-like process for a search
Re: (Score:2)
Offtopic, but (Score:2)
Anyway... I do remember hearing that the scene was an accident. Basically, Harrison Ford had diarrhea at the time. It was actually supposed to be a nice long fight, swords vs Indy's whip, but when you gotta go...
Ah, the useless facts you pick up watching movies... After Morpheus' fully-VR PowerPoint-like talk about the Real World, and Neo gets unplugged and staggers around saying "I don't believe it..." Then Cypher goes "He's gonna pop" and Neo pukes... That was real. Apparently t
Let a million algorithms bloom (Score:3, Interesting)
A true open source search engine would let anyone roll their own algorithm. Each algorithm would be a sort of "plug in."
The index would be the shared, open source part, collaboratively crawled (via PC software or browser plugin) by everyone who elects to participate.
Algorithms would either work on the index after the fact, or, if they need access to the indexing process itself, would be part of a series of plugins run on the full HTML of each page.
The index itself would have an open API, so people could build their own front end search websites.
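To make that concrete, here's a sketch of what a ranking plug-in contract might look like (the interface and the index.lookup call are hypothetical, not from any announced design):

    from abc import ABC, abstractmethod

    class RankingPlugin(ABC):
        # Anyone can contribute one of these; it ranks documents pulled
        # from the shared, collaboratively crawled index.
        @abstractmethod
        def score(self, query, document):
            ...

    def search(index, query, plugin):
        candidates = index.lookup(query)   # hypothetical open-index API call
        return sorted(candidates,
                      key=lambda doc: plugin.score(query, doc),
                      reverse=True)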
Trying to design the right algorithm up front is a premature optimization. I have no interest in helping Jimmy Wales become the next Sergey Brin. But I *would* participate in something that gives _me_ a shot, however distant, at founding the next Google, minus the massive spider farm.
Re: (Score:2)
The real trick is making this truly open in the Freenet kind of way -- no centralized servers at all (other than existing DNS and such).
Think for a moment: Suppose Google allowed anyone to write a plugin of sorts to allow specialized kinds of searches, and extended their API to support any kind of frontend accessing these plugins. So, anyone could use Google's
The Quantum Bookkeepers (Score:2, Interesting)
Jimmy knows it's gonna suck (Score:1)
User-based ranking is patented by IBM (Score:4, Interesting)
Rating by asking random users has been tried. At IBM. See United States Patent 7,080,064, Sundaresan July 18, 2006, "System and method for integrating on-line user ratings of businesses with search engines". Sundaresan has several patents related to schemes for asking users for ratings and using that info to adjust search rankings.
The basic trouble with this approach is that, if you ask random users to rate random sites, they don't have enough time, energy, or effort to do a good job of it. If you ask self-selected users of the sites, the system can be gamed.
This sort of thing only works where the set of things to rate is small compared to the interested user population. So it's great for movies, marginal for restaurants, and poor for websites generally.
Google (Score:2)
Couldn't be more wrong (Score:4, Insightful)
>If we consider costs and benefits across all parties, the two cancel out.
>The world as a whole is not poorer because someone overpaid for hosting.
And thus the broken window fallacy continues...
Wealth is created through increased efficiency. A decrease in efficiency is a decrease in wealth, regardless of who benefits.
By the "world is not poorer" logic, we might as well all ride horses, since we'd be paying oat producers and horseshoe manufacturers instead of the auto industry, so the world as a whole wouldn't be poorer.
Paying more for inefficient hosting takes money away from more efficient uses.
Re: (Score:2)
Wouldn't it be true that the extra money they'd make would be better spent elsewhere, as the market would not bear such a price without the artificial scarcity caused by the law? Isn't that the reason everyone complains about monopolies and lack of competition?
In such a case, the world is poorer, because the money isn't being used as eff
Meritocratic Search Doesn't Make Sense (Score:3, Insightful)
Sure, Wikipedia makes these compromises quite well, but the idea of content-neutral encyclopedia entries provides a well-defined goal. The second that we get to a search engine, we can no longer cling to content neutrality, because we must choose how to rank the advocacy sites on both sides of the spectrum. Unlike Wikipedia, where one can neutrally remark that some people believe X and others Y, in a search engine the community has to decide if "unwanted pregnancy" is going to take someone to the Planned Parenthood site, an abortion clinic or an anti-abortion site.
In short, there is no notion of the meritocratic search order; there are just tradeoffs between different sorts of searchers. Google is already navigating this maze of tradeoffs, including looking at what users like, so I fail to see the argument that a community search will obviously make better tradeoffs than Google.
In fact anyone who has spent much time on the Internet realizes that every community tends to develop its own prejudices and biases pushing away those who disagree and attracting those who agree. Slashdot attracts open source zealots and repels the technically inept. Whatever community develops this search engine will have its own biases which will discourage participation by those who don't agree. This is just human nature.
Likely I might enjoy the results returned by such a search, since I suspect the participants are likely to be technically sophisticated nerds and others who have views similar to mine. However, it seems doubtful that it will return results that people very different from those who run the search engine will appreciate.
Besides, this whole project just smells hokey to me. It sounds like Wales is drunk on his success with Wikipedia and advocating it as THE solution to any problem. Problems are pragmatic things and they shouldn't be solved by ideologies.
Re: (Score:2)
Professional Grade Article - does it contain an abnormal number of jargon/professional terms or have a number of equations beyond a threshold level, tag it as a professional article (Use standard algorithms to determine its popularity and popular authority... leave it up to those who know to determine its accuracy)
Consumer Grade Article - does it contain few if any jargon/professional terms
Broken window fallacy (Score:1)
Erm, yes it is. That difference in price could have been used to produce value. If you believe that the world is in fact not poorer, then you believe that the point of an economy is just to shuffle money around.
See the broken window fallacy: http://en.wikipedia.org/wiki/Broken_window_fallacy [wikipedia.org]
Our answer for search - SiteTruth (Score:4, Insightful)
We hadn't planned to announce this quite yet, but this is a good opportunity.
We have a new answer to search - SiteTruth. [sitetruth.com] It's working, but not yet open to the public.
Other search engines rate businesses based on some measure of popularity - incoming links or user ratings. SiteTruth rates businesses for legitimacy.
What determines legitimacy? The sources anti-fraud investigators tell you to check, but nobody ever does. Corporate registrations. Business licenses. Better Business Bureau reports. The contents of SSL certificates. Business addresses. Business credit ratings. Credit card processors. All that information is available. It's a data-mining problem, and we've solved it. The process is entirely automated.
Most of the phony web sites, doorway pages, and other junk on the web have no identifiable business behind them. Try to find out who really owns them, and you can't. When we can't, we downgrade their ranking. With SiteTruth, you can create all the phony web sites you want, but they'll be nowhere near the beginning of any search result.
Creating a phony company, or stealing the identity of another company, is possible, but it's difficult, expensive and involves committing felonies. Thus, SiteTruth cannot be "gamed" without committing a felony. This weeds out most of the phonies.
SiteTruth only rates "commercial" sites. If you're not selling anything or advertising anything, SiteTruth gives you a neutral or blank rating. If you're engaged in commerce, you can't be anonymous. In many jurisdictions, it's a criminal offense to run a business without disclosing who's behind it. That's the key to SiteTruth.
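Purely as an illustration of the kind of signal aggregation described above (invented signals and weights; this is not SiteTruth's actual algorithm):

    # Invented legitimacy signals and weights; a real system would mine
    # these from corporate registries, SSL certificates, BBB records, etc.
    SIGNALS = {
        "corporate_registration_current": 3,
        "verifiable_street_address": 2,
        "ssl_cert_names_real_entity": 2,
        "bbb_record_in_good_standing": 1,
    }

    def legitimacy_score(site_facts):
        # Sites with no identifiable business behind them get downgraded
        # to the bottom of the results.
        if not site_facts.get("identifiable_owner"):
            return 0
        return sum(weight for signal, weight in SIGNALS.items()
                   if site_facts.get(signal))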
Our tag line: "SiteTruth - Know who you're dealing with."
The site will open to the public in a few months. Meanwhile, we're starting outreach to the search engine optimization community to get them ready for SiteTruth. We want all legitimate sites to get the highest rating to which they're entitled. An expired corporate registration or seal of trust hurts your SiteTruth ranking, so we want to remind people to get their paperwork up to date.
The patent is pending.
Re: (Score:2)
Re: (Score:2)
The crappy front page makes it look like a scam.
To some extent, that page was made to discourage unwanted attention during the early phases. But it's all real.
Scale Issue - This is unlikely to work... (Score:2)
Open Directory (Score:2)
Getting included (Score:2)
summary + while we're wishing, I'd like a pony (Score:2)
"I have a new idea for a search engine. You should be allowed to suggest a modification to the search results. Your modification will be anonymously reviewed, Slashdot-moderation style, by a small, random subset of search engine users. It's nice to learn that the algorithm solves a problem that does not exist with contemporary link-network algorithms, but does with a hypothetical bad idea (the sockpuppetry issue.)"
Now can we talk about the ide
Who defines "merit"? (Score:2)
The big problem with this proposal is that it assumes that there is only one definition of "good". For instance, look at the example of searching for a web hosting firm. Am I interested in the same criteria as you? Yes, cheap is nice, but maybe I want to pay a bit more to survive a slash-dotting. Maybe I want "five nines" reliability. Maybe I want to run CGI scripts written in Haskell instead of PHP or Python. Or maybe I just want to run a generic Wordpress blog. Different firms provide different capabilities
Why are companies more important than people? (Score:2)
Why are companies more important than people?
Why not a page [whitehouse.gov] about former U.S. President Gerald Ford as the 'correct first answer'?
Or a page [wikipedia.org] about actor Glenn Ford as the 'correct first answer'?
Aren't people more important than companies which are nothing more than legal constructs created by people to facilitate commerce amongst themselves?
Anyway, I believe a 'fair' search engine would not use linking to determine popularity
You'd need users to filter out garbage... (Score:2)
Ideally you'd need users to help you fight the constant battle with those trying to game the search results, but users would need some kind of incentive or payment to keep the search engine running smoothly. Maybe you could select random samples of people and pay them to filter out garbage?
One weakness (Score:2)
Re: (Score:2, Insightful)
Won't work. Here's why, in a nutshell: There are huge numbers of sites on the net. There are not huge numbers of sets of people who will be willing to compare sites for relative merit (and there probably aren't even large numbers of such sets who can do so, even if you paid them for the results, which would be a huge cost that would not pay for itself for most types of sites).
Sorry. Only computers can handle a task like this. It is automation or failure.
Re:Hrmmm (Score:4)
What the hell does that have to do with the post you replied to? Stop piggybacking on nonsensical early posts to pump up your karma.
MERITOCRATIC! (Score:4, Funny)
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:1, Offtopic)
Considering every human society has a "privileged" class - call it what you will, aristocracy or otherwise - I would think that it's the only way to HAVE a society.
Re: (Score:2)
P.S. you're ugly too.