Wikipedia and the Politics of Verification

Slashdot regular contributor Bennett Haselton writes "The reports of Sinbad's death become greatly exaggerated. A Wikipedia contributor is unmasked as a fraud, raising questions about why he wasn't called out earlier. NBC airs a piece about how anybody can edit any article on Wikipedia, and errors creep in as a result. (Duh.) But what's most frustrating about all these controversies surrounding Wikipedia is that news reports describe these incidents as if they are a permanent, unsolvable problem with any type of community-built encyclopedia, when in fact there seems to be a straightforward solution." More words follow. Just click the link.

In its simplest form, couldn't a person's academic credentials be verified by sending a confirmation link to their .edu e-mail? (Which could be identified as a faculty address either by a domain name like "faculty.schoolname.edu", or by a Web page in the faculty section of the school's Web site identifying the person's e-mail address?) And then once the user's bona fides have been verified in this or some other way, couldn't they put their seal of approval on any article whose contents need to be considered reliable, or that readers want to cite as an authoritative source? In this way, with only a few minutes of effort and without changing a single word of the article, its value is increased many times -- surely one of the best possible trade-offs in terms of effort versus reward. (As for the question of "What experts would do this?", the answer is, presumably the same people who contribute to sites like Wikipedia currently. If their motives are altruistic in the first place, hopefully they would be willing to take this extra step if they knew it would increase the article's usefulness.)
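The confirmation-link idea above can be sketched in a few lines. This is only an illustrative mock-up, not any real Wikipedia or Citizendium API: the function names, `TOKEN_STORE`, and the example.org URL are all invented, and a real system would also check the school's faculty pages as described.

```python
import re
import secrets

TOKEN_STORE = {}  # token -> email; stands in for a database table

def is_edu_address(email):
    """Crude check that the address ends in .edu; a real system would
    also look for the address on the school's faculty pages."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.edu", email) is not None

def issue_confirmation_link(email):
    """Generate a one-time token and return the link that would be
    mailed to the address."""
    if not is_edu_address(email):
        raise ValueError("not a .edu address")
    token = secrets.token_urlsafe(16)
    TOKEN_STORE[token] = email
    return f"https://example.org/confirm?token={token}"

def confirm(token):
    """Return the now-verified address, or None if the token is bogus
    or has already been used."""
    return TOKEN_STORE.pop(token, None)
```

Popping the token on use makes each link single-use, so a forwarded or intercepted link can't be replayed after the owner has confirmed.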

Something like this model is planned by the operators of Citizendium.org, a Wikipedia alternative (I balk at using the word "rival" although it is inevitable that people will see them that way). The last time I wrote about Citizendium, some thought it sounded like such a valentine to the project that they wondered if I was a shill; actually, sometimes a project just comes along that aligns almost exactly with what I would have done if I could have re-done a popular project like Wikipedia with a few design changes, and when that happens, I just say so. Some others may have wondered if I was sucking up for a board position or something. No, that would be, like, work. But I think they have some good ideas that will make them a more useful alternative in some cases, unless Wikipedia copies back some of their ideas in order to serve both needs at once, which would also be a good thing.

Consider the two major issues on which Citizendium is planning to take a different approach from Wikipedia: (1) user verification, and (2) putting published articles into an "approved" state under the stewardship of a credentialed editor, who has to sign off on any future changes to the article. The issue of user verification can be further divided into two sub-issues: (a) verifying users for the purpose of ascertaining their credentials, and (b) verifying users for the purpose of limiting the amount of vandalism committed by new users under pseudonyms. (While editorial control on Citizendium means that it is not possible to vandalize the public-facing version of an article after it has gone into an "approved" state, users can still vandalize an article while it is a "work in progress" being built up towards the first milestone where it can be approved. Citizendium founder Larry Sanger says that such vandals are surprisingly, pathetically motivated even though their work is only seen by a small audience.)

On the first issue, verifying user credentials, I think verification of .edu addresses especially would be a cheap and easy way to increase the value of every article that user writes or signs off on. I don't think, however, it's necessary to go as far as Citizendium is currently planning, by requiring real names and biographies of all users. My thinking is that if an article is synthesized by 100 monkeys with typewriters but the finished product is given the blessing of a credentialed professor of physics, it's pretty much just as reliable as if the professor had written it themselves. And if the same article gets the blessing of multiple credentialed experts, it could justifiably be considered more reliable than many printed sources written by a single author. The point is that the credentials that matter are those of the people who stake their reputation on the accuracy of the article, not necessarily those of the people who contribute to it. So on this front, I think that while Wikipedia asks too little of users' backgrounds, Citizendium's current plan would ask too much, because as long as you have the credentials of one person who has signed off on an article, collecting non-verifiable bios of the article's other contributors doesn't actually gain anything.

The other side of verifying credentials is the use of credentials to prevent vandalism. In this situation it's not necessary to verify that the user actually is who they say they are; the system only needs to ensure that the same user is not signing up over and over again after previous accounts get banned for abuse. (You could ban users by IP address, but tools like Tor make it easy for users to connect from what appears to be a different IP address every time.) A blog post from Citizendium founder Larry Sanger lists three possible approaches instead: (a) requiring existing user X to vouch for new user Z before Z can join; (b) requiring new user Z to provide a link to a "credible" Web page establishing their identity; or (c) requiring new user Z to provide a link to a "credible" Web page of some person X who can vouch for Z's identity. I don't know how quickly a system could grow by referrals only -- after all, I was surprised that GMail took off so quickly during the period when you could only join with an "invite" from an existing user. Then again, GMail was giving away something for free that almost everyone could use, so most people who wanted it would find themselves closely linked to someone else who had it. Citizendium, on the other hand, asks not what they can do for you but what you can do for them, and so might not achieve enough penetration to spread by referrals only.

I suggested that one alternative would be to send a postcard to each new user's physical address with a unique six-digit number, which they would have to enter to complete their registration, verifying that new users really were unique. The problem here, apart from the privacy concerns, is the delay users would incur before their registration was complete, which would take away the "instant gratification" of starting to contribute right away. (You could let users edit before their address is verified, but that would just enable the same person to keep re-creating new accounts with unique but fake addresses, and use them to commit vandalism before the account was found out.)
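The postcard scheme is mechanically simple; here is a minimal sketch, with all class and function names invented for illustration:

```python
import secrets

def new_postcard_code():
    """Six-digit code, zero-padded, drawn from a CSPRNG."""
    return f"{secrets.randbelow(1_000_000):06d}"

class PendingRegistration:
    """Tracks one mailed postcard and whether its code has been entered."""

    def __init__(self, address):
        self.address = address
        self.code = new_postcard_code()  # printed on the mailed card
        self.verified = False

    def try_code(self, entered):
        # constant-time comparison avoids leaking digits via timing
        if secrets.compare_digest(entered, self.code):
            self.verified = True
        return self.verified
```

A production version would also rate-limit guesses and expire codes, since a six-digit space is small enough to brute-force without those protections.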

Another idea would be that for new users, their first, say, three edits would go into a queue to be reviewed by verified users; once those first three edits have been approved, the user is able to make edits in real time. (Since anybody would be able to review a new user's edits to make sure they were not spam, the edits could be reviewed very quickly: any Citizendium volunteer who was online could check the latest entries in the edit queue and approve them.) It's true that a user could game this system by, for example, submitting three minor improvements and then using their unblocked account to vandalize articles while they're being worked on. Even in this case, though, the "vandal" would probably end up making a net positive contribution to the site, because of the three small improvements they'd already made. If it would cost a legitimate Citizendium volunteer more effort to make those three small improvements than it takes to ban the new user and revert their destructive changes once they're caught committing vandalism (and the latter wouldn't take much effort at all), then Citizendium has actually gotten a good deal out of the "vandal"! (To make this work, a user's first contributions could not be "neutral" changes like replacing one word with a synonym; they would have to be actual improvements, even small ones, ensuring that the net effect of a potential "vandal" is positive.) There may be other possible solutions; these are just alternatives in case the model of referral by trusted users turns out not to work.
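The probation-queue idea above could look something like the following sketch. Everything here is hypothetical (the class, the threshold constant, the return strings); it only illustrates the flow: queue the first three edits, then publish immediately.

```python
from collections import deque

PROBATION_EDITS = 3  # assumed threshold from the text: "say, three edits"

class NewUser:
    def __init__(self, name):
        self.name = name
        self.approved_count = 0
        self.queue = deque()

    @property
    def trusted(self):
        return self.approved_count >= PROBATION_EDITS

    def submit_edit(self, edit):
        """Trusted users publish immediately; others wait for review."""
        if self.trusted:
            return "published"
        self.queue.append(edit)
        return "queued"

    def approve_next(self):
        """Called by any online volunteer reviewing the queue."""
        edit = self.queue.popleft()
        self.approved_count += 1
        return edit
```

A `reject_next` counterpart that bans the account instead would complete the loop described in the paragraph above.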

Now switching to the other side of the reliability issue: whether the default article displayed to the public for a given topic should be the latest "stable" version approved by credentialed users, or the very latest version incorporating all edits submitted by any user whatsoever. Having talked with members of the Citizendium and Wikipedia communities in their respective forums, there appear to be three schools of thought on the article-stability issue. The first is that the whole idea of putting articles into an "approved" state and moderating all changes going forward goes against the "spirit" of wikis in general and Wikipedia in particular. The second, suggested on the Wikipedia discussion list by Sheldon Rampton, is that it would be a useful feature if credentialed users could "sign off" on the accuracy of certain past versions in the page history; the page displayed by default would be the bleeding-edge latest one (with all of the possible vandalism and inaccuracies that entails), but users who wanted a reliable, citable source could look in the history. The third school of thought is that reliability is so valuable that the default page displayed to the public, carrying the stamp of the project, should be the latest version approved by credentialed editors -- the model that Citizendium currently has in mind.

I'm not really partial to the first view, since I think the success of the project should be measured by how well it achieves its goals (however you define those goals) and not by whether it kept to its original "spirit". Since Wikipedia has far more readers than contributors, if your motivations for contributing to or maintaining Wikipedia are at all oriented towards doing good for other people, presumably meeting the needs of readers is more important than keeping the party going for contributors (provided, of course, that the environment for contributors is at least pleasant enough to keep them contributing). The choice between the second and third points of view is more interesting. There's no obvious best-of-both-worlds choice here, because what motivates many contributors (the fact that their changes go live to the entire world, right away) is also what motivates vandals.

On the other hand, the problem doesn't sound unsolvable. You could go with the Citizendium model of editor-approved changes but create a prioritized system for "urgent" updates, for changes made to an article to incorporate current events. Suppose users (who have been verified using one or more of the methods above) are each issued a certain number of "credits" that they can use to mark a proposed update as an urgent, breaking change. (Misusing these credits to mark changes as "urgent" when they really aren't would be considered abuse tantamount to spamming or vandalism.) Then let's say, for example, Anna Nicole Smith dies. A user could submit this change to the Anna Nicole Smith article, along with a link to a reliable news source (e.g. a wire-service story) and a credit marking the change as "urgent". Since an editor would not need any particular expertise to view the article and verify that the change was accurate, any editor could review the "urgent request queue" and approve that particular change for publication, ensuring that the queue was checked frequently throughout the day and urgent updates got pushed through quickly. Thus the site could keep pace with breaking current events without the kind of inaccuracies that plagued Kenneth Lay's Wikipedia entry when he died.
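The credit scheme above amounts to a two-priority review queue. Here is a minimal sketch under that assumption; the class names, the credit budget of three, and the `ValueError` behavior are all invented for illustration:

```python
import heapq

class User:
    def __init__(self, credits=3):  # assumed small per-user budget
        self.credits = credits

class UpdateQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps FIFO order within a priority

    def submit(self, user, change, source_url, urgent=False):
        if urgent:
            if user.credits <= 0:
                raise ValueError("no urgency credits left")
            user.credits -= 1  # misuse would be treated like vandalism
        priority = 0 if urgent else 1  # urgent items sort first
        heapq.heappush(self._heap, (priority, self._counter, change, source_url))
        self._counter += 1

    def next_for_review(self):
        """Editors pull the highest-priority pending change."""
        _, _, change, source_url = heapq.heappop(self._heap)
        return change, source_url
```

Carrying the `source_url` alongside the change mirrors the requirement above that an urgent update cite a reliable news source the reviewing editor can check.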

So there's a trade-off between displaying all the latest changes by default, which motivates people to contribute but runs the risk of vandalism, and displaying only the latest editor-approved page. Where I don't see a trade-off is in the option of simply having an editor-approved version of a given page -- whether it's displayed by default, or only stored in the version history where people can look for it. To me, both of these steps seem to consist of pure gain for relatively little effort:

  1. Verify credentials of academic professionals by poking their .edu address.
  2. Allow them to give their "blessing" to certain versions of a page in the page history, so that users can rely on those specific page versions and even cite them as sources where appropriate.
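The second step above needs little more than a sign-off table keyed by page and revision. The following sketch is hypothetical (the data model and function names are invented), but it shows how a reader could ask for the newest expert-blessed revision:

```python
from collections import defaultdict

# (page, revision_id) -> list of verified experts who signed off on it
signoffs = defaultdict(list)

def sign_off(expert, page, revision_id):
    """Record that a credential-verified expert blesses this revision."""
    signoffs[(page, revision_id)].append(expert)

def latest_blessed_revision(page, revision_ids):
    """Given revision ids in chronological order, return the newest one
    that at least one verified expert has signed off on, else None."""
    for rev in reversed(revision_ids):
        if signoffs.get((page, rev)):
            return rev
    return None
```

Because sign-offs attach to immutable past revisions rather than to the live page, later vandalism can never retroactively taint the citable version.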

So I hope that Citizendium will help bring more prominence to the idea, and that something similar might get incorporated back into Wikipedia. The approval of an identity-verified expert can improve an article's value so much, for such comparatively little extra effort, that it makes no sense not to have that option.

This discussion has been archived. No new comments can be posted.

  • by writertype ( 541679 ) on Wednesday March 28, 2007 @12:19PM (#18516671)
    A bit of an irony here, since we're talking about authorities on a particular subject. My quick Google search [google.com] turns up Haselton as a twenty-something computer programmer who runs the Peacefire [peacefire.org] web site about filtering software and how to circumvent it.

    That's all fine and good, but I'm not seeing how that qualifies him for an editorial on the Slashdot site. Or is he just a friend of an editor?

    Not trying to troll here, honestly. I'm just curious why he was given the soap box to stand upon.

  • by Peter Trepan ( 572016 ) on Wednesday March 28, 2007 @12:36PM (#18516879)

    3) "Underground experts" such as black-hat security experts value their anonymity greatly.

    Is it possible to provide an anonymous identity that's certified to belong to only one person, so that people could build reputation under a pseudonym? It could be used by the black hats, and more importantly, by whistleblowers and political dissidents.

    If anyone wants to do that, I hereby put this super-cool idea into the public domain.
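A naive version of the parent's idea can be sketched as follows: a certifying authority verifies a real identity once, then derives one stable pseudonym from it, so reputation can accrue without the public ever seeing the identity. The `AUTHORITY_KEY` and function name are stand-ins; note that in this simple version the authority itself can still link pseudonym to person, whereas schemes built on blind signatures remove even that link.

```python
import hashlib
import hmac

# Held only by the certifying authority; anyone without it cannot
# compute or reverse the identity -> pseudonym mapping.
AUTHORITY_KEY = b"secret-key-held-only-by-the-certifier"

def pseudonym_for(real_identity):
    """Deterministic: the same verified person always gets the same
    pseudonym, so one person cannot mint many identities."""
    digest = hmac.new(AUTHORITY_KEY, real_identity.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

Determinism is the point: a banned black hat, whistleblower, or dissident can't just re-verify and come back under a fresh name, yet nothing in the pseudonym reveals who they are.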

  • by Frosty Piss ( 770223 ) on Wednesday March 28, 2007 @12:44PM (#18516991)

    Bias. Slander/Libel. Misrepresentation of academic credentials...and other malfeasances. These are unique to Wikipedia? Hardly. These very things happen in the mainstream mass media from outlets we all know -- for example, the New York Times. Have they forgotten about Jayson Blair?

    The fact that something occurs in some other forum does not really have any relevance to Wikipedia's problems, especially as an excuse.

    The main problem here is not that Wikipedia has these problems, but that it gets so much undue focus: is it a problem or a strength?

    My issue with Wikipedia is that as much as people bleat on and on about how it's not supposed to be an authoritative source, they also tend to wink when it's used as one, and fanatically defend its value as if it were one.

  • by ZachPruckowski ( 918562 ) <zachary.pruckowski@gmail.com> on Wednesday March 28, 2007 @12:49PM (#18517057)
    I'll address your first and third issues, then your second.

    On your first point, having a .edu address is not the be-all-end-all of being an expert. Experts can verify themselves by other methods, and other proofs can be required beyond having a .edu address. There's nothing that says you have to be a Ph.D. in a subject. And even if you're not an editor, you can still bring a lot to the table and help with the article. The editors might offer guidance and are ultimately involved in approving the article, but that doesn't mean that they are the only ones who can write the article.

    In your second point, you point out that there are some articles where this method doesn't help much. Maybe so. But because we have approved and unapproved articles side-by-side, we can cover those topics in a similar manner to WP, and still approve articles we can approve. We're currently trying to think up methods to get pop culture articles approved. If you have ideas on the subject, join and help us.

    Zach Pruckowski - Citizendium Executive Committee
  • Re:ok I'll bite (Score:3, Interesting)

    by JohnnyComeLately ( 725958 ) on Wednesday March 28, 2007 @12:56PM (#18517147) Homepage Journal
    I agree. When I worked at Sprint, my Director was an EE/ME major with a Ph.D. Although he was very astute about certain concepts and academia, he was not able to see applications and long-term implications when using that technology/concept. You could have asked the Ph.D. in electrical theory how a 3G data network operated, and some of the reply would be wrong or misleading (because it would include correct observations based upon incorrect assumptions).

    Some of the brightest people don't have a lick of schooling beyond high school. I have an MBA and am about to finish an MS in Software Engineering, but I always keep in mind a stat I read in a business magazine about a decade ago: Most Fortune 500 companies are made successful by leaders who have only a Master of Arts degree (not business). Getting an advanced degree means you have the ability to research and reach conclusions. The credentials mean you've hit milestones in learning. However, these aren't mutually inclusive. You can be very adept and logical and not have a degree. You can have a degree and yet be, practically, dumb as a rock.

    I applaud the discussion and its intent. I would just look somewhere besides credentials to lend credibility to information, IMHO.

  • easy enough today (Score:2, Interesting)

    by davidwr ( 791652 ) on Wednesday March 28, 2007 @12:59PM (#18517175) Homepage Journal
    Say I want to be "davidwr" on Wikipedia. Oh wait, I am.

    I can put a note [wikipedia.org] in Wikipedia saying I am "davidwr" on Slashdot and a note on Slashdot saying I am davidwr on Wikipedia.

    Since the names are password-protected, you have all the proof you need that the same person that made these edits [wikipedia.org] also made these posts [slashdot.org].

    For email and non-password-protected web-postings, PGP and the like work quite nicely.
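The cross-linking trick davidwr describes is easy to check mechanically: each password-protected page claims a handle on the other site, and mutual claims are evidence the same person controls both accounts. A toy sketch, with the claim format and page contents invented for illustration (nothing is fetched from the real sites):

```python
import re

# Assumed claim format: "I am <handle> on <site>"
CLAIM = re.compile(r"I am (\S+) on (\S+)")

def mutual_claim(page_a, site_a, page_b, site_b):
    """True if the page on site_a claims a handle on site_b and the
    page on site_b claims one back on site_a."""
    a = CLAIM.search(page_a)
    b = CLAIM.search(page_b)
    return bool(a and b and a.group(2) == site_b and b.group(2) == site_a)
```

This proves only that the two accounts are linked, not who the person is -- which is exactly the distinction the article draws between anti-vandalism identity and credential verification.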
  • by jonbritton ( 950482 ) on Wednesday March 28, 2007 @01:15PM (#18517399) Journal
    I'm unsure of what credentials are required to get a .edu domain

    I believe it's as simple as, "you must be a regionally accredited four-year university," where "regional accreditation" comes from one of the organized bodies in the US for evaluating academic criteria and standards.

    So, yes, the .edu solution is perfect. Stephen Hawking can't comment on Physics or Cosmology, because what does he know? He's British. Want Jean-Paul Sartre to rise from the dead and comment on existentialism (how's that for a mindfuck?)? Sorry, he's not an American in the (flawless) American education system. Marx on Marxism? Engineers at Boeing (whose employer doesn't cooperate) on aeronautics and rocketry? Me on just about any subject, including Futurama quotes?

    Sorry. We're boned.
  • Thick skin is a good trait to have on /. You may not have seen it on that Mondragon page, but do you know what the workers are complaining about? The move from a 10x cap to a 50x cap. The richest in that society can't make more than 50 times what the poorest make. Now that, I can live with. Heck, I can go up to 100x. That was, coincidentally, the cap in ancient Athens during the birth of democracy. Nobody is worth millions of times what someone else is worth, and no one has the right to say they are and force the rest of us to buy into it.

    Given that human history is full of examples of people using money and power to keep themselves in money and power while keeping others out, I'd say the burden of proof is on you to show why that DOESN'T happen in a capitalist society. I assume you are familiar with the notion of feedback loops? Wealth is a positive feedback loop, it creates more opportunities for the accumulation of wealth. Poverty is a negative feedback loop, it makes it easier for others to take advantage of you and take your wealth because you have fewer options. These are systemic faults in the system.
