A number of articles have appeared in the online press about Munich. Half of them are just rehashes of press releases - nothing very useful there. Some of them are fairly in-depth (we think CNET and the NY Times had the best coverage), but none of them really give you the big picture. We're going to try to. Let us know how we do.
The first thing the press is missing is that there are (well, were) two meetings in Munich, not one. The first is the one you heard about: a meeting called by the Bertelsmann Foundation, part of the huge Bertelsmann publishing empire, which sponsored the Internet Content Summit. They're getting together to have a little feel-good session about "self-regulation" of internet content. By self-regulation they don't mean that end-users regulate their own behavior; they mean that ISPs regulate users instead of government doing so directly. Users will still be regulated, of course, and the regulation will be driven by what the national government wants. It's just that the government will lay its heavy hand upon the ISPs, and the ISPs will act as the enforcers rather than law enforcement. Think of it as a distributed system - government assumes the role of a second-line rather than first-line manager. At a previous internet content summit, this type of regulation was described as "soft law" versus "hard law", and we think that's a good way to think about it. They are not talking about voluntary, individual actions of corporations - they are talking about imposing laws and restraints on the citizenry through another means. Self-regulation = soft law, but law nonetheless.
The first meeting is interesting for a number of reasons, but not terribly ominous - the people meeting were not previously working together, and all that will come out of it is thoughts and ideas. The second meeting is rather more dangerous.
The second meeting, scheduled in conjunction with the first, was of the principals of INCORE, Internet Content Rating for Europe. This group consists of a number of European corporations and protect-the-children groups; its sole goal is to establish a single rating system for use across Europe (it's also coordinating with Australia). Of course, the membership of this group overlaps significantly with the first meeting's - for example, Jens Waltermann, director of the Bertelsmann Foundation and sponsor of the first meeting, is also one of the prime movers in INCORE - which ought to tell you why the Bertelsmann conference is so slanted towards rating systems as the sole means of protecting the children.
But why is this going forward? As at least one slashdot poster pointed out in the discussions of last week's article, rating systems have been discussed before, and haven't come to anything yet.
What happened is that the government (the European Commission, in this case) decided to get serious. It buckled down and, at the end of 1998, allocated funds for the development of a global rating system - about $11 million - so the corporate participants can be reasonably assured of being reimbursed for all their plane fares and hotel costs. (Question: if it's so voluntary, how come the government is paying people to develop it?)
The European Commission's plan runs from January 1999 to December 2002, four years. 1999 is scheduled for development and meetings. 2000 is scheduled for rollout and beta testing. 2001 and 2002 are allocated for the encouragement process and tweaking - making sure everyone is toeing the line. There's plenty of time allocated because it's important to make sure that the resulting rating system aligns with national laws - for instance, since Germany outlaws hate speech, one of the rating categories will involve hate speech, and Germany will outlaw the transmission of any content rated in this category into the country. Laws can be "hung" off the rating categories, if they're set up properly.
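The "hanging laws off rating categories" mechanism can be sketched in a few lines. This is purely illustrative - the category names and country rules below are invented, since the plan's final vocabulary didn't exist yet - but it shows why a shared vocabulary makes per-country enforcement trivial:

```python
# Hypothetical sketch: national laws attach to shared rating
# categories. Category names and country rules are invented for
# illustration only.
BANNED_CATEGORIES = {
    "Germany": {"hate_speech"},          # hate speech is illegal there
    "Australia": {"explicit_sex", "hate_speech"},
}

def transmission_allowed(content_labels, country):
    """A labeled page is blocked if any label is banned locally."""
    banned = BANNED_CATEGORIES.get(country, set())
    return not (set(content_labels) & banned)
```

Once everything is labeled, a government doesn't need to inspect any content - it just names the categories it has outlawed.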
The rating system will be based on the American Recreational Software Advisory Council's system, which it originally developed for video games and then, when threatened by Congress with the CDA, retooled for internet content. (The funny thing is, for the first year that RSACi was being promoted for use on webpages, it still had all the original references to video games. Pretty sad.) RSAC was recently folded into the Internet Content Rating Association, basically so it could revamp the RSACi system and submit it to the European Commission for approval and funding. Who is the chairman of ICRA's board of directors? Jens Waltermann again. Are you beginning to see a pattern?
Civil liberties groups world-wide have finally recognized the threat that government-mandated rating systems pose to the internet. The ACLU was the first major group to speak out against them, in their 1997 paper Fahrenheit 451.2: Is Cyberspace Burning?. But for this Munich conference, the chorus was loud and close to unanimous - the Global Internet Liberty Coalition condemned it, the ACLU condemned it, Electronic Frontiers Australia condemned it, Internet Freedom (UK civil liberties group) condemned it.
Several civil liberties groups managed to wrangle themselves invitations to the conference. The Electronic Privacy Information Center is attending and distributing a book free of charge to all participants (besides the attack on free speech, EPIC is irritated because the European Commission has also recommended that online anonymity be strictly prohibited for all European Union residents - after all, if they're anonymous, it's harder to make them obey the law). Nadine Strossen of the ACLU published the statement she's making to the Conference, harshly opposing the labeling requirements; even Esther Dyson, a tremendous supporter of rating systems, expressed her unease at the slant of the conference.
Strossen's comments above neatly summarize the civil liberties community's objections to so-called self-rating systems, and we urge all readers to take a look at that link above. She makes several points:
- Self-Rating Schemes Will Cause Controversial Speech To Be Censored
- Self-Rating Is Burdensome, Unwieldy, and Costly
- Conversation Can't Be Rated
- Self-Ratings Will Only Encourage, Not Prevent, Government Regulation
- Self-Ratings Schemes Will Turn the Internet into a Homogenized Medium Dominated by Commercial Speakers
Strossen is far more eloquent than we are, and she makes the points extremely well. Take a look, it's worth your time.
But back to the conference. The main document to come out of the conference is their Memorandum on Self-Regulation (538K), released yesterday. A number of "internet experts" contributed to the report - mostly the same people we've been seeing: representatives of the companies that want the Net to be kid-friendly (increase profits!), protect-the-children groups from throughout Europe, and representatives from various governmental agencies. They lay out their censorship proposal in some detail. The basics are laid out in a single phrase: "Content providers worldwide must be mobilized to label their content...".
Prepare to get mobilized.
"It is in the best interest of industry," they say, to take the steps necessary to "enhance consumer confidence" and meet "business objectives." The suits invited must all have nodded their heads to this one: if only they could get the obnoxious people off the net, then all the soccer moms and grandpas would feel safe enough to fire up a browser and finally type in their credit card numbers.
So, problem: naughty stuff on the net. Answer? Open source! <spit>
On p. 59 of the 60-page memo is a neat diagram that looks almost like an API to a multi-layer code library. Except in this case, the bottom slice is the underlying technology of censorship (PICS), and the top slice is the user's experience of censorship (at the browser).
Sitting on top of PICS is Layer 1, in which the content creators - that's you, me, and everyone else who makes anything public on the internet - label our data with a "basic vocabulary" of keywords. If we write porn, we call it porn. Simple enough so far?
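For the curious, Layer 1 self-labeling already had a concrete form under PICS: a label embedded in an HTML meta tag. Here's a small sketch that emits one using the existing RSACi vocabulary (n=nudity, s=sex, v=violence, l=language, each rated 0-4); the particular ratings are arbitrary examples, not anything the memo prescribes:

```python
# Sketch of a Layer 1 self-label: a PICS-1.1 label for the RSACi
# rating service, embedded in an HTML meta tag. The rating values
# passed in are arbitrary examples.
def rsaci_meta_tag(nudity=0, sex=0, violence=0, language=0):
    label = ('(PICS-1.1 "http://www.rsac.org/ratingsv01.html" l r '
             f'(n {nudity} s {sex} v {violence} l {language}))')
    return f"<meta http-equiv=\"PICS-Label\" content='{label}'>"
```

Every author slaps one of these on every page, and the machinery downstream takes it from there.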
Next comes Layer 2, which is where the fun stuff starts to happen. Here, third parties can invent "template profiles." These combine the keywords in interesting ways. The idea is that in one country, the ratings systems will typically rate porn as bad but violence as OK; in another, perhaps the opposite; someone else will invent a profile for use in schools that blocks everything noneducational; a profile for your company's router might block all sports but let profanity through; a national profile for Australia might block all sex but let stupid political grandstanding through; and so on.
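A template profile, then, is nothing more than a different combination of the same Layer 1 keywords. A minimal sketch, with invented profile names and keyword strings standing in for whatever the final vocabulary would be:

```python
# Sketch of Layer 2 "template profiles": each profile is just a set
# of Layer 1 keywords to block. All names here are invented for
# illustration.
PROFILES = {
    "country_a": {"porn"},                            # porn bad, violence OK
    "country_b": {"violence"},                        # the opposite
    "school": {"porn", "violence", "profanity", "noneducational"},
    "company_router": {"sports"},                     # profanity gets through
}

def blocked_by(profile_name, content_keywords):
    """Block if the content's labels intersect the profile's set."""
    return bool(PROFILES[profile_name] & set(content_keywords))
```

The same labeled page sails through one profile and is blocked by the next - which is the whole point.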
These template profiles should be, according to Bertelsmann, "open source."
How are they going to do this? They can't rely on a NetNanny or a SurfWatch to rate the net: censorware has been a dismal failure in practice. The software just doesn't work, because there's too much of the net and too few censorware employees to evaluate it all.
What they need instead is for you, the author, to do their work for them. Remember that "basic vocabulary" of keywords? It turns out you're not just going to pick porn vs. non-porn. Oh no. After all, you have to provide enough information for the profiles to work with.
That means you're going to be rating everything you publish according to:
"e.g.: gratuitous violence,
explicit sexual acts,
extreme hate speech,
death to humans,
non-explicit sexual acts, ..."
E.g.? E.g.!? There's more?
Well, there has to be more. In fact, Bertelsmann has only scratched the surface. In order for there to be enough "template profiles" to be worth mentioning, the variety of keywords has to be extreme.
Be ready to run down a checklist for everything you write and decide whether it contains gratuitous or non-gratuitous violence, explicit or non-explicit sex acts. Please rate from 1 to 10 how much art and history was in that last post of yours. Don't think you'll have a choice about doing it - your ISP will be enforcing it upon you, as a condition of service.
And the "template profiles" that are provided for the end user? These profiles are just simple sets that group the predefined keywords together. If I'm the CEO of NetSitterPatrol, I group keywords 1, 3, 5, and 12 together and call it "NetSitterPatrol Profile."
And if I'm a national government that's cracking down on porn, violence, hate speech, or vulgar language (your government wouldn't do anything like that, would it?), I'll just add the keywords for indecency, abortion information, hate speech, racism, or whatever else I want to censor, and give the list to the backbone providers in my country to filter out and protect the delicate citizens. Hey look, I'm an open source programmer!
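The endgame described above fits in a dozen lines. A hypothetical sketch of the backbone side - the profile contents, URLs, and labels below are all invented for illustration:

```python
# Sketch of backbone-level filtering: a government hands its profile
# (a set of keywords) to the backbone providers, which silently drop
# any labeled content matching it. Everything here is hypothetical.
GOVERNMENT_PROFILE = {"indecency", "abortion_information", "hate_speech"}

def backbone_filter(pages, profile=GOVERNMENT_PROFILE):
    """Yield only the pages whose self-applied labels miss the profile."""
    for url, labels in pages:
        if not (set(labels) & profile):
            yield url

traffic = [
    ("http://example.org/news", ["politics"]),
    ("http://example.org/clinic", ["abortion_information"]),
]
```

No court orders, no takedown notices - the pages that match the profile simply never arrive.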
by Michael Sims and Jamie McCarthy