User Journal

Journal Journal: Godwin's Law - "law" or cop-out? 24

I have always been uncomfortable with Godwin's Law. For those unfamiliar with it, it states that "As a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one." Many people infer from this that whoever, during a debate, makes a comparison with Hitler or Nazis, loses the argument automatically.

It wasn't until a recent email conversation with Cory Doctorow, started by a /. comment of mine, that I was forced to introspect and find the reason for my discomfort.

Here, I outline a fictional debate between Cory and me, using various extracts from our conversations and comments, which I hope gives a fair indication of Cory's viewpoint:

Me: As an Irish citizen living in the US - I have decided that it is time to leave this country - it is starting to look, smell, and act as Germany did during the 1930s.

Cory: It's a shame that [you] violated Godwin's Law, as it gives those who would distract us from the real issue here a handy red herring to toss into the fray, i.e., pointless arguments about the appropriateness of a comparison to Nazi Germany.

Me: I think the comparison with *1930s* Germany is apt, although a comparison with 1940s Germany would not be; after all, you can't invoke Godwin's Law when the conversation *really is* about Nazi Germany ;-)

Cory: The point for me of G's law is not its aptness -- I happen to agree that it is an apt analogy, and I speak as someone who lost a significant fraction of his family in the death camps.

The point of G's law is that comparisons to Nazi Germany immediately end all discussion about the subject at hand and instead divert the whole debate to an argument about the aptness of the comparison.

Me: In some cases, however, a discussion about the aptness of the comparison is actually useful, and gets to the core of the issue.

Cory: My point is that Doctorow's Corollary To Godwin's Law is that anyone who wishes to be an effective rhetorician should completely expunge the notion of Nazi comparisons from his bag of tricks, because it creates a vulnerability to an attack that is otherwise neutralized ("My opponent is of such poor judgement and callous insensitivity that he believes it's appropriate to make comparisons to Nazi Germany!").

Me: Well, I am not so sure I agree with you there. *If* a comparison to Nazi Germany is pertinent, then an effective rhetorician will be sufficiently skilled to counter this kind of ad hominem attack. They say that those who forget history are doomed to repeat it, and what more important lesson is there for society than the events in Germany during the Nazi period?

Refusing to use such an important lesson of history in debate for fear of exposure to fallacious arguments seems like an unfortunate surrender of a powerful tool for those who wish to fight against fascism. For this reason - I have never been entirely comfortable with Godwin's Law.

Unfortunately this is where the debate must end as I still await Cory's response to my last comment.

I would be curious to hear some third-party opinions on this, since Godwin's Law is one of the Internet debate doctrines that never rang true for me.

Anyway, bottom line is that I now propose:

Clarke's Law: Anyone who invokes Godwin's Law in an argument automatically loses the meta-argument.

User Journal

Journal Journal: Unified configuration mechanism 11

One of the biggest problems with Linux (and I am certainly not the first to observe this) is the vast number of configuration files, many with completely different - and often nonsensical - layouts, most of which must be edited manually. It is a mess.

I would propose that someone come up with a unified Linux daemon which handles all configuration information, keeping it in a well-ordered data structure, perhaps based on XML. The idea would not be dissimilar to the Windows registry, although it could incorporate a number of features to make it even better:

  • Security
    Different parts of the configuration tree could be given read and write security permissions on a per-user basis
  • Backwards compatibility
    Through the use of pipes, older software which doesn't directly support the configuration mechanism can read its configs from a file that is actually a pipe to the config daemon
  • Network support
    Access to the config daemon could be handled over a network, or the local config daemon could be configured to "fall back" to a remote daemon - allowing centralized configuration for software, but still letting the user modify user-specific stuff
  • Cross-configuration
    Often - software needs to base its settings on the settings of another piece of software in the system. This approach would make it easy for one piece of software to check the configuration of some other software.

I think such a mechanism would be one of several necessary steps toward creating a new environment, built around the Linux kernel, that comes closer to the kind of unified, integrated approach we see in OS X.
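To make the idea a little more concrete, here is a minimal sketch in Java of the kind of permission-checked configuration tree such a daemon might keep in memory. The class and method names (ConfigTree, grant, set, get) are purely hypothetical, and XML persistence, the pipe-based compatibility layer, and network fallback are all omitted:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a tree of configuration values with per-user
// read/write permissions, roughly as proposed above. Persistence (e.g. to
// XML), the pipe-based compatibility layer, and network fallback are omitted.
public class ConfigTree {

    public enum Access { READ, WRITE }

    private final Map<String, ConfigTree> children = new HashMap<>();
    private final Map<String, Access> permissions = new HashMap<>();
    private String value;

    // Grant a user read or write access to this subtree.
    public void grant(String user, Access access) {
        permissions.put(user, access);
    }

    // Set a value at a slash-separated path, e.g. "network/httpd/port".
    // Permissions are checked on the subtree this call is made against.
    public void set(String user, String path, String newValue) {
        if (permissions.get(user) != Access.WRITE) {
            throw new SecurityException(user + " may not write " + path);
        }
        resolve(path, true).value = newValue;
    }

    // Read a value; any user granted READ or WRITE on this subtree may do so.
    public String get(String user, String path) {
        if (!permissions.containsKey(user)) {
            throw new SecurityException(user + " may not read " + path);
        }
        ConfigTree node = resolve(path, false);
        return node == null ? null : node.value;
    }

    private ConfigTree resolve(String path, boolean create) {
        ConfigTree node = this;
        for (String part : path.split("/")) {
            ConfigTree child = node.children.get(part);
            if (child == null) {
                if (!create) return null;
                child = new ConfigTree();
                node.children.put(part, child);
            }
            node = child;
        }
        return node;
    }

    public static void main(String[] args) {
        ConfigTree root = new ConfigTree();
        root.grant("root", Access.WRITE);
        root.grant("alice", Access.READ);
        root.set("root", "network/httpd/port", "8080");
        System.out.println(root.get("alice", "network/httpd/port")); // prints 8080
    }
}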

User Journal

Journal Journal: A WhittleBit of extra intelligence 2

After a long delay I finally have a reasonably reliable implementation of my "learning" web search engine up at WhittleBit.com.

Yeah - weird name, I know. All other comments welcome.

Addendum 8/1/03: Apologies to those who tried this and found it down - the problem was that I use a Java daemon to do the donkey-work, but I couldn't find a VM that would run on the server I was using (which has a weird setup). I finally got it working using Kaffe - hopefully it will prove to be stable.

User Journal

Journal Journal: Removing bias in collaborative editing systems 28

A few weeks ago a friend of mine who had been thinking about reader-edited forums (like K5) posed an interesting question. He was concerned about how people's bias would influence their voting decisions and wondered whether there could be any way to identify and filter out the effects of such bias. Of course, in some situations bias is expected - political elections, for example - but in other situations, such as when a jury must decide someone's guilt or innocence, or when a Slashdot moderator must rate a comment, bias is undesirable. After some thought, I came up with a proposal for such a system.

First, what do we mean by "bias"? It is a difficult question to answer exactly; examples would include political left or right-wing bias, nationalist bias, anti-Microsoft bias, and bias based on race. The dictionary definition is "A preference or an inclination, especially one that inhibits impartial judgment." Implicit in the mechanism I am about to describe is a more precise definition of bias; it is the aptness of this definition that will determine the effectiveness of this approach.

Visitors to websites such as Amazon and users of tools like StumbleUpon will be familiar with a mechanism known as "Automatic Collaborative Filtering" or ACF. Amazon's recommendations are based on what other people with similar tastes also liked; this is an example of collaborative filtering in action. There are a wide variety of collaborative filtering algorithms, ranging widely in sophistication and processor requirements, but all are designed to do more or less the same thing: anticipate how much you will like something based on how much similar people liked it. One way to look at it is that collaborative filtering tries to learn your biases and anticipate how they will influence how much you like something.

My idea was to use ACF to estimate someone's bias towards or against a particular article, and then remove the effect of that bias from their vote. The bias is assumed to be the difference between their anticipated vote according to ACF and the global average vote for that article. Having determined this, we can take their actual vote and subtract the bias from it.

Let's look at how this might work in practice. Joe is a right-wing Bill O'Reilly fan who isn't very good at setting aside his personal views when rating stories. Joe has just found an article discussing human rights abuses against illegal Mexican immigrants. Joe, not particularly sympathetic to illegal Mexican immigrants, gives the article a score of 2 out of 5. On receiving Joe's rating, our mechanism uses ACF to determine what it might have expected Joe's score to be. It notices that many of the people who tend to vote similarly to Joe (presumably also O'Reilly fans) also gave this article a low score - meaning that, according to our ACF algorithm, Joe's expected vote was 1.5. Now we look at the average (pre-adjustment) vote for the story and see that it is 3 - so we assume that Joe's anticipated bias for this story is 1.5 minus 3, or -1.5. We use this to adjust Joe's vote of 2, giving an adjusted vote of 3.5 - which means that Joe's vote for this story is actually above average once his personal bias has been disregarded!
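The adjustment step itself is simple arithmetic. Here is a minimal sketch in Java; the class and method names are hypothetical, and the ACF prediction is assumed to come from whatever collaborative filtering algorithm the site already runs:

// Illustrative sketch of the vote-adjustment step described above.
// Clamping the result back onto the site's rating scale is left out.
public final class BiasAdjuster {

    // rawVote:       the vote the user actually cast (Joe: 2.0)
    // predictedVote: the vote ACF expected from this user (Joe: 1.5)
    // averageVote:   the pre-adjustment average for the article (Joe's story: 3.0)
    public static double adjust(double rawVote, double predictedVote, double averageVote) {
        double bias = predictedVote - averageVote; // Joe: 1.5 - 3.0 = -1.5
        return rawVote - bias;                     // Joe: 2.0 - (-1.5) = 3.5
    }

    public static void main(String[] args) {
        System.out.println(adjust(2.0, 1.5, 3.0)); // prints 3.5
    }
}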

So, how well will this system work in practice - and what is it really doing? What are the implications of this mechanism for determining someone's bias? Is it fair?

I don't pretend to have the answers to these questions, but it might be useful to think of it in terms of punishment: when your vote is adjusted by a large amount, you are being punished by the system, because your vote will have an effect different from the one you intended.

The way to minimize this punishment is to ensure that your votes, as predicted by the ACF algorithm, are as close as possible to the likely average vote. The worst thing you can do is align yourself with a group of people who consistently vote in opposition to the majority.

I have been trying to think of scenarios where encouraging the former, or punishing the latter, would be a bad thing, but so far I haven't come up with anything. What kind of collective editor would such a system be? What kind of negative side effects might it have? I am curious to hear your opinions.

Privacy

Journal Journal: A draft of a new Freenet article 4

I have been working on an article describing Freenet's "Next Generation Routing" algorithm. You can find my working draft here - comments appreciated, but remember that it is still a draft so please don't link to it except through my blog.

When complete I will probably submit it to /. among other places to get some wider peer review.

Programming

Journal Journal: Progluminators 11

For millennia before the invention of the printing press, people known as "Illuminators" were responsible for the manual copying of religious and other manuscripts. Of course they didn't just copy the books; they would adorn the documents with elaborate paintings and illustrations, often spending weeks just working on the first letter of a chapter. It may have taken months or even years to copy a book - but by golly! - the result was perfect - not just because of its beauty, but because you knew just how much time had gone into its creation.

Back in the early days of the computer, people devoted similar (by today's standards) exacting care and attention to the preparation of punched cards, which contained hundreds or even thousands of painstakingly planned and constructed instructions, which they would feed to the machine which would eventually spit out a response.

Imagine the horror of the medieval illuminators when they first saw the printing press. Now, rather than taking months or years, a book could be created in just hours, not by an artist, but by anyone capable of operating the press! The new machine had automated the sacred task of carefully painting each beautifully crafted letter on the page! Rather than the infinite possibilities of characters drawn with the flexible tip of the illuminator's brush, each was now stamped out, identical, forced to conform. Suddenly books were no longer works of art, each representing vast amounts of personal labor - they were mass-produced to a quality no greater than that required to convey their meaning to the reader!

The punched card gave way to terminals where a faulty program could be modified in minutes, rather than the hours necessary to repunch the cards and await one's turn at the altar of the great computer. The binary programming languages were replaced by assembly, and later by compilers - which automated the sacred task of deciding which register should be used to store which byte of data.

It is the same Illuminator's revulsion that we occasionally see in response to modern languages such as Java which (shock horror!) dare to automate tasks like deciding when particular areas of a computer's memory are no longer required. No more can the progluminator carefully craft the exact combination of operating system function calls required to send a byte to the network; now they are forced to conform to the artistically uninspired methods of the cross-platform API!

Of course, just as the printing press - despite upsetting the old and honored profession of illumination - gave birth to a new era of learning and scientific progress, so will programming languages like Java and C# lead to a richer diversity of ideas in computer science, despite upsetting the progluminators of our industry, who will forever regret the passing of the days when it took two weeks to create a linked list - but by golly, it was perfect!

Now, I have - in my time - heard criticism from C++ advocates for my decision to implement Freenet in Java. This always struck me as ironic, since their opinions were typically reminiscent of the progluminator argument, yet the incredible inelegance of C++, in my view, makes it much more deserving of progluminator scorn than Java could ever be. C++ is essentially an elaborate macro pre-processor for C*, which attempts to crudely duplicate concepts such as Object Orientation and templates, while hammering them into something that can conveniently be translated into C. The result is predictably ugly - sure, you can have templates, but be sure you are familiar with the 101 caveats they bring with them due to their underlying implementation. Sure, have your classes and objects, but woe to the programmer who forgets that ultimately they are using an elaborate macro preprocessor.

The bottom line? Don't bother telling me that Freenet should be implemented in C++ unless you are willing to spend months illustrating your code on stretched leather with a carefully prepared pheasant feather while paying particular attention to the initial "#".

* In fact, Bjarne Stroustrup's original implementation was just that: it took C++ code and converted it to C prior to compilation by a C compiler.

Patents

Journal Journal: Open Source obfuscating the EU software patents debate 3

As can be determined from previous journal entries, I am extremely concerned about the proposed introduction of software patents in the EU.

There seems to be a misconception in the press that it is only Open Source software that is at risk. I think that this is partly because most of the people who are vocal on this issue are Open Source advocates too - and perhaps they are too quick to use anecdotes relating to Open Source to make their point. The reason this is a problem is that it allows pro-patent people to make the fallacious argument that since Open Source software is free, there is no economic impact involved here and people's objections are purely ideological.

Among the other things that bug me is the fact that these proposed changes are characterized as a "liberalization" of EU software patent law. This is completely backwards: allowing more software patents merely serves to restrict people's freedom; it is the opposite of a "liberalization".

Is it just me, or is starting a new society from scratch on the moon or at the bottom of the ocean looking more attractive every day?

Censorship

Journal Journal: Response to Peacefire's "Distributed Cloud" paper 4

Last week I received some emails from Bennett Haselton quizzing me about certain aspects of Freenet. Bennett is the man behind Peacefire, and someone who has done great work in the fight against Internet censorship.

After our conversation I discovered his 2001 paper entitled Why a "distributed cloud" of peer-to-peer systems will not be able to circumvent Internet blocking in the long run. Of course, Freenet is the leading example of a "distributed cloud" architecture.

Needless to say, I didn't entirely agree with his conclusions - and so here is an email I sent to his "Circumventor Design" mailing list; I am still awaiting a response (either from Bennett or someone else on the list).

Thanks for subscribing me, Bennett.

After our interesting off-list conversation, I have read your paper "Why a "distributed cloud" can't work" and have given the matter some thought. Here are some preliminary observations, along with self-serving explanations of how this relates to Freenet ;-)

While I agree that this "spidering" attack is theoretically possible, I don't believe that it would be anywhere near practical against a well-designed architecture, even for a very well funded and motivated government. I further suspect that this attack will always be a theoretical possibility with any censorship circumvention technology that relies on IP and is sufficiently usable to gain wide acceptance in countries like China (of course I would love somebody to contradict me by describing an easy-to-use architecture that is not vulnerable ;)

This is not to say that there aren't strategies which maximize the cost of such an attack - and I think that Freenet is a good example of this. If you have a situation where an attacker can identify nodes and shut them down, it is important to do the following:

  1. Make any kind of "directed harvesting" difficult or impossible
    By this I mean that the Chinese government cannot easily direct its node address harvesting efforts to those nodes it can block - rather, it is forced to wade through a potentially large number of nodes in order to find the ones susceptible to blocking.

    This is pretty much the case with Freenet: nodes have little control over which nodes wind up in their datastore. A censor would just have to passively collect node addresses, which would be a slow process. Further, if the censor started to kill every node its node was seeing, then that node would rapidly become isolated (much like a cop who killed all of his informants). It is an oft-abused and rather questionable saying that the "Internet routes around censorship", but in Freenet's case there is much truth to it.

    A corollary of this is that the mechanism through which new nodes are added to the network should not provide a shortcut for censors to identify fresh nodes; it must therefore be as decentralized as possible.

    While Freenet is typically distributed from our web site, we also have a mechanism which we call our distribution servlet, which facilitates "viral" distribution of Freenet. Basically, a user can configure his Freenet node to make a web page available from his computer, from which other people can download a copy of Freenet that is "seeded" with the nodes in the "parent" node's routing table. These are made available for a limited time at a randomly generated URL such as:

    http://80-192-4-36.cable.ubr09.na.blueyonder.co.uk:8889/MM9L2lTOmNI/

    which that user can then send to their friends. Note that there isn't anything in this URL that would make it easy for an automated email monitoring tool to spot. Through this mechanism, Freenet can self-procreate without any reliance on a centralized download source or seeding mechanism.

  2. Minimize the effect of shutting down any given node by making the network fault tolerant and spread reliance evenly across the nodes in the network

    Freenet achieves this: in simulations, we could shut down up to 30% of the nodes in the Freenet network - all at the same moment - without any significant degradation in performance. Further, we could shut down the busiest 20% of the nodes in the network without seeing significant problems (see page 9 of [1]).

    It is worth saying that the goal of evenly distributing load across the network is in conflict with the desire to take advantage of resources where available; I think we've reached a good compromise between these two goals in Freenet, but it is an area of ongoing development.

I'm not saying this is a comprehensive list of guidelines when defending against this type of threat, but it's all I can think of right now.

Another issue which Bennett and I discussed was the fact that it is likely to be easier for a censor to restrict access to servers outside its country than between computers inside the country. Personally, I think that even if an architecture could not support direct communication with servers outside the repressive country, this certainly does not mean that it isn't useful. In fact, I think that giving a voice to people inside the repressive country is more valuable than just letting them hear what we have to say. Further, it would only take one unrestricted line of communication between the outside world and the internal censorship-resistant network to give people inside the country access to external information.

On a different note, it is well known that Freenet does have latency issues, although these have been steadily improving as development continues. We are currently working on a concept we call "Next Generation Routing" which we hope will lead to a dramatic improvement in Freenet's latency. I'm currently working on an article that describes this, but if anyone would like to learn more, sign up to the Freenet development mailing list, where it is currently a topic of discussion.

All the best,

Ian.

[1] http://freenetproject.org/papers/freenet-ieee.pdf

Patents

Journal Journal: Working the system

I have been corresponding via email with the MEP (Member of the European Parliament) for Leinster, Ireland (where I grew up) about my fears over the introduction of software patents in the EU.

Here is my most recent email (slightly edited). The article I refer to is one written by Arlene McCarthy, a UK MEP who is pushing for software patents in the EU - you can learn about her misguided perspective here. While this isn't the article I refer to in my email, it says pretty much the same thing.

If you live in the EU - particularly if you are involved in a business that you think could be hurt by software patents - please, please, please contact your MEP and educate them about these issues - find your MEP here. Mrs Doyle was responsive to email, but some might respond better to fax or even phone calls. It is particularly important to stress the negative economic impact that software patents will have, and specific examples relating to your business will also be useful.

Dear Mrs Doyle,

Many thanks for your email. I am most grateful to you for your interest and help in this area, particularly since this may not have been an issue familiar to you previously. Please feel free to share my concerns with anybody you think can help, including Mr Wuermeling.

In her article Mrs McCarthy says "At a time when many of our traditional industries are migrating to China and Eastern Europe and when we Europeans are having to rely on our inventiveness to earn our living, it is important for us to have the revenue secured by patents and the licensing out of ideas".

Unfortunately, the only Europeans likely to benefit from the proposed changes are those who own stock in large American multinationals like Microsoft and IBM, and those who make a living as Intellectual Property lawyers. Europeans who consume software, and Europeans who work for smaller software firms that have neither the time nor the resources to apply for patents on every trivial idea they come up with, will be the victims.

I can categorically state, as someone who has worked in the software industry for my entire professional life and founded three software companies - one in the UK, one in the US, and one with offices in the US and Ireland - that I have never once seen a software patent being used in a manner that would help anybody's economy. Rather, I have seen them used as a way for large software companies to stifle their competition, not by delivering a better, cheaper product to their customers, but by aggressively patenting everything in sight and then throwing lawsuits at their competitors. This article describes exactly how this happens - in that case, it was IBM shaking down Sun in the 1980s:

http://www.forbes.com/asap/2002/0624/044_print.html

Some claim that software patents are acceptable provided that they aren't on trivial or obvious techniques or innovations within the software industry. The problem is that it can be difficult, or even impossible, to confirm that a software patent application meets these criteria - therefore (as has been seen in the United States) the Patent Office will be under pressure to "pass the buck" by granting the patent and letting the courts sort it out down the line. This encourages exactly the kind of litigation that Intellectual Property lawyers love, but which can drive a small software firm out of business.

Intellectual Property lawyers quickly become experts at taking a simple obvious idea, and turning it into a patent application that totally obfuscates what is being patented, and how simple and obvious it really is. I have seen patents on techniques where even the inventor of the technique in question (not the person who filed for the patent) did not recognize that the patent actually covered their idea!

The software industry has thrived without software patents, and where such patents have been permitted, their application has done nothing but inhibit progress and competition within the industry. If the European patent system must be harmonized, let it be harmonized to something sensible; let's not blindly emulate the mistakes of the United States.

Kind regards,

Ian Clarke

Slashdot.org

Journal Journal: Extracting RSS from /. Journals 6

For some reason - I woke up this morning having decided that my /. journal should be accessible via an RSS feed. After a bit of searching, it appeared that there was no easy way to do it - so I set about writing a PHP script which would scrape a /. Journal and make it available in the appropriate format.

Well, after a few hours of work (which included being reminded how stunningly ugly regular expressions are[1]) I have eventually got something pretty good that complies with the 0.91 RSS specification. I am waiting for the hosting provider that hosts my locut.us domain to upgrade PHP to the latest version (something they say they will have done by tomorrow) before I give this a more permanent home - in the meantime, you can try to run it from my laptop if it is online (as it is about 18 hours a day) :-

http://hawk.freenetproject.org:5080/sdj2rss.php?nick=Sanity

Of course, it should work with anybody's journal - so just for good measure, here is Rob Malda's to prove the point.

  1. Luckily I found this, but what is really needed is for someone to come up with a sane regular expression syntax like append(zeroOrMore(alphaNumericChar), " ") - I recall that someone did do something like this in Python years ago, but I can't find any reference to it now. A toy sketch of what such a syntax could look like follows.
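Purely as an illustration - none of these helpers exist as a real library; they just assemble an ordinary java.util.regex pattern string - such a syntax might look like this in Java:

import java.util.regex.Pattern;

// Toy sketch of a readable regular-expression builder. Each helper returns
// a fragment of a normal regex string; append() simply concatenates them.
public final class ReadableRegex {

    public static String alphaNumericChar()      { return "[A-Za-z0-9]"; }
    public static String zeroOrMore(String expr) { return "(?:" + expr + ")*"; }
    public static String oneOrMore(String expr)  { return "(?:" + expr + ")+"; }
    public static String literal(String text)    { return Pattern.quote(text); }
    public static String append(String... parts) { return String.join("", parts); }

    public static void main(String[] args) {
        // The footnote's example: append(zeroOrMore(alphaNumericChar), " ")
        String regex = append(zeroOrMore(alphaNumericChar()), " ");
        System.out.println(regex);                          // prints the assembled pattern
        System.out.println(Pattern.matches(regex, "abc ")); // true
    }
}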
User Journal

Journal Journal: and in other news... 2

I have found myself being increasingly creative over the past few months, and have been working on a number of diverse projects - many of them a product of me setting a task for myself over a weekend and seeing how far I can get.

Unfortunately, my motivation seems to dissipate when it comes to tidying these things up and making them available to other people. Here's a quick list of some of these projects:

  • Whisper
    This is basically an instant messaging application where all communication is encrypted, and an IRC server is used as the back end. The idea is to allow people to communicate secure in the knowledge that nobody's eavesdropping on their conversation. Users are authenticated using PGP-style fingerprints to prevent "man in the middle" attacks, and the communications channel is encrypted using AES. There is other software out there which does this, but typically it is unpolished, difficult to use, and requires mucking around with NATs and firewalls because it relies on direct connections between clients. By using the IRC server network as the back end, Whisper can pretty much work out-of-the-box without any complex network configuration.
    Implementation language: C#
    State of completion: Crypto all works, UI is more or less there, some minor superficial bugs still need to be worked out.
  • WebQuest
    This is an attempt to improve on existing Web search engines by allowing the user to give feedback on the accuracy of search results and then re-search on that basis. The user interface is very simple - it looks much like Google, except that for each search result you can indicate whether it is good or bad. It isn't doing any kind of global collaborative filtering, so it can't be spammed; all feedback is local to the user's search "session". WebQuest employs some clever statistical analysis to achieve this by adding and removing search terms from the search request that goes to Google.
    Implementation language: PHP + Java
    State of completion: Working, UI needs to be prettier, core algorithm could benefit from some tweaking
    Try it out: You can sometimes find my development version online here; I will upload it to a proper web server soon.
  • Kanzi
    This is a joint project I did with Scott Miller around October of last year. The idea was to allow a web developer to make a modification to one web page, and have the same modification automatically made to a bunch of other web pages. The core of this involved creating an XML diff and merge algorithm which I suspect is more sophisticated than any of the current commercial offerings (such as that provided by IBM).
    Implementation language: Java
    State of completion: Done; we made it available as shareware last year but have agreed to open-source it as soon as either of us has sufficient time.
    Try it out: You can get the shareware version here; I will make it all available as open source as soon as I find the motivation.
  • Stickler
    This was a quick hack which Scott and I put together to crawl web pages and alert the authors of those pages to broken links.
    Implementation language: Java
    State of completion: Done, gathering dust.
  • Locutus
    This is perhaps the most significant project I've been working on; the concept is mine, but most of the implementation has been done by my brother, Andrew, who lives in the Republic of Ireland. In essence it is a peer-to-peer search tool which allows users to search for documents on the hard disks of other people within their organization. Locutus incorporates a strong security model to give you strict control over who can search which files on your computer. It also incorporates sophisticated spam detection capabilities.
    Implementation language: C#
    State of completion: Beta, 1.0 release soon
    Try it out: here
  • NGRouting
    This is a fundamental reworking of the core of Freenet's routing algorithm. In short, it allows Freenet nodes to exploit much more effectively the data available to them when making routing decisions. A Freenet node will calculate, based on past experience, which other Freenet node is most likely to retrieve the data being requested in the shortest amount of time. This is orders of magnitude more sophisticated than the current technique, which simply routes a message to whichever node is known to have data close to what is being requested. Further, since actual routing time is taken into account, the Freenet network should adapt to the real-world network topology. I have yet to do a detailed writeup, but for the moment you can learn more here; a much-simplified sketch of the idea follows this list.
    State of completion: the core of the code for this is complete and tested, but it will be several weeks before it is integrated into Freenet proper.
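To give a flavour of the time-based routing idea, here is an illustrative sketch (not Freenet's actual NGRouting code; among other simplifications it ignores the per-key estimates the real algorithm uses) in which a node keeps a running estimate of each peer's retrieval time and routes requests to whichever peer currently looks fastest:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: route to the peer with the lowest estimated
// retrieval time, maintained as an exponential moving average of
// observed response times.
public class TimeBasedRouter {

    private static final double ALPHA = 0.2; // weight given to the newest observation
    private final Map<String, Double> estimatedMillis = new HashMap<>();

    // Record an observed retrieval time for a peer.
    public void recordResponse(String peer, double millis) {
        Double current = estimatedMillis.get(peer);
        estimatedMillis.put(peer, current == null
                ? millis
                : ALPHA * millis + (1 - ALPHA) * current);
    }

    // Choose the peer currently expected to return data fastest.
    public String routeRequest() {
        String best = null;
        double bestEstimate = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : estimatedMillis.entrySet()) {
            if (e.getValue() < bestEstimate) {
                bestEstimate = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }
}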

In addition to these projects we've also been working on something that I'm not at liberty to talk about right now; suffice it to say that it's pretty exciting and it pays the rent ;-)

Phew - I am getting tired just thinking about it...

User Journal

Journal Journal: Effective code reuse 2

Well, I have decided to do some code reuse: rather than try to maintain my own blog, I will simply take advantage of Slashdot's journal system. I have been a long-time /. user, and as such I think I am more likely to update frequently here than with B2 at http://locut.us/blog/, which now redirects here.

Further, I have no personal desire or need to "prove" my PHP/Perl/whatever mettle by setting something up myself - something that I think motivates many who insist upon reinventing the wheel.

Slashdot.org

Journal Journal: Slashdot needs a transparent moderation system 8

I have become increasingly frustrated of late with people's misuse of moderator privs; it makes posting comments on Slashdot more like appearing on the Ricki Lake show than participating in an intelligent debate. It is almost impossible to say anything controversial without being moderated as a troll or flamebait. It is almost enough to make me stop contributing to Slashdot altogether.

It seems to me that the Kuro5hin.org approach of making moderations to comments public would help to address abuses of the moderation system more effectively than meta-moderation. The problem with meta-moderation is that, in effect, it is too little, too late, and the feedback loop for moderators isn't sufficiently transparent to serve as a deterrent to abuse of moderator privs. On Kuro5hin, the likelihood that someone will call you on an unfair moderation within minutes of making it is an extremely effective safeguard against abuse. I feel that if Slashdot adopted this approach, it would benefit the site immensely.
