Having a simple idea, I figured I'd write it into the journal as a first use; and why not let them post it as a story too if they want? Anyway, the basic idea: users rating stories, users accepting ratings. Why let armchair security experts and self-described IT experts rate your stories? Why not pick the users you think have a clue and count only their votes? Read on to see how this fleshes out in my mind.
The premise for this idea is that some Slashdot stories are good, some are bad, and some come from armchair experts who know nothing but can hype an exciting, inaccurate, and largely useless pile of FUD. The editors are not experts on everything either, and have let a few FUD pieces through in the past under the guise of breaking news. At the same time, we can't rely on armchair warriors to tell us whether or not stories are good.
The solution I've come up with is simple. Any user wishing to cast a rating rates the story. Users select other users they believe are knowledgeable, and mark which topics they believe those users are knowledgeable in. Each user then sees each story with a rating computed only from the opinions of users he's decided understand what they're talking about; everyone else's votes are discarded.
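The per-user filter above can be sketched in a few lines. This is a hypothetical illustration, not Slashdot's actual schema; the names `personalized_rating`, `story_ratings`, and `trusted_raters` are mine.

```python
def personalized_rating(story_ratings, trusted_raters):
    """Average only the ratings cast by users the viewer trusts.

    story_ratings: dict mapping rater username -> score (e.g. 1-10)
    trusted_raters: set of usernames the viewer considers knowledgeable
    """
    scores = [score for rater, score in story_ratings.items()
              if rater in trusted_raters]
    if not scores:
        return None  # no trusted user has rated this story yet
    return sum(scores) / len(scores)

# mallory's hype vote simply doesn't count for this viewer
ratings = {"alice": 9, "bob": 2, "mallory": 1}
print(personalized_rating(ratings, {"alice", "bob"}))  # 5.5
```

The same story can carry a different rating for every viewer, since each viewer brings his own trusted set.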
At the simplest level, a user trusts a set of other users on a topic. A more robust solution is a web of trust followed to a certain depth: the user sets whether or not he trusts his knowledgeable users to also recognize other knowledgeable users, and if so, anyone they consider knowledgeable is counted as knowledgeable as well.
This trust model can be set to a specific depth, following the evaluation 1, 2, 5, or 10 steps down. A full-depth evaluation would also be possible; however, it would require caching, recomputation triggered on modification, and loop detection, or it would be very slow. Even with caching, a lack of loop detection gives the system an easy route into an infinite loop. A mandatory maximum depth prevents that, but can still bring the system to its knees for a short time.
Loop detection is important to avoid a DoS for anything more than even 1 step deep; 3000 users who all trust each other will otherwise send the second step of evaluation through roughly nine million node visits (about 3000 × 3000), fully evaluating each node some 3000 times. Simple loop detection will check if the node has been evaluated yet, and skip it if so.
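The depth limit and the loop detection described above fall out naturally from a breadth-first walk with a visited set. A minimal sketch, assuming a simple in-memory `trusts` mapping (the real system would hit a database):

```python
from collections import deque

def trusted_set(viewer, trusts, max_depth):
    """Collect everyone within max_depth trust hops of viewer.

    trusts: dict mapping user -> set of users they mark knowledgeable.
    The visited set is the loop detection: each node is expanded at
    most once, so 3000 mutually trusting users cost ~3000 visits
    instead of millions of repeated re-evaluations.
    """
    visited = {viewer}
    result = set()
    queue = deque([(viewer, 0)])
    while queue:
        user, depth = queue.popleft()
        if depth == max_depth:
            continue  # the mandatory maximum depth bound
        for peer in trusts.get(user, ()):
            if peer not in visited:
                visited.add(peer)
                result.add(peer)
                queue.append((peer, depth + 1))
    return result
```

A mutual-trust cycle like a → b → a terminates immediately here, because b's expansion skips the already-visited a.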
Caching on changes may be the most CPU-effective solution: when any user changes his settings, the change is pushed upward through the users that depend on that setting, avoiding on-the-fly evaluation. On-the-fly evaluation blows up after only a few steps: a user who trusts 30 users who each trust 30 users already has 900 second-level users to walk, and 27,000 at the next step; building this list at each page view would be too expensive.
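The "push changes upward" idea amounts to walking the trust graph backwards from the user who changed his settings, marking every truster whose cached set is now stale. A rough sketch, with the same illustrative `trusts` mapping as before:

```python
def dependents(changed_user, trusts):
    """Find users whose cached trust set may include changed_user,
    by walking trust edges backwards (trusters, trusters of trusters).
    Only these cached sets need recomputing; everyone else's cache
    stays valid, so page views never pay for a full graph walk.
    """
    # invert the edges; a real system would keep this index up to date
    reverse = {}
    for user, peers in trusts.items():
        for peer in peers:
            reverse.setdefault(peer, set()).add(user)
    stale, stack = set(), [changed_user]
    while stack:
        user = stack.pop()
        for truster in reverse.get(user, ()):
            if truster not in stale:
                stale.add(truster)
                stack.append(truster)
    return stale
```

Note the stale-set check doubles as loop detection here too, for the same reason as in the forward walk.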
As a final measure, some users may just want to know what "the experts" think is hot. This is low-hanging fruit: a user can simply accept the 100 most popular raters, the top 5%, or every user directly accepted by over 1000 (or 1% of) Slashdot users in each topic. These blanket selections would not feed back into the statistics; only users directly accepting individual users as knowledgeable would count. In this way, users can pull in ratings from the users other users think are smart.
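The direct-acceptance threshold is a one-liner once the counts exist. A sketch, assuming a hypothetical `direct_accepts` count that, per the rule above, tallies only direct acceptances and never blanket ones:

```python
def default_experts(direct_accepts, min_accepts):
    """Pick users directly accepted as knowledgeable by at least
    min_accepts others in a topic. Blanket 'top N' selections are
    excluded from the counts upstream, so popularity can't feed
    back into itself.

    direct_accepts: dict mapping user -> number of direct acceptances
    """
    return {user for user, count in direct_accepts.items()
            if count >= min_accepts}
```

The top-100 and top-5% variants are the same idea with a sort instead of a fixed threshold.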
I believe this system would help users weed out useless headlines and promote useful ones, because it gives each user control over whose judgment he relies on when an article is rated. Users not thought capable of making useful decisions are simply ignored, giving every Slashdot user a personalized rating. The most appreciated users are publicly known per topic, so a user can blanket-accept the most knowledgeable ones. This should allow users to customize their Slashdot experience with high-quality ratings of personal value.