It was also much more realistic than many other near-future cyber fiction. For example, there was a plot where one of the hackers got to do amazing stuff because he still used a keyboard. Which is a nice metaphor for people still using assembly today to pull off amazing demos/viruses/scripts impossible in high-level languages.
So reputation systems have come a long, long way from where they used to be. False positives are no longer a big problem; the biggest issue now is reaction time (the time between a player starting to spew vitriol and the moment he's prevented from playing). Ideally it shouldn't be a few days, as it is now in most cases -- someone having a bad day shouldn't mean a bad day for everyone he's teamed up with.
One of the solutions might be "incremental" banning by disabling some features, which some games already do (and Microsoft is doing in this case). One of the better examples is voice chat muting; I cannot recall which game is doing it. The way it works is that the more people mute an asshole, the more likely he is to start muted in the first place. His teammates might decide to unmute him, but there's no longer the risk of "Better not fuck up morons i need this win" welcoming you to the match.
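A minimal sketch of how that mute-weighting could work. The `MuteTracker` class, the hard threshold, and the "pre-mute" rule are all invented for illustration; a real system would presumably use a probabilistic score rather than a fixed cutoff:

```python
from collections import defaultdict

class MuteTracker:
    """Toy sketch: players muted by many others start new matches pre-muted.
    Teammates can still unmute them manually, as described above."""

    def __init__(self, premute_threshold=5):
        # player -> set of distinct players who muted them
        self.mutes = defaultdict(set)
        self.premute_threshold = premute_threshold

    def mute(self, muter, target):
        self.mutes[target].add(muter)

    def starts_muted(self, player):
        # The more distinct players have muted someone, the more likely
        # they start muted in the first place; here a simple hard cutoff.
        return len(self.mutes[player]) >= self.premute_threshold
```

Counting distinct muters (a set, not a raw count) makes it harder for one grudge-holder to pre-mute someone by spamming the mute button.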
I'm looking forward to further advancements in these systems, as playing team games on the internet is still quite annoying these days, especially since you often get matched with people who don't speak English and/or whom you cannot just smack for being an idiot like you would if you played football together.
When creating these systems you don't simply ban someone after one or a few reports. The way most of them work is: calculate a trust score T for the reporting player. New players have this set very low; the more accurate their reports turn out to be, the higher the trust. Additionally, the more reports a user sends, the less each one "weighs" (this basically makes assholes who report everyone with a negative k/d ratio for "feeding" meaningless, and is the reason I was never banned).
Once the number of reports * trust outweighs the player's karma (which he usually collects in small amounts for each game where he's not reported, and for each accurate report he makes), he gets banned.
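The reports-times-trust-versus-karma balance described above can be sketched roughly like this. The class name, the starting values, and all growth constants are invented; the point is only the shape of the mechanism:

```python
class ReputationSystem:
    """Toy sketch of trust-weighted reports balanced against karma."""

    def __init__(self):
        self.trust = {}          # reporter -> trust score T
        self.karma = {}          # player -> accumulated karma
        self.report_weight = {}  # weighted reports accumulated against a player
        self.reports_sent = {}   # how many reports each player has filed

    def add_player(self, name):
        self.trust[name] = 0.1   # new players start with very low trust
        self.karma[name] = 1.0
        self.report_weight[name] = 0.0
        self.reports_sent[name] = 0

    def clean_game(self, player):
        # Small karma gain for each game without being reported.
        self.karma[player] += 0.1

    def confirm_accurate(self, reporter):
        # Accurate reports raise trust (and earn a little karma).
        self.trust[reporter] = min(1.0, self.trust[reporter] + 0.2)
        self.karma[reporter] += 0.05

    def report(self, reporter, target):
        # Heavy reporters see each individual report weigh less,
        # which neuters serial "reported for feeding" types.
        self.reports_sent[reporter] += 1
        weight = self.trust[reporter] / self.reports_sent[reporter]
        self.report_weight[target] += weight
        return self.is_banned(target)

    def is_banned(self, player):
        # Ban once weighted reports outweigh the player's karma.
        return self.report_weight[player] > self.karma[player]
```

Note how a single low-trust report does almost nothing, while a few reports from a proven-accurate reporter tip the balance quickly.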
That's a bit simplified; in reality you build a neural network with feedback (that's how most of these systems are implemented). Initially you hire people to "teach" the network, eliminate the initial threat, and build "trust" in a group of players. Once you have a big enough group of trusted players, they themselves are used to further train the network, detect new useful players, and ban bad ones. A lot depends on the initial training phase, but I've personally seen one Community Manager turn her community into a self-moderating machine; after a year she didn't even have to do much banning herself. Each message that didn't conform to the standards was almost immediately met with a polite response explaining why it was inappropriate and a request not to continue the topic -- by the users themselves!
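The bootstrap phase can be sketched as a loop over labeled data: hired moderators provide the seed labels, and once players earn enough trust, their verdicts are folded back in as further training data. Everything here is an assumption (the function name, the 0.8 trust bar, the data shapes), and the actual classifier training step is deliberately omitted:

```python
def grow_training_set(seed_labels, player_verdicts, trust, trust_bar=0.8):
    """Toy sketch of the feedback loop described above.

    seed_labels:     {message: is_toxic} provided by hired moderators
    player_verdicts: [(reporter, message, is_toxic), ...] from the community
    trust:           {player: trust score} built up over time

    Returns the enlarged training set the next model iteration learns from.
    """
    training = dict(seed_labels)  # phase 1: staff-provided labels
    for reporter, message, verdict in player_verdicts:
        # Phase 2: only players above the trust bar get to "teach" the network.
        if trust.get(reporter, 0.0) >= trust_bar:
            training[message] = verdict
    return training
```

The key property is that untrusted verdicts are simply ignored, so the network's training data can only be steered by the group the initial human phase vetted.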
So yes, these systems do work (at least the good ones), and no, reports do not become your personal moderation/harassment tool; smart people already thought of that.
You'd expect `rm huge_file` to work, but no, it won't. Some pages recommend `echo "" > huge_file`, but that doesn't always help either if the reason the disk got full is metadata.
I honestly cannot understand how anyone can create a filesystem that A) lies about free disk space and B) does not allow you to free up space when it's full.
It's a system designed to avoid "flops" and produce more realistic IPO prices. If the underwriter values the stock too high, they have to spend money buying that stock.
Usually there's a time and/or value limit (e.g. "we'll prop up the price for X days", or "we'll spend $Y to keep it above $Z").
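A toy sketch of that price-support mechanism. The numbers and the "pay the gap per share" cost model are invented simplifications; real stabilization involves actual share purchases at market:

```python
def stabilize(prices, floor, budget, max_days):
    """Sketch of underwriter price support: while the market price sits
    below the floor Z, the underwriter buys, spending from a fixed
    budget Y, for at most X days. Returns the supported price path
    and the total amount spent."""
    spent = 0.0
    supported = []
    for day, price in enumerate(prices):
        if day < max_days and price < floor and spent < budget:
            spent += floor - price    # simplistic: pay the gap per share
            supported.append(floor)   # support holds the price at the floor
        else:
            supported.append(price)   # limits exhausted: price floats free
    return supported, spent
```

The interesting failure mode falls out naturally: once the time window or budget runs out, the price drops straight back to where the market actually values it.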