It's about forcing you to do these things at gunpoint (and yes, a gun is somewhere in your future if you stop paying your taxes) by raising taxes (by 3.8%) and by forcing you to buy health insurance when you don't want to.
And yet I'd bet you'd be the first to expect to get treated in an emergency room if you didn't have health insurance and something bad happened. The reality is that nobody wants to live in a country where people are allowed to die on the streets because they are poor. If you accept that as a premise, then at some point it is necessary for us all to accept some kind of mandate to participate in the health care system. The alternative is the current situation, the worst of all worlds, where emergency rooms end up being the safety net for those without insurance, and we all pay through the nose for it.
One reason this kind of problem occurs is that many collaborative filtering algorithms are measured by "root mean squared error" (RMSE): basically the square root of the mean of the squared differences between what was predicted and what the user actually did.
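For concreteness, here is a minimal sketch of how that's typically computed; the function name and example ratings are just illustrative:

    import math

    def rmse(predicted, actual):
        # Root mean squared error: the square root of the mean of the
        # squared differences between predictions and observed values.
        squared_errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]
        return math.sqrt(sum(squared_errors) / len(squared_errors))

    # e.g. predicted star ratings vs. what the user actually rated
    print(rmse([4.2, 3.1, 5.0], [4.0, 2.0, 5.0]))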
The problem with this metric? It doesn't account for a variety of important things. One is that most users value diversity. Another is that in most recommendation systems, what matters is the relative relevance of recommendations to each other, whereas RMSE is an absolute measure of accuracy. And a really tricky one is that the recommendation algorithm itself can influence user behavior; for example, users may raise their standards if the algorithm does a better job.
The unfortunate answer is that the only rock-solid way to measure the effectiveness of recommendation algorithms is to test them with real users, perhaps splitting the user population between different algorithms and seeing which does best.
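One way to do that split is to hash each user id into a stable bucket so a given user always sees the same algorithm; this is just a sketch, and the function name and algorithm labels are made up:

    import hashlib

    def assign_algorithm(user_id, algorithms):
        # Deterministically map each user to one algorithm so the
        # assignment stays stable across sessions.
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        bucket = int(digest, 16) % len(algorithms)
        return algorithms[bucket]

    # e.g. split users between two algorithms, then compare click-through
    # or purchase rates between the two groups.
    print(assign_algorithm("user-12345", ["algo_A", "algo_B"]))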
I'm pretty familiar with this issue as my day job is building a behavioral ad targeting engine. We learned a long time ago that while RMSE has its uses, there is often limited correlation between an algorithm's ability to predict user behavior retrospectively (which ads they will click on and what products they will buy), and how much additional revenue the algorithm will generate in practice.
Our solution: first, we use RMSE as a first-blush indication of how good an algorithm is. Second, we take the top, say, 10% of ads with the best predictions and see what the actual click or conversion rate is within that 10%. This requires a higher volume of data, but yields results closer to what we see in reality. Lastly, the algorithm has to prove itself in the wild on a small subset of traffic. Only then can we really know whether one algorithm is an improvement over another.
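A rough sketch of that second step, assuming we have a predicted score and an observed click outcome per impression (the names and numbers are illustrative):

    def top_decile_ctr(scored_impressions):
        # scored_impressions: list of (predicted_score, clicked) pairs,
        # where clicked is 1 if the user clicked and 0 otherwise.
        ranked = sorted(scored_impressions, key=lambda x: x[0], reverse=True)
        top = ranked[:max(1, len(ranked) // 10)]  # top 10% by predicted score
        return sum(clicked for _, clicked in top) / len(top)

    # Compare the realized click rate within each algorithm's top decile;
    # the better an algorithm ranks its best ads, the higher this will be.
    print(top_decile_ctr([(0.9, 1), (0.8, 0), (0.7, 1), (0.6, 1), (0.5, 1),
                          (0.4, 0), (0.3, 0), (0.2, 0), (0.1, 0), (0.05, 0)]))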
Freenet has been slow and hard to use in the past, but it's improved quite a bit. It is the obvious platform for something like Wikileaks. Of course, there is nothing to prevent people from mirroring the content on the web (since installing Freenet, like any piece of software, is a hassle). But at least there will be an unimpeachable backup of all the data on Freenet.
One key component of Swarm is that a supervisor process uses a clustering algorithm to determine how data should be distributed such that it minimizes the number of times a continuation must jump between different computers. Does Mosix have any equivalent?
Why has Mosix not achieved wider usage, for example, allowing web applications to scale up using multiple servers?
Date: Wednesday, July 22, 2009
Time: 7pm PT, San Francisco
Ok, so 7pm PT today - but then it says:
Thu, Jul 23th at 3am - London | 10pm - New York | Thu, Jul 23th at 12pm - Sydney | Thu, Jul 23th at 11am - Tokyo | Thu, Jul 23th at 10am - Beijing | Thu, Jul 23th at 7:30am - Mumbai
So 7pm tomorrow? WTF?
"Just think, with VLSI we can have 100 ENIACS on a chip!" -- Alan Perlis