I'm under the impression that you're confusing things. No one said you'd be forced to run an exit node, or even a relay. I believe it's just about making the protocol a standard.
Don't get me wrong, I'd have a hard time living without The Linux Foundation's products, but when I wanted to work for The Linux Foundation in this year's Google Summer of Code, I gave up after reading their proposals. I wanted to learn some kernel development and couldn't find a single suggestion related to that. Instead, there were some higher-level projects like OpenPrinting, which I personally find totally uninteresting.
The trick is that retrieval can be dangerous by itself if you're querying a database and forgot to sanitize your SQL inputs. Being a moron can't be solved by an RFC.
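To make the point concrete, here's a minimal sketch (using a hypothetical `users` table in an in-memory SQLite database) of why string-built SQL is dangerous and why parameterized queries are the fix:

```python
import sqlite3

# Hypothetical table, just to illustrate the point.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Unsafe: string interpolation lets crafted input rewrite the query.
user_input = "nobody' OR '1'='1"
unsafe = f"SELECT id FROM users WHERE name = '{user_input}'"
print(len(conn.execute(unsafe).fetchall()))  # 1 -- the OR clause matches every row

# Safe: a parameterized query treats the input as data, never as SQL.
rows = conn.execute("SELECT id FROM users WHERE name = ?", (user_input,))
print(len(rows.fetchall()))  # 0 -- no user is literally named "nobody' OR '1'='1"
```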
How is that news? Zalewski wrote a book on that years ago ("Silence on the Wire").
You actually woke up just to see the article?
Because they ship, for example, laptops with these optical drives?
You don't worry about security too much, do you? As far as I know, 2.4 is not supported anymore.
I'm not sure it's really that simple a design. Don't you think it takes a lot of imagination to actually visualise the inner state of the cube?
Didn't mean that. Complexity is usually a sign of bad design. Actually, most concepts in CS are pretty straightforward, and when you make things complicated, they're more prone to bugs and thus to security problems. For example, take ECDSA and RSA. Modular exponentiation is a pretty simple concept, while the whole elliptic-curve thing was complicated enough for guys smarter than us to insert a backdoor into the equations. We should definitely go for simple and transparent designs.
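To illustrate how simple the core of RSA really is: modular exponentiation fits in a few lines. This is a textbook square-and-multiply sketch, not production crypto (real implementations also need constant-time behaviour, padding, etc.):

```python
def modexp(base, exp, mod):
    """Right-to-left square-and-multiply: compute base**exp % mod efficiently."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # bit is set: multiply in the current square
            result = result * base % mod
        base = base * base % mod         # square for the next bit
        exp >>= 1
    return result

# Agrees with Python's built-in three-argument pow().
print(modexp(4, 13, 497))  # 445
```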
The next obvious step is not to use it unless you can understand it.
Ah, sorry, link's dead, here's a copy: http://internetcensus2012.bitbucket.org/
Reminds me of http://internetcensus2012.github.io/. I hope they'll publish all the data sets, and I hope they won't have legal problems because of some sensitive data in there, though I don't really believe they can avoid that. That's why the original author of IC2K12 published it anonymously.
(I meant O(n^2) memory complexity.)
Well, that sounds quite cool, but it also makes me wonder how the algorithm tells wrong associations from the good ones. These things can easily go up to O(n^2) complexity.
And what percentage of the overall information did they actually include in the 2% of requests?