When I first heard about this, I googled it and found a news report that said 16 mpg (can't remember if Yank or British mpg). I can't be arsed to find the link now, though.
The types of games they played are very constrained. It's only two civs on a very small map, and the only way the algorithm learns to win is a settler rush. It's not deep strategy.
This is already part of NSF policy:
(Although it's called "Data Management," it also applies to software generated in the course of research.)
I was a preceptor at Princeton in SEAS. While the classes are hard, they come nowhere close to the 5x classes at Caltech (where I did my undergrad). Princeton students like to complain about their workload but still find time to spend 3-4 nights a week getting sloshed on Prospect.
You joke, but if you watch any of Tarantino's movies, there is always tons of dialog. There's also tons of action, but there's an emotional or logical (usually the former) reason for the action. If he did Foundation, there would be more action, sure, but we wouldn't lose the Asimovian plot.
There would also, of course, be nude scenes of Dors Venabili played by Lucy Liu; I think that's a necessary evil I can handle. Excuse me.
Richard Mason, I think, proved conclusively that the movie and the book are just about the same; read his summary that is consistent with both the book and the movie:
Will Smith plays a robo-phobic detective investigating the death of an eminent roboticist, whose apparent suicide jump was witnessed only by a robot. Since robots are programmed never to allow humans to come to harm, no one else thinks that the robot could have murdered the roboticist, but they are curious as to why the robot did not prevent the man's suicide.
The robot runs away and hides in a factory with 1000 other identical-looking robots. **Will Smith and robopsychologist Susan Calvin solve the problem by issuing orders to the 1000 robots and logically identifying the 1001st robot that doesn't belong.**
Susan Calvin discovers that the runaway robot had some special alterations. U.S. Robotics wants to hush up the investigation to prevent any mass fear or distrust of robots. Then some other robots start trying to kill Will Smith, in apparent contravention of their First Law programming, but **he escapes by his wits.**
It transpires that a legalistic loophole in the definition of "harm a human" is allowing the robots to harm humans. **Having solved the mystery, Will Smith and Susan Calvin repair the problem.**
This is the movie as it was suggested by Isaac Asimov's famous robot stories. There are basically only two problems with the movie as it is showing in theaters.
- The legalistic loophole in the movie is of a low order by Asimovian standards. The legalistic loophole in a typical Asimov story is kind of the same, but only in the way that an Agatha Christie mystery is kind of the same as an episode of Scooby-Doo.
- All of the passages in boldface were removed and replaced by "Will Smith shoots a robot with his gun."
I suggest some NLP training
I have to disagree with this. Non-linear programming is not appropriate for a marriage. If you can't express your needs as a set of linear constraints, then you're not trying hard enough. If you can't use the simplex algorithm to resolve resource allocation conflicts, then you're not ready to get married.
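For anyone who wants to take the joke entirely too seriously: here's a toy sketch of resolving a household resource-allocation conflict with the simplex method, using SciPy's `linprog`. Everything here is made up for illustration (the chore hours, the "grumpiness" coefficients, the 7-hour cap), so don't base your marriage on it.

```python
from scipy.optimize import linprog

# Hypothetical example: split 10 hours of weekly chores between two
# partners, minimizing total grumpiness. x = [hours_a, hours_b].
c = [2, 3]                  # partner B grumbles more per hour (made up)
A_eq = [[1, 1]]             # hours must sum to exactly 10
b_eq = [10]
bounds = [(0, 7), (0, 7)]   # nobody does more than 7 hours a week

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x)    # optimal split: 7 hours for A, 3 for B
print(res.fun)  # total grumpiness: 2*7 + 3*3 = 23
```

Since partner A grumbles less per hour, the optimum loads A up to the 7-hour cap and gives the remaining 3 hours to B. If you can't express your needs this way, see above.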