Left out of that history is the branch that almost happened: for quite a while the smart money was that Apple would buy Be, Inc. and use BeOS as the basis for their future OSes. More than a few developers (myself included) based their business models on this happening.
Grover's search algorithm gives only a quadratic speedup.
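To put "only quadratic" in concrete terms: unstructured search over N items takes about N/2 queries classically on average, while Grover's algorithm needs roughly (π/4)·√N oracle queries. A quick back-of-envelope in Python (the numbers are just illustrative):

```python
import math

# Quadratic speedup in concrete numbers: classical unstructured search
# examines ~N/2 items on average, while Grover's algorithm needs about
# (pi/4) * sqrt(N) oracle queries.
N = 1_000_000
classical_queries = N // 2                               # expected classical cost: 500,000
grover_queries = math.ceil(math.pi / 4 * math.sqrt(N))   # about 786
print(classical_queries, grover_queries)
```

Still a big win for large N, but nothing like the exponential speedups people often imagine.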
Exactly. That was the big problem I had with the book: it's written for Java programmers. I am intrigued by the language, but I would much prefer a book that treats the language on its own terms.
It's on my list...
I'll mention it to my publisher, but honestly it would lose a lot without all the color figures.
The book is based on my Ph.D. thesis, which you can download for free:
The reason that fun games tend to be NP-hard (or harder) is that if a game's "physics" supports interesting constructions requiring complex reasoning to solve, then probably that same physics can be used to build computational gadgets, which is how you show hardness of the generalized version. This quality expresses itself even on small, fixed-size boards.
Ah, it doesn't mean that either.
If a problem is NP-hard, it means it is at least as hard as any problem that can be solved in polynomial time on a nondeterministic computer — that is, at least as hard as every problem in NP.
It is an open question (the P vs. NP question) whether this implies that there is no deterministic polynomial-time algorithm.
Chess and Go are actually EXPTIME-complete, even harder than NP-complete problems and PSPACE-complete problems.
In general, one-player games of bounded length (like Flood-It, or Sudoku) tend to be NP-complete; one-player unbounded games (like sliding-block puzzles, or Sokoban) tend to be PSPACE-complete; two-player bounded-length games (like Hex, or Amazons) also tend to be PSPACE-complete, and two-player unbounded games (like Chess, Checkers, and Go) tend to be EXPTIME-complete.
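The "bounded one-player games tend to be NP-complete" case has an easy half: a proposed solution is short and can be checked in polynomial time, which is exactly membership in NP. A minimal Python sketch using Sudoku (illustrative code, not from the book):

```python
# Why bounded one-player puzzles like Sudoku sit in NP: the certificate
# (a filled-in grid) has polynomial size, and checking it takes
# polynomial time -- here, a few O(n^2) scans of a 9x9 grid.

def is_valid_sudoku(grid):
    """Check a completed 9x9 Sudoku grid in polynomial time."""
    n = 9
    digits = set(range(1, n + 1))
    for i in range(n):
        # Every row and every column must contain each digit exactly once.
        if set(grid[i]) != digits:
            return False
        if {grid[r][i] for r in range(n)} != digits:
            return False
    # Every 3x3 box must contain each digit exactly once.
    for br in range(0, n, 3):
        for bc in range(0, n, 3):
            box = {grid[br + r][bc + c] for r in range(3) for c in range(3)}
            if box != digits:
                return False
    return True
```

Finding a certificate may take exponential time, but verifying one doesn't. Hardness is the other half of the story, and that's where the gadget constructions come in.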
I can't resist here a plug for my book (with Erik Demaine), Games, Puzzles, and Computation, which discusses all these issues in detail. A theme running throughout the book is the same as the view expressed in this paper: most interesting games and puzzles seem to be as hard as their "natural" complexity class, outlined above.
"George Takei as Chief Physicist"
Helmsman, swordsman, physicist... the guy can do everything!
Yes! Thanks; I hadn't seen that.
Yes, see NP-complete Problems and Physical Reality, by Scott Aaronson.
But the great thing is, they propose an experiment to *test* whether this is happening.
by John Gribbin (Analog Science Fiction/Science Fact, 105(2):120–125, Feb 1985). In that story a powerful particle accelerator seemingly fails to operate, for no good reason. Then a physicist realizes that if it were to work, it would effectively destroy the entire universe, by initiating a transition from a cosmological false vacuum state to a lower-energy vacuum state. In this story, the explanation of the failures assumes a many-worlds interpretation of quantum mechanics. So instead of explicit backward causality, there is effective backward causality: only the branches of reality with equipment failures contain observers; therefore, observers can only experience histories with equipment failures. The effect is the same.
I also discussed this idea in the context of novel models of computation in my MIT Ph.D. thesis, Games, Puzzles, and Computation (section 8.2; also published as a book by A.K. Peters). The idea was a bit similar to Nielsen and Ninomiya's proposed experiment. It turns out that by connecting an accelerator capable of destroying the universe to a computation depending on random numbers, one could in principle solve problems that are otherwise intractable. I termed this "doomsday computation", as a variation on the similar concept of "anthropic computation" proposed earlier by Scott Aaronson.
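The doomsday/anthropic trick can be caricatured classically as postselection: guess a random certificate, and discard every branch where the guess is wrong (the universe-destroying accelerator plays the discarding role). A toy Python sketch, with a tiny made-up CNF formula standing in for the hard problem:

```python
import itertools

# Caricature of "doomsday computation" as postselection: enumerate the
# random-guess "branches" of a tiny SAT instance and keep only those
# surviving the accelerator, i.e. those where the guess is a solution.
# Purely illustrative -- real anthropic computation is a thought experiment.

def satisfying_branches(clauses, n_vars):
    """Yield assignments surviving postselection for a small CNF formula.

    clauses: list of clauses; each literal is a signed 1-indexed int,
    e.g. 3 means x3 is true, -3 means x3 is false.
    """
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            yield bits

# (x1 or x2) and (not x1 or x2): every surviving branch has x2 = True.
survivors = list(satisfying_branches([[1, 2], [-1, 2]], 2))
```

Every observer who survives the experiment finds a satisfying assignment in hand, even though no branch did more than one random guess — which is why such postselection would make otherwise intractable problems easy, and why it (presumably) isn't available to us.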
Surely, you mean how many cubic beard seconds is that?
There are two ways to write error-free programs; only the third one works.