On the contrary, when robots are intelligent they will essentially be reading your mind, because that's precisely what we *have* to train them to do. We can't encode any "constraints", much less "intentions", as general laws, because it's too difficult. Instead, what we can do is encode them as a massive, crowdsourced set of (order in plain English, intended behavior) pairs and train machines to behave correctly in all the virtual situations listed. Provided we hold out a sizable set of these input/output pairs for testing, a machine that behaves in the intended fashion in all test situations (i.e. situations where you don't explicitly show it the intended behavior) is "reading your mind" with very high probability. The list need not be exhaustive: if the machine manages to behave properly in corner cases xyz that it was never shown before, it's pretty damn likely it will also behave properly in corner cases abc.
So for instance, "be kind to people" would not be encoded as any kind of dictionary definition. It would be encoded as a large set of examples of kindness, each example being vetted by as many humans as possible. Any machine being "kind to people" in the unseen test situations is then assumed to "get it" with very high probability. The whole challenge, of course, is to figure out a way to get any machine at all to pass the tests from a number of training examples less than a gazillion and a training time less than a billion years.
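The hold-out logic above can be sketched in a few lines. Everything here is a toy assumption: the "situations" are two made-up numeric features, the crowdsourced human verdict is a stand-in function the learner never sees directly, and the learner is a deliberately naive nearest-neighbor lookup:

```python
import random

random.seed(0)

# Toy stand-in for crowdsourced human judgment: the learner never sees
# this function, only (situation, intended behavior) pairs drawn from it.
def human_verdict(situation):
    distress, busyness = situation          # made-up features
    return "comfort" if distress > busyness else "give space"

situations = [(random.random(), random.random()) for _ in range(1000)]
pairs = [(s, human_verdict(s)) for s in situations]
train, test = pairs[:800], pairs[800:]      # hold out 200 pairs for testing

# Deliberately naive learner: 1-nearest-neighbor over the training pairs.
def predict(situation):
    nearest = min(train, key=lambda p: (p[0][0] - situation[0]) ** 2
                                     + (p[0][1] - situation[1]) ** 2)
    return nearest[1]

errors = sum(predict(s) != label for s, label in test)
print(f"held-out error rate: {errors / len(test):.1%}")
```

A low error rate on the 200 held-out situations is the whole ballgame: the learner was never told the rule, yet it behaves as if it knows it.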
In an essentially different way indeed. We cannot endow machines with a general "law of obedience" because we would have no idea how to encode such a law in the first place. What we *would* know, however, is how to generalize a law from a finite set of examples of its application with very high statistical confidence. So what we will do is hand-craft a database of billions of "virtual situations" exemplifying the behavior we want the machine to have (with some procedurally generated examples, to the best of our ability) and encode "obedience" as "yields the wanted behavior on the situations and plain-English orders we give it".
Once we are satisfied with the behavior of the machine on the (virtual) test input/output pairs we held out during its training, we trust that it properly understands what we expect of it, and we give it the green light for real-life jobs. At no point in the process did we ever write out a "law": we encoded the "law" as a finite number of examples of it being obeyed, trained the machine on a subset, used a different subset to test generalization, and then went ahead with that. Statistically speaking, the odds that a machine would ever behave improperly are roughly its error rate on the test situations, with a standard error that shrinks as the test set grows (roughly as one over the square root of its size).
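The "statistically speaking" part can be made concrete with a standard concentration bound. Assuming the held-out situations are independent samples of real use (a strong assumption), Hoeffding's inequality turns an observed test error rate into a high-confidence ceiling on the true misbehavior probability:

```python
import math

# Hoeffding's inequality: with n held-out tests and observed error rate e,
# the true misbehavior probability exceeds e + eps with probability at
# most exp(-2 * n * eps**2). Solve for eps at a chosen confidence level.
def confidence_bound(n_tests, observed_error, confidence=0.999):
    eps = math.sqrt(math.log(1 / (1 - confidence)) / (2 * n_tests))
    return observed_error + eps

# A robot that misbehaved in 3 of 100,000 held-out situations:
print(confidence_bound(100_000, 3 / 100_000))
```

With those (made-up) numbers, we would be 99.9% sure the true misbehavior rate is under about 0.6%; quadrupling the test set halves the slack.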
The essential difference with Asimov's laws is that instead of encoding them as plain English sentences, we encode them as incomplete but explicit examples. Any circumvention of the plain English sentences has to do with the ambiguity of the English language, whereas the ambiguity of the examples (the ability to properly generalize from them) is explicitly accounted for by the test situations, with known error bounds.
Of course, I am oversimplifying. We might train in many steps, with progressively harder tasks. We would have to account for memory and sequences of situations, and so on. But the base idea remains the same: we can't encode what we want as rules, but we can encode it as examples, and we know how to test generalization properties. Exhaustiveness is not really needed: the challenge is to get anything at all that passes the tests, but by design, once something does, it's mostly foolproof.
He starts from the assumption that strong safeguards are needed, because robots will be like humans and will try to circumvent them. In practice, robots will circumvent their imperatives about as much as humans commit suicide - at the very worst - because obviously we will set things up so that only obedient units ever get to transmit their "genes" to the next robot generation, so to speak. Making robots with human-like minds and then giving them rules, as Asimov seems to suggest, is a recipe for disaster regardless of the rules you give them. It's good literature, but we're not heading that way.
The 0th rule is not enough either. The optimal course of action for humanity is arguably to wipe it out completely and to rebuild it from scratch in a controlled environment. I would fully expect a robot obeying the 0th rule to be genocidal.
Quite frankly, every single "rule" you can think of will have unintended consequences, except for the rule that explicitly states "you shall not act contrary to the expectations of brain-in-a-jar X, to which you shall make periodical reports", for a suitably chosen X. No robot in practice will follow a set of "rules of robotics", and we don't really need them anyhow: if we train robots to do what we want them to do, then obeying us is their "survival imperative", so to speak. To take a parallel with evolution, preserving our life at any cost rewards our genetic makeup, and we can breed pretty much as long as we find a mate. If we select robots like nature selected us we will have problems, but that's asinine.
Because I want to maximize usage of my code. I mean, if I write something truly excellent, my primary objective is not to "keep it free", but to make its usage as widespread as possible. Let me give an example: let's say that I make some freaking amazing social network software, and license it under the GPL. A year later, some new social network might become a runaway success thanks to some innovative idea that doesn't necessarily have a lot to do with software. I end up having to use it because all my friends use it, but unfortunately, it's kind of buggy and annoying. Well, if I had chosen a BSD license, perhaps they'd have based their own software on mine. And then the whole experience (MY experience) would be better.
I mean, as annoying as "take my work, profit from it, and give nothing back" may sound, the truth of the matter is that if they can't use my work they will use somebody else's or roll their own, and they will make basically the same profits. Since they would only pick my software if it is the best choice, the bottom line is that their product will be worse, and ultimately it is their users who will suffer.
If I make something very unique and/or extensive and/or *leagues* ahead of any alternative, then I can probably get away with using GPL, because the inconvenience of GPL would not suffice to offset the attractiveness of my software. Companies would bite the bullet to gain a competitive advantage, and everybody wins. But if I make something that's better than any alternative, but not ground-breakingly so, I'll go BSD so that inferior software doesn't end up fucking shit up all over the place.
Bottom line: I will use the most copyleft license that gets companies to use my code over any inferior alternative, and I do this for their users' sake (especially since I might end up being one of those users). Unfortunately, in most situations, that means BSD. If BSD didn't cut it, I'd outright shove it into the public domain.
Do you really want to care about other people and be cared about all the goddamn time? Christmas and Valentine's Day are overbearing enough as they are, if I had to endure this crap all year round I would probably go on a murderous rampage.
At that price I'm going to end up buying a bunch of each anyway!
Considering that the many-worlds interpretation of quantum mechanics is equivalent to the Copenhagen interpretation and certainly consistent with what we observe, the input data in question would basically be the complex amplitude of every single possible universe. This would allow for the deterministic computation of the amplitudes of every single possible universe at the next time step. So yes, you would determine, with perfect accuracy, that at each time step the probability of universes where the decay happened steadily increases. The machine wouldn't be able to tell any particular observer what they will observe in the future, because they will "split" into as many observers as there are possible observations, so the question is sort of meaningless.
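As a toy illustration (not real quantum dynamics), one can track the total weight of the "decayed" branches under an assumed per-step decay rate and watch it rise deterministically, even though no single observer's future is predicted:

```python
import math

# Toy sketch: the weight of the "not yet decayed" branch shrinks by an
# assumed factor exp(-LAMBDA) per time step; everything else is "decayed".
LAMBDA = 0.1
survive = 1.0
weights = []
for step in range(50):
    survive *= math.exp(-LAMBDA)       # deterministic update, no dice
    weights.append(1.0 - survive)      # total weight of "decayed" branches

# The machine computes, with certainty, that this weight rises monotonically:
assert all(a < b for a, b in zip(weights, weights[1:]))
print(f"weight of decayed branches after 50 steps: {weights[-1]:.3f}")
```

The deterministic quantity is the branch weight, not the experience of any one observer inside a branch.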
This only works for a finite universe. If the universe is infinite, the computation model has to change, but there is no indication that the universe is infinite. Even if you consider that the universe uses a source of randomness, there are two options:
First option: you can consider that the Turing machine duplicates the whole universe on every coin toss, one copy for each result. Sure, the amount of space grows exponentially, but that's still computable. This is very similar to the many-worlds interpretation (in fact, it is kind of subsumed by it). If the universe does not grow (information-wise), there's only a finite number of possible universes, so you can avoid the exponential explosion by keeping counters (much like the many-worlds interpretation keeps amplitudes).
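The counter trick is easy to sketch. Assuming a made-up 4-bit "universe" whose coin toss only touches the low bit, the number of branches doubles on every toss while the storage stays constant, because we keep one counter per distinct state instead of one copy per branch:

```python
from collections import Counter

# One counter per distinct universe state, mirroring how many-worlds
# keeps one amplitude per branch instead of one copy per history.
def toss(universes):
    out = Counter()
    for state, count in universes.items():
        out[state | 1] += count               # heads: set the low bit
        out[(state & ~1) & 0b1111] += count   # tails: clear the low bit
    return out

universes = Counter({0b0000: 1})
for _ in range(20):                           # 2**20 branch histories...
    universes = toss(universes)
print(len(universes), sum(universes.values()))  # ...but only 2 stored states
```

After 20 tosses there are over a million branch histories, yet the dictionary holds exactly two entries.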
Second option: there is no way to actually verify that the universe uses an infinite source of randomness. You could suppose that there exists one sufficiently long algorithmically random string that comes with the Turing machine, and whenever it wants to "toss a coin" it reads the next value on the string (looping back to the start when it's done). If you had that machine, along with its internal random string and counter, you could predict perfectly the next instance of radioactive decay. Of course, that doesn't mean you can know what the machine is. It's just that nothing precludes its existence, and when faced with apparently random events, it's not really any more reasonable to suppose an infinite source of randomness than to suppose a finite one.
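A minimal sketch of that second option, with a short hard-coded bit string standing in for the "sufficiently long algorithmically random" one: anyone who holds the string and the cursor predicts every toss perfectly, while anyone who doesn't sees apparent randomness.

```python
# Pseudo-coin that reads successive bits from one fixed string,
# looping back to the start when it reaches the end.
class StringCoin:
    def __init__(self, bits):
        self.bits = bits
        self.cursor = 0

    def toss(self):
        bit = self.bits[self.cursor]
        self.cursor = (self.cursor + 1) % len(self.bits)
        return bit

coin = StringCoin("1101000110")                 # made-up "random" string
observed = [coin.toss() for _ in range(10)]

# A predictor with its own copy of the string and cursor never misses:
predictor = StringCoin("1101000110")
assert [predictor.toss() for _ in range(10)] == observed
```

The point is not that the universe works this way, only that nothing observable distinguishes this machine from one with a genuinely infinite source of randomness.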
You are confused. "Computable" doesn't mean what you think it means. "Computable" does not mean "efficient", nor does it mean "tractable". "Computable" means "there exists a Turing machine that solves the problem in finite time for any finite input". P is computable and tractable for small enough exponents and hidden constants. NP is computable and thought to be intractable. EXPSPACE, which is probably the worst complexity class the Turing machine simulating the universe would fall into, is computable and intractable. Note that even if the universe is an algorithm that runs in exponential time, we can't really observe that, because we're inside the system. It's like simulating a cellular automaton with a one-second pause between transitions: no "being" living in that automaton would be able to tell.
If the universe is finite (say, it has size n), then you can find a specialized Turing machine that only handles a universe of size n, contains a huge-ass graph of size ~exp(n), and runs the universe by following arcs in the graph. Basically, if the universe is finite, then the number of states it can take is finite, and you can just hard-code them in a machine, rendering it computable. It's a really trivial result.
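The construction is easy to sketch for a toy case: take a 3-bit "universe" (so 8 possible states) with an arbitrary made-up update rule, and hard-code its entire dynamics as a lookup table. Nothing about the rule needs to be understood; the table alone makes the evolution computable, and finiteness forces the trajectory into a cycle:

```python
# The whole "physics" of an 8-state universe, as one hard-coded table.
TRANSITIONS = {s: (s * 5 + 3) % 8 for s in range(8)}   # arbitrary toy rule

def run(state, steps):
    for _ in range(steps):
        state = TRANSITIONS[state]
    return state

# With only 8 states, the trajectory must eventually revisit a state:
seen, state = [], 0
while state not in seen:
    seen.append(state)
    state = TRANSITIONS[state]
print(f"cycle entered after visiting {len(seen)} states")
```

For a real universe of size n the table would have ~exp(n) entries, which is why this is a sledgehammer argument about computability, not a practical simulation method.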
Now, that's a bit like killing a fly with a sledgehammer, you don't really have to go to such an extreme. The fact is that you can take as a postulate the idea that there exists some precision P and some maximal number N past which no observations can ever be made. That postulate is relatively simple, impossible to falsify, and from it, it follows that there exists some N2 > N such that there exists a Turing machine that can simulate the universe with resources bounded by N2. We don't have to be able to simulate that machine physically.
Now, maybe we can falsify the idea that some good algorithm in P can be found to simulate the universe. That's just not what I was talking about.
I suspect this is what God is doing.
The binary computer model is in theory perfectly capable of simulating a human brain. The main problems we have are that: 1) we are not completely sure how the brain is wired together, so we don't know what to simulate in the first place, and 2) our machines are mostly sequential, and the brain is highly parallel, so what the brain can do in one step, a sequential computer can only do in a number of steps proportional to the network's size. This is obviously impractical, but it is no fault of the model.
The fact that computers are "binary" is a red herring. Binary computers can work with numbers of arbitrary precision, and if we give ourselves precision down to the level of thermodynamic noise in the brain, going any further is unnecessary. Whether the computer is parallel or sequential only affects the time needed to calculate the next step. Since the simulated mind lives within the simulation, that time is not observable to it, so it's not a relevant factor.
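The arbitrary-precision point is easy to demonstrate with Python's stock `decimal` module; the precision targets here are chosen arbitrarily for illustration:

```python
from decimal import Decimal, getcontext

# "Binary" hardware is no barrier to precision: ask for 50 significant
# digits. In the brain argument you would pick whatever precision puts
# rounding error below the brain's own thermodynamic noise floor.
getcontext().prec = 50
one_third = Decimal(1) / Decimal(3)
print(one_third)                      # fifty 3s after the decimal point

# Need more? Change one setting and recompute; the hardware is unchanged.
getcontext().prec = 200
more = Decimal(1) / Decimal(3)
assert len(str(more)) > len(str(one_third))
```

The binary representation underneath never limits how finely you can resolve a quantity, only how long each step takes.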
In any case, we have many, many universal computation models that are all equivalent in power (bar some differences in time or space needed for the computation). We have Turing machines, we have lambda calculus, we have cellular automata, we have unrestricted grammars, we have uniform circuit families, we have quantum computers, and so forth. Whatever one model can do, all the others can do as well (they might just take more time). This is not about "binary computers", this is about "computation", and there is no indication that anything at all in the universe is not computable.
That's not the question. The question is: do we care?
I, for one, would be willing to sacrifice all polar bears, seals, half of all flower species, half of all of the world's forests and then warm the whole planet five degrees if it means I can finally go on a hike without being bugged by these pests.
Nature can suck it.
Except for all the other animals that use tools, like chimps, crows, octopuses... I mean, it is natural for evolution to create animals that can develop technology. Even if we disappeared, some other species would take our place, if not more than one.
Something like living in a virtual reality hosted on a reversible computer might allow us to live for significantly longer than the Big Rip would suggest, if not outright forever. Might be somewhat of a pipe dream, but it's fun to think about.