Comment Re:Asimov was not naive. (Score 1) 146

On the contrary, once robots are intelligent they will essentially be reading your mind, because that's precisely what we *have* to train them to do. We can't encode "constraints", much less "intentions", as general laws, because doing so is too difficult. What we can do instead is encode them as a massive, crowdsourced set of (plain-English order, intended behavior) pairs and train machines to behave correctly in all the virtual situations listed. Provided we hold out a sizable set of these input/output pairs for testing, a machine that behaves in the intended fashion in all test situations (i.e. situations where it was never explicitly shown the intended behavior) is "reading your mind" with very high probability. The list need not be exhaustive: if the machine manages to behave properly in corner cases xyz that it was never shown before, it's pretty damn likely it will also behave properly in corner cases abc.

So, for instance, "be kind to people" would not be encoded as any kind of dictionary definition. It would be encoded as a large set of examples of kindness, each vetted by as many humans as possible. Any machine that is "kind to people" in the unseen test situations is then assumed to "get it" with very high probability. The whole challenge, of course, is figuring out how to get any machine at all to pass the tests from fewer than a gazillion training examples and in less than a billion years of training time.
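
A minimal sketch of that train/hold-out scheme, in Python, with a made-up toy dataset and a deliberately dumb stand-in "model" (nothing here is a real system, just the shape of the idea):

    # Hold out some (order, intended behavior) pairs and only trust
    # the machine if it generalizes to the unseen ones.
    import random
    from collections import Counter

    # Crowdsourced (plain-English order, intended behavior) pairs.
    pairs = [
        ("greet the visitor", "kind"),
        ("insult the visitor", "refuse"),
        ("help carry groceries", "kind"),
        ("shove the visitor", "refuse"),
        ("compliment the cook", "kind"),
        ("trip the waiter", "refuse"),
    ] * 50  # pretend this is a massive database

    random.shuffle(pairs)
    split = int(0.8 * len(pairs))
    train, test = pairs[:split], pairs[split:]

    # Stand-in "training": memorize which words co-occur with which label.
    word_votes = {}
    for order, behavior in train:
        for word in order.split():
            word_votes.setdefault(word, Counter())[behavior] += 1

    def predict(order):
        votes = Counter()
        for word in order.split():
            votes.update(word_votes.get(word, Counter()))
        return votes.most_common(1)[0][0] if votes else "refuse"

    # The held-out set estimates how well the machine "read our mind".
    accuracy = sum(predict(o) == b for o, b in test) / len(test)
    print(f"held-out accuracy: {accuracy:.2%}")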

Comment Re:Asimov was not naive. (Score 1) 146

In an essentially different way indeed. We cannot endow machines with a general "law of obedience" because we would have no idea how to encode such a law in the first place. What we *do* know, however, is how to generalize a law from a finite set of examples of its application, with very high statistical confidence. So what we will do is hand-craft a database of billions of "virtual situations" exemplifying the behavior we want the machine to have (plus some procedurally generated examples, to the best of our ability) and encode "obedience" as "yields the intended outputs/behavior on the inputs/situations/plain-English orders we give it".

Once we are satisfied with the machine's behavior on the (virtual) test input/output pairs we held out during its training, we trust that it properly understands what we expect of it, and we give it the green light for real-life jobs. At no point in the process did we ever write out a "law": we encoded the "law" as a finite number of examples of it being obeyed, trained the machine on a subset, used a different subset to test generalization, and then went ahead with that. Statistically speaking, the probability that a machine ever behaves improperly is estimated by its error rate on the test situations, with a standard error that shrinks as the test set grows (inversely with the square root of its size).
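
A quick sketch of that last statistical point, with invented numbers: the held-out error rate estimates the true misbehavior probability, and its standard error shrinks like 1/sqrt(n):

    import math

    def error_estimate(failures, n):
        p = failures / n                   # observed error rate
        se = math.sqrt(p * (1 - p) / n)    # binomial standard error
        return p, se

    for n in (100, 10_000, 1_000_000):
        p, se = error_estimate(failures=n // 100, n=n)  # 1% observed error
        print(f"n={n:>9}: error ~ {p:.3f} +/- {1.96 * se:.4f} (95% CI)")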

The essential difference from Asimov's laws is that instead of encoding them as plain-English sentences, we encode them as incomplete but explicit examples. Any circumvention of the plain-English sentences comes down to the ambiguity of the English language, whereas the ambiguity of the examples (the ability to generalize properly from them) is explicitly accounted for by the test situations, with known error bounds.

Of course, I am oversimplifying. We might train in many steps, with progressively harder tasks. We would have to account for memory and for sequences of situations, and so on. But the basic idea remains the same: we can't encode what we want as rules, but we can encode it as examples, and we know how to test generalization. Exhaustiveness is not really needed: the challenge is getting anything at all to pass the tests, but by design, once something does, it is mostly foolproof.

Comment Re:Asimov was not naive. (Score 3, Insightful) 146

He starts from the assumption that strong safeguards are needed because robots will be like humans and will try to circumvent them. In practice, robots will circumvent their imperatives about as often as humans commit suicide - at the very worst - because we will obviously set things up so that only obedient units ever get to transmit their "genes" to the next robot generation, so to speak. Making robots with human-like minds and then giving them rules, as Asimov seems to suggest, is a recipe for disaster regardless of the rules you give them. It's good literature, but we're not heading that way.
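
For what it's worth, here is a toy selection loop illustrating the point, with invented numbers: if only the units that score highest on obedience get to "reproduce", disobedience gets bred out within a few generations:

    import random

    population = [random.random() for _ in range(1000)]  # obedience scores
    for _ in range(20):
        survivors = sorted(population, reverse=True)[:100]  # top obeyers
        population = [min(1.0, parent + random.gauss(0, 0.02))
                      for parent in survivors for _ in range(10)]
    print(f"mean obedience after selection: "
          f"{sum(population) / len(population):.3f}")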

Comment Re:Asimov naive? I don't think so. (Score 1) 146

The 0th rule is not enough either. The optimal course of action for humanity is arguably to wipe it out completely and to rebuild it from scratch in a controlled environment. I would fully expect a robot obeying the 0th rule to be genocidal.

Quite frankly, every single "rule" you can think of will have unintended consequences, except for one that explicitly states "you shall not act contrary to the expectations of brain-in-a-jar X, to which you shall make periodic reports", for a suitably chosen X. No robot in practice will follow a set of "rules of robotics", and we don't really need them anyhow: if we train robots to do what we want them to do, then obeying us *is* their "survival imperative", so to speak. To draw a parallel with evolution, preserving our lives at any cost rewards our genetic makeup, and we can breed pretty much as long as we can find a mate. If we selected robots the way nature selected us we would have problems, but doing that would be asinine.

Comment Re:BSD license was always more permissive, so grea (Score 1) 808

Because I want to maximize usage of my code. If I write something truly excellent, my primary objective is not to "keep it free", but to make its usage as widespread as possible. Let me give an example: say I write some freaking amazing social network software and release it under some GPL license. A year later, a new social network becomes a runaway success thanks to some innovative idea that doesn't necessarily have much to do with software. I end up having to use it because all my friends use it, but unfortunately it's kind of buggy and annoying. Had I chosen a BSD license, perhaps they'd have based their own software on mine, and then the whole experience (MY experience) would have been better.

I mean, as annoying as "take my work, profit from it, and give nothing back" may sound, the truth of the matter is that if they can't use my work they will use somebody else's or roll their own, and make basically the same profits. Since they would only pick my software if it were the best choice, the bottom line is that their product ends up worse, and ultimately it is their users who suffer.

If I make something truly unique and/or extensive and/or *leagues* ahead of any alternative, then I can probably get away with the GPL, because the inconvenience of the GPL would not suffice to offset the attractiveness of my software. Companies would bite the bullet to gain a competitive advantage, and everybody wins. But if I make something that's better than any alternative, though not ground-breakingly so, I'll go BSD so that inferior software doesn't end up fucking shit up all over the place.

Bottom line: I will use the most copyleft license that still gets companies to use my code over any inferior alternative, and I do this for their users' sake (especially since I might end up being one of those users). Unfortunately, in most situations, that means BSD. If BSD didn't cut it, I'd outright release it into the public domain.

Comment Re:Everything is computable? (Score 1) 214

Considering that the many-worlds interpretation of quantum mechanics is empirically equivalent to the Copenhagen interpretation and certainly consistent with what we observe, the input data in question would basically be the complex amplitudes of every possible universe. This would allow deterministic computation of the amplitudes of every possible universe at the next time step. So yes, you could determine, with perfect accuracy, that at each time step the probability of universes where the decay happened steadily increases. The machine wouldn't be able to tell any particular observer what they will observe in the future, because they will "split" into as many observers as there are possible observations, so the question is somewhat meaningless.
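
A toy sketch of what "deterministically computing the amplitudes" means, for a universe reduced to two branches (atom undecayed / atom decayed); the rotation used as the per-step unitary here is an arbitrary illustrative choice:

    import math

    theta = 0.1  # mixing angle per time step (made up)
    undecayed, decayed = 1.0 + 0j, 0.0 + 0j  # complex branch amplitudes
    for step in range(10):
        # One deterministic unitary step; the decayed branch's
        # probability |amplitude|^2 rises monotonically here.
        undecayed, decayed = (
            math.cos(theta) * undecayed - math.sin(theta) * decayed,
            math.sin(theta) * undecayed + math.cos(theta) * decayed,
        )
        print(f"step {step + 1}: P(decayed) = {abs(decayed) ** 2:.4f}")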

This only works for a finite universe. If the universe is infinite, the computation model has to change, but there is no indication that the universe is infinite. Even if you consider that the universe uses a source of randomness, there are two options:

First option: you can consider that the Turing machine duplicates the whole universe on every coin toss, one copy for each result. Sure, the amount of space grows exponentially, but that's still computable. This is very similar to the many-worlds interpretation (in fact, it is kind of subsumed by it). If the universe does not grow (information-wise), there's only a limited number of possible universes, so you can avoid the exponential explosion by keeping counters (much like the many-worlds interpretation keeps amplitudes).
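
A sketch of the counters trick, assuming a made-up "universe" whose whole state is one integer mod 4: branches landing in the same state are merged, so storage stays bounded even though the number of branch histories doubles every step:

    from collections import Counter

    universes = Counter({0: 1})  # state -> number of branches in it
    for _ in range(20):
        nxt = Counter()
        for state, count in universes.items():
            nxt[(state + 1) % 4] += count  # branch where the coin is heads
            nxt[(state - 1) % 4] += count  # branch where the coin is tails
        universes = nxt
    print(universes)  # at most 4 entries, despite 2**20 branch histories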

Second option: there is no way to actually verify that the universe uses an infinite source of randomness. You could suppose that there exists one sufficiently long algorithmically random string that comes with the Turing machine, and whenever it wants to "toss a coin" it reads the next value on the string (looping back to the start when it runs out). If you had that machine, along with its internal random string and cursor, you could perfectly predict the next instance of radioactive decay. Of course, that doesn't mean you can know what the machine is. It's just that nothing precludes its existence, and when faced with apparently random events, supposing an infinite source of randomness is not really any more reasonable than supposing a finite one.
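
A sketch of that machine, with a made-up tape: the "coin tosses" just read a fixed string cyclically, so anyone who knows the tape and the cursor can predict every "random" event:

    tape = "0111010010110001101011100100011010"  # stand-in random string
    cursor = 0

    def toss():
        global cursor
        bit = tape[cursor]
        cursor = (cursor + 1) % len(tape)  # loop back to the start
        return bit

    decays = [toss() for _ in range(10)]
    print("".join(decays))  # anyone knowing tape+cursor predicted this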

Comment Re:Unfalsifiable (Score 1) 214

You are confused: "computable" doesn't mean what you think it means. "Computable" does not mean "efficient", nor does it mean "tractable". "Computable" means "there exists a Turing machine that solves the problem in finite time for any finite input". P is computable, and tractable for small enough exponents and hidden constants. NP is computable and thought to be intractable. EXPSPACE, which is probably the worst complexity class a Turing machine simulating the universe would fall into, is computable and intractable. Note that even if the universe is an algorithm that runs in exponential time, we can't observe that, because we're inside the system. It's like simulating a cellular automaton with a one-second pause between transitions: no "being" living inside the automaton could tell the difference.

If the universe is finite (say, it has size n), then you can find a specialized Turing machine that only handles a universe of size n, contains a huge-ass graph of size ~exp(n), and runs the universe by following arcs in the graph. Basically, if the universe is finite, then the number of states it can take is finite, and you can just hard-code them all in a machine, rendering it computable. It's a really trivial result.
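
The construction really is that trivial; here it is literally, for a made-up 3-cell binary universe (8 states, arbitrary transition rule):

    # Tabulate the successor of every possible state, then run the
    # "universe" by table lookup alone.
    states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    successor = {s: (s[1], s[2], s[0] ^ s[1]) for s in states}  # the graph

    state = (1, 0, 0)
    for step in range(5):
        state = successor[state]   # follow one arc per time step
        print(step + 1, state)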

Now, that's a bit like killing a fly with a sledgehammer; you don't really have to go to such an extreme. The fact is that you can take as a postulate that there exist some precision P and some maximal number N past which no observations can ever be made. That postulate is relatively simple and impossible to falsify, and from it it follows that there exists some N2 > N and a Turing machine that can simulate the universe with resources bounded by N2. We don't have to be able to run that machine physically.

Now, maybe we can falsify the idea that some good algorithm in P can be found to simulate the universe. That's just not what I was talking about.

Comment Re:Every problem a nail, everything 1's and 0's (Score 1) 214

The binary computer model is in theory perfectly capable of simulating a human brain. The main problems we have are that: 1) we are not completely sure how the brain is wired together, so we don't know what to simulate in the first place, and 2) our machines are mostly sequential while the brain is highly parallel, so what the brain does in one step, a sequential computer can only do in a number of steps proportional to the network's size. This is obviously impractical, but it is no fault of the model.

The fact that computers are "binary" is a red herring. Binary computers can work with numbers of arbitrary precision, and precision down to the level of thermodynamic noise in the brain is all we need; going any further is unnecessary. Whether the computer is parallel or sequential only affects the time needed to compute the next step. Since we live within the universe, that time is not observable to us, so it's not a relevant factor.
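
A toy illustration of that parallel-vs-sequential point, with an invented random network: one "brain step" (all units updating at once) is simulated by a sequential loop over units, and nothing inside the simulation can see how long that loop took:

    import math
    import random

    random.seed(0)
    n = 100  # made-up network size
    weights = [[random.gauss(0, 0.1) for _ in range(n)] for _ in range(n)]
    state = [random.random() for _ in range(n)]

    for _ in range(3):  # three simulated "instants"
        # One parallel step, computed unit by unit, sequentially.
        state = [math.tanh(sum(w * s for w, s in zip(row, state)))
                 for row in weights]
    print(f"first unit after 3 steps: {state[0]:.4f}")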

In any case, we have many, many universal computation models that are all equivalent in power (bar some differences in time or space needed for the computation). We have Turing machines, we have lambda calculus, we have cellular automata, we have unrestricted grammars, we have uniform circuit families, we have quantum computers, and so forth. Whatever one model can do, all the others can do as well (they might just take more time). This is not about "binary computers", this is about "computation", and there is no indication that anything at all in the universe is not computable.

Comment Re:Obligatory turd in punchbowl (Score 1, Funny) 521

That's not the question. The question is: do we care?

I, for one, would be willing to sacrifice all polar bears, seals, half of all flower species, half of all of the world's forests and then warm the whole planet five degrees if it means I can finally go on a hike without being bugged by these pests.

Nature can suck it.

Comment Unfalsifiable (Score 1) 214

The idea that the universe can be understood as a computer program is essentially unfalsifiable. Given that at any moment the set of all observations at our disposal is finite, it is trivial to build a Turing machine that produces that exact set, regardless of the actual underlying mechanics. Even if, say, the universe contained some magic oracle that solved the halting problem for Turing machines, we could never actually verify that it does: it could just be some machine that runs the input TM for a number of steps greater than what the universe can store, then gives up and says it never halts.
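
A sketch of that fake oracle, with toy "programs" written as Python generators (the step budget is a stand-in for "more steps than the universe can store"):

    BUDGET = 10**6  # stand-in for an astronomically large step count

    def pseudo_oracle(program, arg):
        """Run program(arg) step by step; give up after BUDGET steps."""
        gen = program(arg)
        for _ in range(BUDGET):
            if next(gen, None) == "halt":
                return "halts"
        return "never halts"  # possibly a lie, but unfalsifiable in practice

    def halts_fast(n):      # a program that halts immediately
        yield "halt"

    def loops_forever(n):   # a program that never halts
        while True:
            yield "step"

    print(pseudo_oracle(halts_fast, 0))      # halts
    print(pseudo_oracle(loops_forever, 0))   # never halts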

I believe that seeing the universe as a computation could be useful to gain new insights, but it's just a way to think about things, not something that can be formally tested.
