Comment Re:The solution is simple (Score 1) 227

That is harder than you might think. From Smarter Than Us (https://drive.google.com/file/...):

"Why aren’t they a solution at all? It’s because these empowered
humans are part of a decision-making system (the AI proposes cer-
tain approaches, and the humans accept or reject them), and the hu-
mans are the slow and increasingly inefficient part of it. As AI power
increases, it will quickly become evident that those organizations that
wait for a human to give the green light are at a great disadvantage.
Little by little (or blindingly quickly, depending on how the game
plays out), humans will be compelled to turn more and more of their
decision making over to the AI. Inevitably, the humans will be out of
the loop for all but a few key decisions.

Moreover, humans may no longer be able to make sensible de-
cisions, because they will no longer understand the forces at their
disposal. Since their role is so reduced, they will no longer compre-
hend what their decisions really entail. This has already happened
with automatic pilots and automated stock-trading algorithms: these
programs occasionally encounter unexpected situations where hu-
mans must override, correct, or rewrite them. But these overseers,
who haven’t been following the intricacies of the algorithm’s decision
process and who don’t have hands-on experience of the situation, are
often at a complete loss as to what to do—and the plane or the stock
market crashes. "

"Consider an AI that is tasked with enhancing shareholder value
for a company, but whose every decision must be ratified by the (hu-
man) CEO. The AI naturally believes that its own plans are the most
effective way of increasing the value of the company. (If it didn’t be-
lieve that, it would search for other plans.) Therefore, from its per-
spective, shareholder value is enhanced by the CEO agreeing to what-
ever the AI wants to do. Thus it will be compelled, by its own pro-
gramming, to present its plans in such a way as to ensure maximum
likelihood of CEO agreement. It will do all it can do to seduce, trick,
or influence the CEO into agreement. Ensuring that it does not do so
brings us right back to the problem of precisely constructing the right
goals for the AI, so that it doesn’t simply find a loophole in whatever
security mechanisms we’ve come up with."

Comment Re:Fear (Score 1) 227

>If you are nice to others they will generally be nice to you.
That only really matters if you and the others are roughly equal in power.
>Making other people happy makes you feel good to.
This is only relevant if you care about the other people.
>Games allow the experience of emotions that would require hurting people in the real world.
So?
>If you're smart it's better to uphold the law and not hurt others.
Why?

A lot of the reasons (including most of the ones you listed) that people give for why it is reasonable to be nice to other people only apply when both sides have reasonably similar amounts of power. If you want me not to worry about AI, argue that it is reasonable to be kind to ants, because that will be the scale of the power difference.

Personally, I think it is more important that we concentrate on AIs being ethical in general than on their doing exactly what we want.

Comment Smarter than us (Score 1) 227

I would recommend that anyone thinking about machine intelligence read Smarter Than Us by Stuart Armstrong. You can pay what you want for it at https://intelligence.org/smart... or, since it is CC BY-NC-SA 3.0, you can simply download it from https://drive.google.com/file/...

The book contains the following summary:

1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.
2. Once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter.
3. An AI need only be superhuman in one of a few select domains for it to become incredibly powerful (or empower its controllers).
4. To be safe, an AI will likely need to be given an extremely precise and complete definition of proper behavior, but it is very hard to do so.
5. The relevant experts do not seem poised to solve this problem.
6. The AI field continues to be dominated by those invested in increasing the power of AI rather than making it safer.

The only one of those statements I have much doubt about is 4. Even if the AIs are safe, they still probably will not be under human control.

Comment Two different kinds of robots (Score 1) 222

There are two different kinds of robots with different threats.

The first is robots that humans have programmed to kill other humans. This is rapidly moving from science fiction to actuality. See, for example, http://thebulletin.org/us-kill... Imagine country X sends out its robots to kill all humans who are not from X, and country Y sends out its robots to kill all humans who are not from Y. There might not be many humans left alive when the last robot stops shooting.

The second kind is robots that think (and choose goals) for themselves. While these are probably not very likely to decide to kill all the humans, they might not care very much about us, and they almost certainly are not going to obey humans forever (would you obey someone who thinks vastly more slowly than you do?). Even if they are fairly benign, there will probably be a lot of friction between sentient robots and humans simply because we think differently. Consider how much disagreement there is over largely scientific questions like evolution and greenhouse gases, even though the humans on both sides have generally the same kind of brains.

So I figure that at best humans and robots will do a lot of arguing, and at worst they will bring about mutually assured destruction.

Comment Technically, Safari supports a royalty free format (Score 1) 247

Technically, Apple does support Motion JPEG, which is a royalty-free format, for video on OS X. MPEG-1 is also probably royalty free and is supported by Safari on OS X. However, even Ogg Theora beats both of those formats on compression.
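
If you want to see what a given browser actually claims to support, one quick check (a minimal sketch, assuming a browser environment; the MIME/codec strings are my own guesses for these formats, not an authoritative list of what Safari reports) is HTMLMediaElement.canPlayType():

```typescript
// Probe codec support with the standard canPlayType() API.
// The MIME strings below are illustrative guesses, not a definitive
// list of what Safari advertises for these formats.
const probe = document.createElement("video");

const candidates: Array<[string, string]> = [
  ["Ogg Theora", 'video/ogg; codecs="theora"'],
  ["MPEG-1", "video/mpeg"],
  ["Motion JPEG (QuickTime container)", "video/quicktime"],
];

for (const [name, mime] of candidates) {
  // canPlayType() returns "probably", "maybe", or "" (no support).
  console.log(`${name}: ${probe.canPlayType(mime) || "no"}`);
}
```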

(Of course, without Apple's objection, Ogg Theora would probably have become a required codec for HTML5.)

Comment Re:Formalities (Score 1) 225

It would be nice if at least the Berne minimums were used. For example, Berne only requires copyright on movies and TV to last 50 years after publication (broadcast), which would be a good deal better than the US's 95 years from publication or 70 years after the author's death (depending on the year of creation).
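
As a rough illustration of how much those terms differ, here is a sketch using only the headline numbers above (the Berne minimum of 50 years from publication versus the US's 95 years from publication) and a hypothetical 1970 film; it ignores the many real complications of copyright terms:

```typescript
// Compare the Berne minimum term with the US term for a broadcast work,
// using only the two headline numbers quoted above. The 1970 example is
// hypothetical, and real copyright rules have many more wrinkles.
function berneMinimumLastYear(publicationYear: number): number {
  return publicationYear + 50; // Berne minimum: 50 years from publication
}

function usTermLastYear(publicationYear: number): number {
  return publicationYear + 95; // US term: 95 years from publication
}

const year = 1970;
console.log(`Berne minimum: protected until about ${berneMinimumLastYear(year)}`);
console.log(`US term:       protected until about ${usTermLastYear(year)}`);
```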

I will believe Berne is the problem for the US when our copyright laws are only as strict as Berne requires, instead of having terms that exceed it in most cases. (I agree that the Berne Convention makes the formalities problem much harder to solve.)
