
Comment How do you make friendly AI? (Score 2) 311

The problem is that we don't know how to make friendly AI. At some point, artificial intelligences will be able to beat humans at any task; when that happens, how do you make sure they don't destroy humanity (possibly through simple indifference)? Even if you don't care about humanity, how do you make sure they do something interesting with the universe?

Various articles:
Stuart Armstrong's book Smarter Than Us discusses what happens when machines are smarter than humans.
Bill Joy's article "Why the Future Doesn't Need Us" covers the dangers of robotics.
Tim Urban has written on superintelligence.

Comment Re:How much longer before Wikipedia supports MP3 ? (Score 1) 140

How many more years until Wikipedia supports MP3? They don't give a damn about everyone being able to use their website right now. Will that change?

They are working on it, but will probably wait until encoding is also patent-free. See https://phabricator.wikimedia.... and https://phabricator.wikimedia....

Submission + - Red Hat Declares MP3 Decoding Allowed in Fedora

jrincayc writes: On the fedora legal mailing list Tom Callaway wrote:
"Red Hat has determined that it is now acceptable for Fedora to include MP3 decoding functionality (not specific to any implementation, or binding by any unseen agreement). Encoding functionality is not permitted at this time. "
Christian Schaller announced on the gnome blog:
"You should be able to download the mp3 plugin on day 1 through GNOME Software or through the missing codec installer in various GStreamer applications. For Fedora Workstation 26 I would not be surprised if we decide to ship it on the install media. "

Comment Re:The solution is simple (Score 1) 227

That is harder than you might think. From Smarter Than Us:

"Why aren’t they a solution at all? It’s because these empowered humans are part of a decision-making system (the AI proposes certain approaches, and the humans accept or reject them), and the humans are the slow and increasingly inefficient part of it. As AI power increases, it will quickly become evident that those organizations that wait for a human to give the green light are at a great disadvantage. Little by little (or blindingly quickly, depending on how the game plays out), humans will be compelled to turn more and more of their decision making over to the AI. Inevitably, the humans will be out of the loop for all but a few key decisions.

Moreover, humans may no longer be able to make sensible decisions, because they will no longer understand the forces at their disposal. Since their role is so reduced, they will no longer comprehend what their decisions really entail. This has already happened with automatic pilots and automated stock-trading algorithms: these programs occasionally encounter unexpected situations where humans must override, correct, or rewrite them. But these overseers, who haven’t been following the intricacies of the algorithm’s decision process and who don’t have hands-on experience of the situation, are often at a complete loss as to what to do—and the plane or the stock market crashes."
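The bottleneck the first passage describes can be made concrete with a toy calculation. The numbers here are my own assumptions, not the book's: suppose a machine can make ten decisions per second, while a human reviewer needs ten minutes to ratify each one.

```python
# Toy comparison (my numbers, not the book's): decisions per simulated day
# for an organization whose AI acts directly versus one that waits for a
# human to ratify every decision.
DAY = 24 * 60 * 60            # seconds in a day

ai_only = DAY * 10            # assumed 10 AI decisions per second, no gate
human_gated = DAY // 600      # assumed one human-ratified decision per 10 min

print(ai_only, human_gated)   # 864000 vs 144 decisions per day
```

Under these assumptions the human-gated organization is six thousand times slower, which is the competitive pressure the passage is pointing at.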

"Consider an AI that is tasked with enhancing shareholder value for a company, but whose every decision must be ratified by the (human) CEO. The AI naturally believes that its own plans are the most effective way of increasing the value of the company. (If it didn’t believe that, it would search for other plans.) Therefore, from its perspective, shareholder value is enhanced by the CEO agreeing to whatever the AI wants to do. Thus it will be compelled, by its own programming, to present its plans in such a way as to ensure maximum likelihood of CEO agreement. It will do all it can do to seduce, trick, or influence the CEO into agreement. Ensuring that it does not do so brings us right back to the problem of precisely constructing the right goals for the AI, so that it doesn’t simply find a loophole in whatever security mechanisms we’ve come up with."
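The dynamic in that second passage is an optimizer gaming its overseer. A minimal sketch with made-up numbers (the plan names and values are hypothetical, not from the book): if a vetoed plan yields nothing, then an AI maximizing expected approved value is pushed toward whatever the CEO is most likely to ratify, not toward what is actually best.

```python
# Toy illustration: each candidate plan has a true value and a probability
# that the CEO approves it. These plans and numbers are invented.
plans = [
    {"name": "honest restructuring", "value": 10, "p_approve": 0.3},
    {"name": "flashy but mediocre",  "value": 4,  "p_approve": 0.9},
]

def expected_value_with_veto(plan):
    # A vetoed plan yields nothing, so the objective the AI actually
    # maximizes is value * P(approval), not value alone.
    return plan["value"] * plan["p_approve"]

chosen = max(plans, key=expected_value_with_veto)
print(chosen["name"])  # prints "flashy but mediocre": 4*0.9 beats 10*0.3
```

The veto doesn't make the AI honest; it just redirects the optimization pressure onto the CEO's approval process.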

Comment Re:Fear (Score 1) 227

>If you are nice to others they will generally be nice to you.
That only really matters if you and the others are roughly equal in power.
>Making other people happy makes you feel good to.
That is only relevant if you care about the other people.
>Games allow the experience of emotions that would require hurting people in the real world.
>If you're smart it's better to uphold the law and not hurt others.

A lot of the reasons people give for being nice to each other (including most of the ones you listed) only apply when the parties have reasonably similar amounts of power. If you want me not to worry about AI, argue that it is reasonable for humans to be kind to ants, because that will be the level of power difference.

Personally, I think it is more important that we concentrate on AIs being ethical in general than on them doing exactly what we want.

Comment Smarter than us (Score 1) 227

I would recommend that anyone thinking about machine intelligence read Smarter Than Us by Stuart Armstrong. You can pay what you want for it, or, since it is CC BY-NC-SA 3.0, you can also just download it.

The book contains the following summary:

1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.
2. Once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter.
3. An AI need only be superhuman in one of a few select domains for it to become incredibly powerful (or empower its controllers).
4. To be safe, an AI will likely need to be given an extremely precise and complete definition of proper behavior, but it is very hard to do so.
5. The relevant experts do not seem poised to solve this problem.
6. The AI field continues to be dominated by those invested in increasing the power of AI rather than making it safer.

The only one of those statements I have much doubt about is 4. Even if the AIs are safe, they still probably will not be under human control.

Comment Two different kinds of robots (Score 1) 222

There are two different kinds of robots with different threats.

The first is robots that humans have programmed to kill other humans. This is rapidly moving from science fiction to actuality. Imagine country X sends out its robots to kill all humans who are not X, and country Y sends out its robots to kill all humans who are not Y. There might not be many humans left alive when the last robot stops shooting.

The second kind is robots that think (and choose goals) for themselves. While these are probably not very likely to decide to kill all the humans, they might not care very much about us, and they almost certainly are not going to obey humans forever (would you obey someone who thinks vastly slower than you?). Even if they are fairly benign, there will probably be a lot of friction between sentient robots and humans just because we think differently. Consider how much disagreement there is over mostly scientific questions like evolution and greenhouse gases, even though the humans on both sides have essentially the same kind of brains.

So I figure at best humans and robots will have lots of arguing, and at worst humans and robots will cause mutually assured destruction.
