
Comment: Re:The solution is simple (Score 1) 227

That is harder than you might think. From Smarter Than Us:

"Why aren’t they a solution at all? It’s because these empowered humans are part of a decision-making system (the AI proposes certain approaches, and the humans accept or reject them), and the humans are the slow and increasingly inefficient part of it. As AI power increases, it will quickly become evident that those organizations that wait for a human to give the green light are at a great disadvantage. Little by little (or blindingly quickly, depending on how the game plays out), humans will be compelled to turn more and more of their decision making over to the AI. Inevitably, the humans will be out of the loop for all but a few key decisions.

Moreover, humans may no longer be able to make sensible decisions, because they will no longer understand the forces at their disposal. Since their role is so reduced, they will no longer comprehend what their decisions really entail. This has already happened with automatic pilots and automated stock-trading algorithms: these programs occasionally encounter unexpected situations where humans must override, correct, or rewrite them. But these overseers, who haven’t been following the intricacies of the algorithm’s decision process and who don’t have hands-on experience of the situation, are often at a complete loss as to what to do—and the plane or the stock market crashes."

"Consider an AI that is tasked with enhancing shareholder value for a company, but whose every decision must be ratified by the (human) CEO. The AI naturally believes that its own plans are the most effective way of increasing the value of the company. (If it didn’t believe that, it would search for other plans.) Therefore, from its perspective, shareholder value is enhanced by the CEO agreeing to whatever the AI wants to do. Thus it will be compelled, by its own programming, to present its plans in such a way as to ensure maximum likelihood of CEO agreement. It will do all it can do to seduce, trick, or influence the CEO into agreement. Ensuring that it does not do so brings us right back to the problem of precisely constructing the right goals for the AI, so that it doesn’t simply find a loophole in whatever security mechanisms we’ve come up with."

Comment: Re:On another news... (Score 1) 227

I agree. Comments like "The speedier, and more dramatic course of action is to provide what looks like context, but is really just Elon Musk and Stephen Hawking talking about a subject that is neither of their specialties." are attacking the man, not the man's arguments.

Comment: Re:Fear (Score 1) 227

>If you are nice to others they will generally be nice to you.
That only really matters if you and the others are roughly equal.
>Making other people happy makes you feel good to.
This is only relevant if you care about the other people.
>Games allow the experience of emotions that would require hurting people in the real world.
>If you're smart it's better to uphold the law and not hurt others.

Most of the reasons (including the ones you listed) that people give for it being reasonable to be nice to other people only apply when the parties have reasonably similar amounts of power. If you want me to not worry about AI, argue that it is reasonable to be kind to ants, because that will be the scale of the power difference.

Personally, I think it is more important that we concentrate on the AIs being ethical in general, than doing exactly what we want.

Comment: Smarter than us (Score 1) 227

I would recommend that anyone thinking about machine intelligence read Smarter Than Us by Stuart Armstrong. You can pay what you want for it, or, since it is CC BY-NC-SA 3.0, you can also just download it.

The book contains the following summary:

1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.
2. Once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter.
3. An AI need only be superhuman in one of a few select domains for it to become incredibly powerful (or empower its controllers).
4. To be safe, an AI will likely need to be given an extremely precise and complete definition of proper behavior, but it is very hard to do so.
5. The relevant experts do not seem poised to solve this problem.
6. The AI field continues to be dominated by those invested in increasing the power of AI rather than making it safer.

The only one of those statements I have much doubt about is 4. Even if the AIs are safe, they still probably will not be under human control.

Comment: Two different kinds of robots (Score 1) 222

There are two different kinds of robots with different threats.

The first is robots that humans have programmed to kill other humans. This is rapidly moving from science fiction to actuality. Imagine country X sends out its robots to kill all humans that are not X, and country Y sends out its robots to kill all humans that are not Y. There might not be many humans left alive when the last robot stops shooting.

The second kind is robots that think (and choose goals) for themselves. While these are probably not very likely to decide to kill all the humans, they might not care very much about us, and they almost certainly are not going to obey humans forever (would you obey someone who thinks vastly slower than you?). Even if they are fairly benign, there will probably be a lot of friction between sentient robots and humans just because we think differently. Consider how much disagreement there is over mostly scientific questions like evolution and greenhouse gases, even though the humans on both sides have generally the same kind of brains.

So I figure at best humans and robots will have lots of arguing, and at worst humans and robots will cause mutually assured destruction.

Comment: Technically, Safari supports a royalty free format (Score 1) 247

Technically, Apple does support Motion JPEG, a royalty-free format, as a video format on OS X. MPEG-1 is probably royalty free as well and is supported in Safari on OS X. However, even Ogg Theora beats both of those formats on compression.

(Of course, without Apple's objection to Ogg Theora, it would probably be a required codec for HTML5.)

Comment: Re:Formalities (Score 1) 225

It would be nice if at least the Berne minimums were used. For example, Berne only requires copyright to last for 50 years after publication (broadcast) for movies and TV, which would be a good deal better than the US's 95 years, or 70 years after the author's death (depending on year of creation).
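The difference in terms is easy to see with a quick calculation. A small sketch, using the Berne 50-years-after-publication minimum for films and the US 95-years-after-publication term for corporate works (the year 1970 below is just an illustrative input):

```python
# Rough comparison of copyright expiry for a film published in a
# given year, under the Berne minimum term versus the US term for
# a corporate work made for hire.
def berne_minimum_expiry(publication_year):
    """Berne minimum for cinematographic works: 50 years after publication."""
    return publication_year + 50

def us_corporate_expiry(publication_year):
    """US term for a corporate work: 95 years after publication."""
    return publication_year + 95

year = 1970
print(berne_minimum_expiry(year))  # 2020
print(us_corporate_expiry(year))   # 2065
```

For the same film, the US term keeps it out of the public domain for an extra 45 years.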

I will believe Berne is the problem for the US when our copyright laws are only as strict as Berne requires, instead of having terms that exceed it in most cases. (I agree that the Berne Convention makes the formalities problem much harder to solve.)

Comment: Science fiction to reality: ELOPe (Score 2) 163

I found the article interesting given that I just finished a book in which the Email Language Optimization Project (ELOPe) takes over a company called Avogadro Corp (which is rather similar to Google) by automatically generating emails and optimizing the responses.

He stared off into the distance. "Are you familiar with Ray Kurzweil? Of course, you must be. He, among others, predicted that artificial intelligence would inevitably arise through the simple exponential increase in computing power. When you combine that increase in computing power with the vast resources at Avogadro, it's naturally evident that artificial intelligence would arise first at Avogadro. I suppose that I, like him, assumed that there would be a more intentional, deliberate action that would spawn an AI."

He paused, and then continued, smiling a bit. "Gentlemen, you may indeed have put the entire company at risk. But let me first, very briefly, congratulate you on creating the first successful, self-directed, goal oriented, artificial intelligence that can apparently pass a Turing test by successfully masquerading as a human. If not for the fact that the company, and perhaps the entire world, is at risk, I'd suggest a toast be in order." (Avogadro Corp, p. 143)

Comment: Re:A Slashdot user predicted this way ahead of tim (Score 1) 103

And for what it is worth, Radiation Detection and Measurement, 3rd Ed, 2000 by Glenn Knoll, mentions: "[A] smaller subset of devices with similar properties, often called scientific CCDs, have emerged in the 1990s as extremely useful sensors for radiation detection and imaging. They have found widespread use in the tracking or imaging of high-energy minimum ionizing particles. CCDs have also become a somewhat more complex but viable alternative to lithium-drift silicon detectors for routine X-ray spectroscopy, especially at low energies. "

Whether he could have patented it depends on how non-obvious using a commodity CMOS camera for this instead of a scientific CCD camera is.

Comment: Re:Can't detect an A-bomb this way (Score 1) 103

U-235 and Pu-239 emit gamma rays in addition to alpha particles; see page 20 of the Los Alamos Radiation Monitoring Notebook. The gammas are lower energy, so they can be shielded more easily than, say, the gammas from Co-60, but a gamma detector would still be able to detect sufficient quantities of U-235 and Pu-239.
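The shielding point can be sketched with the simple exponential attenuation law, I = I0 * exp(-mu * x). The linear attenuation coefficients for lead below are rough illustrative values I'm assuming for the comparison, not reference data:

```python
import math

# Why lower-energy gammas are easier to shield: fraction of gammas
# transmitted through a lead slab falls off as exp(-mu * x).
# The coefficients below are rough illustrative values for lead.
MU_LEAD_PER_CM = {
    "U-235 (186 keV)": 11.0,   # low-energy gamma, strongly absorbed
    "Co-60 (1.25 MeV)": 0.66,  # high-energy gamma, penetrates much further
}

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Fraction of gammas passing through a slab of the given thickness."""
    return math.exp(-mu_per_cm * thickness_cm)

for source, mu in MU_LEAD_PER_CM.items():
    print(source, transmitted_fraction(mu, 1.0))
```

With those assumed coefficients, 1 cm of lead stops nearly all of the 186 keV gammas but lets roughly half of the Co-60 gammas through, which is the sense in which the low-energy lines are "easier to shield" yet still detectable without shielding.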

Comment: Will Robots and humans trade? (Score 1) 808

I have been thinking recently about the question of whether humans and autonomous intelligent robots would trade. The first guess would be yes, since humans and robots would have different opportunity costs for different tasks, and therefore comparative advantage would apply.

From "The Shape of Automation", 1960, by H. A. Simon:
"""The change in the occupational profile depends on a well-known economic principle, the doctrine of comparative advantage. It may seem paradoxical to think that we can increase the productivity of mechanized techniques in all processes without displacing men somewhere. Won't a point be reached where men are less productive than machines in all processes, hence economically unemployable? (Footnote in article: The difficulty that laymen find with this point underlies the consistent failure of economists to win wide general support for the free-trade argument. The central idea--that comparative advantage, not absolute advantage, counts--is exactly the same in the two cases.)
The paradox is dissolved by supplying a missing term. Whether man or machines will be employed in a particular process depends not simply on their relative productivity in physical terms, but on their cost as well. And cost depends on price. Hence--so goes the traditional argument of economics--as technology changes and machines become more productive, the prices of labor and capital will so adjust themselves as to clear the market of both. As much of each will be employed as offers itself at the market price, and the market price will be proportional to the marginal productivity of that factor. By the operation of the marketplace, manpower will flow to those processes in which its productivity is comparatively high relative to the productivity of machines; it will leave those processes in which its productivity is comparatively low. The comparison is not with the productivities of the past but among the productivities in different processes with the currently available technology."""
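Simon's point can be illustrated with a toy Ricardian example. The numbers are made up: the robot is absolutely better at both tasks, yet trade still pays, because what matters is the ratio of opportunity costs:

```python
# Toy Ricardian example of comparative advantage. The robot has
# an absolute advantage in both goods, but opportunity costs
# still differ, so trade benefits both sides. All numbers are
# invented for illustration.

# Output per hour of work:
robot = {"cars": 10, "food": 20}
human = {"cars": 1, "food": 4}

# Opportunity cost of one car, measured in food given up:
robot_cost_of_car = robot["food"] / robot["cars"]  # 2.0 food per car
human_cost_of_car = human["food"] / human["cars"]  # 4.0 food per car

# The robot has the comparative advantage in cars, the human in
# food; any exchange rate between 2 and 4 food per car leaves
# both sides better off than self-sufficiency.
assert robot_cost_of_car < human_cost_of_car
print(robot_cost_of_car, human_cost_of_car)
```

This is the same structure as the free-trade argument Simon's footnote refers to: absolute productivity decides nothing by itself.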

I can think of three ways (one was stolen from wikipedia) that comparative advantage would fail.

The first is if there is a scarce non-time resource and a substantial difference in the quantity of that resource consumed. For example, if A uses 2 tons of iron to make a car and B uses 1 ton, and iron is scarce, then B can make more cars in absolute terms.

The second is that there is a wage floor (or utility floor). If the wage is so low that a human cannot live on it, then the wage cannot fall low enough to make trade beneficial.

The third, from the Wikipedia article on comparative advantage, is that transaction costs can eat away the benefits from trade.
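The transaction-cost failure mode can be sketched in the same toy terms: a trade only happens if the surplus it creates exceeds its overhead. The costs and values below are invented for illustration:

```python
# Sketch of the transaction-cost failure mode: an exchange occurs
# only when the gain from trade exceeds the overhead of making it.
# All numbers are illustrative.
def gain_from_trade(seller_cost, buyer_value):
    """Surplus created by one exchange, before overhead."""
    return buyer_value - seller_cost

def trade_occurs(seller_cost, buyer_value, transaction_cost):
    return gain_from_trade(seller_cost, buyer_value) > transaction_cost

# A car costs the robot 2 units of food to make; the human values
# it at 4, so the gross gain from one trade is 2 units of food.
print(trade_occurs(2.0, 4.0, transaction_cost=0.5))  # cheap overhead: trade happens
print(trade_occurs(2.0, 4.0, transaction_cost=3.0))  # overhead eats the gain: no trade
```

If dealing with slow, error-prone humans costs a robot more than the human's contribution is worth, comparative advantage stops mattering.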

Basically, at some point robots must decide whether to keep trading with humans. If there is no benefit for the robots (that is, no point in trade from the robots' point of view), will they keep helping humans, or will humans once again be on our own? I can't think of any science fiction where independent robots trade physical goods with humans (in Always Coming Home by Ursula K. Le Guin, the humans and artificial intelligences do exchange information).
