Comment: Totally Agree (Score 1) 413

by Giant Electronic Bra (#49783233) Attached to: What AI Experts Think About the Existential Risk of AI

This is absolutely true, and is related to the whole "what is intelligence?" question. We tend to equate it with intent and free will because we're used to dealing with humans or at least animals that have a survival instinct and thus evidence 'will' to go with it.

There's no reason to believe that artificial 'intelligence' will ever evidence the same sort of intent. There's very little reason to endow a stock-trading application with a sense of self or a need to survive. Even if it had some rudimentary form of these things, for whatever reason, they need not take the form of concern for its own survival; such a program would more likely be designed to protect its funds!

I can imagine some far future where, in some sense, self-driving cars have a 'will to survive' that makes them 'want' to avoid crashes, but they're unlikely to have the kind of self-awareness or ability to generalize that would be required to turn that into a desire to do anything except avoid traffic accidents, which is all they will need to understand about the world. You could imagine an interstellar space probe built to the level of a full general AI simply on the basis of having virtually no idea of what it will run into, but we're centuries from being able to build such things. As long as it can phone home fairly quickly, all it might need is self-driving-car-level 'reflexes', and that would be useful. Even New Horizons gets by as a totally dumb remotely controlled instrument, as do the Mars rovers. They don't need to be true AIs, not even close.

Comment: Re:Missing the key point (Score 1) 413

by Giant Electronic Bra (#49782391) Attached to: What AI Experts Think About the Existential Risk of AI

I think 'superintelligent' is meant to signify 'beyond the intelligence of any human being', not "a very clever person." It doesn't mean much if you use it in the latter sense.

Here's a thought for you all, though: what if there's a law of diminishing returns on intelligence? What if it takes 100x more 'CPU power' to get 2x smarter? That may well be why humans are kinda dumb: not because that was all we needed, but because getting any smarter would be exponentially more resource-intensive. We may never be able to build something more than a little bit smarter than us.
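
A toy way to see what such a law would imply. The 100x-for-2x ratio here is the hypothetical from the paragraph above, not a measured figure; if it held, 'smartness' would grow only as a small power of compute:

```python
import math

# Hypothetical diminishing-returns law: every 2x gain in "smartness"
# costs 100x more compute.  That makes smartness a power law in compute:
# smartness = compute ** (log 2 / log 100) ~= compute ** 0.15
EXPONENT = math.log(2) / math.log(100)

def smartness(compute_ratio):
    """Relative smartness for a given multiple of baseline compute."""
    return compute_ratio ** EXPONENT

print(smartness(100))     # 100x the compute -> 2x smarter
print(smartness(10_000))  # 10,000x the compute -> only 4x smarter
```

Under this toy law, being merely 4x smarter than us would take 10,000x our 'CPU power', which is the point: the curve flattens brutally fast.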

Comment: Re:That's simply not true. (Score 1) 413

ROFLMAO! I always love people who just drive by and say some variation of "you're full of shit" without anything to back it up.

And the whole "but nobody thought X could do Y" thing is worthless as well; it's proof of exactly nothing. It isn't even evidence of anything.

AI research in this day and age is focused on one of two basic approaches. Either it's about understanding and emulating some specific low-level neurological function, like facial recognition, or it's about a very narrow application of heuristics to a niche task, like weather prediction, fault analysis, etc. The closest things to general AI these days are systems like autonomous vehicles, where several subsystems are being integrated under some fairly complex control laws. NONE of these systems is aimed at any sort of general reasoning capability. I'm sure some of the things learned from building them WILL apply to more generalized reasoning systems, but it's hard to argue that anyone even WANTS general-purpose AI at this point. Certainly the fervor for it that was seen from the '60s through the '80s has long since evaporated, as we came to a realistic conception of just how bloody hard it is, and how much we needed to go back and start with the most basic underlying functions before we could progress to any sort of real AI.

Comment: Re:Need to understand it before it exists (Score 1) 413

I would think the more likely result of greater and greater embodiment of intelligence in IT systems is the destruction of a large part of the human economy. The transformation of human society that results is most likely to be the big challenge. Machines will largely just do what we program them to do; humans are the monkey wrench.

Of course, I can see what you're getting at: the old "how do we know it works right?" problem. How do we make systems embody our goals and values? This is a whole other vast question; we don't have models of how goals and values work, and we won't until we understand basic reasoning.

Comment: Re:Missing the key point (Score 1) 413

Just how much of the solution space have we explored with our technology? As much as nature has? You are at best speculating. In any case, the human brain is something like 5 orders of magnitude more energy-efficient, and roughly an equal measure faster per unit volume, than modern computers. Is all of that capacity used constructively? We don't know, but it's a pretty good bet that this level of efficiency and power is roughly a prerequisite for human-level intelligence.
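
The "5 orders of magnitude" gap can be sanity-checked with rough figures. Every number below is an order-of-magnitude assumption (the brain's ~20 W draw and ~10^16 synaptic events/second, and 2015-era chip throughput and wattage), not a measurement:

```python
import math

# Rough arithmetic behind the "~5 orders of magnitude" claim.
# All inputs are order-of-magnitude estimates, not measurements.
BRAIN_WATTS = 20           # the brain runs on roughly 20 W
BRAIN_OPS_PER_S = 1e16     # rough count of synaptic events per second

CHIP_WATTS = 200           # a 2015-era compute card
CHIP_OPS_PER_S = 1e12      # ~1 TFLOPS sustained, a rough figure

brain_ops_per_joule = BRAIN_OPS_PER_S / BRAIN_WATTS   # 5e14 ops/J
chip_ops_per_joule = CHIP_OPS_PER_S / CHIP_WATTS      # 5e9 ops/J

gap = math.log10(brain_ops_per_joule / chip_ops_per_joule)
print(round(gap))  # 5
```

Swap in different estimates and the answer moves a digit either way, but the gap stays enormous, which is all the argument needs.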

As far as questionable assumptions go, I'm not making assumptions; I'm drawing conclusions from examples found in nature, which is the only place we CAN draw them from currently. That means I have evidence, and you have pure speculation. I don't disagree that your position is possible, of course. However, we know so little that even if it's right, it will hardly change the timespan, IMHO.

Comment: Re:Missing the key point (Score 1) 413

Well... I think we can guess. Nature is pretty efficient over the course of time, and it hasn't developed any more efficient neural architecture than ours, which at the basic level is identical to that of a fish that lived 400 million years ago. So my guess is you're going to have to do something on the order of simulating the human brain with reasonable fidelity. Let's even assume the people in Zurich have the level of fidelity correct; they would still need all the computer hardware on Earth five times over, and 1000 nuclear power plants, to run a human-brain simulation.
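
A back-of-envelope version of that power claim. Every input below is an assumed round number for illustration (a guessed ~10^21 FLOPS simulation cost, ~2 GFLOPS/W for 2015-era supercomputers, ~1 GW per plant); the point is only that the answer lands in the hundreds-of-plants ballpark:

```python
# All three inputs are assumed round numbers, for illustration only.
SIM_FLOPS = 1e21          # guessed cost of a high-fidelity brain simulation
FLOPS_PER_WATT = 2e9      # ~2 GFLOPS/W, roughly 2015-era supercomputers
PLANT_WATTS = 1e9         # ~1 GW output per nuclear plant

watts_needed = SIM_FLOPS / FLOPS_PER_WATT   # 5e11 W
plants = watts_needed / PLANT_WATTS
print(plants)  # 500.0 -- the same ballpark as the 1000-plant figure
```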

Now, obviously hardware improvements happen, and so perhaps in 30 years you'll be correct, which isn't THAT long, but it's still out there. And then we still have to learn how to actually architect such a machine to be useful for the purpose; it certainly won't be a von Neumann-type machine! We can imagine possibilities, and we probably have a start on most of the pieces that would be needed, but I'd say it's pretty reasonable to believe it will be at least 50 years before some research lab has one in its basement.

Comment: Re:Need to understand it before it exists (Score 1) 413

Well, the world is always a surprising place, and I'm sure there will be surprises in the realm of complex data-processing systems along the way. I'm not sure the right word for them would be 'superintelligences'; who knows? The hazards and pitfalls are probably not exactly as we imagine them. Take the automobile: in the early 1900s, when cars were first mass-produced, nobody imagined what we would have today. They couldn't have imagined tens of thousands of traffic deaths every year on a vast worldwide road system. Nobody anticipated air pollution or AGW, etc.

So my guess is, whatever we imagine the worries to be today, the real worries will be at least somewhat different, maybe completely different (i.e., traffic fatalities were a problem in 1903, but AGW totally came out of left field).

Comment: Yup! (Score 1) 413

The alien invasion nutters drive me crazy too! Imagine a race so powerful it can fling itself across the unimaginably vast gulf between the stars. What is Earth to such beings? There's nothing here you can't mine vastly more cheaply in some asteroid belt or on some moon somewhere, or out in the Oort Cloud, etc. With that kind of power you have no need to "find a place to live" etc.

In all of these cases it is very woolly thinking combined with some atavistic fears. Truthfully, even the most clever humans rarely think rationally. People imagine that a Stephen Hawking is some paragon of super-rational logical thought, but it's just utterly not true. These guys are VERY VERY clever and imaginative, but they don't really have any extra-special insight. Neither do AI experts, apparently.

Comment: Re:Missing the key point (Score 1) 413

ROFLMAO! That's absurd, because you have NO IDEA how that information is used to construct a human brain. It requires a very long and complex process; the genetic material is just a 'key', and it isn't even all the information needed. There's also the rest of the structure of the cell, which has been in existence, uninterrupted, for 4 billion years. Think about it: from the first replicating cell, the same cytoplasm has been passed on in every single one of the trillions of generations following it. You can't even make a single cell without that 'stuff'.

In any case, let's grant, for the sake of argument, that the whole description of how to build an intelligence can be expressed in 800 MB. So what? How do you 'execute' this program? Even granting an unlimited hardware and power budget, how do you do it? We are nowhere on the path to knowing the answer to that question. Nor do we actually understand that 800 MB of data at all, really.
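
For reference, the 800 MB figure is just the raw information content of the genome: roughly 3.2 billion base pairs, with each base (A/C/G/T) carrying 2 bits:

```python
# Raw information content of the human genome, the source of "800 MB".
BASE_PAIRS = 3.2e9     # approximate length of the human genome
BITS_PER_BASE = 2      # four possible bases -> 2 bits each

megabytes = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
print(megabytes)  # 800.0
```

Which is exactly the sense in which the number is superficial: it measures the 'key', not the machinery that reads it.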

I'm not impressed by such a superficial number; it means very little.

Comment: How? (Score 1) 413

How do you build this 'idiot savant'? How do you give it 'motivation'? What do you tell it to work on? We are like stone-age man here: we don't even know about the atom, and you want us to work on nuclear power? It's absurd.

But what is even MORE absurd is the idea that, if we could create such a process, we wouldn't be in control of it. The very notion that an intelligence could somehow 'reach out of the computer' strikes me as the uttermost level of boogeyman-type nonsense.

Here's a good analogy. You worry about 'making a more deadly virus', but we don't even HAVE a virus AT ALL right now. We would have to invent the very concept of a virus first, and we have zero idea how to do that, beyond some incredibly vague hand-waving.

Comment: Missing the key point (Score 4, Insightful) 413

Everyone is missing the key thing here. The question asked was "if a machine superintelligence did emerge", which is like asking "if the LHC produced a black hole..." Nobody credible in AI believes we have the slightest clue how to build a general AI, let alone one that is 'superintelligent'. Since we lack even basic concepts about how intelligence actually works, we're like stone-age man worrying about the atomic bomb. Sure, if a superintelligent AI emerged we might be in trouble, but nobody is trying to make one, nobody knows how to make one, and nobody has hardware that there is any reason to believe is within several orders of magnitude of being able to run one.

So what all of these people are talking about is something hugely speculative that is utterly disconnected from the sort of 'machine intelligence' that we ARE working on. There are several lines of work that might fall into this category (there's really no precise definition), but none of them are even close to being about generalized intelligence. The closest might be multi-purpose machine-learning and reasoning systems like 'Watson', but if you actually look at their capabilities, they're about as intelligent as a flatworm, hardly anything to be concerned about. Nor do they contain any of the capabilities that living systems do. They don't have intention, and they don't form goals or pose problems for themselves. They don't have even a representation of the existence of their own minds; they literally cannot think or reason about themselves, because they don't even know they exist. And we are so far from knowing how to add that capability that we know nothing about how to do so. Zero. Nothing.

The final analysis is that what these people are being asked about is virtually a fantasy. They might as well be commenting on an alien invasion. This is something that probably won't ever come to pass at all, and if it does, it will be long past our time. It's fun to think about, but the alarmism is ridiculous. In fact, I don't see anything in the article that even implies the AI experts think it's LIKELY that a superintelligent AI will ever exist; it was simply posited as a given in the question.
