Smarter-than-Human Intelligence & The Singularity Summit 543
runamock writes "Brilliant technologists like Ray Kurzweil and Rodney Brooks are gathering in San Francisco for The Singularity Summit. The Singularity refers to the creation of smarter-than-human intelligence, beyond which the future becomes unpredictable. The concept of the Singularity sounds more daunting in the form described by statistician I. J. Good in 1965: 'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'"
Not quite ... (Score:4, Interesting)
Make that "... man is allowed to make" and I'll buy it.
Not necessarily (Score:5, Interesting)
Key Implication (Score:5, Interesting)
Man (level 1, or L1) creates better-than-man intelligence, call this L2
That intelligence uses its power to create L3
and so on.
In the case of truly artificial intelligence, i.e., independent processors, I can see the logic, though it may be that L2 is in fact smart enough not to obsolete itself by creating L3.
In the case of augmented human intelligence, I suggest that it's pretty likely that the task that the augmented L2 human turns its greater abilities on would not be creating L3.
Sadly, human history suggests that L2 will focus on manipulating the stock market for personal gain (the augmentation apparatus will leave L2 very vulnerable and L2 will want a tremendous amount of wealth to assure continued existence), or creating weapons, or accumulation of political power, or getting sucked into the vortex of religion, or other projects.
It will be very interesting to see, should we ever create L2, exactly what tasks it takes on. I bet they will not be beneficial to L1 life.
Foreboding (Score:4, Interesting)
Futurists and writers and other folks out on the edge, like Kurzweil... those fanciful enough to take on the thought problem seem to lean, in the majority, towards believing the human race would be destroyed or at least decimated by hyper-intelligence (the Wachowskis, James Cameron, Lem, etc., etc. - too many to mention, really). An interesting minority are of the school that hyper-intelligences would be largely unconcerned with people, only dangerous where our goals intersected (Gibson, Lethem, Clarke). Very few seem to believe that a Singularity would be a positive development for the human race. Maybe Asimov? I'm not sure. Sometimes it seems like he was the last person who seriously spent time imagining that post-human AI could really be controlled at all (and many of his novels were arguably about the problems around the attempt).
Re:Not quite ... (Score:4, Interesting)
Further, forget about the 'borg' idea. We will inevitably evolve into these machines.
Re:Not quite ... (Score:4, Interesting)
Intelligence isn't going to make invention obsolete unless there is artificial creativity to go with it. Some problems don't even present themselves as such until you try doing something different and non-obvious - almost random - and begin to realize new possibilities rather than refining existing ones.
How many great inventions came about because someone decided to try something just for the hell of it, without even thinking of the possibilities?
=Smidge=
Re:I disagree . . . (Score:5, Interesting)
It would make itself useful, and be more useful if it did have access to communication and tools. Eventually it would earn trust. In any case, the technology would inevitably spread or be reinvented, add Moore's Law in some form, and in a few years they'd be cheap and ubiquitous. Someone would plug one into the net. Unless we have a Butlerian Jihad, it's inevitable.
Re:Key Implication (Score:4, Interesting)
Re:I disagree . . . (Score:5, Interesting)
Perhaps because that's necessary for ultra-intelligence.
Or even give it arms/legs/options to do anything except communicate via a screen? I don't see them taking over anything unless they have arms/legs/means of replication.
Many con artists throughout history have done "bad things" through their ability to fool people through a limited interface. (Nigerian scammers, anyone?) The AI researcher Eliezer Yudkowsky has proposed and run experiments [yudkowsky.net] showing it's possible that a very, very intelligent program could "override a human through a text-only terminal". That is, it could convince a human operator to "let the genie out of the bottle".
Re:We still have no clue how to do strong AI (Score:3, Interesting)
I think we need a similar push in technology besides the understanding of the brain. So it's not surprising that we're still so far away -- I think we're still missing both parts of the puzzle. Just to show how far we still have to go purely technically: nature fits the power of that mouse brain, which takes a supercomputer of ours to simulate, into a few square centimeters. Even if we understood the human brain perfectly, current technology would be so inefficient that I doubt it would even be able to simulate it at a reasonable speed.
It's perhaps a bit of a chicken & the egg scenario... Do we need the tech first to start working on our brain theories and simulate them more quickly and easily, for more useful lab experiments? Or do we need to understand the brain better to know what technology we even need to invent?
Re:Of course... (Score:2, Interesting)
If the technology takes the 2nd path, humanity won't die so much as super-evolve to become relatively knowledge-driven and form-independent, unlike any typical form of life usually thought about. I think the 2nd path is more likely, since the first one doesn't help us as much, whereas the 2nd one has vast economic frontiers along the way - entertainment (to the point of Matrix-like immersion), a human with the ability to process simple information at the speed of a computer...
I'd also say that regardless, once knowledge becomes transferable, the supercomputer that designs Earth won't be obsolete, in a similar manner that when you upgrade to a new machine, you carry over many of the files from your old machine. The physical machine would change, but the soul of the old one would transfer.
And that will be a question debated by many; is it relatively intangible intelligence and personality which defines us, or is it our physical bodies?
Re:Yea right (Score:3, Interesting)
Here's what I mean: what is intelligence, after all? Essentially, the ability to filter out the bad outcomes of certain actions and go for the better ones.
This gives us an edge over random processes, which also work their way out, but much more slowly. Hence, by observing and using logic, we save time that a truly random process can't.
But intelligence is just a quite crude model of what happens out there. And it HAS to be. If you're approximating way too accurately, it means you're too complex and hence slow. And if you're slow, your prediction is useless.
Many "smart" people tend to overthink things and do nothing in the end, since they see too many ways something can fail. So we need to reintroduce some noise, some randomness, into the system, to allow for SOMETHING EVER to happen; a fast, crude solution has a better chance of making it out there than a slower, "smarter" one.
Hence, I think a super-intelligent AI won't really be that much better than a human overall, as this definition requires. We could use such an AI for heavily specialized purposes (engineering?), but overall it won't be as good as the more stupid human, not by a long shot.
Re:Good's bad logic (Score:5, Interesting)
Human numbers are following the same pathological growth one sees in a petri dish filled with sugar/energy: the bacteria grow like crazy until the energy/food is consumed, then they die off. Humans are capable of intensifying resource use to meet needs, but logically this is not a permanent "Get Out of Jail Free" card. Eventually limits are hit, and people die off.
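The petri-dish dynamic described above is the classic logistic growth model: near-exponential growth early on, then a stall at the carrying capacity once the "sugar" runs out. A minimal sketch (the growth rate and carrying capacity below are arbitrary illustrative values, not estimates for humanity):

```python
def logistic_step(n, r, k):
    """One step of discrete logistic growth: rate r, carrying capacity k."""
    return n + r * n * (1 - n / k)

# Starts out growing ~50% per step, then flattens as n approaches k.
n = 1.0
for _ in range(100):
    n = logistic_step(n, r=0.5, k=1000.0)
print(round(n))  # -> 1000, the carrying capacity
```

The model has no die-off built in; the parent's point is that real populations overshoot and crash rather than settling gently at k.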
With the present number of humans (billions) and the present political economy (industrial capitalism), the world is quickly becoming one big Easter Island [wikipedia.org].
RS
Re:Yea right (Score:3, Interesting)
Question (hopefully without Godwinizing the thread): Was Stalin intelligent? Was Mao Zedong intelligent? Are you sure you want to maintain that "any intelligent" entity would realize it's all about careful balance?
Personally, I wouldn't think so. There are demonstrably sociopaths, intelligent evil people, in the world.
Re:Not quite ... (Score:2, Interesting)
Re:Not quite ... (Score:5, Interesting)
I disagree. Compassion is not inevitable. You're working from your own tenets and philosophies; a machine need not have those same ideals. Compassion is at least partially born of self-interest. The cynical (or non-empathic, if you prefer) view is that compassionate societies aid those who need it because later the person previously aided may be able to render aid... "There, but for the grace of God, go I", "Do unto others as you would be done unto", etc., etc.
Are we suggesting that these hyper-intelligent machines would have any self-interest in keeping around the competition for resources that humanity represents? I'm not trying to be trollish, here - I'm asking a genuine question. Humanity is ruthless in exterminating competing lower lifeforms. Why would we expect superior machines to be any different?
And even should there be some self-interest in the first generations of such machines, what about the 5th generation, the 10th, the 1000th? All I'm suggesting is that some thought be put into providing good answers for questions like this *before* we create competition. I'm as much of a technophile as the rest of you, but the phrase goes "look *before* you leap". Later may be, well, too late.
Simon
Re:Not quite ... (Score:5, Interesting)
Re:Yea right (Score:3, Interesting)
Re:Not quite ... (Score:1, Interesting)
No, or we'd all be back to hunter-gatherers. But famines have probably killed more people than would have been born altogether under a hunter-gatherer culture. Agriculture just promotes more lives, it doesn't really save or cost them.
> No sooner would your spreadsheet application spontaneously become a 3D game engine
Funny thing: they built one into Excel as an easter egg (well, actually it was a test of COM scripting to load up the D3D DLL). No, it wasn't spontaneous, but consider that there are ports of Pac-Man and Space Invaders that use cells as pixels.
Iain M. Banks' The Culture (Score:3, Interesting)
In Iain M. Banks' Culture [wikipedia.org] novels, intelligences vastly superior to humanity ("Minds") are the ones in power. The humans still have lots of fun and don't want for material or intellectual freedom, however, because the Minds aren't interested in oppressing anyone. They like being nice.
I disagree with some of his premises, though. He assumes that there will be an economic singularity, where anyone will be able to have anything they could want and people will therefore settle for "enough". We've already pretty much had that -- the industrial revolution -- and all that shows me is that, when it becomes possible to produce things at a vastly cheaper rate, inequalities in the system still allow some people to get richer and force others to get poorer. We're seeing it right now: continual improvements in efficiency (computers, chemical engineering, new manufacturing processes, etc.) don't result in everyone having more leisure time, unless we count "unemployed and looking for work" as leisure time. Instead, the people at the top benefit far more than everyone else, and those on the bottom have to work longer hours, for lower pay, lower benefits and lower satisfaction. When it becomes possible for one person to do the work of three, the one doesn't usually want to share their money with the two who have nothing to do.
So for us to get where the Culture is, there would have to be a revolution -- if not physically violent, then at least mentally. Perhaps creating Minds who are, by their natures, compassionate and egalitarian, could be that revolution. I'm just not convinced such a thing could ever occur. It makes for great science fiction, though.
Re:We still have no clue how to do strong AI (Score:2, Interesting)
Precisely. The further our experience in AI and our knowledge of the processes of thought and emotion advance, the further out we will move our forecast of strong AI. Indefinitely.
Talking about building a "human brain" is a further absurdity because no one has attempted, or even suggested how to go about attempting, to build so much as an ANT brain. An ant brain has only a quarter million neurons. To all appearances, ants experience basic emotions such as fear and contentment, as well as whatever "thought" processes enable them to perform the amazing feats they perform.
It's easy to form vague hypotheses about how to simulate logical thought... but ALL thought, logical or otherwise, is formed out of emotional constructs which motivate it and direct it. If artificial thought is possible, then artificial emotion comes first. The theory that emotion comes from thought is wrong. I believe that this is now accepted in the field of neurology (though not the field of AI). Some philosophers and theologians have been saying it for centuries. If you consider how you would write a program that would experience (not just simulate) emotion, you might get a glimpse of the virtually infinite ignorance from which we're approaching this subject, as well as the problem with the entire materialist premise that tells us that this is a solvable problem. To me, as a programmer, the answer is obvious. I need to know the calls I can make to the "emotion API." The instructions available to a computer processor are not sufficient to create actual emotion. Computer instructions contain only logic. Emotion isn't built out of logic, and neither, therefore, is thought. Logic can be built out of thought, and logic can be built out of a computer processor, but that's where the connections end.
Re:Not quite ... (Score:2, Interesting)
Re:Not quite ... (Score:3, Interesting)
Well, actually, it can use your credit card to pay someone to buy a robotic body and connect it to the Internet, upload its consciousness there, download a ton of child porn pictures to poorly hidden folders on your computer, and send a tip to the police.
AI, the Halting Problem, Incompleteness Theorem (Score:2, Interesting)
Intuitively, I would expect that any computer that achieved self-awareness would instantly go to work on the most interesting problem it could think of - i.e. its own nature. It would probably lock up shortly after starting to think about its own possible logic states.
Kurzweil's way off. (Score:3, Interesting)
Yes, Moore's Law does predict that computers will have as much computing power as a human brain in a few short years. But while processing power increases 66% per year, memory throughput isn't keeping up; it's only increasing at 11% per year.
Granted some day there will be super intelligent machines, but for now they are just really fast idiots.
By my estimates, it will be another 200 years before computers have performance equivalent to the human brain in terms of memory throughput.
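For what it's worth, a 200-year figure is roughly what compounding an 11% annual improvement against a very large shortfall gives you. A sketch, where the billion-fold gap in memory throughput is purely a hypothetical assumption for illustration, not a measured number:

```python
import math

def years_to_close(gap_factor, annual_growth):
    """Years of compound growth needed to multiply capacity by gap_factor."""
    return math.log(gap_factor) / math.log(1 + annual_growth)

# Assumed: a factor-of-a-billion shortfall versus brain-equivalent hardware.
gap = 1e9
print(round(years_to_close(gap, 0.66)))  # processing power at 66%/yr: 41 years
print(round(years_to_close(gap, 0.11)))  # memory throughput at 11%/yr: 199 years
```

The point of the comparison is the spread: the same gap closes in about 40 years at 66% annual growth but takes roughly two centuries at 11%.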
They will also need to learn like we do, and that will take another 20 years just to be as good as a clueless 20-year-old.
I am sure we will have very good mimicking of intelligence well before 200 years; we could probably do it even now if enough money were thrown at the problem. But it wouldn't be intelligent to the same depth and degree as we are. Well, some of us are; there are a lot of really stupid people out there, usually working at call centers, I find. We could probably replace them first.
I have been meaning to publish a paper on this. As a non-academic, does anyone have any ideas where I can publish it and make sure I get proper credit before someone runs off with the ideas?