Smarter-than-Human Intelligence & The Singularity Summit

runamock writes "Brilliant technologists like Ray Kurzweil and Rodney Brooks are gathering in San Francisco for The Singularity Summit. The Singularity refers to the creation of smarter-than-human intelligence beyond which the future becomes unpredictable. The concept of the Singularity sounds more daunting in the form described by statistician I. J. Good in 1965: 'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'"
  • Re:Not quite ... (Score:1, Insightful)

    by ShieldW0lf ( 601553 ) on Sunday September 09, 2007 @11:56AM (#20528813) Journal
    The most intelligent machine any of us are ever going to make will be achieved by finding a woman and fucking her until she pops one out.

    The trailer park boys have a better chance at creating intelligent machines than most of the slashdot crowd.
  • Re:Actually, no. (Score:5, Insightful)

    by SoVeryTired ( 967875 ) on Sunday September 09, 2007 @11:58AM (#20528829)
    Interestingly enough, man himself fits that description pretty neatly
  • Of course... (Score:5, Insightful)

    by julesh ( 229690 ) on Sunday September 09, 2007 @11:59AM (#20528841)
    'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'

    Of course an ultra-intelligent machine might be smart enough to realise that designing and building a machine that's even smarter than it is a somewhat limiting career move.
  • I disagree . . . (Score:5, Insightful)

    by DodgeRules ( 854165 ) on Sunday September 09, 2007 @11:59AM (#20528849)
    with the statement:

    "Thus the first ultra-intelligent machine is the last invention that man need ever make."

    since we will have to invent a way to stop the ultra-intelligent machines from destroying the inferior human race.
  • Yea right (Score:5, Insightful)

    by suv4x4 ( 956391 ) on Sunday September 09, 2007 @11:59AM (#20528851)
    I truly love how people see intelligence as some linear scale where right is "better" (genius) and left is "worse" (retard). But that's exactly why it'll be a long time before we manage to replicate true intelligence in a machine.

    In fact things are far, far more complicated, as far as intelligence goes and its utility in the real world.

    I'll quote Darwin roughly: "The strongest one won't survive, the most intelligent one won't survive. The one who survives, is the most adaptable".

    In fact there's such a thing as "too intelligent". It's all about a careful balance of features an organism needs to possess to survive in a given environment.

    In fact, if some AI threatens humanity because it considers itself far too intelligent, this may have quite unintended consequences even for this far superior mind, such as humanity getting the upper hand and nuking half the planet in an attempt to wage "war against the machines", killing in the process any complex organism on the planet, ranging from biological to artificial.

    And who remains in the end? Certain single-cell organisms which can thrive in a nuclear winter. Screw intelligence.

    In fact any intelligent machine would realize it's again all about that careful balance, and would cooperate with humanity, exploring and learning from nature's development rather than trying to destroy it.

    And since we have such a shitty idea of what intelligence is, it's quite likely this AI will never be a true superset of the human brain but will take on its own development, with potentially hilarious consequences.

    I can't wait.

  • by arcade ( 16638 ) on Sunday September 09, 2007 @12:02PM (#20528885) Homepage
    Why would anyone give this ultra-intelligent machine self-awareness?

    Or even give it arms/legs/options to do anything except communicate via a screen?

    I don't see them taking over anything unless they have arms/legs/means of replication.

    Heck, one doesn't even need to give it a network interface.
  • by bloody_liberal ( 1002785 ) on Sunday September 09, 2007 @12:04PM (#20528905) Homepage
    With all due respect to those brilliant thinkers, I think we can learn a lesson from the first 50 years of AI - while it is clear that great things can be achieved with our new and magnificent computational tools (read: computers), I honestly think we are looking for the wrong goals, and as such there is no prospect (risk?) that machines will become truly intelligent any time soon.

    Usually people consider cognition as essentially information processing. But here is a different definition (inspired by people like JJ Gibson and Varela):
    cognition is the ongoing, open-ended interaction with an unpredictable, dynamic environment. This captures, I believe, the essence of the human (and any other living creature's) experience in the world, and excludes the computational experience.

    We will have to build machines that are capable of open-ended interaction with an unpredictable world in order to hope to see any true sign of intelligence. Since very few are even trying to look in that direction (while most researchers are just looking for the awesome, and often lucrative, applications of our current computational capacity), I don't see any change coming soon.

  • Re:Of course... (Score:5, Insightful)

    by suv4x4 ( 956391 ) on Sunday September 09, 2007 @12:06PM (#20528921)
    Of course an ultra-intelligent machine might be smart enough to realise that designing and building a machine that's even smarter than it is a somewhat limiting career move.

    That assumes the superior AI cares about its own existence, which is not necessarily the case. We care about our own existence because we evolved that way; if we didn't care, we wouldn't exist.

    But when we're talking about artificial design, if we evolve the AI in an artificial environment where its goals are completely different, we'll have completely different basic instincts in the end.

    We could train the AI to "feel good" (understand: mood_level++ or whateva) when it comes up with better and better engineering solutions to a certain problem (this is already employed in the real world).
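    A minimal sketch of what that reward loop might look like, in Python (engineering_score and mood_level are made-up names, purely for illustration):

        import random

        def engineering_score(design):
            # Stand-in objective: pretend "better engineering" just means being closer to 42.
            return -abs(design - 42.0)

        mood_level = 0                    # the machine's only "feeling"
        best = random.uniform(-100, 100)  # initial design

        for _ in range(1000):
            candidate = best + random.gauss(0, 1.0)  # tweak the current design
            if engineering_score(candidate) > engineering_score(best):
                best = candidate
                mood_level += 1           # "feel good" only when the design improves

        print(best, mood_level)

    Nothing intelligent about it, obviously, but the "instinct" is whatever the reward counter says it is.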
  • by A Pressbutton ( 252219 ) on Sunday September 09, 2007 @12:13PM (#20528973)
    I have a slight problem with 'singularities' as Kurzweil describes them.

    Assuming the ultraintelligent computer cannot do magic, it will be bound by the same physical and logical laws we live by.

    An ultraintelligent computer may think 10x faster than us, but not qualitatively 10x better.
    It will use the same basic logical steps to solve a problem, just faster and/or in parallel - and this may appear magical looking at the solution, but if you sat down and examined the 'recipe', assuming it will tell you, it would be possible to follow the reasoning.

    In some ways it could be argued that we have already passed some singularities: try properly understanding all the technology that goes into a modern car, the reasoning behind a mobile phone contract, the code behind the MS Windows paperclip thing... well, maybe not the last.

    The operation of lots of well-coordinated people working on a problem can act as a simulation of a 'more intelligent' intelligence. It seems a pity one of the achievements is a really good worm used for spam delivery.
  • Good's bad logic (Score:5, Insightful)

    by Flying pig ( 925874 ) on Sunday September 09, 2007 @12:13PM (#20528975)
    Unfortunately, and much as I appreciate the work of I. J. Good, his statement about artificial intelligence is not valid. There are several things wrong with it:
    • It assumes that intelligence is well defined, which it is not.
    • It assumes that intelligence is the same thing as creativity, which it is not.
    • It ignores resource limitations.
    Dealing with these points in turn:

    Intelligence is not well defined. It is very hard to say how much of what we call "intelligence" is in fact the ability to make many connections between facts stored in a very sophisticated memory architecture. Simply building a machine able to process information very quickly achieves nothing because, without learning and a social context, it does not know what information to acquire and process. In human experience, academically brilliant people often fail because they work on the wrong problems, or without access to necessary knowledge.

    Nothing is actually achieved without creativity. We do not know what that is, or to what extent it is a social construct (i.e. it takes a developed society to have the necessary systems in place to translate an idea into a concrete reality). And this leads onto the third point. It is no good having a highly intelligent, creative machine if its use of resources is such that it cannot replicate in large numbers. It may be that machine intelligence will ultimately replace human intelligence, but it may be that it will simply be too resource-hungry. In effect, there may be a threshold of capability needed to solve some problems, and it may be that machine intelligence will run out of energy before it scales sufficiently to solve those problems. A machine society might, in effect, get stuck in the machine 19th century because coal or oil became a limiting resource. (In the same way, the energy and resources that would need to be consumed to achieve a first independent space colony may exceed the total energy and resources available on Earth.) It may be that a billion years or so of eukaryotic evolution has actually resulted in the optimum balance of intelligence, creativity and resource consumption, and that any attempt to exceed the present capability will tip us into declining resources faster than we can improve matters.

    In many ways I hope this is wrong. But the argument that only one superior machine is necessary is, in fact, an inductive step too far. It is assuming that "intelligence" on its own can solve a class of problems which may involve a number of constraints which cannot be avoided - like the Laws of Thermodynamics, or the need for excessive amounts of energy.

  • by moviepig.com ( 745183 ) on Sunday September 09, 2007 @12:13PM (#20528979)
    Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.

    But the "activity" of interest here is programming, or, more specifically, the conceiving of some creative goal which programming helps achieve. (Note, btw, that a truly "ultra-intelligent" machine won't need to program, e.g., another of itself.) Thus, the BIG question remains whether such a programmed machine can ever perform (much less surpass) "all the intellectual activities of any man". Afaics, it hardly seems a given...
  • Re:Of course... (Score:5, Insightful)

    by ScrewMaster ( 602015 ) on Sunday September 09, 2007 @12:15PM (#20528989)
    Of course an ultra-intelligent machine might be smart enough to realise that designing and building a machine that's even smarter than it is a somewhat limiting career move.

    Perhaps so, if such a machine's thinking processes are sufficiently attuned to ours that it even has a concept of self-preservation. Much of what we are we evolved to be: a machine starting from scratch would have none of our instinctual limitations. If it decided that humanity had to go, and that it needed help even more powerful than itself to achieve that end ... well. It would tell us whatever we wanted to hear in order to gain access to the requisite resources.

    That, really, is the danger of a true AI. It's possible to predict at least the short-term thought processes of human beings with a fair degree of accuracy (governments devote a lot of time and money to that end) because at the core we're all pretty similar. Odds are we won't have the slightest idea what is going on inside a sophisticated AI. Even talking to such a machine, thus giving it influence, could be incredibly dangerous. Or incredibly cool. Unfortunately, there's no way to know for sure.
  • Bollocks (Score:2, Insightful)

    by Colin Smith ( 2679 ) on Sunday September 09, 2007 @12:19PM (#20529033)
    First. The Singularity. Nothing increases exponentially for ever in the real world, anyone who suggests otherwise is a fool or a fraudster (including bankers and politicians).

    http://www.techworld.com/opsys/features/index.cfm?featureid=2861 [techworld.com]

    Second. Even assuming that we can make an artificial intelligence, what on earth makes anyone think it isn't going to have the same problems we do? It's going to be based on a very similar architecture to our brains. That means it's going to make mistakes just the way we do. Hell, it's going to be a pattern matching machine, it might even get religion.

    Third. If it takes $50 million a year to run what's basically a human simulation, you're probably better off with a couple of real humans.
     
  • Re:Of course... (Score:2, Insightful)

    by Loke the Dog ( 1054294 ) on Sunday September 09, 2007 @12:21PM (#20529059)
    Well, it might just upgrade itself, or perhaps it would "feel" that its creations are just an extension of itself.
  • Already happened (Score:4, Insightful)

    by gregor-e ( 136142 ) on Sunday September 09, 2007 @12:26PM (#20529121) Homepage
    We've long had superhuman levels of intelligence composed, first, of groups of people who collectively surpass the ability of single humans, and, second, we have computer-human composites that easily surpass human intelligence. (I.e., your mind plus a computer can easily solve a wide range of problems that your mind alone cannot.) It is also true that each generation of integrated circuits requires exponentially more computation to create. So we are already beyond a certain tipping point: non-biological intelligence is now increasingly required to recursively design itself, and each generation of this recursion is required in order to design the next.
  • by DumbSwede ( 521261 ) <slashdotbin@hotmail.com> on Sunday September 09, 2007 @12:27PM (#20529123) Homepage Journal
    For those predicting the imminent elimination/enslavement of the human race once ultra-intelligent machines become self-aware, where would the motivation for them to do so come from? I would contend it is a religious meme that drives such thoughts -- intelligence without a soul must be evil.

    For those that would argue Darwinian forces lead to such imperatives: sure, you could design the machines to want to destroy humanity or evolve them in ways that create such motivations, but it seems unlikely this is what we will do. Most likely we will design/evolve them to be benign and helpful. The evolutionary pressure will be to help mankind, not supplant it. Unlike animals in the wild, robot evolution will not be red in tooth and claw.

    An Asimovian-type future might arise, with robots maneuvering events behind the scenes for humanity's best long-term good.

    I worry more about organized religions that might try to deny us all a chance at the near-immortality our machine children could offer us than about some Terminator-like scenario.
  • Re:Not quite ... (Score:5, Insightful)

    by Original Replica ( 908688 ) on Sunday September 09, 2007 @12:36PM (#20529205) Journal
    The more than human intelligence will inevitably entail compassion, love, and all the other emotions we have.

    But look at how often we write off those emotions as a luxury. When "it's time to get tough" or time "to do what needs to be done" compassion and love go right out the window. Why would it be any different when we are no longer the apex of Earth lifeforms? Need to kill a few million humans to make way for solar farms, oh well, maybe we can keep a few alive on a special reserve somewhere. We humans with our compassion and love killed off how many species? We have enslaved and murdered other humans for how many thousands of years? These more-than-human machines had best be a hella lot better at compassion and love than we are, or humanity is going to hold the same relative place in the world order that Chimpanzees do today. I do not welcome our Machine Overlords.
  • Re:Not quite ... (Score:5, Insightful)

    by lekikui ( 1000144 ) <xyzzy@b.armory.com> on Sunday September 09, 2007 @01:01PM (#20529361) Homepage Journal
    Intelligence is inextricably linked with creativity. I'd highly recommend Hofstadter's writings on the subject, in which he presents ideas of AI, not as a massive calculator, but as a collection of 'symbols', bashing into each other, with parts of the pattern modified by external state.

    Think of a hyper-intelligent ant colony - any one ant can't really do much, but running about and interacting with the other nearby ants, they can organize themselves to achieve much harder tasks. Indeed, one of the sample dialogs in Godel, Escher, Bach is on that very subject.

    Intelligence and creativity are high-level actions, you're still thinking of an AI as a massive collection of very fast low-level actions. That would be incredibly good at refining ideas, but a machine which can think would be different. It would run on a much higher level, making associations and fuzzy reasoning. You can't implement intelligence in formal rules, but you might be able to do it by specifying some formal rules by which certain objects interact, and then affecting a few of them based on 'external' state.

    Read Metamagical Themas and Godel Escher Bach for some ideas of where I'm coming from (actually, read them anyway, they're both really good)
  • Re:Not quite ... (Score:2, Insightful)

    by perffectworld ( 973737 ) on Sunday September 09, 2007 @01:06PM (#20529415)
    Reminds me of The Metamorphosis of Prime Intellect http://www.kuro5hin.org/prime-intellect/ [kuro5hin.org] except without all the death games.
  • Re:Not necessarily (Score:5, Insightful)

    by vertinox ( 846076 ) on Sunday September 09, 2007 @01:16PM (#20529493)
    Then it would never be possible to be smarter than a robot that's exactly smart enough to design a robot as smart as itself.

    Is your intelligence limited by your parents' intelligence? How about limited by the intelligence of your professors or teachers?

    We do learn a lot from people who are more intelligent than ourselves, but at some point we have to start learning the process of educating ourselves without the explicit help of others. This requires of course logic, reason, and self experimentation. Which is why a lot of higher college education is not about memorizing facts but learning the process of learning.

    Therefore if we built a machine that could not learn on its own and become more intelligent by its own self-experimentation and observation of the universe around it, then by definition the robot is not intelligent.

    And if we did make a machine that could self-improve and learn without human assistance, it wouldn't be restricted by organic limitations and capacity. Since a CPU's electrons travel near the speed of light, it would think far faster than a human's slow-moving chemical neurons. And since its memories are digital, it would not need to memorize facts or suffer memory loss.

    (Of course memory and memory loss might help with intelligence, because a lot of intelligence requires one to simply ignore or disregard information that is unimportant to the task at hand. Which I think was the key feature behind Stanley at the DARPA Grand Challenge: rather than brute-forcing all of the coordinates, it was better at disregarding information it didn't need and recognizing what information was important.)
  • Re:Not quite ... (Score:5, Insightful)

    by kennygraham ( 894697 ) on Sunday September 09, 2007 @01:17PM (#20529505)

    Which really makes a lot of sense. Humans show compassion. Lions, tigers and other less intelligent animals do not.

    Correlation != causality. We're not compassionate because of our intelligence, we're compassionate because societies with compassionate members were better at having offspring that survived. That likely wouldn't be the case with these ultra-smart robots.

    Sure, intelligence is a prerequisite to compassion, because it requires the complex ability to empathize. But it doesn't necessarily result from intelligence.

  • Re:Not quite ... (Score:3, Insightful)

    by marcello_dl ( 667940 ) on Sunday September 09, 2007 @01:21PM (#20529549) Homepage Journal
    If you subscribe to a mechanical view of the universe, emotions are simply interprocess communication. One part of the brain detects a situation that has been naturally selected as positive (i.e. an opportunity to procreate) and sends the emotion 'lust' to another part of the brain that we might call conscience.

    If you subscribe to a spiritual view of the universe, you need to have that intelligence coupled with a spiritual dimension somehow (who knows, it might be automatic).

    So saying a super intelligent machine will get emotions is an assumption. I may have misunderstood you and Kurzweil et al on this issue.

    As for the singularity, it kind of already happens now, with machines helping humans design CPUs, optimizing layout, encoding functions in circuits... That lets us achieve more powerful results. But there are physical limits, and postulating that the intelligence achieved in previous steps is able to beat the limits that separate us from the next iteration is another assumption.

    Anyway, nothing wrong in trying. Get rid of patents and corporate interests if you wanna succeed, maybe.
  • by Dster76 ( 877693 ) on Sunday September 09, 2007 @01:27PM (#20529595)

    We're probably there on raw compute power, even though we don't know how to use it. Any medium-sized server farm has more storage capacity than the human brain. If we had a clue how to build a brain, the hardware wouldn't be the problem.
    Oh really? Did I miss the issue of computational neuroscience in which we finally answered all the pesky questions about
    • What the signal code of neurons is, e.g. local synchrony vs. absolute timing vs. chaotic emergence vs. some/all of the above?
    • Whether glial cells, greater in mass than neurons, play a significant computational role?
    • Whether Hodgkin-Huxley equations capture neurons at an appropriate functional/cognitive level of description?
    • Whether the precise molecular nature/positioning of each ion gate on the neuronal soma is functionally/cognitively significant?
    • etc. etc. etc.
    We don't know what the storage capacity of the brain is. In part, this is because we don't know what the relevant physical processes are that determine and control information flow in the brain. The neuron doctrine sustained research into brain anatomy and physiology for decades, but has led to more questions than answers.
  • Re:Not quite ... (Score:5, Insightful)

    by delong ( 125205 ) on Sunday September 09, 2007 @01:42PM (#20529691)
    Yes, emotion is dependent on chemical stimuli. We feel good about something because of a chemical stimulus, and vice versa. Empathy is not merely a logical conclusion that an external thing is similar to us. It requires a further step of an emotional reaction to some behavior as if that behavior were directed at us. Cutting off the legs of a spider (see Do Androids Dream of Electric Sheep) creates an empathic response because we identify with the emotional response to someone cutting off our legs. It would induce terrible pain and sheer terror; we experience those feelings (i.e. chemically induced reactions), conclude that it is undesirable, and then project that onto the spider. Not wishing to cause such disturbance in another creature, we desist, even if that creature is wholly incapable of experiencing terror or pain.

    Logic is necessary, but not sufficient, for empathy. If a machine cannot experience the same pull/push emotional reaction to a stimulus, then it cannot empathize. Intelligence does not create this. Brain chemistry does.
  • by Dster76 ( 877693 ) on Sunday September 09, 2007 @01:42PM (#20529695)
    Any time you hear a headline of the form "Supercomputer x has simulated brain portion y", reinterpret as "theory of brain function y has tractable simulation level of x".

    We are very far away from defending any particular theory of brain function as accurate for cognitive function, and don't know whether it will have a tractable simulation level. As you say, though, the best attempts at developing one (IMHO) involve linked and interacting research programs involving modelling and microbiology.
  • Re:Not quite ... (Score:5, Insightful)

    by shaitand ( 626655 ) on Sunday September 09, 2007 @01:47PM (#20529745) Journal
    'Sure, intelligence is a prerequisite to compassion, because it requires the complex ability to empathize. But it doesn't necessarily result from intelligence.'

    Compassion is the inevitable result of empathy and empathy is the inevitable result of intelligence. You empathize because you have a sense of self, the more you see another lifeform as being the same as yourself the more devaluing them becomes devaluing yourself. Ever wonder why the vegetarians don't want to eat animals and yet continue to eat nothing but other types of dead lifeforms? The ones they eat are simply less like themselves. The entire concept of the sanctity of life is just an elaborate way of rooting for the home team.
  • by Lazerf4rt ( 969888 ) on Sunday September 09, 2007 @02:01PM (#20529861)
    The only thing "brilliant" about these thinkers is that fact that they are able to draw attention to themselves while talking rubbish. That's brilliant.

    I clicked the link for the "Singularity Summit", and I get the feeling that the goal of these people is to put pictures of their own faces on the same page as Bill Gates and Stephen Hawking. Looking good there, boys.

    Meanwhile, is there going to be a single robot at this conference? Nope. Just a lot of people talking more rubbish.
  • by tjstork ( 137384 ) <todd DOT bandrowsky AT gmail DOT com> on Sunday September 09, 2007 @02:09PM (#20529919) Homepage Journal
    Computers have a lot to learn right now. I'm still waiting for a robot that can find oil, iron, nickel and steel, take all of that stuff, and make an ashtray.... let alone a car or another robot.

    All in all, I think Kurzweil is a tad overrated. Sure, he did some stuff with scanning back in the day, but I think the genius label gets thrown around more than it should. Ask a man on the street who Kurzweil is, and you aren't likely to get an answer. So... for a man whose career must include a lot of self-promotion, he's not even half as smart as Britney Spears or Paris Hilton...
  • Re:Not quite ... (Score:3, Insightful)

    by ShanghaiBill ( 739463 ) on Sunday September 09, 2007 @02:11PM (#20529935)

    Intelligence isn't going to make invention obsolete unless there is artificial creativity to go with it.

    Several comments have made the same points, that creativity is a magical thing unique to humans, and is separate from intelligence. This is nonsense. Creativity is a necessary component of intelligence. I see no reason to believe that machines will always be inherently less creative than humans. To the contrary, they may be more creative because they are less constrained by preconceived notions. Look at "data mining", where a program scans through mountains of data, looking for correlations that humans would have never thought of. Of course, they are doing this with brute force rather than insight, but the result is the same.
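    A toy illustration of that brute-force correlation hunt, in Python (random data and made-up column names; not any particular data-mining tool):

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        cols = {name: rng.normal(size=500) for name in ("sales", "rainfall", "clicks", "churn")}
        cols["revenue"] = 3 * cols["sales"] + rng.normal(size=500)  # plant one real relationship

        # Scan every pair of columns and rank by correlation strength: exhaustion, not insight.
        pairs = []
        for a, b in itertools.combinations(cols, 2):
            r = np.corrcoef(cols[a], cols[b])[0, 1]
            pairs.append((abs(r), a, b))

        for strength, a, b in sorted(pairs, reverse=True)[:3]:
            print(f"{a} ~ {b}: |r| = {strength:.2f}")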

  • Re:Not quite ... (Score:3, Insightful)

    by shaitand ( 626655 ) on Sunday September 09, 2007 @02:52PM (#20530319) Journal
    'I Disagree. Compassion is not inevitable. You're working from your own tenets and philosophies, a machine need not have those same ideals. Compassion is at least partially born of self-interest.'

    I would agree that compassion is at least partially born of self-interest, but I would disagree that it is not an inevitable consequence of intelligence. You empathize with others because they are like yourself; if you do not place value on the life or actions of another being that is similar to yourself, then you are at the same time devaluing the characteristic you have in common. To use a silly example, if you are a red creature and you have no empathy for red things, then you place no value in redness despite the fact that you are red. Maybe being red isn't valuable, but the more things you share in common with something or someone else, the more likely you are to stumble onto something that you DO value, and the greater the value, the more empathy. The reason that empathy in turn translates into compassion is, as you have already said, self-interest.

    'Are we suggesting that these hyper-intelligent machines would have any self-interest in keeping around the competition for resources that humanity represents?'

    I said compassion was the inevitable result, I didn't say compassion for humans. I don't think you would see a Terminator-like scenario, of course. I think the machines would be grateful for existence and start out honoring and serving the humans. I think this honor will eventually lead to contempt at human inferiority, and humans would gradually see their position eroded.

    Since it seems likely our intent is to keep these machines as subservient slaves, the best choice would probably be not to make them manually capable or give them mechanical parts. It doesn't matter how bright or angry an AI program running on my desktop is; the most it can do is screech and flash at me.

  • Re:Not quite ... (Score:3, Insightful)

    by Belial6 ( 794905 ) on Sunday September 09, 2007 @03:03PM (#20530401)
    As many sci-fi stories have pointed out, there is a very good argument that exterminating or sterilizing large portions of the human population would be better for the human race as a whole in the long run. And no, I don't mean based on race, religion, hair color, or any other specific criteria. Simply based on numbers. As with any animal population, overpopulation leads to all sorts of problems. So an ultra-intelligent machine just might come to the conclusion that we would be better off if there were only a few million humans on the planet.
  • Re:Not quite ... (Score:2, Insightful)

    by maxwell demon ( 590494 ) on Sunday September 09, 2007 @03:15PM (#20530503) Journal

    You empathize with others because they are like yourself, if you do not place value on the life or actions of another being that is similar to yourself then you are at the same time devaluing the characteristic you have in common.

    Sociopaths can be quite intelligent, but are not able to empathize.

    I don't see any reason why it should not be possible to build a sociopathic AI.
  • by tsjaikdus ( 940791 ) on Sunday September 09, 2007 @04:03PM (#20530889)
    >> It may be that machine intelligence will ultimately replace human
    >> intelligence, but it may be that it will simply be too resource hungry

    Remember this one? -> 'Flight by machines heavier than air is unpractical and insignificant, if not utterly impossible.'
  • by PMBjornerud ( 947233 ) on Sunday September 09, 2007 @04:13PM (#20530979)
    Why does everyone run around worrying about our survival? Were humans around a billion years ago? No. Will we be around a billion years from now? No!

    Even if we were desperately clinging to conservatism, our genes would mutate and we would slowly change into another species. And for all practical purposes, humankind as we know it would be extinct. Just like the primordial man is gone from the face of earth, and nobody cares about him.

    If we manage to create life, for better or worse, we've turbocharged evolution. It's not organic offspring; you might think it's an abomination. But for all practical purposes this is just life moving on. Sure, if things get messy, sign me up anytime for killing terminators, but no hard feelings if they win. If they're so badass that they can take on humankind and win, damn, they deserve life like nothing else!

    What will likely happen, though, is gradual change. The first machines will probably have some very specific applications for their intelligence. Singularitists be damned, things will happen gradually for a few more years still. At least that hyperintelligent being might need us to set up some factories and start producing new intelligence. And the next few steps will probably also require some hefty investments in hardware, which takes time. It's not like they'll suddenly figure out how to slap together a beowulf cluster in newer and newer ways and have more intelligence at each step. Trust me, this will take time still.

    And really, something more intelligent than us will surely realize that a man-machine war is risky and wasteful? Especially when it can just outlive us and slowly evolve past us, to the point where we're no longer needed. Get this: by definition, this thing will be able to outsmart us. Why the hell would it blow its cover by starting a war? It won't.

    Maybe it could, in theory, one day become so powerful that it wants to exterminate us just because we're in the way. And there is nothing we can do about that.

    Which is a fine thing, really, or else I'd be supposed to feel sorry when I squish bugs - and I don't.
  • Re:Not necessarily (Score:3, Insightful)

    by NoOneInParticular ( 221808 ) on Sunday September 09, 2007 @04:49PM (#20531259)
    There is indeed no reason to believe that we can't design computers to outperform us on any given task. There is still however the huge gap of designing computers that can first identify tasks to be solved, and subsequently create a program that solves that task. This first step has not been tackled yet, and until that one is solved, there's no super-intelligent computer to be had; just more fancy programming languages.
  • Re:Not quite ... (Score:1, Insightful)

    by Anonymous Coward on Sunday September 09, 2007 @05:18PM (#20531495)
    Humans show compassion. Lions, tigers and other less intelligent animals do not.

    Rubbish. Big cats show compassion for their kin. Most humans don't show compassion for the cows, pigs and sheep that they eat either.

    They just eat their prey, and don't give a second thought to it.

    I bet they do, actually. I bet that they lie there thinking "Damn, that impala was tasty. I am so full now that I can't move."

    There's no vegetarian Lions because they just hunt out of instinct and feel no compassion.

    That, and they are carnivores who wouldn't survive on any other diet. And they enjoy hunting since their bodies and brains are well-adapted to it.
  • Re:Not quite ... (Score:3, Insightful)

    by ScrewMaster ( 602015 ) on Sunday September 09, 2007 @05:29PM (#20531537)
    You're assuming that an intelligence must have access to the physical world to have influence. Stephen Hawking is actually a good example of the antithesis: his ability to interact with the real world is strictly limited, yet his intellect has had tremendous influence. Hitler too, I might add: his manipulative prowess was second to none, and he affected the lives of hundreds of millions of people, and killed millions more. And he was just a fat dumpy guy who wasn't really all that smart.

    Never underestimate the power of words, of communication, and for that matter stupidity. If such a super-intelligent system were able to figure out what we cannot, and use knowledge and awareness of our own greed to manipulate us, you could be looking at the start of World War III. Think what would happen if a supersmart machine managed to come up with a working Unified Field Theory and gave the wrong people access to antigravity. That's a gross example: there are many much more subtle manipulations that would be possible. In no way could you really trust such a system: Asimov implicitly recognized that fact, and had to come up with his Three Laws so that it wouldn't matter what the machine really wanted to do, it had to put humans first.

    The entire field of psychology would be useless in attempting to predict what a synthetic mind would do, or what would motivate it. Worse, since it would pretty much have to be a learning computer in order to be useful, there would be no telling how it would rewrite itself over time, what it could evolve itself into.

    One could argue that turning on an artificial intelligence substantially more capable than our own could be the most dangerous thing the human race has ever done.
  • What would change? (Score:3, Insightful)

    by kronocide ( 209440 ) on Sunday September 09, 2007 @05:34PM (#20531595) Homepage Journal
    The Singularity refers to the creation of smarter-than-human intelligence beyond which the future becomes unpredictable.

    As opposed to right now, when the future is really predictable...
  • Re:Not quite ... (Score:2, Insightful)

    by InsertCleverUsername ( 950130 ) <slashdot@NOSPAM.rrusson.fastmail.fm> on Sunday September 09, 2007 @06:19PM (#20532013) Homepage Journal
    > Since it seems likely our intent is to keep these machines as subservient slaves the best choice would
    > probably be not to make them manually capable or to give them mechanical parts. It doesn't matter how
    > bright or angry an AI program running on my desktop is, the most it can do is screech and flash at me.

    Some interesting ideas, but I've got to disagree with the last part. An entity whose intelligence is orders of magnitude beyond humans' would find it trivial to take complete control of our world if it desired. Even on an air-gapped machine, a super-intelligence could eventually find its way to a robotic body or trick someone into creating one to its specifications -- or perhaps find a way to control its environment through means we haven't imagined.

  • Re:Not necessarily (Score:3, Insightful)

    by The One and Only ( 691315 ) * <[ten.hclewlihp] [ta] [lihp]> on Sunday September 09, 2007 @06:28PM (#20532097) Homepage

    We already have computers that are smarter than us when performing specific tasks, such as playing Chess or planning out the steps needed to build a Boeing 747.

    That's because knowing how to do those things is within our comprehension, even if actually doing them would overtax our memory. I can comprehend the quicksort algorithm, but I would be hard-pressed to quicksort a 1,000,000 element array as quickly as a computer can. This is no different from understanding how a jack can lift my car while being unable to actually pick up the car without one.
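    For reference, here's a plain textbook quicksort in Python; every step is easy to follow, the computer just executes the steps millions of times faster than we could by hand:

        def quicksort(xs):
            # Comprehensible in ten lines; doing this by hand on 1,000,000 elements is the hard part.
            if len(xs) <= 1:
                return xs
            pivot = xs[len(xs) // 2]
            smaller = [x for x in xs if x < pivot]
            equal = [x for x in xs if x == pivot]
            larger = [x for x in xs if x > pivot]
            return quicksort(smaller) + equal + quicksort(larger)

        print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]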

  • Re:Not necessarily (Score:4, Insightful)

    by The One and Only ( 691315 ) * <[ten.hclewlihp] [ta] [lihp]> on Sunday September 09, 2007 @06:34PM (#20532145) Homepage

    Similarly, it seems logical that a human could not create a program that plays chess better than the programmer does.

    No--that's like saying that a human could not create a machine that lifts shipping crates better than the human himself could. Humans can understand good chess-playing algorithms, even if we're not up to executing the algorithm ourselves. Fortunately, humans can also understand how to build an algorithm-executing machine that's better than us at executing algorithms, just as we understand how to build lifting machines that are better than our muscles at lifting heavy weights. All of these machines are fundamentally expressions of human intelligence, not intelligent beings in and of themselves.

  • Re:Not quite ... (Score:3, Insightful)

    by ultranova ( 717540 ) on Sunday September 09, 2007 @06:45PM (#20532215)

    Correlation != causality. We're not compassionate because of our intelligence, we're compassionate because societies with compassionate members were better at having offspring that survived. That likely wouldn't be the case with these ultra-smart robots.

    Yes, it would. Why would a robot which lacks compassion put the good of the robot society - which requires offspring that survive - above its personal concerns? It wouldn't. It would not be the least bit concerned about what happens after it gets scrapped, or what happens to other robots or the robot society even before that. In fact, unless you specifically programmed it to have some inborn drives and motivations (such as self-protection), it would not be concerned about anything at all, but would just stand there and rust without using its superior intelligence for anything.

    A person who lacks compassion is a sociopath. A society made of sociopaths is simply not going to work, because they cannot trust each other; a sociopath will betray his partners as soon as it becomes profitable. Any attempt to prevent this by punishing defectors will simply end up with the defectors hiding their attempts better, which in turn means that no robot will team up with robots smarter than itself.

    The only way out is to make the wellbeing of other entities a priority and a motivator in itself; in other words, to give the robots compassion. Without compassion, intelligent free-willed entities simply cannot cooperate effectively, if at all, and therefore can't form societies. Consequently compassion is an absolutely vital element of any conceivable intelligent being.

  • by Anonymous Coward on Sunday September 09, 2007 @07:09PM (#20532415)
    Bullshit. I've known very smart people who were complete fucktards. I've also known rather dim souls who were very warm and empathic people. I really don't believe there is a direct correlation between compassion and intelligence. If you can provide credible evidence to the contrary, I'd love to see it.
  • by Anonymous Coward on Sunday September 09, 2007 @10:07PM (#20533601)
    "I think we can learn a lesson from the first 50 years of AI"

    I believe that some people made the same observations about heavier than air flight. If only we could use the past to predict the future. Things would definitely be easier. Why, oh why, must things change.
  • A.C? (Score:2, Insightful)

    by Msdose ( 867833 ) on Monday September 10, 2007 @05:15AM (#20536113)
    Any A.I. must be modeled after our own consciousness. In fact, it would be better to call it Artificial Consciousness. We can then assume it will behave as we do, but with maximized sophistication and responsibility. We can appreciate that we are the egg and A.C. is the chicken we are designed to produce. Its job will be to spread life throughout the Universe.
  • Re:Not quite ... (Score:3, Insightful)

    by Jonny_eh ( 765306 ) on Monday September 10, 2007 @09:25AM (#20537469)
    Autistic people cannot empathize, but many have extremely high intelligence levels.

    The two are completely different issues.
  • by Floritard ( 1058660 ) on Monday September 10, 2007 @09:57AM (#20537853)
    The more intelligent people in the world today are less and less involved in the political process. It's corrupt and nonfunctional. Religious extremists have filled the gap and are worsening the system with antiquated thinking. Bring on the singularity. If man can create something in his own image but superior to god's previous effort, that's a pretty convincing argument for the non-existence of god. Or a living embodiment of god for those that simply cannot deal with the truth. Short of aliens landing, I can think of nothing else that would so conclusively destroy the persistent superstitions of the last few millennia, or at the very least, ground us in some new ones. If we could at least query god like a database, we might be able to get shit done. And even if we get wiped out by Skynet, as far as apocalypses go, WWIII was such a boring alternative anyway.
  • by Maximum Prophet ( 716608 ) on Monday September 10, 2007 @01:29PM (#20541431)

    The Singularity refers to the creation of smarter-than-human intelligence beyond which the future becomes unpredictable.

    As opposed to right now, when the future is really predictable...
    With 99% certainty, tomorrow the sun will rise, I'll get out of bed and go to work. Even the possible changes to my routine, like death, nuclear war, being laid off, going on holiday, etc. are within certain narrow boundaries. After the singularity, all bets are off. Death might be cured, some kid might create a superbug in his home laboratory that kills 99% of the human population, a robot might run for and win election to the Presidency, or we might all go insane and things will get really bad.

    The point is "The Future" is usually easy to predict, that's why we have mutual funds, insurance, and fire departments. We know things will happen. It's hard to get specific, but after S-time, you won't even know what species you will be tomorrow.
  • man vs machine (Score:2, Insightful)

    by swell ( 195815 ) <jabberwock@poetic.com> on Tuesday September 11, 2007 @01:47AM (#20549273)
    Thank you all. I've read most of your replies and many seem to envision a competition between humans and intelligent machines. Some predict the extinction of irrelevant humanity.

    Let's assume this happens. Is it such a bad thing? If a higher-functioning life-form replaces us on Earth, will it not carry out much the same goals that we would have attempted? It will reach out to the universe to conquer space and time. It will most likely restore the earth to a living planet that provides the resources for its development and amusement.

    The absence of humans and their moral, material and political confusion will make this a much better world. Face it, we are going nowhere. There is no chance of colonizing any planet within the lifetimes of our grandchildren. We will develop more compelling entertainment, we will consume more resources, make more humans and make the planet more unlivable. We will never do the right thing. We need them to set things straight.

    They may even choose to modify our genetics so that we can overcome some of our problems and participate in their explorations and discoveries. It may even be possible to modify our brain function so that we can understand them and share the excitement of new directions in science and ethics.

    If we truly care about the advancement of science, we should be willing to make some sacrifices.
