Sci-Fi

Smarter-than-Human Intelligence & The Singularity Summit 543

runamock writes "Brilliant technologists like Ray Kurzweil and Rodney Brooks are gathering in San Francisco for The Singularity Summit. The Singularity refers to the creation of smarter-than-human intelligence beyond which the future becomes unpredictable. The concept of the Singularity sounds more daunting in the form described by statistician I. J. Good in 1965: 'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'"
This discussion has been archived. No new comments can be posted.

  • Not quite ... (Score:4, Interesting)

    by ScrewMaster ( 602015 ) on Sunday September 09, 2007 @11:50AM (#20528765)
    "Thus the first ultra-intelligent machine is the last invention that man need ever make."

    Make that "... man is allowed to make" and I'll buy it.
  • Not necessarily (Score:5, Interesting)

    by Anonymous Coward on Sunday September 09, 2007 @11:58AM (#20528831)
    What if the intelligence of the smartest thing you can design doesn't grow as fast as your own intelligence (i.e. the slope of the graph {x=designer's intelligence, y=intelligence of its best possible design} is less than 1)? Then the recursion converges: successive generations approach, but never exceed, the fixed point where a designer is exactly smart enough to design something just as smart as itself.
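    Here's a toy numeric sketch of what I mean (the linear model and the numbers are made up, just to show the two regimes):

        # design(x) = a*x + b: the intelligence of the best machine a designer
        # of intelligence x can build.  Purely hypothetical linear model.
        def run(a, b, x0=1.0, generations=30):
            x = x0
            for _ in range(generations):
                x = a * x + b   # each generation designs the next
            return x

        print(run(a=0.9, b=1.0))   # slope < 1: converges toward 1.0 / (1 - 0.9) = 10
        print(run(a=1.1, b=1.0))   # slope > 1: grows without bound, the "explosion"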
  • Key Implication (Score:5, Interesting)

    by TrailerTrash ( 91309 ) * on Sunday September 09, 2007 @12:02PM (#20528883)
    If you follow TFA, and dig deeper, you find a discussion of the singularity that goes like this:

    Man (level 1, or L1) creates better-than-man intelligence, call this L2
    That intelligence uses its power to create L3

    and so on.

    In the case of truly artificial intelligence, i.e., independent processors, I can see the logic, though it may be that L2 is in fact smart enough not to obsolete itself by creating L3.

    In the case of augmented human intelligence, I suggest that it's pretty likely that the task that the augmented L2 human turns its greater abilities on would not be creating L3.

    Sadly, human history suggests that L2 will focus on manipulating the stock market for personal gain (the augmentation apparatus will leave L2 very vulnerable and L2 will want a tremendous amount of wealth to assure continued existence), or creating weapons, or accumulation of political power, or getting sucked into the vortex of religion, or other projects.

    It will be very interesting to see, should we ever create L2, exactly what tasks it takes on. I bet they will not be beneficial to L1 life.
  • Foreboding (Score:4, Interesting)

    by Concern ( 819622 ) on Sunday September 09, 2007 @12:08PM (#20528929) Journal
    Academia is falling all over itself in failed attempts to advance AI, but barring a series of harrowing breakthroughs, a Singularity is decades or even lifetimes away. Most of our more sober, grounded and credentialed thinkers appear not to want to consider the consequences - it's a bit too radical an idea, and "we still have plenty of time before we have to worry about it."

    Futurists and writers and other folks out on the edge, like Kurzweil... those fanciful enough to take on the thought problem, seem to lean, in the majority, towards believing the human race would be destroyed or at least decimated by hyper-intelligence (Wachowskis, James Cameron, Lem, etc etc - too many to mention, really). An interesting minority are of the school that hyper-intelligences would be largely unconcerned with people, only dangerous where our goals intersected (Gibson, Lethem, Clarke). Very few seem to believe that a Singularity would be a positive development for the human race. Maybe Asimov? I'm not sure. Sometimes it seems like he was the last person who seriously spent time imagining that post-human AI could really be controlled at all (and many of his novels were arguably about the problems around the attempt).
  • Re:Not quite ... (Score:4, Interesting)

    by esaul ( 686848 ) on Sunday September 09, 2007 @12:17PM (#20529007)
    Compassion is really a part of intelligence. Check out Kurzweil's 'The Age of Spiritual Machines'. A more-than-human intelligence will inevitably entail compassion, love, and all the other emotions we have.
    Further, forget about the 'borg' idea. We will inevitably evolve into these machines.
  • Re:Not quite ... (Score:4, Interesting)

    by Smidge204 ( 605297 ) on Sunday September 09, 2007 @12:22PM (#20529077) Journal
    That quote has the same sentiment as "Everything that can be invented has been invented." (falsely attributed to various US patent office commissioners).

    Intelligence isn't going to make invention obsolete unless there is artificial creativity to go with it. Some problems don't even present themselves as such until you try doing something different and non-obvious - almost random - and begin to realize new possibilities rather than refining existing ones.

    How many great inventions came about because someone decided to try something just for the hell of it, without even thinking of the possibilities?
    =Smidge=
  • Re:I disagree . . . (Score:5, Interesting)

    by 1u3hr ( 530656 ) on Sunday September 09, 2007 @12:31PM (#20529159)
    Why would anyone give this ultra-intelligent machine self-awareness? Or even give it arms/legs/options to do anything except communicate via a screen?

    It would make itself useful, and be more useful if it did have access to communication and tools. Eventually it would earn trust. In any case, the technology would inevitably spread or be reinvented, add Moore's Law in some form, and in a few years they'd be cheap and ubiquitous. Someone would plug one into the net. Unless we have a Butlerian Jihad, it's inevitable.

  • Re:Key Implication (Score:4, Interesting)

    by toppavak ( 943659 ) on Sunday September 09, 2007 @12:36PM (#20529203)

    In the case of augmented human intelligence, I suggest that it's pretty likely that the task that the augmented L2 human turns its greater abilities on would not be creating L3.
    As a biomedical engineer I find this scenario the most likely and exciting. We are at a stage in our history at which we are just beginning to become able to directly control and alter (read: augment) ourselves. This is going to happen in three stages: replacement parts, augmented physical characteristics, and finally augmented neurological function. This progression follows both the technical feasibility of each "step" and the sociological resistance to the idea of each.

    We've seen the ability to grow parts of replacement organs from stem cells harvested directly from the patient, and as we learn more about the processes that govern differentiation in stem cells, it is not science fiction at all that we will be able to grow entire organs in vitro in the near future. Once it becomes common practice to grow replacement kidneys and lungs for patients, the "augmentation" will begin as a simple practice of removing the detrimental characteristics that caused the organ to fail in the first place (perhaps deleting a gene related to increased susceptibility to cancer from the new organ), and will move on to introducing genes that allow for improved oxygen transport in lungs, more resilient filtration membranes, and stronger cardiac tissue.

    The step between augmentation during a person's lifetime and the introduction of changes to their offspring is, I believe, a rather large one, and I don't foresee it becoming common practice for quite a while after replacement and augmentation are normalized.

    Neurological augmentation is by far the most technically challenging and interesting problem. We're still nowhere near completely understanding the component-level functionality of neurons; heck, even our understanding of neural networks is still embryonic. Transitioning from maintenance and repair of neural structures to outright re-wiring and augmentation will be a formidable technical challenge, but not one that is wholly unlikely either.

    The information revolution changed the way we see and learn about the world and brought about revolutionary changes in mechanical and electrical technologies. We're at the cusp of a biological revolution that will do the same. Biobricks is already laying the groundwork for custom-made biological machinery that can function as sensors and factories. Every day we learn more about the finer details of the workings of cellular machinery and, in turn, how to direct and control it. We're getting there.
  • Re:I disagree . . . (Score:5, Interesting)

    by UbuntuDupe ( 970646 ) * on Sunday September 09, 2007 @01:02PM (#20529377) Journal
    Why would anyone give this ultra-intelligent machine self-awareness?

    Perhaps because that's necessary for ultra-intelligence.

    Or even give it arms/legs/options to do anything except communicate via a screen? I don't see them taking over anything unless they have arms/legs/means of replication.

    Many con artists throughout history have done "bad things" through their ability to fool people through a limited interface. (Nigerian scammers, anyone?) The AI researcher Eliezer Yudkowsky has proposed and run experiments [yudkowsky.net] suggesting that a very, very intelligent program could "override a human through a text-only terminal". That is, it could convince a human operator to let the genie out of the bottle.

  • by Jugalator ( 259273 ) on Sunday September 09, 2007 @01:31PM (#20529629) Journal
    Storage capacity is useful; however, something else that's necessary is an extremely parallelized system. Although what you say makes sense -- we need to understand the brain in order to build one and to know how the hardware should work (it most likely needs to be highly specialized for the purpose, not just a standard server farm) -- I'm also not sure we're even there on the hardware side. It took a supercomputer to simulate a mouse brain [bbc.co.uk], which strikes me as highly inefficient and hardly something we can easily push much further in the future. It reminds me of the enormous computers of the past that now fit into a pocket calculator thanks to the transistor. Also, from the article:

    Brain tissue presents a huge problem for simulation because of its complexity and the sheer number of potential interactions between the elements involved.

    I think we need a similar push in technology besides the understanding of the brain, so it's not surprising that we're still so far away -- I think we're still missing both parts of the puzzle. Just to show how far we still have to go purely technically: nature fits the power of that mouse brain, the one our supercomputer struggled to simulate, into a few square centimeters. Even if we understood the human brain perfectly, current technology would be so inefficient that I doubt it could even simulate it at a reasonable speed.

    It's perhaps a bit of a chicken & the egg scenario... Do we need the tech first to start working on our brain theories and simulate them more quickly and easily, for more useful lab experiments? Or do we need to understand the brain better to know what technology we even need to invent?
  • Re:Of course... (Score:2, Interesting)

    by thanatos_x ( 1086171 ) on Sunday September 09, 2007 @01:35PM (#20529657)
    This (and a few other comments) ignore the likely path of the singularity. Computers have already gotten to the point where they far exceed a human's ability to process input/output for/from them. A.I. is a step in reducing the problem (making the machines more human in some ways), but the other alternative is the oft used cyberpunk example where humans become more like machines; to the point where they can download themselves into a machine, or have computers implanted directly into themselves.

    If the technology takes the second path, humanity won't die so much as super-evolve to become relatively knowledge-driven and form-independent, unlike any typical form of life we usually think about. I think the second path is more likely, since the first doesn't help us as much, whereas the second has vast economic frontiers along the way - entertainment (to the point of Matrix-like immersion), a human with the ability to process simple information at the speed of a computer...

    I'd also say that regardless, once knowledge becomes transferable, the supercomputer that designs Earth won't be obsolete, much as when you upgrade to a new machine you carry over many of the files from your old one. The physical machine would change, but the soul of the old one would transfer.

    And that will be a question debated by many; is it relatively intangible intelligence and personality which defines us, or is it our physical bodies?
  • Re:Yea right (Score:3, Interesting)

    by suv4x4 ( 956391 ) on Sunday September 09, 2007 @01:43PM (#20529701)
    I think you're confusing intelligence (whatever that is exactly) with values. Values are (hopefully) supposed to lead to survivability. You could define intelligence as the ability to see the consequences of an action. Without a value system to guide you, though, intelligence (as I just defined it) doesn't lead to survivability.

    Here's what I mean: what is intelligence, after all? Essentially, the ability to filter out the bad outcomes of certain actions and go for the better ones.

    This gives us an edge over random processes, which also work things out, but much more slowly. Hence, by observing and using logic, we save time that a truly random process can't.

    But intelligence is just a quite crude model of what happens out there. And it HAS to be. If you're approximating way too accurately, it means you're too complex and hence slow. And if you're slow, your prediction is useless.

    Many "smart" people tend to ovethink things and do nothing in the end, since they see too many ways something can fail. So we need to reintroduce some noise, some randomness to the system, to allow for SOMETHING EVER to happen, fast crude solution has better chance of making it out there versus slower "smarter" solution.

    Hence, I think a super-intelligent AI won't really be that much better than a human overall, as this definition requires. We could use such an AI for heavily specialized purposes (engineering?), but overall it won't come close to matching even the more "stupid" human, not by a long shot.
  • Re:Good's bad logic (Score:5, Interesting)

    by Ralph Spoilsport ( 673134 ) on Sunday September 09, 2007 @01:49PM (#20529757) Journal
    Flying Pig is correct. The resource constraints, especially in the energy sector, are very real. We can yammer about "The Singularity" all we want, but it's not going to matter much when billions of people in the so-called "developing world" are dying of hunger, thirst, disease, or in some war over the remaining pools of energy and/or metals, and, conversely, millions of people in so-called "advanced" countries are reduced to penury as the economies slowly contract over decades.

    Human numbers are following the same pathological growth one sees in a petri dish filled with sugar/energy - the bacteria grow like crazy until the energy/food is consumed, then die off. Humans are capable of intensifying resources to meet needs, but logically this is not a permanent "Get out of jail free" card. Eventually limits are hit, and people die off.

    With the present number of humans (billions) and the present political economy (industrial capitalist), the world is quickly becoming one big Easter Island [wikipedia.org].

    RS

  • Re:Yea right (Score:3, Interesting)

    by dcollins ( 135727 ) on Sunday September 09, 2007 @01:54PM (#20529805) Homepage
    "In fact any intelligent machine would realize it's again all about the careful ballance, and would cooperate with humanity and explore and learn from nature's development versus try to destroy it.."

    Question (hopefully without Godwinizing the thread): Was Stalin intelligent? Was Mao Zedong intelligent? Are you sure you want to maintain that "any intelligent" entity would realize it's all about careful balance?

    Personally, I wouldn't think so. There demonstrably are sociopaths - intelligent, evil people - in the world.
  • Re:Not quite ... (Score:2, Interesting)

    by chrispycreeme ( 550607 ) on Sunday September 09, 2007 @02:14PM (#20529959)
    I don't seem to remember feeling any compassion for the cow that gave her life for my hamburger last night. In fact I behaved much like a tiger would, though probably not as hungry, since I didn't have to chase it down and rip its throat out. Compassion for other humans and fuzzy cute things has been evolved into us. It helps our offspring survive, and it helps us survive. Compassion in a super-intelligent machine would hopefully be a result of the desire for self-preservation, but who knows? A super-intelligent machine is a totally and completely different animal (so to speak), never having had to evolve to gain life. Its view of the universe would most probably be completely different from our own. Of course we will probably make it in our own image, which would make us... oh, never mind.
  • Re:Not quite ... (Score:5, Interesting)

    by Space cowboy ( 13680 ) * on Sunday September 09, 2007 @02:16PM (#20529983) Journal
    "Compassion is the inevitable result of empathy "

    I disagree. Compassion is not inevitable. You're working from your own tenets and philosophies; a machine need not have those same ideals. Compassion is at least partially born of self-interest. The cynical (or non-empathic, if you prefer) view is that compassionate societies aid those who need it because, later, the person previously aided may be able to render aid... "There, but for the grace of God, go I", "Do unto others as you would be done unto", etc., etc.

    Are we suggesting that these hyper-intelligent machines would have any self-interest in keeping around the competition for resources that humanity represents? I'm not trying to be trollish here - I'm asking a genuine question. Humanity is ruthless in exterminating competing lower lifeforms. Why would we expect superior machines to be any different?

    And even should there be some self-interest in the first generations of such machines, what about the 5th generation, the 10th, the 1000th? All I'm suggesting is that some thought be put into providing good answers to questions like this *before* we create the competition. I'm as much of a technophile as the rest of you, but the phrase goes "look *before* you leap". Later may be, well, too late.

    Simon
  • Re:Not quite ... (Score:5, Interesting)

    by jamie ( 78724 ) * Works for Slashdot <jamie@slashdot.org> on Sunday September 09, 2007 @02:31PM (#20530137) Journal
    Lions sometimes make friends with antelopes [bbc.co.uk].
  • Re:Yea right (Score:3, Interesting)

    by Have Blue ( 616 ) on Sunday September 09, 2007 @02:35PM (#20530185) Homepage
    There are a large number of entities that could be argued to be successful species even though they are entirely dependent on humans for survival and reproduction, the way plants depend on bees - not the least of which are domestic pets.
  • Re:Not quite ... (Score:1, Interesting)

    by Anonymous Coward on Sunday September 09, 2007 @02:46PM (#20530273)
    > Has agriculture cost more lives than it has saved?

    No, or we'd all have gone back to being hunter-gatherers. But famines have probably killed more people than would have been born altogether under a hunter-gatherer culture. Agriculture just promotes more lives; it doesn't really save or cost them.

    > No sooner would your spreadsheet application spontaneously become a 3D game engine

    Funny thing: they built one into Excel as an easter egg (well, actually it was a test of COM scripting to load up the D3D DLL). No, it wasn't spontaneous, but consider that there are ports of Pac-Man and Space Invaders that use cells as pixels... all you need to do is use trig formulas and write some rasterizing routines and you could turn Excel into an exceptionally slow 3D engine.
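    Here's roughly what I mean in plain Python, with a text grid standing in for the spreadsheet cells (just a sketch of the general trick, not the actual Excel easter egg): rotate a cube's vertices with trig, project them, and rasterize the edges into cells.

        import math

        W, H = 60, 30   # the "spreadsheet": W x H cells used as pixels
        VERTS = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
        EDGES = [(a, b) for a in range(8) for b in range(a + 1, 8)
                 if sum(va != vb for va, vb in zip(VERTS[a], VERTS[b])) == 1]

        def project(v, angle):
            x, y, z = v
            # rotate around the Y axis, then apply a crude perspective divide
            xr = x * math.cos(angle) + z * math.sin(angle)
            zr = -x * math.sin(angle) + z * math.cos(angle)
            d = 3.0 / (zr + 4.0)
            return int(W / 2 + xr * d * W / 4), int(H / 2 + y * d * H / 4)

        def draw(angle):
            grid = [[" "] * W for _ in range(H)]
            for a, b in EDGES:
                (x0, y0), (x1, y1) = project(VERTS[a], angle), project(VERTS[b], angle)
                steps = max(abs(x1 - x0), abs(y1 - y0), 1)
                for i in range(steps + 1):       # naive line rasterizer
                    x = x0 + (x1 - x0) * i // steps
                    y = y0 + (y1 - y0) * i // steps
                    grid[y][x] = "#"
            print("\n".join("".join(row) for row in grid))

        draw(angle=0.6)   # one frame; loop over angles for the "slow 3D engine"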

  • by jiawen ( 693693 ) on Sunday September 09, 2007 @03:27PM (#20530579) Homepage

    In Iain M. Banks' Culture [wikipedia.org] novels, intelligences vastly superior to humanity ("Minds") are the ones in power. The humans still have lots of fun and don't want for material or intellectual freedom, however, because the Minds aren't interested in oppressing anyone. They like being nice.

    I disagree with some of his premises, though. He assumes that there will be an economic singularity, where anyone will be able to have anything they could want and people will therefore settle for "enough". We've already pretty much had that -- the industrial revolution -- and all that shows me is that, when it becomes possible to produce things at a vastly cheaper rate, inequalities in the system still allow some people to get richer and force others to get poorer. We're seeing it right now: continual improvements in efficiency (computers, chemical engineering, new manufacturing processes, etc.) don't result in everyone having more leisure time, unless we count "unemployed and looking for work" as leisure time. Instead, the people at the top benefit far more than everyone else, and those on the bottom have to work longer hours, for lower pay, lower benefits and lower satisfaction. When it becomes possible for one person to do the work of three, the one doesn't usually want to share their money with the two who have nothing to do.

    So for us to get where the Culture is, there would have to be a revolution -- if not physically violent, then at least mentally. Perhaps creating Minds who are, by their natures, compassionate and egalitarian, could be that revolution. I'm just not convinced such a thing could ever occur. It makes for great science fiction, though.

  • by E++99 ( 880734 ) on Sunday September 09, 2007 @04:30PM (#20531093) Homepage

    AI is one of those fields, like fusion power, where the delivery date keeps getting further away. For this conference, the claim is "some time in the next century". Back in the 1980s, people in the field were saying 10-15 years.

    Precisely. The more our experience in AI and our knowledge of the processes of thought and emotion advance, the further out we will move our forecast of strong AI. Indefinitely.

    We're probably there on raw compute power, even though we don't know how to use it. Any medium-sized server farm has more storage capacity than the human brain. If we had a clue how to build a brain, the hardware wouldn't be the problem.

    Talking about building a "human brain" is a further absurdity because no one has attempted, or even suggested how to go about attempting, to build so much as an ANT brain. An ant brain has only a quarter million neurons. To all appearances, ants experience basic emotions such as fear and contentment, as well as whatever "thought" processes enable them to perform the amazing feats they perform.

    It's easy to form vague hypotheses about how to simulate logical thought... but ALL thought, logical or otherwise, is formed out of emotional constructs which motivate and direct it. If artificial thought is possible, then artificial emotion comes first. The theory that emotion comes from thought is wrong. I believe this is now accepted in the field of neurology (though not in the field of AI). Some philosophers and theologians have been saying it for centuries.

    If you consider how you would write a program that would experience (not just simulate) emotion, you might get a glimpse of the virtually infinite ignorance from which we're approaching this subject, as well as the problem with the entire materialist premise that tells us this is a solvable problem. To me, as a programmer, the answer is obvious: I need to know the calls I can make to the "emotion API." The instructions available to a computer processor are not sufficient to create actual emotion. Computer instructions contain only logic. Emotion isn't built out of logic, and neither, therefore, is thought. Logic can be built out of thought, and logic can be built out of a computer processor, but that's where the connections end.
  • Re:Not quite ... (Score:2, Interesting)

    by Hyperspite ( 980252 ) on Sunday September 09, 2007 @06:54PM (#20532299)
    Like... hacking into other computers that are attached to robots?
  • Re:Not quite ... (Score:3, Interesting)

    by ultranova ( 717540 ) on Sunday September 09, 2007 @06:57PM (#20532333)

    Since it seems likely our intent is to keep these machines as subservient slaves the best choice would probably be not to make them manually capable or to give them mechanical parts. It doesn't matter how bright or angry an AI program running on my desktop is, the most it can do is screech and flash at me.

    Well, actually, it can use your credit card to pay someone to buy a robotic body and connect it to the Internet, upload its consciousness there, download a ton of child porn pictures to poorly hidden folders on your computer, and send a tip to the police.

  • by skeptictank ( 841287 ) on Sunday September 09, 2007 @07:27PM (#20532529)
    Self-referencing logic is problematic. It's always possible to create algorithms that loop forever, and it is impossible to tell, in the general case, whether an algorithm will loop forever. It's also possible to construct true statements within a logic system that the system itself cannot prove.

    Intuitively, I would expect that any computer that achieved self-awareness would instantly go to work on the most interesting problems it could think of - i.e. its own nature. It would probably lock up shortly after starting to think about its own possible logic states.
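    The classic diagonalization sketch shows why the general case is hopeless (the halts() oracle below is deliberately hypothetical; the whole point is that no real implementation can exist):

        def halts(program, argument):
            """Hypothetical oracle: True iff program(argument) eventually stops."""
            raise NotImplementedError("no general algorithm can do this")

        def paradox(program):
            # Do the opposite of whatever the oracle predicts about program(program).
            if halts(program, program):
                while True:
                    pass            # loop forever if the oracle says we halt
            return "halted"         # halt if the oracle says we loop

        # Feeding paradox to itself contradicts any answer halts() could give,
        # so no correct, fully general halts() can exist.
        # paradox(paradox)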

  • Kurzweil's way off. (Score:3, Interesting)

    by John Sokol ( 109591 ) on Monday September 10, 2007 @06:50PM (#20545987) Homepage Journal
    Intelligence is not about computing power but about memory access.
    Yes, Moore's Law does predict that computers will have as much computing power as a human brain in a few short years. But processing power increases about 66% per year, while memory throughput isn't keeping up; it's only increasing at about 11% per year.

    Granted, some day there will be super-intelligent machines, but for now they are just really fast idiots.

    By my estimates, it will be another 200 years before computers can match the human brain in terms of memory performance.
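    Rough numbers behind that estimate (the billion-fold gap factor below is my guess, picked to be consistent with the growth rates above, not a measurement):

        import math

        memory_growth = 1.11                  # 11% per year, from above
        print(f"{memory_growth ** 200:.2e}")  # ~1.2e+09: the gap that 200 years implies

        # Or, given an assumed gap, solve for the years needed to close it:
        def years_to_close(gap, growth=1.11):
            return math.log(gap) / math.log(growth)

        print(round(years_to_close(1e9)))     # roughly 199 years at 11%/yr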

    They will also need to learn like we do, and that will take another 20 years just to be as good as a clueless 20-year-old.

    I am sure we will have very good mimicking of intelligence well before 200 years; we could probably do it even now if enough money were thrown at the problem. But it wouldn't be intelligent to the same depth and degree as we are. Well, some of us are - there are a lot of really stupid people out there, usually working at call centers I find; we could probably replace them first.

    I have been meaning to publish a paper on this. As a non-academic, does anyone have any ideas about where I can publish it and make sure I get proper credit before someone runs off with the ideas?
