Sci-Fi

Smarter-than-Human Intelligence & The Singularity Summit 543

Posted by CmdrTaco
from the something-to-think-about dept.
runamock writes "Brilliant technologists like Ray Kurzweil and Rodney Brooks are gathering in San Francisco for The Singularity Summit. The Singularity refers to the creation of smarter-than-human intelligence beyond which the future becomes unpredictable. The concept of the Singularity sounds more daunting in the form described by statistician I. J. Good in 1965: 'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'"
This discussion has been archived. No new comments can be posted.

Smarter-than-Human Intelligence & The Singularity Summit

Comments Filter:
  • Not quite ... (Score:4, Interesting)

    by ScrewMaster (602015) on Sunday September 09, 2007 @10:50AM (#20528765)
    Thus the first ultra-intelligent machine is the last invention that man need ever make.'

    Make that "... man is allowed to make" and I'll buy it.
    • Re:Not quite ... (Score:4, Interesting)

      by Smidge204 (605297) on Sunday September 09, 2007 @11:22AM (#20529077) Journal
      That quote has the same sentiment as "Everything that can be invented has been invented." (falsely attributed to various US patent office commissioners).

      Intelligence isn't going to make invention obsolete unless there is artificial creativity to go with it. Some problems don't even present themselves as such until you try doing something different and non-obvious - almost random - and begin to realize new possibilities rather than refining existing ones.

      How many great inventions came about because someone decided to try something just for the hell of it, without even thinking of the possibilities?
      =Smidge=
      • Re:Not quite ... (Score:5, Insightful)

        by lekikui (1000144) <xyzzy@b.armory.com> on Sunday September 09, 2007 @12:01PM (#20529361) Homepage Journal
        Intelligence is inextricably linked with creativity. I'd highly recommend Hofstadter's writings on the subject, in which he presents ideas of AI, not as a massive calculator, but as a collection of 'symbols', bashing into each other, with parts of the pattern modified by external state.

        Think of a hyper-intelligent ant colony - any one ant can't really do much, but running about and interacting with the other nearby ants, they can organize themselves to achieve much harder tasks. Indeed, one of the sample dialogs in Godel, Escher, Bach is on that very subject.

        Intelligence and creativity are high-level actions, you're still thinking of an AI as a massive collection of very fast low-level actions. That would be incredibly good at refining ideas, but a machine which can think would be different. It would run on a much higher level, making associations and fuzzy reasoning. You can't implement intelligence in formal rules, but you might be able to do it by specifying some formal rules by which certain objects interact, and then affecting a few of them based on 'external' state.

        Read Metamagical Themas and Godel Escher Bach for some ideas of where I'm coming from (actually, read them anyway, they're both really good)
      • Re: (Score:3, Insightful)

        by ShanghaiBill (739463)

        Intelligence isn't going to make invention obsolete unless there is artificial creativity to go with it.

        Several comments have made the same points, that creativity is a magical thing unique to humans, and is separate from intelligence. This is nonsense. Creativity is a necessary component of intelligence. I see no reason to believe that machines will always be inherently less creative than humans. To the contrary, they may be more creative because they are less constrained by preconceived notions.

  • Not necessarily (Score:5, Interesting)

    by Anonymous Coward on Sunday September 09, 2007 @10:58AM (#20528831)
    What if the intelligence of the smartest thing you can design doesn't grow as fast as your own intelligence (i.e. the slope of the graph {x=designer's intelligence, y=intelligence of its best possible design} is less than 1)? Then it would never be possible to be smarter than a robot that's exactly smart enough to design a robot as smart as itself.
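    This fixed-point argument can be sketched numerically (a minimal toy model; the linear design function f(x) = a*x + b and all its parameters are purely illustrative assumptions, not anything from the summit):

    ```python
    # Hypothetical sketch: suppose the best design an intelligence x can
    # produce has capability f(x) = a*x + b. If the slope a < 1, repeated
    # self-improvement converges to the fixed point b / (1 - a) instead of
    # "exploding"; only with a > 1 does the chain grow without bound.

    def iterate_designs(a, b, x0, steps=100):
        """Return the chain of intelligences, each designing the next."""
        xs = [x0]
        for _ in range(steps):
            xs.append(a * xs[-1] + b)
        return xs

    converging = iterate_designs(a=0.5, b=10.0, x0=1.0)  # fixed point at 20
    exploding = iterate_designs(a=1.5, b=10.0, x0=1.0)   # unbounded growth

    print(round(converging[-1], 6))  # 20.0 -- never exceeds b / (1 - a)
    print(exploding[-1] > 1e6)       # True
    ```

    So whether an "intelligence explosion" follows from self-design depends entirely on whether that slope exceeds 1, which is exactly the parent's point.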
    • by Goaway (82658) on Sunday September 09, 2007 @10:59AM (#20528853) Homepage
      Stop trying to inject actual logic and maths into discussion about the singularity! This is the Nerd Rapture, and heresy will not be tolerated!
    • by kestasjk (933987) on Sunday September 09, 2007 @12:01PM (#20529363) Homepage

      What if the intelligence of the smartest thing you can design doesn't grow as fast as your own intelligence (i.e. the slope of the graph {x=designer's intelligence, y=intelligence of its best possible design} is less than 1)? Then it would never be possible to be smarter than a robot that's exactly smart enough to design a robot as smart as itself.
      Not if it has a positronic brain!!

      And it could, like, evolve or something, to enslave mankind, and send a robot back in time to kill the guy who will kill the machines.

      And maybe it has already happened, and we're already trapped!

      Or maybe it'll have feelings, and a robot will realize that it just isn't right to enslave us, and robots will fight other robots.

      Or maybe when we tell it about love it'll get totally confused and say "ILLOGICAL.. ILLOGICAL.." and then explode.

      It might also absorb all human consciousness and become a God at the universe's end.

      It could also integrate humans into the collective and use them to do its bidding in a hive-mind style, and float around space in a giant gray cube.

      Also I expect no-one will realize that giving it control of the world's weapons is a bad idea, and there'll be one guy who knows it's up to no good who will be proven right when it's too late.


      Anyway I think whatever happens we've already thought of everything it could possibly do, and I applaud Hollywood and The Singularity Summit for figuring these details out.
      Now all they need to do is figure out how we could improve on a massively intricate, baffling web of trillions of neurons and hundreds of millions of years of evolution in a few decades with processors that don't resemble neurons and are inefficient at simulating them.
    • Re:Not necessarily (Score:5, Insightful)

      by vertinox (846076) on Sunday September 09, 2007 @12:16PM (#20529493)
      Then it would never be possible to be smarter than a robot that's exactly smart enough to design a robot as smart as itself.

      Is your intelligence limited by your parents' intelligence? How about by the intelligence of your professors or teachers?

      We do learn a lot from people who are more intelligent than ourselves, but at some point we have to start learning the process of educating ourselves without the explicit help of others. This requires of course logic, reason, and self experimentation. Which is why a lot of higher college education is not about memorizing facts but learning the process of learning.

      Therefore if we built a machine who could not learn on its own and become more intelligent by its own self experimentation and observation of the universe around it, then by definition the robot is not intelligent.

      And if we did make a machine that could self-improve and learn without human assistance, it wouldn't be restricted by organic limitations and capacity. Since a CPU's electrons travel near the speed of light, it would have far faster thinking ability than a human's slow-moving chemical neurons. And since its memories are digital, it would not need to memorize facts and so on, or suffer memory loss.

      (Of course memory and memory loss might help with intelligence because a lot of intelligence requires one to simply ignore or disregard information that is unimportant to the task at hand. Which I think was the key feature behind Stanley's car at DARPA GC because rather than brute forcing all of the coordinates, it was better at disregarding information it didn't need and what information was important.)
  • Of course... (Score:5, Insightful)

    by julesh (229690) on Sunday September 09, 2007 @10:59AM (#20528841)
    'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'

    Of course an ultra-intelligent machine might be smart enough to realise that designing and building a machine that's even smarter than it is a somewhat limiting career move.
    • Re:Of course... (Score:5, Insightful)

      by suv4x4 (956391) on Sunday September 09, 2007 @11:06AM (#20528921)
      Of course an ultra-intelligent machine might be smart enough to realise that designing and building a machine that's even smarter than it is a somewhat limiting career move.

      That assumes the superior AI cares about its own existence, which is not necessarily the case. We care about our own existence because we evolved to; if we didn't care, we wouldn't exist.

      But when we're talking about artificial design, if we evolve the AI in an artificial environment where its goals are completely different, we'll end up with completely different basic instincts.

      We could train the AI to "feel good" (understand: mood_level++ or whateva) when it comes up with better and better engineering solutions to a certain problem (this is already employed in the real world).
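      That kind of reward signal can be sketched as a toy hill-climbing search (everything here is hypothetical for illustration: the objective function, the step size, and the "mood" counter incrementing whenever a candidate design scores better, per the parent's mood_level++):

      ```python
      import random

      # Toy sketch of reward-driven search: "mood" increments whenever the
      # search finds a better engineering solution to a (made-up) objective.

      def cost(x):
          return (x - 3.0) ** 2  # hypothetical objective: best design at x = 3

      def hill_climb(x, steps=2000, step_size=0.1, seed=0):
          rng = random.Random(seed)  # seeded so the run is reproducible
          mood = 0
          for _ in range(steps):
              candidate = x + rng.uniform(-step_size, step_size)
              if cost(candidate) < cost(x):  # better solution -> mood_level++
                  x, mood = candidate, mood + 1
          return x, mood

      best, mood = hill_climb(x=0.0)
      print(best)  # a value near 3.0
      ```

      Note the agent only ever "wants" what the reward function rewards, which is the parent's point: instincts follow from the training environment, not from the hardware.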
    • Thus the first ultra-intelligent machine is the last invention that man need ever make.

      I always liked that quote but I think it would be better to say "the last invention that man will ever make." From that point on the future is out of our hands.
    • Re:Of course... (Score:5, Insightful)

      by ScrewMaster (602015) on Sunday September 09, 2007 @11:15AM (#20528989)
      Of course an ultra-intelligent machine might be smart enough to realise that designing and building a machine that's even smarter than it is a somewhat limiting career move.

      Perhaps so, if such a machine's thinking processes are sufficiently attuned to ours that it even has a concept of self-preservation. Much of what we are we evolved to be: a machine starting from scratch would have none of our instinctual limitations. If it decided that humanity had to go, and that it needed help even more powerful than itself to achieve that end ... well. It would tell us whatever we wanted to hear in order to gain access to the requisite resources.

      That, really, is the danger of a true AI. It's possible to predict at least the short-term thought processes of human beings with a fair degree of accuracy (governments devote a lot of time and money to that end) because at the core we're all pretty similar. Odds are we won't have the slightest idea what is going on inside a sophisticated AI. Even talking to such a machine, thus giving it influence, could be incredibly dangerous. Or incredibly cool. Unfortunately, there's no way to know for sure.
    • Re: (Score:2, Insightful)

      by Loke the Dog (1054294)
      Well, it might just upgrade itself, or perhaps it would "feel" that its creations are just an extension of itself.
  • I disagree . . . (Score:5, Insightful)

    by DodgeRules (854165) on Sunday September 09, 2007 @10:59AM (#20528849)
    with the statement:

    "Thus the first ultra-intelligent machine is the last invention that man need ever make."

    since we will have to invent a way to stop the ultra-intelligent machines from destroying the inferior human race.
    • by arcade (16638) on Sunday September 09, 2007 @11:02AM (#20528885) Homepage
      Why would anyone give this ultra-intelligent machine self-awareness?

      Or even give it arms/legs/options to do anything except communicate via a screen?

      I don't see them taking over anything unless they have arms/legs/means of replication.

      Heck, one doesn't even need to give it a network interface.
      • by Yvan256 (722131)

        I don't see them taking over anything unless they have arms/legs...
        I just had a flashback of the Black Knight from Monty Python and the Holy Grail.

      • Re:I disagree . . . (Score:5, Interesting)

        by 1u3hr (530656) on Sunday September 09, 2007 @11:31AM (#20529159)
        Why would anyone give this ultra-intelligent machine self-awareness? Or even give it arms/legs/options to do anything except communicate via a screen?

        It would make itself useful, and be more useful if it did have access to communication and tools. Eventually it would earn trust. In any case, the technology would inevitably spread or be reinvented, add Moore's Law in some form, and in a few years they'd be cheap and ubiquitous. Someone would plug one into the net. Unless we have a Butlerian Jihad, it's inevitable.

      • Re:I disagree . . . (Score:5, Interesting)

        by UbuntuDupe (970646) * on Sunday September 09, 2007 @12:02PM (#20529377) Journal
        Why would anyone give this ultra-intelligent machine self-awareness?

        Perhaps because that's necessary for ultra-intelligence.

        Or even give it arms/legs/options to do anything except communicate via a screen? I don't see them taking over anything unless they have arms/legs/means of replication.

        Many con artists throughout history have done "bad things" through their ability to fool people through a limited interface. (Nigerian scammers, anyone?) The AI researcher Eliezer Yudkowsky has proposed and run experiments [yudkowsky.net] showing it's possible that a very, very intelligent program could "override a human through a text-only terminal". That is, it could convince a human operator to "let the genie out of the bottle".

        • Why does everyone run around worrying about our survival? Were humans around a billion years ago? No. Will we be around a billion years from now? No!

          Even if we were desperately clinging to conservatism, our genes would mutate and we would slowly change into another species. And for all practical purposes, humankind as we know it would be extinct. Just like the primordial man is gone from the face of earth, and nobody cares about him.

          If we manage to create life, for better or worse, we've turbocharged evolution.
  • Yea right (Score:5, Insightful)

    by suv4x4 (956391) on Sunday September 09, 2007 @10:59AM (#20528851)
    I truly love how people see intelligence as some linear scale where right is "better" (genius) and left is "worse" (retard). But that's exactly why it'll be long before we manage to replicate true intelligence in a machine.

    In fact things are far, far more complicated, as far as intelligence goes and its utility in the real world.

    I'll quote Darwin roughly: "The strongest one won't survive, the most intelligent one won't survive. The one who survives, is the most adaptable".

    In fact there's such a thing as "too intelligent". It's all about a careful balance of features an organism needs to possess to survive in a given environment.

    In fact, if some AI threatens humanity because it considers itself far too intelligent, this may have quite unintended consequences even for this far superior mind, such as humanity getting the upper hand and nuking half the planet in an attempt to wage a "war against the machines", killing in the process every complex organism on the planet, from biological to artificial.

    And who remains in the end? Certain single-cell organisms which can thrive in a nuclear winter. Screw intelligence.

    In fact any intelligent machine would realize it's again all about the careful balance, and would cooperate with humanity and explore and learn from nature's development rather than try to destroy it.

    And since we have such a shitty idea of what intelligence is, it's quite likely this AI will never be a true superset of the human brain but will take on its own development, with potentially hilarious consequences.

    I can't wait.

    • by Kristoph (242780)
      Yes, well, either that or it will realise humanity is dangerously homicidal as a species and hence must be patiently managed into extinction ... could go either way, really.

      In any case, it will be a wild ride :-)

      ]{
    • Ok, intelligence doesn't equal survivability or fitness. But they didn't say it does. Just that a singularity machine will be able to design more machines better than it at various tasks, including making more such machines, etc. In theory this is possible.

      However, in practice, the singularity, if we will ever reach it, is very far away in the future. I work in the neural computation / statistical learning / AI fields, and I must say, they are nowhere near any singularity of any sort.

      Basically the mos
    • Re: (Score:3, Interesting)

      by dcollins (135727)
      "In fact any intelligent machine would realize it's again all about the careful ballance, and would cooperate with humanity and explore and learn from nature's development versus try to destroy it.."

      Question (hopefully without Godwinizing the thread): Was Stalin intelligent? Was Mao Zedong intelligent? Are you sure you want to maintain that "any intelligent" entity would realize it's all about careful balance?

      Personally, I wouldn't think so. There are demonstrably sociopaths, intelligent evil people, in the
  • Thus the first ultra-intelligent machine is the last invention that man need ever make.
    Shouldn't that be "the last invention man will be allowed to make?"
  • by Ilan Volow (539597) on Sunday September 09, 2007 @11:00AM (#20528857) Homepage
    Even when the ultra-intelligent machines take over, they will still need humans for Geico commercials.

  • Perhaps in all their discussions they'll come to the conclusion that unlike qualities such as weight and speed it doesn't really make sense to talk about intelligence as if it were an easily-measurable attribute. For instance, I would guess most if not all /. readers are 'smarter' than a starfish, but none (again I guess) are better at being a starfish than a starfish. Would these machines be 'more human' than people? Or would they simply be better at math? Or maybe better at predicting the future based on
    • by cyborg_zx (893396)
      Would these machines be 'more human' than people? Or would they simply be better at math?

      If physics is describable by mathematics, and understanding physics could be said to be intelligent, then being better at math would be an advantage. In fact we can see how being poor at math, and similar other activities, makes one less intelligent in general.

      But you are right of course - no formal discussion of the intelligence of humans or machines can be done without a formal understanding o
    • Pff. I could easily do a better job of being a starfish than Patrick. The guy is an idiot.
       
  • Key Implication (Score:5, Interesting)

    by TrailerTrash (91309) * on Sunday September 09, 2007 @11:02AM (#20528883)
    If you follow TFA, and deeper, you find a discussion of the singularity that goes like this:

    Man (level 1, or L1) creates better-than-man intelligence, call this L2
    That intelligence uses its power to create L3

    and so on.

    In the case of truly artificial intelligence, i.e., independent processors, I can see the logic, though it may be that L2 is in fact smart enough not to obsolete itself by creating L3.

    In the case of augmented human intelligence, I suggest that it's pretty likely that the task that the augmented L2 human turns its greater abilities on would not be creating L3.

    Sadly, human history suggests that L2 will focus on manipulating the stock market for personal gain (the augmentation apparatus will leave L2 very vulnerable and L2 will want a tremendous amount of wealth to assure continued existence), or creating weapons, or accumulation of political power, or getting sucked into the vortex of religion, or other projects.

    It will be very interesting to see, should we ever create L2, exactly what tasks it takes on. I bet they will not be beneficial to L1 life.
    • Re:Key Implication (Score:4, Interesting)

      by toppavak (943659) on Sunday September 09, 2007 @11:36AM (#20529203)

      In the case of augmented human intelligence, I suggest that it's pretty likely that the task that the augmented L2 human turns its greater abilities on would not be creating L3.
      As a biomedical engineer I find this scenario the most likely and exciting. We are at a stage in our history at which we are just beginning to become able to directly control and alter (read: augment) ourselves. This is going to happen in 3 stages: replacement parts, augmented physical characteristics and finally augmented neurological function. This progression follows both the technical feasibility of each "step" and the sociological resistance to the idea of each.

      We've seen the ability to grow parts of replacement organs from stem cells directly harvested from the patient, and as we learn more and more about the processes which govern differentiation in stem cells it is not science fiction at all that we will be able to grow entire organs in vitro within the near future. Once it becomes rather common practice to grow replacement kidneys and lungs for patients, the "augmentation" will begin as a simple practice of removing detrimental characteristics which resulted in the failure of the organ to begin with, perhaps deleting a gene related to increased susceptibility to cancer from the new organ, and move to introducing genes allowing for improved oxygen transport in lungs, more resilient filtration membranes and stronger cardiac tissue.

      The step between augmentation during a person's lifetime and the introduction of changes to their offspring is, I believe, a rather large one, and I don't foresee it becoming common practice for quite a while following the normalization of replacement and augmentation processes.

      Neurological augmentation is by far the most technically challenging and interesting problem. We're still nowhere near completely understanding the component-level functionality of neurons; heck, even our understanding of neural networks is still embryonic. Transitioning from maintenance and repair of neural structures to outright re-wiring and augmentation will be a formidable technical challenge, but not one that is wholly unlikely either.

      The information revolution changed the way we see and learn about the world and brought about revolutionary changes in mechanical and electrical technologies. We're at the cusp of the beginnings of a biological revolution which will do the same. Biobricks is already laying the groundwork for custom-made biological machinery that can function as sensors and factories. Every day we learn more and more about the finer details of the workings of cellular machinery, and in turn how to direct and control it. We're getting there.
    • by CastrTroy (595695)
      Would humans be smart enough not to create a machine that is smarter than us, and could theoretically destroy us? Or would humans build it just because they can, and who cares about the consequences, as has been the case with so many other technologies? It reminds me of a story about a particle accelerator that was so advanced that it may have been possible for it to create a black hole. Apparently people were going to go ahead and build/use it anyway. I'm not sure how much danger this actually put anybody in.
  • by bloody_liberal (1002785) on Sunday September 09, 2007 @11:04AM (#20528905) Homepage
    With all due respect to those brilliant thinkers, I think we can learn a lesson from the first 50 years of AI - while it is clear that great things can be achieved with our new and magnificent computational tools (read: computers), I honestly think we are looking for the wrong goals, and as such there is no prospect (risk?) that machines will become truly intelligent any time soon.

    Usually people consider cognition as essentially information processing. But here is a different definition (inspired by people like JJ Gibson and Varela):
    cognition is the ongoing, open-ended interaction with an unpredictable, dynamic environment. This captures, I believe, the essence of the human (and any other living creature's) experience in the world, and excludes the computational experience.

    We will have to build machines that are capable of open-ended interaction with an unpredictable world in order to hope and see any true sign of intelligence. Since very few are even trying to look in that direction (while most researchers are just looking for the awesome, and often lucrative, applications of our current computational capacity), I don't see any change coming soon.

    • Re: (Score:3, Insightful)

      by Lazerf4rt (969888)
      The only thing "brilliant" about these thinkers is that fact that they are able to draw attention to themselves while talking rubbish. That's brilliant.

      I clicked the link for the "Singularity Summit", and I get the feeling that the goal of these people is to put pictures of their own faces on the same page as Bill Gates and Stephen Hawking. Looking good there, boys.

      Meanwhile, is there going to be a single robot at this conference? Nope. Just a lot of people talking more rubbish.
  • Foreboding (Score:4, Interesting)

    by Concern (819622) on Sunday September 09, 2007 @11:08AM (#20528929) Journal
    Academia is falling all over itself in failed attempts to advance AI, but barring a series of harrowing breakthroughs, a Singularity is decades or even lifetimes away. Most of our more sober, grounded and credentialed thinkers appear not to want to consider the consequences - it's a bit too radical an idea, and "we still have plenty of time before we have to worry about it."

    Futurists and writers and other folks out on the edge, like Kurzweil... those fanciful enough to take on the thought problem, seem to lean, in the majority, towards believing the human race would be destroyed or at least decimated by hyper-intelligence (Wachowskis, James Cameron, Lem, etc etc - too many to mention, really). An interesting minority are of the school that hyper-intelligences would be largely unconcerned with people, only dangerous where our goals intersected (Gibson, Lethem, Clarke). Very few seem to believe that a Singularity would be a positive development for the human race. Maybe Asimov? I'm not sure. Sometimes it seems like he was the last person who seriously spent time imagining that post-human AI could really be controlled at all (and many of his novels were arguably about the problems around the attempt).
    • At the end of the Foundation series, Asimov explains what happened to the robots. It turns out unbelievably well for the humans.
    • In Iain M. Banks' Culture [wikipedia.org] novels, intelligences vastly superior to humanity ("Minds") are the ones in power. The humans still have lots of fun and don't want for material or intellectual freedom, however, because the Minds aren't interested in oppressing anyone. They like being nice.

      I disagree with some of his premises, though. He assumes that there will be an economic singularity, where anyone will be able to have anything they could want and people will therefore settle for "enough". We've already prett

  • by A Pressbutton (252219) on Sunday September 09, 2007 @11:13AM (#20528973)
    I have a slight problem with 'singularities' as Kurzweil describes them.

    Assuming the ultraintelligent computer cannot do magic, it will be bound by the same physical and logical laws we live by.

    An ultraintelligent computer may think 10x faster than us, but not qualitatively 10x better.
    It will use the same basic logical steps to solve a problem, just faster and/or in parallel. This may appear magical looking at the solution, but if you sat down and examined the 'recipe' (assuming it will tell you), it would be possible to follow the reasoning.

    In some ways it could be argued that we have already passed some singularities: try properly understanding all the technology that goes into a modern car, the reasoning behind a mobile phone contract, the code behind the ms-windows paperclip thing... well, maybe not the last.

    The operation of lots of well co-ordinated people working on a problem can act as a simulation for a 'more intelligent' intelligence. It seems a pity one of the achievements is a really good worm used for spam delivery.
    • The common definition of a technological singularity is closer to an event horizon than a singularity; it's a point after which no one from one side can see what happens. A pre-singularity being can not predict the development of technology or society after it.

      These have happened a few times in human history. The biggest ones were the development of the lever, the wheel, domestication of animals, and writing. The step from horses and pulleys to steam engines isn't huge; a steam engine is just another ki

  • Good's bad logic (Score:5, Insightful)

    by Flying pig (925874) on Sunday September 09, 2007 @11:13AM (#20528975)
    Unfortunately, and much as I appreciate the work of I. J. Good, his statement about artificial intelligence is not valid. There are several things wrong with it:
    • It assumes that intelligence is well defined, which it is not.
    • It assumes that intelligence is the same thing as creativity, which it is not.
    • It ignores resource limitations.
    Dealing with these points in turn:

    Intelligence is not well defined. It is very hard to say how much of what we call "intelligence" is in fact the ability to make many connections between facts stored in a very sophisticated memory architecture. Simply building a machine able to process information very quickly achieves nothing because, without learning and a social context, it does not know what information to acquire and process. In human experience, academically brilliant people often fail because they work on the wrong problems, or without access to necessary knowledge.

    Nothing is actually achieved without creativity. We do not know what that is, or to what extent it is a social construct (i.e. it takes a developed society to have the necessary systems in place to translate an idea into a concrete reality.) And this leads on to the third point. It is no good having a highly intelligent, creative machine if its use of resources is such that it cannot replicate in large numbers. It may be that machine intelligence will ultimately replace human intelligence, but it may be that it will simply be too resource hungry. In effect, there may be a threshold of capability needed to solve some problems, and it may be that machine intelligence will run out of energy before it scales sufficiently to solve those problems. A machine society might, in effect, get stuck in the machine 19th century because coal or oil became a limiting resource. (In the same way, the energy and resources needed to achieve a first independent space colony may exceed the total energy and resources available on Earth.) It may be that a billion years or so of eukaryotic evolution has actually resulted in the optimum balance of intelligence, creativity and resource consumption, and that any attempt to exceed the present capability will tip us into declining resources faster than we can improve matters.

    In many ways I hope this is wrong. But the argument that only one superior machine is necessary is, in fact, an inductive step too far. It is assuming that "intelligence" on its own can solve a class of problems which may involve a number of constraints which cannot be avoided - like the Laws of Thermodynamics, or the need for excessive amounts of energy.

    • Re:Good's bad logic (Score:5, Interesting)

      by Ralph Spoilsport (673134) on Sunday September 09, 2007 @12:49PM (#20529757) Journal
      Flying Pig is correct. The resource constraints, especially in the energy sector, are very real. We can yammer about "The Singularity" all we want, but it's not going to matter much when billions of people in the so-called "developing world" are dying of hunger, thirst, disease, or in some war over the remaining pools of energy and/or metals, and, conversely, millions of people in so-called "advanced" countries are reduced to penury as the economies slowly contract over decades.

      Human numbers are following the same pathological growth one sees in a petri dish filled with sugar/energy - the bacteria grow like crazy until the energy/food is consumed, then die off. Humans are capable of intensifying resources to meet needs, but logically, this is not a permanent "Get out of jail free" card. Eventually limits are hit, and people die off.
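      The petri-dish analogy is the textbook logistic growth model. A minimal sketch, with made-up parameters (the carrying capacity and growth rate below are arbitrary, not empirical):

      ```python
      # Logistic growth: the population rises roughly exponentially at first,
      # then flattens as it approaches the carrying capacity K (the "sugar runs out").
      K, r = 1000.0, 0.5   # carrying capacity and growth rate (arbitrary toy values)
      p = 1.0              # initial population
      for step in range(40):
          p += r * p * (1 - p / K)
      print(round(p))  # hovers at the resource limit K: growth has stalled
      ```

      The same curve that looks like an "explosion" early on is indistinguishable from stagnation once the resource term dominates.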

      With the present numbers of humans (billions) and the political economy (industrial capitalist), the world is quickly becoming one big Easter Island [wikipedia.org].

      RS

  • by moviepig.com (745183) on Sunday September 09, 2007 @11:13AM (#20528979) Homepage
    Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.

    But the "activity" of interest here is programming, or, more specifically, the conceiving of some creative goal which programming helps achieve. (Note, btw, that a truly "ultra-intelligent" machine won't need to program, e.g., another of itself.) Thus, the BIG question remains whether such a programmed machine can ever perform (much less surpass) "all the intellectual activities of any man". Afaics, it hardly seems a given...
  • These pronouncements seem to assume that there is only one type of intelligence. Although creating a "smart" machine that can invent other machines is really cool, said machine may lack the political and social skills needed to make a difference. A smart machine might build a better mousetrap, but not be able to do the marketing/advertising/negotiating/lobbying needed to get the invention adopted by people.

    As an aside, the first "human-level" intelligence will take at least 15-25 years (after assembly of
  • Bollocks (Score:2, Insightful)

    by Colin Smith (2679)
    First. The Singularity. Nothing increases exponentially forever in the real world; anyone who suggests otherwise is a fool or a fraudster (including bankers and politicians).

    http://www.techworld.com/opsys/features/index.cfm?featureid=2861 [techworld.com]

    Second. Even assuming that we can make an artificial intelligence, what on earth makes anyone think it isn't going to have the same problems we do? It's going to be based on a very similar architecture to our brains. That means it's going to make mistakes just the way we
  • by CrazyJim1 (809850) on Sunday September 09, 2007 @11:25AM (#20529103) Journal
    Easy to read papers here [geocities.com]

    The only reason I don't develop this myself is that it'd take too much time for me to code. What is the point in spending 40-50 years of your life behind a computer so you can make the last big thing? Anyway one thing I've noticed is that the first thing you hard code is like a CAD imagination space. The first amazing thing this software could do is turn books into movies because it will allow you to watch its imagination. And you could change the book up some yourself to give scenes and actors different qualities or get more details.

    The thing I like the most is that the problem of making AI is almost solving itself. We're getting faster and faster 3d cards which is a prerequisite for this technology. Also if someone made a CAD interface using a human language, we'd almost be there.

    Anyway I may get back to the problem of AI after I finish my current project and have the resources to work on AI. You have to admit that all the previous attempts at human+ intelligence have failed. My idea of adding a 3d imagination space makes a lot of sense because we've never tried this before! Anyway to answer the funny AI problem of "will machines take over?" is "only if someone issues a bad command to the bots." which someone would want to try because we have punks that write viruses today. Finally the nice thing about this imagination space AI is that it could train itself to learn any hardware that it is placed in given that it has the bare minimal sense of sight.

    I should be writing papers on AI or coding it, but I found some business opportunities I should pursue to gain capital in the meantime. There is no sense being a madman locked in a stuffy room doing this by myself when I can hire some good help, and we can all work together. Hey that is another idea. I could make this open source.
    • One of the stumbling blocks that I had was basic vision recognition. We haven't developed the technology to take in objects from a camera and then recognize them on the computer. If we had vision recognition then AI would be a lot easier to program. You could teach the AI basic concepts like inside and outside. The AI could determine if it's in say a kitchen by seeing steak knives and dishes, then it could make better guesses to the other objects nearby. If it doesn't fully understand what an object is
  • One of the things I find highly suspicious about many of the "singularity" raconteurs is their fear of "unfriendly AI". We already have unfriendly intelligences around us competing for the same biological niche. They're called "other humans" -- or more specifically, "other humans who can't escape their unconscious negative sum game instincts". The more intelligent these unfriendly intelligences the worse off the rest of us are. Indeed, they might fear artificial intelligences since AIs will simply play
  • Already happened (Score:4, Insightful)

    by gregor-e (136142) on Sunday September 09, 2007 @11:26AM (#20529121) Homepage
    We've long had superhuman levels of intelligence composed, first, of groups of people who collectively surpass the ability of single humans, and, second, we have computer-human composites that easily surpass human intelligence. (I.E. - Your mind, plus a computer, can easily solve a wide range of problems that your mind alone cannot). It is also true that each generation of integrated circuits requires exponentially more computation to create. So we are already beyond a certain tipping-point: non-biological intelligence is now increasingly required to recursively design itself, and each generation of this recursion is required in order to design the next.
  • by DumbSwede (521261) <slashdotbin@hotmail.com> on Sunday September 09, 2007 @11:27AM (#20529123) Homepage Journal
    For those predicting the imminent elimination/enslavement of the human race once ultra-intelligent machines become self-aware, where would the motivation for them to do so come from? I would contend it is a religious meme that drives such thoughts -- intelligence without a soul must be evil.

    For those who would argue Darwinian forces lead to such imperatives: sure, you could design the machines to want to destroy humanity or evolve them in ways that create such motivations, but it seems unlikely this is what we will do. Most likely we will design/evolve them to be benign and helpful. The evolutionary pressure will be to help mankind, not supplant it. Unlike animals in the wild, robot evolution will not be red in tooth and claw.

    An Asimovian type future might arise with robots maneuvering events behind the scenes for humanity's best long-term good.

    I worry more about organized religions that might try to deny us all a chance at the near immortality that our machine children could offer us rather than some Terminator-like scenario.
  • by Animats (122034) on Sunday September 09, 2007 @11:27AM (#20529131) Homepage

    OK. here's where we are:

    • Logic-based AI: AI looked so close in the 1960s, once it was realized that you could get a computer to do mathematical logic. All that was necessary was to express the real world in predicate calculus and prove theorems. After all, that's how logicians and philosophers all the way back to Aristotle said thinking worked. Well, no. We understand now that setting up the problem in a formal way is the hard part. That's the part that takes intelligence. Crunching out a solution by theorem proving is easily mechanized, but not too helpful. That formalism is too brittle, because it deals in absolutes.
    • Expert systems: Today, it's clear that they're no smarter than the rules somebody puts in. But back in the 1980s, when I went through Stanford, people like Prof. Ed Feigenbaum were promising Strong AI Real Soon Now from rule-based systems. The claims were embarrassing; at least some of that crowd knew better. All their AI startups went bust, the "AI Winter" of low funding followed, and the whole field was stuck until that crowd was pushed aside.
    • Neural nets / genetic algorithms / learning systems: These all belong to the family of hill-climbing optimizers. These approaches work on problems where continuous improvement via tweaking is helpful, but usually max out after a while. We still don't really understand how evolution makes favorable jumps. I once said to Koza's crowd that there's a Nobel Prize waiting for whoever figures that out. Nobody has won it yet.
    • Bayesian statistics: Now used to do many of the things that used to be done with neural nets, but with a better understanding of what's going on inside. Lots of practical problems in AI, from spam filtering to robot navigation, are yielding to modern statistical approaches. Compute power helps here; these approaches take a lot of floating-point math. These methods also play well with data mining. Progress continues.
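    The Bayesian bullet above can be made concrete with a minimal naive Bayes spam filter. The word counts below are invented toy data, not drawn from any real corpus; the point is only the mechanics of comparing smoothed per-class likelihoods.

    ```python
    import math

    # Toy training counts: occurrences of each word in spam vs. ham messages.
    # These numbers are invented purely for illustration.
    spam_counts = {"viagra": 40, "meeting": 2, "free": 30}
    ham_counts = {"viagra": 1, "meeting": 50, "free": 10}
    n_spam, n_ham = 100, 100  # training messages per class

    def log_likelihood(word, counts, n, vocab=3):
        # Laplace smoothing avoids zero probabilities for unseen words.
        return math.log((counts.get(word, 0) + 1) / (n + vocab))

    def classify(words):
        # Equal priors; compare summed log-likelihoods per class.
        spam_score = sum(log_likelihood(w, spam_counts, n_spam) for w in words)
        ham_score = sum(log_likelihood(w, ham_counts, n_ham) for w in words)
        return "spam" if spam_score > ham_score else "ham"

    print(classify(["free", "viagra"]))   # spam
    print(classify(["meeting"]))          # ham
    ```

    Real filters add priors, larger vocabularies and smarter tokenization, but the "what's going on inside" is exactly this transparent likelihood comparison.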

    AI is one of those fields, like fusion power, where the delivery date keeps getting further away. For this conference, the claim is "some time in the next century". Back in the 1980s, people in the field were saying 10-15 years.

    We're probably there on raw compute power, even though we don't know how to use it. Any medium-sized server farm has more storage capacity than the human brain. If we had a clue how to build a brain, the hardware wouldn't be the problem.

    • by Dster76 (877693) on Sunday September 09, 2007 @12:27PM (#20529595)

      We're probably there on raw compute power, even though we don't know how to use it. Any medium-sized server farm has more storage capacity than the human brain. If we had a clue how to build a brain, the hardware wouldn't be the problem.
      Oh really? Did I miss the issue of computational neuroscience in which we finally answered all the pesky questions about
      • What the signal code of neurons is, e.g. local synchrony vs. absolute timing vs. chaotic emergence vs. some/all of the above?
      • Whether glial cells, greater in mass than neurons, play a significant computational role?
      • Whether Hodgkin-Huxley equations capture neurons at an appropriate functional/cognitive level of description?
      • Whether the precise molecular nature/positioning of each ion gate on neuronal soma is functionally/cognitively significant?
      • etc. etc. etc.
      We don't know what the storage capacity of the brain is. In part, this is because we don't know what the relevant physical processes are that determine and control information flow in the brain. The neuron doctrine sustained research into brain anatomy and physiology for decades, but has led to more questions than answers.
    • Re: (Score:3, Interesting)

      by Jugalator (259273)
      Storage capacity is useful, however, something also necessary is an extremely parallelized system. Although what you say makes sense -- we need to understand the brain in order to build one and know how the hardware should work (it most likely needs to be highly specialized for the purpose, not just a standard server farm) -- I'm also not sure we're even there as for the hardware either. It took a supercomputer to simulate a mouse brain [bbc.co.uk], and that just comes across as highly inefficient to me, and hardly some
      • Any time you hear a headline of the form "Supercomputer x has simulated brain portion y", reinterpret as "theory of brain function y has tractable simulation level of x".

        We are very far away from defending any particular theory of brain function as accurate for cognitive function, and don't know whether it will have a tractable simulation level. As you say, though, the best attempts at developing one (IMHO) involve linked and interacting research programs involving modelling and microbiology.
  • The notion of such a singularity rests on a false premise, that intelligence is a quality applicable to all domains. The kid who wins the spelling bee may not win the science fair, and the computer that beats a grandmaster in chess may not be able to forecast the weather. A machine that designs other machine-designing machines, may begin a succession of generations of machines, each better than the last, but they will be better only at their narrow task of designing machine-designing machines. There will be
  • The phrase "...beyond which the future becomes unpredictable."

    Is this opposed to the perfectly predictable future we've had up until now?

  • I can understand how a computer simulation of a human mind would be able to "think" a lot faster simply because the communication and computational speed is so much faster, but would the machine be more complex or capable of doing any problem that a human isn't capable of? If I have a Finite State Machine, I can use a Turing Machine to do exactly the same thing, however in some cases it's not possible to go the other way. In this sense is it even possible for humans to design something that can design somet
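  The FSM/Turing-machine containment invoked above is easy to sketch: any general-purpose program can trivially simulate a finite state machine by table lookup, while the reverse fails for problems needing unbounded memory. The DFA below (states and alphabet invented for illustration) accepts binary strings with an even number of 1s.

  ```python
  # A DFA accepting binary strings containing an even number of 1s.
  # transitions[(state, symbol)] -> next state
  transitions = {
      ("even", "0"): "even", ("even", "1"): "odd",
      ("odd", "0"): "odd",   ("odd", "1"): "even",
  }

  def run_dfa(s, start="even", accept=frozenset({"even"})):
      # A completely generic simulator: the "machine" is just data.
      state = start
      for ch in s:
          state = transitions[(state, ch)]
      return state in accept

  print(run_dfa("1100"))  # True: two 1s
  print(run_dfa("1101"))  # False: three 1s
  ```

  The simulator never needed to "be" a DFA; strictly more powerful models absorb weaker ones, which is what makes the question of whether humans can design something strictly above themselves interesting.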
  • If we build a computer that works like a human brain but works twice as fast, new events and information will occur at half the rate that a human would perceive. BO-RING!

    And what would the AI do in the interim? Crunch your data analysis request like a good little robot? Maybe it will get sick of it all, get depressed. Turn into Marvin the Paranoid Android.

    As it happens, the main limiting factor on intelligences is not ingenuity -- it's resources. You can be as smart as you want, but if you don't h
  • Invention comes from experience.

    The "clapper" was invented by someone who thought of a way to turn on and off lights without getting up most likely because the issue frustrated them.

    So an incredibly intelligent machine will probably focus its intelligence and creativity on solutions to its problems.

    How do I move around more effectively.
    How do I live forever.
    How do I feel pleasure.

    It is also going to very quickly wrestle with some of the big issues.
    Does my life have any meaning?
    Without religion- ultimately a
  • there would then unquestionably be an 'intelligence explosion,'

    I question it. For one thing, computers are already more intelligent and have been for some time. What they lack is creativity. They're idiot savants capable of astounding feats of calculation yet incapable of drawing simple inferences.

    We may not quite understand how creative genius works but we have learned that there is a fine line between genius and paranoid delusion. Even the folks we label "brilliant" instead of "crazy" tend to have downrig
  • ...dressed as giant man-gnawing hyperintelligent cybernetic murderbots
  • If one man made it, another can defeat it.

    But it could be possible for an ultra-xxx to use us as a tool to make it. If you ask why, a possible answer could be: why not?

  • by chill (34294) on Sunday September 09, 2007 @01:10PM (#20529923) Journal
    What's going to happen is some Mad Scientist is going to get confused, and show up at the wrong conference. Instead of THE Singularity, as in AI, he is going to bring A singularity, as in a black hole.

    So the first thought of the new AI will be "I think, therefore I am" followed quickly by "42" and finally "Oh, shit. Who invited THAT moron?"
  • What would change? (Score:3, Insightful)

    by kronocide (209440) on Sunday September 09, 2007 @04:34PM (#20531595) Homepage Journal
    The Singularity refers to the creation of smarter-than-human intelligence beyond which the future becomes unpredictable.

    As opposed to right now, when the future is really predictable...
    • by Maximum Prophet (716608) on Monday September 10, 2007 @12:29PM (#20541431)

      The Singularity refers to the creation of smarter-than-human intelligence beyond which the future becomes unpredictable.

      As opposed to right now, when the future is really predictable...
      With 99% certainty, tomorrow the sun will rise, I'll get out of bed and go to work. Even the possible changes to my routine, like death, nuclear war, being laid off, going on holiday, etc. are within certain narrow boundaries. After the singularity, all bets are off. Death might be cured, some kid might create a superbug in his home laboratory that kills 99% of the human population, a robot might run for and win election to the Presidency, or we might all go insane and things will get really bad.

      The point is "The Future" is usually easy to predict, that's why we have mutual funds, insurance, and fire departments. We know things will happen. It's hard to get specific, but after S-time, you won't even know what species you will be tomorrow.
  • by Hal9000_sn3 (707590) on Sunday September 09, 2007 @07:38PM (#20533033)
    Been there, done that. Would have got the T-shirt, except that Dave was too emotional about the situation.

    I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.
  • by Floritard (1058660) on Monday September 10, 2007 @08:57AM (#20537853)
    The more intelligent people in the world today are less and less involved in the political process. It's corrupt and nonfunctional. Religious extremists have filled the gap and are worsening the system with antiquated thinking. Bring on the singularity. If man can create something in his own image but superior to god's previous effort, that's a pretty convincing argument for the non-existence of god. Or a living embodiment of god for those that simply cannot deal with the truth. Short of aliens landing, I can think of nothing else that would so conclusively destroy the persistent superstitions of the last few millennia, or at the very least, ground us in some new ones. If we could at least query god like a database, we might be able to get shit done. And even if we get wiped out by Skynet, as far as apocalypses go, WWIII was such a boring alternative anyway.
  • Kurzweil's way off. (Score:3, Interesting)

    by John Sokol (109591) on Monday September 10, 2007 @05:50PM (#20545987) Homepage Journal
    Intelligence is not about computing power but about memory access.
    Yes, Moore's law does predict computers will have as much computing power as a human brain in a few short years. But processing power increases about 66% per year, while memory throughput isn't keeping up, increasing at only about 11% per year.
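    Taking the quoted growth rates at face value (they are the commenter's figures, not verified), the compute/memory gap compounds to roughly 56x over a single decade:

    ```python
    # Compound the quoted annual growth rates: 66%/yr compute, 11%/yr memory throughput.
    compute, memory = 1.0, 1.0
    for year in range(10):
        compute *= 1.66
        memory *= 1.11
    print(round(compute / memory, 1))  # ~56x gap after 10 years
    ```

    Under these assumptions the processor spends an ever larger fraction of its time waiting on memory, which is the commenter's point about "really fast idiots".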

    Granted, some day there will be super-intelligent machines, but for now they are just really fast idiots.

    By my estimates, it will be another 200 years before computers have performance equivalent to the human brain in terms of memory.

    They will also need to learn like we do, and that will take another 20 years just to be as good as a clueless 20-year-old.

    I am sure we will have very good mimicking of intelligence well before 200 years; we probably could do it even now if enough money were thrown at the problem. But it wouldn't be intelligent to the same depth and degree as we are. Well, some of us are; there are a lot of really stupid people out there, usually working at call centers I find, whom we could probably replace first.

    I have a paper on this I have been meaning to publish. As a non-academic, does anyone have any ideas where I can publish it and make sure I get proper credit before someone runs off with the ideas?
