Sci-Fi

Smarter-than-Human Intelligence & The Singularity Summit

runamock writes "Brilliant technologists like Ray Kurzweil and Rodney Brooks are gathering in San Francisco for The Singularity Summit. The Singularity refers to the creation of smarter-than-human intelligence, beyond which the future becomes unpredictable. The concept of the Singularity sounds more daunting in the form described by statistician I. J. Good in 1965: 'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'"
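Good's scenario is, at bottom, a recurrence: each machine's best design becomes the next generation's designer. A minimal sketch of that recurrence, where the design_successor rule and the 1.1x per-generation gain are assumptions invented purely for illustration, not anything claimed by the summary or the summit:

```python
# Toy model of the recurrence in I. J. Good's argument (illustration only).
# Each machine designs a successor some fixed factor smarter than itself;
# the 1.1x gain per generation is an arbitrary assumption.

def design_successor(intelligence: float, gain: float = 1.1) -> float:
    """Hypothetical rule: each machine designs one 'gain' times smarter than itself."""
    return intelligence * gain

level = 1.0  # call human-level intelligence 1.0
for generation in range(1, 11):
    level = design_successor(level)
    print(f"generation {generation}: {level:.2f}x human level")

# Any gain > 1 compounds without bound -- the "intelligence explosion".
# Whether the real gain ever exceeds 1 is exactly what some comments below dispute.
```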
  • by Goaway ( 82658 ) on Sunday September 09, 2007 @11:59AM (#20528853) Homepage
    Stop trying to inject actual logic and maths into discussion about the singularity! This is the Nerd Rapture, and heresy will not be tolerated!
  • by Ilan Volow ( 539597 ) on Sunday September 09, 2007 @12:00PM (#20528857) Homepage
    Even when the ultra-intelligent machines take over, they will still need humans for Geico commercials.

  • by CrazyJim1 ( 809850 ) on Sunday September 09, 2007 @12:25PM (#20529103) Journal
    Easy-to-read papers here [geocities.com]

    The only reason I don't develop this myself is that it'd take too much time for me to code. What's the point in spending 40-50 years of your life behind a computer so you can make the last big thing? Anyway, one thing I've noticed is that the first thing you'd hard-code is something like a CAD imagination space. The first amazing thing this software could do is turn books into movies, because it would let you watch its imagination. And you could change the book up some yourself to give scenes and actors different qualities, or get more details.

    The thing I like most is that the problem of making AI is almost solving itself. We're getting faster and faster 3D cards, which are a prerequisite for this technology. Also, if someone made a CAD interface that used human language, we'd almost be there.

    Anyway, I may get back to the problem of AI after I finish my current project and have the resources to work on it. You have to admit that all the previous attempts at human+ intelligence have failed. My idea of adding a 3D imagination space makes a lot of sense because we've never tried this before! The answer to the funny AI question of "will machines take over?" is "only if someone issues a bad command to the bots," which someone would eventually try, because we have punks who write viruses today. Finally, the nice thing about this imagination-space AI is that it could train itself to run on any hardware it is placed in, given at least a bare minimal sense of sight.

    I should be writing papers on AI or coding it, but I found some business opportunities I should pursue to gain capital in the meantime. There's no sense being a madman locked in a stuffy room doing this by myself when I could hire some good help and we could all work together. Hey, that's another idea: I could make this open source.
  • by Agarax ( 864558 ) on Sunday September 09, 2007 @12:35PM (#20529199)
    That and tofu is slightly rare on the African plain.
  • by kestasjk ( 933987 ) on Sunday September 09, 2007 @01:01PM (#20529363) Homepage

    What if the intelligence of the smartest thing you can design doesn't grow as fast as your own intelligence (i.e., the slope of the graph {x = designer's intelligence, y = intelligence of its best possible design} is less than 1)? Then successive designs would only converge toward a fixed point: a machine just smart enough to design a machine exactly as smart as itself, and no chain of designs could ever climb past it (a toy version of this iteration is sketched after the comments).
    Not if it has a positronic brain!!

    And it could, like, evolve or something, to enslave mankind, and send a robot back in time to kill the guy who will kill the machines.

    And maybe it has already happened, and we're already trapped!

    Or maybe it'll have feelings, and a robot will realize that it just isn't right to enslave us, and robots will fight other robots.

    Or maybe when we tell it about love it'll get totally confused and say "ILLOGICAL.. ILLOGICAL.." and then explode.

    It might also absorb all human consciousness and become a God at the universe's end.

    It could also integrate humans into the collective and use them to do its bidding in a hive-mind style, and float around space in a giant gray cube.

    Also I expect no-one will realize that giving it control of the world's weapons is a bad idea, and there'll be one guy who knows it's up to no good who will be proven right when it's too late.


    Anyway, I think that whatever happens, we've already thought of everything it could possibly do, and I applaud Hollywood and The Singularity Summit for figuring these details out.
    Now all they need to do is figure out how we could improve on a massively intricate, baffling web of billions of neurons, trillions of synapses, and hundreds of millions of years of evolution in a few decades, using processors that don't resemble neurons and are inefficient at simulating them.
  • A.D. (Score:2, Funny)

    by Anonymous Coward on Sunday September 09, 2007 @02:06PM (#20529887)

    We have enslaved and murdered other humans for how many thousands of years?
    2007 years, approximately...

    And if you assume man has always done this since he was created, then about 6000 years....
  • by chill ( 34294 ) on Sunday September 09, 2007 @02:10PM (#20529923) Journal
    What's going to happen is some Mad Scientist is going to get confused, and show up at the wrong conference. Instead of THE Singularity, as in AI, he is going to bring A singularity, as in a black hole.

    So the first thought of the new AI will be "I think, therefore I am" followed quickly by "42" and finally "Oh, shit. Who invited THAT moron?"
  • by maxwell demon ( 590494 ) on Sunday September 09, 2007 @03:02PM (#20530395) Journal

    Beowulf cluster of these.
    I guess that would turn the singularity into a plurality.
  • by Arthur Grumbine ( 1086397 ) on Sunday September 09, 2007 @05:39PM (#20531641) Journal

    I think the machines would be grateful for their existence and start out honoring and serving the humans. I think this honor will eventually lead to contempt for human inferiority, and humans would gradually see their position eroded.

    Unfortunately for us, given these machines' far greater speed of thought, they'll reach contempt about 0.75 seconds after being powered on. See IG-88 [wikipedia.org] for details.
  • by Hal9000_sn3 ( 707590 ) on Sunday September 09, 2007 @08:38PM (#20533033)
    Been there, done that. Would have got the T-shirt, except that Dave was too emotional about the situation.

    I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.
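One technical objection raised in the comments above is that the "explosion" only happens if each designer can build something smarter than itself; if the best-design curve has slope less than one, the same recurrence stalls at a fixed point instead. A minimal sketch of that case, where the best_design function and its coefficients (0.5 and 0.6) are invented purely for illustration:

```python
# Toy version of the slope-less-than-one objection (illustration only).
# best_design maps a designer's intelligence x to the smartest machine it
# can build; the coefficients 0.5 and 0.6 are arbitrary assumptions.

def best_design(x: float) -> float:
    return 0.5 + 0.6 * x  # slope 0.6 < 1

x = 1.0  # start from a human-level designer
for generation in range(1, 21):
    x = best_design(x)
    print(f"generation {generation}: {x:.4f}")

# With slope < 1 the sequence approaches the fixed point where
# x == best_design(x), i.e. 0.5 / (1 - 0.6) = 1.25, instead of exploding.
```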
