NPR Looks to Technological Singularity

Rick Kleffel writes to tell us that NPR is featuring a piece with both Vernor Vinge and Cory Doctorow looking at the possibility of the "technological singularity" in the near future. Wikipedia defines a technological singularity as a "hypothetical 'event horizon' in the predictability of human technological development. Past this event horizon, following the creation of strong artificial intelligence or the amplification of human intelligence, existing models of the future cease to give reliable or accurate answers. Futurists predict that after the Singularity, posthumans and/or strong AI will replace humans as the dominating force in science and technology, rendering human-specific social models obsolete."
  • by Linkiroth ( 952123 ) on Sunday July 23, 2006 @05:38PM (#15766874)
    ...welcome our new post-human overlords. (Somebody had to say it.)
  • by TwentyLeaguesUnderLa ( 900322 ) on Sunday July 23, 2006 @05:38PM (#15766876)
    So, they first say that you can't predict what'll happen after that singularity because The World Will Be So Different Than Now, and then proceed to give predictions of what'll happen after that singularity?

    Brilliant, real brilliant.
  • Since when ? (Score:4, Interesting)

    by peragrin ( 659227 ) on Sunday July 23, 2006 @05:45PM (#15766894)
Since when have futurists gotten anything right? If we believed them, we would all be enjoying our flying cars that interact with us using voice control. We would talk to each other using video phones (first designed in 1969? AT&T).

The singularity doesn't have to happen, because the futurists are always wrong.
    • Re:Since when ? (Score:5, Insightful)

      by Zeebs ( 577100 ) <(moc.liamg) (ta) (werdsr)> on Sunday July 23, 2006 @06:16PM (#15766988)
They got the video phone right; it is possible, as others have pointed out. What they got wrong was the market, and they were futurists after all, so who can blame them for that?

The problem with the video phone is that I can't roll out of bed and answer it. Video conferencing does have its uses, but I need time to prepare so I don't look like my usual pile of ass who just rolled out of bed. That might make the telemarketers stop calling tho... hmmm

It wasn't the technology they guessed wrong, unless you count not having those things the Jetsons did that instantly groomed and dressed you as you got out of bed. Now that would make the video phone take off.
      • Re:Since when ? (Score:5, Interesting)

        by QuantumG ( 50515 ) <qg@biodome.org> on Sunday July 23, 2006 @11:21PM (#15767737) Homepage Journal
I don't get it. I live in Australia, and I see 3G video phones on TV all the time. I don't own one, because I'm a geek and I don't see why you need a phone to do more than let you talk to people, but every second 16-22 year old has one. Maybe the problem with predicting the future is simply that Americans are all living in the past.
    • You mean like Jules Verne? Or Leonardo da Vinci? It's not futurism that is the problem, it's just that the current futurists are doing a lousy job at it.

      - Erwin
    • Re:Since when ? (Score:5, Informative)

      by stox ( 131684 ) on Sunday July 23, 2006 @06:59PM (#15767110) Homepage
AT&T videophones were first built in 1956, aka the Picturephone(TM).

http://www.att.com/attlabs/reputation/timeline/70picture.html [att.com]
    • ...while professional futurists often get it wrong, the amateurs sometimes get it eerily right [schoolhistory.co.uk].
    • Re:Since when ? (Score:3, Interesting)

      by westlake ( 615356 )
We would talk to each other using video phones (first designed in 1969? AT&T)

      You could make videophone calls from AT&T booths at the New York World's Fair in 1964. But you can trace demonstrations of the idea back at least to the 1920s. Mechanical scanning, the Nipkow Disk.

    • Re:Since when ? (Score:3, Interesting)

      by Saeger ( 456549 )

Since when have futurists gotten anything right? If we believed them, we would all be enjoying our flying cars that interact with us using voice control.

Yet another where's-my-flying-car cynic, eh? :)

You see, bad futurists attempt to predict specific inventions at specific far-future dates while 1) ignoring the facts; 2) forgetting to ask whether anyone *wants* the projected product or situation; 3) ignoring the costs; and 4) trying to predict which company or technology will win. These are the type o

    • Re:Since when ? (Score:3, Interesting)

      by Mac Degger ( 576336 )
I wanted to be a futurist, once upon a time. It sounded great: really thinking about the future, extrapolating trends, using statistical analysis on actual market data and economic data, having to read up on all kinds of tech (physics, bio, chemical, electrical, etc.); all this to advise multinationals/governments on what divergent scenarios they could expect, which eventualities to keep in mind.

Until I actually met a futurist... and then started looking for information on futurists... and god forbid saw videos
  • by TheStonepedo ( 885845 ) on Sunday July 23, 2006 @05:47PM (#15766900) Homepage Journal
    First Post-Human!
  • by chriss ( 26574 ) * <chriss@memomo.net> on Sunday July 23, 2006 @05:49PM (#15766906) Homepage

Well, I doubt it. I agree with most of the ideas in the 6:17 cast, and even agree that educational and social changes like widespread literacy may be considered a singularity, but I seriously doubt the timeframe of one generation/30 years they mention. Literacy was adopted over hundreds of years; network communities have been developing for at least 30 years and are still primitive and very far from a "collective mind". For me Wikipedia is "augmented intelligence", but before that I had the Encyclopedia Britannica on my iBook, and before that an encyclopedia on my desk, so this too has evolved. And since Wikipedia is created by so many, it may be considered a primitive product of the "meta intelligence" described.

Btw, the piece from NPR focuses (very trendily) on collaboration and advanced information management; they do not place great hope in a major breakthrough in AI.

  • Where can I get this soundbite in a useful format??
  • by Anonymous Coward on Sunday July 23, 2006 @05:51PM (#15766912)
    It will be a Technological Singularity ON WHEELS!

    Willy on Wheels! [wikipedia.org]
  • From the Slashdot story, an example of science fiction: "Past this event horizon, following the creation of strong artificial intelligence or the amplification of human intelligence..."

    There is no "artificial intelligence". All intelligence that is called artificial intelligence is genuine. It's a rare example of people saying something is artificial when it is genuine. It's an example of disrespecting very intelligent programmers. Disrespect of technically knowledgeable people is very common.

    Computin
    • No offense, but I'm not sure that I buy that.

      I'm an RA at an "Artificial Intelligence" lab. In the Fall, I'll be working on my PhD, studying "artificial intelligence." I have a membership to the American Association for "Artificial Intelligence," which is one of the most respected organizations in the field of "Artificial Intelligence."

I don't see anything genuinely "intelligent" about a support vector machine, but it does get the job done quite nicely (a minimal sketch follows below).

      I've worked with some of the best people in the fie
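A minimal sketch of the parent's point, assuming scikit-learn's SVC and its bundled iris dataset (both illustrative choices, not anything named in this thread): the machine solves a margin-maximization problem and classifies well, with nothing resembling introspective thought anywhere in the process.

```python
# Hedged sketch: a support vector machine "gets the job done" without anything
# we'd call understanding. Library (scikit-learn) and dataset (iris) are
# illustrative assumptions, not something the commenter specified.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0)   # just maximize the class margin, nothing more
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```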
Don't call it genuine intelligence; that is just annoying. Algorithmic or synthetic I will accept, but not genuine. Personally I think artificial works perfectly well, for if you look at the American Heritage definition, sense 1a is: made by humans; produced rather than natural.
    • by happyemoticon ( 543015 ) on Sunday July 23, 2006 @06:28PM (#15767024) Homepage

      Artificial primarily means that it comes from artifice (ingenuity) or art. It doesn't (directly) mean it's fake, it just means it's a consciously created work of humankind rather than nature. I think that in modern times with so many knock-offs of natural goods, such as artificial sweetener, the secondary definition has gained the upper hand.

Check out Wiktionary [wiktionary.org] (it's the hive-mind Wikipedia, it must be right!)

When you read enough literature from the 16th and 17th centuries you get more familiar with the original, literal meanings of words such as this one. A favorite subject was to compare art to nature, and writers would freely use the word "artificial" to mean that which comes from human arts. This is not to say that the secondary definition is wrong: for example, when in Book 3 of The Faerie Queene a troll creates an artificial woman out of snow, "virgin" wax, and some gold wire to replace the girl who left him (and of course wackiness ensues), it is repeatedly underscored that this "False Florimell" is a cheap imitation.

Anyway, you can choose any definition you like. I sort of prefer artificial intelligence to synthetic intelligence or whatever, just because how you regard the word artificial says a lot about you and what you think of human creativity. And I don't like euphemism treadmills, which is effectively what we're talking about here.

  • by SuperBanana ( 662181 ) on Sunday July 23, 2006 @05:55PM (#15766928)
...often happens by mistake, either directly (i.e., the famous mold story) or indirectly (something doesn't add up, everyone goes looking at why, and bam, finds something new). We're also driven by competition (ego, vanity, etc.), curiosity, etc. So one area to ponder, I suppose, is this:

AIs are human-designed/manufactured. Since we're prone to errors, it follows they are/will be as well. Does that mean AIs would make similar or different mistakes, and how would they handle them? The same, differently, or not at all? Will we see a regression, in that AIs will resort to brute-force discovery much like early scientists? Will they evolve?

Another question area: anyone who has built a compiler knows the three-tap rule: build it, build it using itself, build it a third time, compare (see the sketch below). Will AIs produce AIs, and if so, will they be better, or equally flawed? Will a 'perfect' AI still be capable of scientific invention/discovery? Will the mistakes of its human operators/supervisors/managers make up for its lack thereof?

What about drive? Will the drive of a human manager/supervisor/etc. be a sufficient substitute for an AI which can't possess it?
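A rough sketch of the "three-tap rule" mentioned above, under invented assumptions: a hypothetical self-hosting compiler source `compiler.c` and a trusted bootstrap compiler `cc0` (neither name comes from the thread). Build the compiler, rebuild it with its own output, rebuild once more, and check that the last two binaries agree.

```python
# Hedged sketch of the three-stage compiler bootstrap. `cc0` and `compiler.c`
# are hypothetical names for a trusted compiler and a self-hosting source file.
import hashlib
import subprocess

def build(compiler: str, source: str, output: str) -> str:
    """Compile `source` with `compiler` and return a digest of the binary."""
    subprocess.run([compiler, source, "-o", output], check=True)
    with open(output, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

stage1 = build("cc0", "compiler.c", "cc1")    # trusted compiler builds ours
stage2 = build("./cc1", "compiler.c", "cc2")  # our compiler builds itself
stage3 = build("./cc2", "compiler.c", "cc3")  # ...and again, with its own output

# If the self-build has reached a fixed point, stages 2 and 3 are bit-identical.
print("fixed point reached" if stage2 == stage3 else "stages differ: investigate")
```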

    • A lot of humans do NOT have drive. They just are.

      What makes you think an AI can't have drive?

      Please define drive. As a bonus, show your work.

    • by QuantumG ( 50515 ) <qg@biodome.org> on Sunday July 23, 2006 @07:11PM (#15767148) Homepage Journal
      Will AIs produce AIs, and if so, will they be better, or equally flawed?

The current thinking is that we will make a seed AI, i.e., a general intelligence for manipulating software, and that it will improve itself, in an incremental fashion, all the way up to and beyond the level of human intelligence. Of course, this will be done with the help and guidance of programmers, but the fear is that by giving it free rein to manipulate itself we will no longer be able to understand what it creates. Not only will this mean that we won't learn anything, but we'll also be unable to control it. As such, most people who seriously consider working on this stuff advocate a goal-based higher level of functioning, with "friendliness" to humans as the primary goal and self-improvement as a secondary subgoal. That way, even if the beast gets out of control, the worst it will do is solve world hunger.

      • by pbhj ( 607776 ) on Sunday July 23, 2006 @09:46PM (#15767528) Homepage Journal
        >>> the worst it will do is solve world hunger

        "Thank you for using AI-net. The best solution to "world hunger" appears to be large-scale thermonuclear war. I have taken the liberty of releasing sufficient war-heads to destroy all humans who can get hungry. As a side effect and in accordance with my prime directive (being a friend to humans) all human suffering will be ended.

        Have a prosperous existence."
      • by LordLucless ( 582312 ) on Monday July 24, 2006 @01:08AM (#15767968)
        As such, most people who seriously consider working on this stuff advocate a goal based higher level of functioning with "friendliness" to humans as being the primary goal and improve yourself as a secondary subgoal. That way, even if the beast gets out of control, the worst it will do is solve world hunger.

Isaac Asimov discusses that concept in one of his short stories, "The Evitable Conflict." In that story, huge computers could assimilate vast amounts of information in order to determine the best course. Because of their reliability, the machines had been put in charge of things like food production and distribution. In the end, the machines began manipulating events to ensure that anyone who disagreed with the machines' control was removed from a position of influence. They did this because obviously what was best for mankind was to be guided by the machines, which didn't start wars or squander resources the way humans did. In order to maintain what was best for humanity, they had to act against individual humans and, in short, ensure that humanity was never ever the master of its own destiny.

        It's fiction, yes, but even such simple goals as the one you suggested need to be interpreted. How should one weigh up the needs of the many against the needs of the few?
  • The Abolition of Man (Score:2, Interesting)

    by Anonymous Coward

    This summer I read C.S. Lewis's masterpiece The Abolition of Man [amazon.com]. (No, I didn't link-jack the Amazon link for want of filthy lucre.)

    Skip reading the editorial review. Here are some excerpts from the first customer reviewer, Charles Warman:

    Lewis accurately predicts the parallel development of two trends: (1) ... (2) the ability of a scientific or political elite, through social conditioning and/or genetic manipulation, to affect the thinking of successive generations of the rest of us - the great unwash

  • And we'll have flying cars, take food pills and learn through thinking! ...

Besides, the Republicans will scare us into uninventing stuff on the grounds that it is religiously tabooed.

    Zing!

And besides, there already is a larger body at work controlling humans. It's called society as a whole. You think even the richest person on earth really gets to decide on a daily basis what they do? Most super-rich CEOs' fortunes are tied to the well-being of their company [this is called stock]. You think you'll see Gate
  • A tough nut (Score:4, Interesting)

    by Tlosk ( 761023 ) on Sunday July 23, 2006 @06:11PM (#15766971)
One of the toughest nuts to crack is what are we going to want to do, that is, what should our goals be.

If you look at most of the goals we have right now, they're pretty mundane and short-lived: curing disease, getting people to stop killing each other, ending hunger, creating objects that we find beautiful and pleasing, creating more living beings like ourselves.

Once we reach a singularity we'll have the technology to do away with all these problem-oriented goals, and I for the life of me can't really think of any obvious goals past that point. While I agree with the premise that we don't have any reliable way of predicting what our goals will become past the singularity, does anyone have any guesses?
    • Yes. Upload me and a few of my nearest and dearest into a million-year lifespan self-healing starship, randomly pick a star, point and launch.

    • Experience. The hidden result of all reactions, real or imagined - observable experience.

Regardless of what gods may exist, what greater reality may exist, or whatnot, the purpose of everything can be met with a system that pursues experience in all its variety. If we are all that is, the eternal quest for experience will be its own purpose. Endless experience would fulfill all purposes.

      The trick is setting up a system of gathering experience that doesn't meet with stagnation. Stagnation can come in man
    • Re:A tough nut (Score:4, Interesting)

      by 10100111001 ( 931992 ) on Sunday July 23, 2006 @07:48PM (#15767230)
      One of the toughest nuts to crack is what are we going to want to do, that is, what should our goals be.

If you look at most of the goals we have right now, they're pretty mundane and short-lived: curing disease, getting people to stop killing each other, ending hunger, creating objects that we find beautiful and pleasing, creating more living beings like ourselves.

Once we reach a singularity we'll have the technology to do away with all these problem-oriented goals, and I for the life of me can't really think of any obvious goals past that point. While I agree with the premise that we don't have any reliable way of predicting what our goals will become past the singularity, does anyone have any guesses?


      The first noble truth of Buddhism is that all is suffering. Nietzsche (whose philosophy has Buddhist influences) wrote of the will to power of all things. If we think of suffering as being caused by a lack of power, then the amount of suffering one feels is equal to the amount of power one has left to be gained.

      After this "singularity" occurs and we have used technology to transcend our organic existence and overcome the plights of present day humans, the only suffering left will be the power not yet possessed. This power will be attainable in the form of technology, or rather, information. New found knowledge will continue to empower whatever humanity evolves into, be it super powerful AI, or perhaps some type of collective intelligence.

So, my guess as to a possible goal for future civilizations, which is the same basic goal we have now: to maintain and gain power, via the acquisition of new information, i.e., learning.
  • AI will replace humans as the dominating force in science and technology

Why in the world would we let that happen? Suppose we could build something capable of doing just that. We might make one every few years or so to satisfy our own curiosity, but that would be about it. Sure, we want AI machines smart enough to correctly vacuum our homes (i.e., not Roomba), build cars, disarm bombs, what have you, but we don't want them to become a force. We are a species that uses tools. We use these tools to survive

  • by Dasher42 ( 514179 ) on Sunday July 23, 2006 @06:15PM (#15766984)
You know, I used to have this technological post-human bent. Buried in C++ programming projects, I admired the order of all that I was creating. It was fun. I'd get a new set of behaviors programmed in the usual conditional branching - if/else, class polymorphism, you name it - and seeing it work was exhilarating. The idea that humanity could reinvent its world piece by piece appealed to me - much like the argument that asks, if you replaced each neuron in your brain one by one with an artificial equivalent, at what point would you cease to be human, if at all? I still have Ray Kurzweil's The Age of Spiritual Machines on one of my bookshelves.

The thing is, we are still far surpassed at this by billions of years of evolution. We run on energy from fossil fuels and build from materials we've mined and shipped. On the other hand, we find bacteria living in the most surprising places, we find sonar in dolphins and bats superior to anything we make, and all of it ultimately runs on fresh plant matter. We get excited over a myomer that lifted some heavy weight, and I tell you, an elephant can do the same thing given enough food. The sheer variety and efficiency of the ecosystem virtually guarantees that almost any way you can think of to survive has been done somewhere, somehow, by some living creature. We're worrying about when oil will peak and whether we can live another century, while outside our doors the world could go on for eons to come, provided we don't break it with our silly toys.

And in a geek-intense environment like this one, I think I can say that it's difficult to beat the end product of a long-term evolutionary algorithm - itself an arguably good model of how the world around us works (a toy version appears below) - and you all will understand.

    I don't deny the coolness of my Apple notebook and I've got a decent number of shelves full of programming books, but I think biomimicry [biomimicry.net] is where it's at. We can go a lot further learning from our world of proteins and DNA and RNA and using - or just having fun with! - what's already there.

    We can also get out more and enjoy our analog, fuzzy-logic, neural-net-driven, molecularly-computed fleshy selves. ;)
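A toy version of the long-term evolutionary algorithm the parent mentions, with every detail (bit-string genome, mutation rate, truncation selection, the target itself) an arbitrary choice for illustration: random variation plus selection, iterated, climbs toward a target with no designer specifying the path.

```python
# Toy evolutionary algorithm: all parameters are arbitrary illustrations.
import random

TARGET = [1] * 32  # an arbitrary "environment" the genomes are selected toward

def fitness(genome):
    """Count positions where the genome matches the target bit string."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]  # truncation selection: keep the fittest fifth
    population = [mutate(random.choice(survivors)) for _ in range(50)]

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)}/{len(TARGET)}")
```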
  • Ye gods... (Score:4, Interesting)

    by CapnRob ( 137862 ) on Sunday July 23, 2006 @06:23PM (#15767009)
    I keep wanting to find Vinge and slap him around a bit until he shuts up about "The Singularity". The thing is, there have been several "singularities" in human history: the Agricultural Singularity, the Industrial Singularity, the Computer Singularity, and so on and so forth. Or, to use the term that most historians use - rather than "Singularity", "Revolution." Yes, technology will change the context of human interaction. Yes, nifty and non-nifty things will happen. But, dammit, it's not as if technology has never fundamentally altered society before. Get over it, already.
    • Re:Ye gods... (Score:5, Informative)

      by Saeger ( 456549 ) <farrellj@nosPAM.gmail.com> on Sunday July 23, 2006 @07:03PM (#15767124) Homepage
      The past "singularities" you cite (e.g. agricultural revolution) were actually punctuated S-curve periods of progress that happened at a rate slow enough for the human mind to adapt to.

*THE* Singularity -- the one that Vinge, Kurzweil, Moravec, Yudkowsky, and many others smart enough to extrapolate the evidence can't "shut up" about -- is where the exponential curve is near vertical (a toy comparison of the two growth shapes follows below). It's where the primitive bio-human brain can no longer keep up with the accelerating change; hence the need to transcend or die at that point (2030-2050).

It's nothing to be afraid of [yudkowsky.net]. Either most of us living today will get to see The Singularity, or our primitive brains vs. accelerating tech will finally fuck it all up and none of us will see it. Maybe the brewing "WW3" in the Middle East is how we'll join the club of "missing" alien races of Fermi's Paradox [wikipedia.org]?
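A toy comparison of the two growth shapes at issue in this subthread, with arbitrary parameters: a pure exponential versus a logistic S-curve that saturates at a carrying capacity. The two are nearly indistinguishable early on, which is exactly why extrapolation arguments cut both ways.

```python
# Exponential vs. logistic ("S-curve") growth; K and r are arbitrary choices.
import math

K, r = 1000.0, 0.5  # carrying capacity and growth rate, chosen for illustration

def exponential(t, x0=1.0):
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```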
Is it just me, or does this sound a lot like the Christian idea of the Rapture? The chosen people, hand-selected by God (or the machines, or whatever), will be elevated to sublime consciousness, while the rest of us die out fighting wars &c. Yippee!
      • Re:Ye gods... (Score:4, Insightful)

        by apposite ( 113190 ) on Sunday July 23, 2006 @09:56PM (#15767559) Homepage
In Australia we have a local idiot (Damien Broderick) who enthuses over the singularity, and I find it incredibly irritating. I don't have a problem with the concept of a singularity; I DO have a problem with the insistence of some enthusiasts that the singularity is just around the corner. My biggest problem is that most of the pundits don't actually seem to work with technology.

        It is really easy as an observer to sit on the outside and say: "Wow, more neato stuff seems to be coming out faster and faster- why, if I extrapolate it will probably keep coming out faster and faster and we'll get this exponential curve." But that ignores the fact that:

        * The problems get harder
        * Technological adoption is generally limited by the speed at which society can absorb it, not by the technology
        * We've never found a silver bullet

        By which I mean:

The problems get harder: Einstein may have been a genius, but we have our share of geniuses today. We almost certainly have many more geniuses actively involved in science (and physics research) than ever before, and they are well resourced (not fantastically, but OK). But they aren't producing Einstein-like breakthrough physics, because it is damn hard to improve on what we have. We know the current models have holes, but we haven't worked out how to fix them - and not for want of trying.

The same applies to lots of technical problems - both the technical research and the translation of that research into real-world products. Batteries and fusion power both have enormous commercial incentives, but somehow we haven't found the answer yet. We HAVE made improvements, but the simple truth is: these are hard problems.

See also the cost of electronics foundries [wikipedia.org] - around a billion $US and climbing by roughly an order of magnitude with each successive generation. That is where the bleeding edge of real-world technology rests, and it isn't cheap, and it is just unbelievably tricky.

Technological adoption is generally limited by the speed at which society can absorb it, not by the availability of technology: science can in theory race ahead of everyday use, but in practice it usually has to be supported by technology. Leaving aside silver-bullet technologies (like AI - see below), scientific research needs to be translated into technologies that everyday people can use. And technology that everyday people use needs to be adopted, which means it needs to be understood and accepted. That isn't a formula for a singularity.

In theory a small population could make a 'huge breakthrough' and race ahead, leaving the rest of the world's population bewildered by the change, but every indication is that the big problems need big resources to address - and even more resources to translate into actual out-of-the-lab usage (see the electronics foundries link above).

We do see some impressive stuff (like Google) which catches our attention and is really useful, but this is a tool that society adopts at its own rate. And Google is successful because it DOESN'T baffle and bewilder; it empowers the everyday person. That is pretty characteristic of successful technology.

We've never found a silver bullet: science fiction stories often have a bit of hidden magic - the AI, fusion power, teleportation (aka wormhole gates, star drives, etc.) - that definitively solves some problem (problem solving, energy, transport to the stars) with no big side effects. That is great for science fiction, but in the real world we don't do this (I won't say absolutely, but I can't think of a real-life silver bullet). Everything is a careful trade-off; the really big problems don't just go away.

The big one is thinking: for all that computers help us do work, they don't do what we would consider 'intelligent' things. Or when they do (like pattern recognition in breast cancer X-rays) they are so limited in their scope that we st
      • by Beryllium Sphere(tm) ( 193358 ) on Sunday July 23, 2006 @10:40PM (#15767647) Journal
        Did anyone foresee that in the 90s the largest empire humans ever built would evaporate like a soap bubble? (Except Poul Anderson in the 1953 story "The Last Deliverer"). Talk about existing models of how things work falling apart.

        Imagine an intelligent and curious human from rural Nepal, or Papua New Guinea. Could you explain your job to them?

        Could you do your job without the embryonic augmentations we have now, such as Google?

        We're partway up that vertical curve now.
      • by snowwrestler ( 896305 ) on Monday July 24, 2006 @12:15AM (#15767866)
        From a 15th century monk's perspective, today's curve is vertical. Of course to us it's clearly not. Thus the flaw of the hand-wringing over "the singularity" is illustrated--it suffers from the classic error of attempting to evaluate the future in the context of today. Of course when we get to the future, we'll be in the future too--so it doesn't matter what we think now.

        Ever hear of the generation gap? The youth of today are different from us--they've been raised from birth in a world of ubiquitous networked computing and ambient findability. (see? I can throw around stupid buzzwords too.) Talk of "The Singularity" is not much different from complaining that your kids spend all their time texting. It's making explicit the fact that you can't imagine keeping up as you age. Well duh. We won't be running the show in 2050--our kids and their kids will.
        • by Gulik ( 179693 ) on Monday July 24, 2006 @09:37AM (#15769104)
          From a 15th century monk's perspective, today's curve is vertical. Of course to us it's clearly not.

          That's really not what's under discussion here -- I'm not more intelligent than a 15th-century monk. Putting that monk in the modern world would cause severe culture shock because of the disconnect between the world and his existing frames of reference. He'd have to run like mad to try to catch up, because he didn't have his whole life to become used to it, but a bright person could probably manage it.

What the futurists are talking about is a different level of intelligence. A person (machine, augmented human, whatever) who has more basic potential than a human, in the way a human has more basic potential than a cat. Someone for whom advanced calculus solutions are as intuitively obvious and immediate as "2+2" is for you. Someone who remembers anything they've ever seen or heard the way you can remember what someone just said to you a moment ago. Someone who can picture deformations of multi-dimensional topographies as easily as you can imagine a checkerboard folding in the middle. And even those examples are pretty poor, coming as they do from an average human intelligence -- probably only the first step along the path these guys are trying to think about.
    • Re:Ye gods... (Score:3, Insightful)

      The thing is, there have been several "singularities" in human history: the Agricultural Singularity, the Industrial Singularity, the Computer Singularity, and so on and so forth. Or, to use the term that most historians use - rather than "Singularity", "Revolution."

      My interpretation of the singularity is very different from what they seem to be talking about in the article.. err interview. They're talking about the influence of computers, artificial intelligence and whatnot -- what you might call "The AI R
  • by Jeremi ( 14640 ) on Sunday July 23, 2006 @06:28PM (#15767023) Homepage
    The merging of man and machine has long been a vision explored in science fiction.


    Christ. Just wait until the "defend traditional marriage" crowd gets word of this.

'Futurist' and 'technologist' are dirty words. They spout 100% speculation and are generally just as far off. If you keep encouraging them by giving them airtime, they will never learn the value of actual research and never contribute anything to society.

I'm sick of the ever-growing number of people who 'invented the internet' or 'predicted such and such' or are 'experts on X'. I strongly discourage anyone from reading their trashy ghost-written novels, as a message to publishers not to pollute the pseudo-inte
  • by wa1hco ( 37574 )
    The singularity can't happen because intelligence has limits. The hypothetical machine that makes itself ever smarter doesn't make sense.

Assuming intelligence is the ability to extrapolate from facts to deduce the future, it's limited by the accuracy of the facts (garbage in, garbage out; a toy simulation below illustrates the cap). There's no point in having ever greater powers of deduction if the facts have a lot of noise in them.

Sherlock Holmes looked powerful because Victorian society had high levels of structure and relatively little noise.
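A toy simulation of the garbage-in, garbage-out point above (the linear "world" y = 2x and the noise levels are invented for illustration): even a reasoner applying exactly the right rule cannot predict better than the noise in its observed facts allows.

```python
# Perfect deduction, noisy facts: mean prediction error scales with input noise.
import random

def simulate(noise_sd, n=1000):
    """True relationship y = 2x; observe x with Gaussian noise, predict y."""
    errors = []
    for _ in range(n):
        x = random.uniform(0, 10)
        observed = x + random.gauss(0, noise_sd)  # the noisy "facts"
        errors.append(abs(2 * observed - 2 * x))  # ideal rule, imperfect inputs
    return sum(errors) / n

for sd in (0.0, 0.5, 1.0, 2.0):
    print(f"input noise sd={sd:3.1f} -> mean prediction error {simulate(sd):.2f}")
```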
    • > The singularity can't happen because intelligence has limits. The hypothetical machine that makes itself ever smarter doesn't make sense.

      Where is the limit? 200 IQ? 1000 IQ?

Even then, the hypothetical AI has advantages over us. It can examine its own code (subconscious?), so it can optimize slow, inefficient routines. Maybe it could even optimize its architecture via a custom instruction set, or even the base process: silicon to quantum, or biotech.

      It would also have a much larger range of IO c
  • by QuantumG ( 50515 ) <qg@biodome.org> on Sunday July 23, 2006 @06:44PM (#15767070) Homepage Journal
The hard-takeoff concept of a seed AI has as a prerequisite the creation of a computer program that can understand and write source code. I'd probably try to make something like that to make my job as a programmer easier, but there's no way I'd let anyone know I had... otherwise they wouldn't need me. Which makes you wonder: maybe someone already has one.
  • scienobabble (Score:3, Insightful)

    by neatfoote ( 951656 ) on Sunday July 23, 2006 @06:58PM (#15767106)
    Only bad things happen when people steal hard-science ideas to describe soft-science phenomena-- the ridiculous (and unaccountably persistent) idea of "social evolution" is one example, and as far as I can see, this "technological singularity" notion is another. History is a phenomenally complex system; even in hindsight, it's virtually impossible to find real patterns, and grafting the language of astrophysics onto a theory of social progress lends an undeserved air of gravitas and mathematical precision to what's essentially just fun speculation.

Sure, things change, sometimes quite suddenly and unexpectedly. But really, the relationship between the development of literacy (NPR's example of a past singularity) and the subsequent course of history is nothing like the relationship between a real singularity and... anything. It's just a bad metaphor, and I think I'd have a lot more respect for "future studies" if they dropped it and came up with a new way of describing whatever phenomenon it is they're predicting.
  • Long Now Seminar (Score:3, Informative)

    by PromANJ ( 852419 ) on Sunday July 23, 2006 @07:02PM (#15767120) Homepage Journal
I think Bruce Sterling gave a talk on this subject; it can be found a bit down on this page: Long Now Seminars [longnow.org].

My personal whimsical theo... hypoth... idea is that alien civilizations turn into (towards us) apathetic singularities, and that's why we will never hear Chenjesu's crystalline humming calling us. Maybe the universe will end in some sort of rather dull, uniform, black technological-singularity goo.
  • by Baldrson ( 78598 ) * on Sunday July 23, 2006 @07:03PM (#15767123) Homepage Journal
    At the risk of repeating myself:

    The C-Prize [geocities.com] is the path to superhuman AI.

    And as for the "threat" of superhuman AI:

Even assuming AI were to develop the equivalent of genetic self-interest (something that would take a long time even if humans turned them loose to reproduce without us selecting them appropriately), I'd much rather be in competition with a species that had the potential of being symbiotic due to having a different ecological niche. If it gets to the point that solar output (forget the sun falling on Earth here -- that's too insignificant to be important to a silicon-based life form) is the limiting resource, I suspect the niche humans fill will be orders of magnitude larger than the one they now fill on earth.

The best hope for the transhumanist wishful thinking is that humans develop superhuman AIs that find it to their advantage to utilize the gas giants, given the limited supply of silicon. Humans, as the highest form of organic intelligence, would be the natural species to transition to higher intelligence.

Maybe the super AIs could get around this by using a straight-carbon semiconductor form of intelligence or something, but there is more going on in our brains than we understand. For example, I suspect there is a lot more quantum logic going on within our brains than cognitive scientists and neurologists currently think. It only makes sense that evolution would have exploited every angle of the physics of the universe to create intelligence. My point in bringing in the possibility of quantum logic is that there are really many things we don't know about natural systems of high complexity, and I suspect the same will apply even to super AIs. The fact that we might have the laws down cold at the quantum level doesn't mean we know how things operate in higher-complexity systems.

    Human brains are very valuable repositories of ancient wisdom about the universe and the most optimal thing for the super AIs to do -- at least for a while -- would be to transhumanize our brains for us.

    Moreover, if it is ok to pass laws to prevent the creation of intelligences greater than our own, why isn't it ok to pass laws dumbing down the smartest among us?

    The self-determination argument applied to humanity as a whole -- striving to maintain control of its own destiny by preventing the creation of higher non-human intelligences -- applies also to people who want to maintain control of their own destiny against those smarter than themselves.

    Personally I'm much more frightened of unenlightened self-interest than I am enlightened self-interest.

    I really wish it were possible to make some of the "smart" people who are really good at grabbing control of resources intelligent enough to understand that they are using those resources in very stupid, self-destructive ways.

    Indeed, it is this abysmal stupidity among the shrewdest among us that is my main motivation for promoting super AI.

  • by dpbsmith ( 263124 ) on Sunday July 23, 2006 @07:13PM (#15767152) Homepage
Now, let me see... when was the last Singularity? Was it Y2K? Or was it perhaps the Jupiter Effect (when all the planets lined up and the gravitational effect tipped the earth off its axis)? Or am I confusing both of them with the beginning of the Aquarian Age? Or maybe I'm thinking of the Harmonic Convergence of August 17, 1987?

    I'm way too young to remember the Millerites and the Great Disappointment of October 22, 1844, when Jesus failed to reappear, but I've been blessed to live through a veritable multiplicity of singularities.

    Oooh, singularity! I like that word. So much kewler than, say, "Armageddon." It sounds so technical, so scientific, so free from ranting religiosity....
The last singularity was the mastery of fire. There's no way the humanoids back then could understand where it would lead. It didn't look like a singularity because history moved very slowly 200K-500K years ago.
  • by dargaud ( 518470 ) <slashdot2@@@gdargaud...net> on Sunday July 23, 2006 @07:16PM (#15767159) Homepage
As my father said after I explained this singularity 'thing' over dinner (and lotsa wine): "Putting a million calculators next to each other doesn't make an intelligent computer." In other words: we may have the hardware, but we are very far from having the software for that thing...
  • by Humm ( 48472 ) on Sunday July 23, 2006 @07:38PM (#15767207)
    "existing models of the future cease to give reliable or accurate answers"

The premise of this definition is that models of the future give reliable or accurate answers at present. What models are they talking about? Special futurist models? Do those really give reliable or accurate answers today? Or do they mean all models of human behaviour, i.e., most models of the social sciences? Will supply and demand no longer determine price?

    If the models are found not to be good predictors of behaviour, they will be modified or replaced. You know... sort of like how it works right now?
If patterns in human behaviour start changing rapidly because of rapidly evolving superhuman intelligence, then sure, our ability to model that behaviour will go out the window. But then we won't be doing the modeling; superhuman intelligences will. I don't see why the emergence of superhuman intelligence would have to lead to a singularity.

    I believe the models will cope. Not "existing models", but tomorrow's models.
  • by Dachannien ( 617929 ) on Sunday July 23, 2006 @10:24PM (#15767606)
    Douglas Hofstadter [indiana.edu], a Pulitzer prize winning author with a Ph.D. in physics and an appointment in Cognitive Science at Indiana University, talked about Ray Kurzweil [wikipedia.org]'s predictions of the oncoming technological singularity at the Artificial Life X [alifex.org] conference this year. An audio-only webcast of his talk is available [vub.ac.be].
  • B.S. (Score:3, Insightful)

    by RKBA ( 622932 ) * on Monday July 24, 2006 @04:59AM (#15768269)
    I would rather have a historian predict the future than a self-appointed "Futurist."

    On the other hand, their proposed "technological singularity" has served well as the theme of a great many science fiction novels. ;-)

  • Faster and faster (Score:3, Insightful)

    by airship ( 242862 ) on Monday July 24, 2006 @11:22AM (#15769851) Homepage
    Post-humanism is like a snowball. As it rolls, it gets bigger and faster.

I'll use myself as an example. I wore glasses from the 5th grade on. Six years ago, after 40 years of wearing glasses, I had cataract surgery that replaced my damaged lenses with plastic ones. (Complete with warranty cards, I might add; the future is weird.) I've had diabetes for 25 years. For the first 10, I treated it with diet. For the next 10, with pills. For most of the next 5, I injected a form of insulin created by genetically modified bacteria in vats. (For the previous 60 years, insulin had been taken from the harvested pancreases of slaughtered cattle.) For the last couple of months, I have been injecting tiny amounts of a new drug that was developed because a molecular biologist noticed that the molecular structure of a key insulin-regulating hormone was strikingly similar to that of gila monster venom.

    I take an additional 6 drugs that aid in further controlling my diabetes, control my asthma, keep my arthritis from crippling me, or act as preventatives for high blood pressure and heart disease.

    I am now 54 years old. In the Stone Age, I would have died before I was 20. Even in the early 20th century, I would have been lucky to make it to 30.

    We are very close to extending the human lifespan by one year every year. Don't think we Baby Boomers are going to get out of your way, kiddies. We're here for the long haul. :)
