NPR Looks to Technological Singularity

Rick Kleffel writes to tell us that NPR is featuring a piece with both Vernor Vinge and Cory Doctorow looking at the possibility of the "technological singularity" in the near future. Wikipedia defines a technological singularity as a "hypothetical 'event horizon' in the predictability of human technological development. Past this event horizon, following the creation of strong artificial intelligence or the amplification of human intelligence, existing models of the future cease to give reliable or accurate answers. Futurists predict that after the Singularity, posthumans and/or strong AI will replace humans as the dominating force in science and technology, rendering human-specific social models obsolete."
This discussion has been archived. No new comments can be posted.

  • by TwentyLeaguesUnderLa ( 900322 ) on Sunday July 23, 2006 @05:38PM (#15766876)
    So, they first say that you can't predict what'll happen after that singularity because The World Will Be So Different Than Now, and then proceed to give predictions of what'll happen after that singularity?

    Brilliant, real brilliant.
  • Re:My god! (Score:2, Insightful)

    by Anonymous Coward on Sunday July 23, 2006 @05:46PM (#15766897)
    "The future is already here; it's just not evenly distributed."
      -- William Gibson
  • by chriss ( 26574 ) * <chriss@memomo.net> on Sunday July 23, 2006 @05:49PM (#15766906) Homepage

    Well, I doubt it. I agree with most of the ideas in the 6:17 piece and even agree that educational and social changes like widespread literacy may be considered a singularity, but I seriously doubt the timeframe of one generation/30 years they mention. Literacy was adopted over hundreds of years; network communities have been developing for at least 30 years and are still primitive and very far from a "collective mind". For me Wikipedia is "augmented intelligence", but before that I had the Encyclopedia Britannica on my iBook, and before that an encyclopedia on my desk, so this too has evolved. And since Wikipedia is created by so many, it may be considered a primitive product of the "meta intelligence" described.

    Btw, the piece from NPR focuses (very trendy) on collaboration and advanced information management; they do not place great hope on a major breakthrough in AI.

  • Re:My god! (Score:3, Insightful)

    by Anonymous Coward on Sunday July 23, 2006 @05:52PM (#15766917)
    An author using /. to publicise their latest novel. Yawn.

    From what I've seen we are as near to creating decent AI as we are to producing fusion power stations.
  • by Futurepower(R) ( 558542 ) on Sunday July 23, 2006 @05:53PM (#15766919) Homepage
    From the Slashdot story, an example of science fiction: "Past this event horizon, following the creation of strong artificial intelligence or the amplification of human intelligence..."

    There is no "artificial intelligence". All intelligence that is called artificial intelligence is genuine. It's a rare example of people saying something is artificial when it is genuine. It's an example of disrespecting very intelligent programmers. Disrespect of technically knowledgeable people is very common.

    Computing is so famous now that people with little or no technical knowledge want to seem like they know something about it. But, they don't want to actually study anything. They just want to pontificate.

    --
    U.S. Government violence encourages other violence.
  • What they're talking about is the failure of extrapolation using models: it's easy to say that the future will have this or that generalized feature, but hard when you move to greater and greater detail.
  • Re:Since when ? (Score:5, Insightful)

    by Zeebs ( 577100 ) <rsdrew@@@gmail...com> on Sunday July 23, 2006 @06:16PM (#15766988)
    They got the video phone right; it is possible, as others have pointed out. What they got wrong was the market, and they were futurists after all, so who can blame them for that.

    The problem with the video phone is that I can't roll out of bed and answer it. Video conferencing does have its uses, but I need time to prepare so I don't look like my usual pile of ass who just rolled out of bed. That might make the telemarketers stop calling tho... hmmm

    It wasn't the technology they guessed wrong, unless you count not having those things the Jetsons did: instantly grooming and dressing you as you got out of bed. Now that would make the video phone take off.
  • Wikipedia, again? (Score:1, Insightful)

    by Anonymous Coward on Sunday July 23, 2006 @06:17PM (#15766993)
    How about NOT using Wikipedia as a reference? I don't understand what Slashdot's obsession with this highly biased and easily tampered-with source of "information" is.
  • by Anonymous Coward on Sunday July 23, 2006 @06:21PM (#15767002)
    Before mankind discovered stone tools and fire, who could have predicted what would happen? Before agriculture, who could have predicted what would happen? Before the industrial revolution, who could have predicted what would happen? All that we can predict is that stuff will happen. My money is on another dark age.
  • by Anonymous Coward on Sunday July 23, 2006 @06:31PM (#15767034)
    so why should superintelligent robots have any concern with human society?

    OTOH, when I consider what I do to the fire ants on my lawn every year, it does not give me hope for mankind living in a world of super robots. They might view us as little more than a nuisance.

    Most likely they would go their way and we would go ours. We would have to learn to identify their intergalactic highways and not cross against the light, of course, otherwise:


    He didn't look left
    And he didn't look right,
    He didn't look at all,
    It was the middle of the night.
    He didn't see the station wagon car
    The skunk got squashed and
    There y'are!

    You gotta
    Dead skunk in the middle of the road,
    Dead skunk in the middle of the road,
    Dead skunk in the middle of the road,
    stinkin' to high heaven!

  • by QuantumG ( 50515 ) <qg@biodome.org> on Sunday July 23, 2006 @06:44PM (#15767070) Homepage Journal
    The hard takeoff concept of a seed AI has as a prerequisite the creation of a computer program that can understand and write source code. I'd probably try to make something like that to make my job as a programmer easier, but there's no way I'd let anyone know I had... otherwise they wouldn't need me. Which makes you wonder: maybe someone already has one.
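
    A minimal sketch of the mechanical end of that idea -- a program that parses, rewrites, and re-emits source -- using Python's ast module (assuming Python 3.9+ for ast.unparse; the constant-folding pass is a toy stand-in for "understanding" code):

        import ast

        class ConstantFolder(ast.NodeTransformer):
            """Toy rewrite pass: fold constant multiplications."""
            def visit_BinOp(self, node):
                self.generic_visit(node)  # rewrite children first
                if (isinstance(node.op, ast.Mult)
                        and isinstance(node.left, ast.Constant)
                        and isinstance(node.right, ast.Constant)):
                    return ast.copy_location(
                        ast.Constant(node.left.value * node.right.value), node)
                return node

        source = "def scale(r):\n    return 2 * 2 * r"
        print(ast.unparse(ConstantFolder().visit(ast.parse(source))))
        # the emitted function body now reads: return 4 * r

    A real seed AI would need vastly more than this, but the read-modify-emit loop is the same shape.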
  • by E++99 ( 880734 ) on Sunday July 23, 2006 @06:45PM (#15767077) Homepage
    There is no "artificial intelligence". All intelligence that is called artificial intelligence is genuine.
    There is no artificial intelligence, because what is called "artificial intelligence" is actually just algorithms. The only intelligence involved is in the designing of them by humans. These "futurists" (science fiction writers) have been saying "strong AI is right around the corner" for at least four decades now. As someone who designs neural networks and keeps up with the latest research, I can assure you that we are no closer to "strong AI" than we were in the stone age. An artificial neural network is no more likely to acquire intelligence than a clay head with magic words spoken to it. I'm not knocking either idea... just putting it in perspective.
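
    To make the "just algorithms" point concrete: a single artificial neuron is nothing but a weighted sum pushed through a squashing function (a minimal sketch in Python; the weights are made-up numbers, not a trained network):

        import math

        def neuron(inputs, weights, bias):
            # A "neuron": dot product plus bias, squashed by a logistic sigmoid.
            activation = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-activation))

        # A two-neuron hidden "layer" feeding one output neuron. No understanding
        # anywhere -- just arithmetic on numbers that someone (or training) chose.
        hidden = [neuron([0.5, -1.2], [0.7, 0.3], 0.1),
                  neuron([0.5, -1.2], [-0.4, 0.9], 0.0)]
        print(neuron(hidden, [1.5, -2.0], 0.2))

    Training only adjusts the numbers; the arithmetic never changes.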
  • by sakusha ( 441986 ) on Sunday July 23, 2006 @06:45PM (#15767083)
    Whatever was the top story 30 minutes ago on BoingBoing.
  • by chriss ( 26574 ) * <chriss@memomo.net> on Sunday July 23, 2006 @06:55PM (#15767100) Homepage
    For me Wikipedia is "augmented intelligence", but before that I had the Encyclopedia Britannica
    Perhaps if you augmented your intelligence a bit more, you'd understand that it's not the same as knowledge or information.

    Okay, this came out wrong. I do not think that Wikipedia represents intelligence, and therefore it cannot be "augmented intelligence". I think that (one aspect of) intelligence is the ability to process information, evaluate it in combination with other information/knowledge acquired before, establish a position in a world model, decide on an action based on formerly known actions or develop a new action, and finally perform it. So for me Wikipedia can augment a human's intelligence not simply by providing more information, but by providing it in a way that it may be added to the regular information-processing habit.

    Let's say I make 500 conscious decisions every day (which shirt to wear, which food to eat, take the new job, press the red button, etc.). For almost any of these decisions I can rely on a mix of internal information (already acquired knowledge and deductions) and external information (books, web, Wikipedia, ask someone). I will not visit the public library 500 times a day, but I may call up an article from Wikipedia 20 times a day. It's not just about availability of information, it's also about "process compatibility". Therefore the encyclopedia on the desk may not be counted as augmenting my "intelligence process" (access is too slow for me to be willing to use it all the time), while Wikipedia may. This depends on your personal process; I'm sure there have always been people who look up every foreign word they don't know while most try to guess, and Wikipedia will not become a part of your routine unless you replace your modem with DSL or cable.

  • scienobabble (Score:3, Insightful)

    by neatfoote ( 951656 ) on Sunday July 23, 2006 @06:58PM (#15767106)
    Only bad things happen when people steal hard-science ideas to describe soft-science phenomena -- the ridiculous (and unaccountably persistent) idea of "social evolution" is one example, and as far as I can see, this "technological singularity" notion is another. History is a phenomenally complex system; even in hindsight, it's virtually impossible to find real patterns, and grafting the language of astrophysics onto a theory of social progress lends an undeserved air of gravitas and mathematical precision to what's essentially just fun speculation.

    Sure, things change, sometimes quite suddenly and unexpectedly. But really, the relationship between the development of literacy (NPR's example of a past singularity) and the subsequent course of history is nothing like the relationship between a real singularity and... anything. It's just a bad metaphor, and I think I'd have a lot more respect for "future studies" if they dropped it and came up with a new way of describing whatever phenomenon it is they're predicting.
  • by Baldrson ( 78598 ) * on Sunday July 23, 2006 @07:03PM (#15767123) Homepage Journal
    At the risk of repeating myself:

    The C-Prize [geocities.com] is the path to superhuman AI.

    And as for the "threat" of superhuman AI:

    Even assuming AI were to develop the equivalent of genetic self-interest (something that would take a long time even if humans turned them loose to reproduce without us selecting them appropriately), I'd much rather be in competition with a species that had the potential of being symbiotic due to having a different ecological niche. If it gets to the point that solar output (forget the sunlight falling on Earth here -- that's too insignificant to be important to a silicon-based life form) is the limiting resource, I suspect that the niche humans fill will be orders of magnitude larger than the one they now fill on Earth.

    The best hope humans have of the transhumanist wishful thinking coming true is to develop superhuman AIs that find it to their advantage to utilize the gas giants, given the limited supply of silicon. Humans, as the highest form of organic intelligence, would be the natural species to transition to higher intelligence.

    Maybe the super AIs could get around this by using a straight carbon-semiconductor form of intelligence or something, but there is more going on in our brains than we understand. For example, I suspect there is a lot more quantum logic going on within our brains than cognitive scientists and neurologists currently think. It only makes sense that evolution would have exploited every angle of the physics of the universe to create intelligence. My point in bringing in the possibility of quantum logic is that there are really many things we don't know about natural systems of high complexity, and I suspect the same will apply even to super AIs. The fact that we might have the laws down cold at the quantum level doesn't mean we know how things operate in higher-complexity systems.

    Human brains are very valuable repositories of ancient wisdom about the universe and the most optimal thing for the super AIs to do -- at least for a while -- would be to transhumanize our brains for us.

    Moreover, if it is ok to pass laws to prevent the creation of intelligences greater than our own, why isn't it ok to pass laws dumbing down the smartest among us?

    The self-determination argument applied to humanity as a whole -- striving to maintain control of its own destiny by preventing the creation of higher non-human intelligences -- applies also to people who want to maintain control of their own destiny against those smarter than themselves.

    Personally I'm much more frightened of unenlightened self-interest than of enlightened self-interest.

    I really wish it were possible to make some of the "smart" people who are really good at grabbing control of resources intelligent enough to understand that they are using those resources in very stupid, self-destructive ways.

    Indeed, it is this abysmal stupidity among the shrewdest of us that is my main motivation for promoting super AI.

  • Re:Since when ? (Score:5, Insightful)

    by JDevers ( 83155 ) on Sunday July 23, 2006 @07:05PM (#15767127)
    Are you serious?

    How about these:

    1791: Luigi Galvani accidentally closed an electrical circuit through a frog's leg, causing it to jerk violently. This rapidly led to the understanding of how nerves and muscles work.

    1879: Louis Pasteur accidentally inoculated chickens with an old cholera culture. The chickens should have died from cholera, but they got sick and then got better. After discovering the mistake, Pasteur re-inoculated the chickens with fresh culture and the chickens didn't even get sick. This led to modern vaccination.

    1895: Wilhelm Roentgen accidentally discovered X-rays.

    1928: Alexander Fleming accidentally discovered that a type of mold (later named Penicillium) significantly inhibited bacterial growth. This led to antibiotics.

    Never assume that all discoveries are predicted before they are "discovered." I would actually say that it is mostly INSIGNIFICANT technological advancement that is predicted well in advance; most such advances are evolutionary. Many significant advancements are revolutionary, and there is no way many of them could have been predicted, as there was no information related to the new process before the discovery of the process itself.
  • by dargaud ( 518470 ) <[ten.duagradg] [ta] [2todhsals]> on Sunday July 23, 2006 @07:16PM (#15767159) Homepage
    As my father said after I explained this singularity 'thing' over dinner (and lotsa wine): "Putting a million calculators next to each other doesn't make an intelligent computer". His point being that we may have the hardware, but we are very far from having the software for that thing...
  • by RyanFenton ( 230700 ) on Sunday July 23, 2006 @07:21PM (#15767170)

    Experience. The hidden result of all reactions, real or imagined - observable experience.

    Regardless of what gods may exist, what greater reality may exist, or whatnot, the purpose of everything can be met with a system that pursues experience in all its variety. If we are all that is, the eternal quest for experience will be its own purpose. Endless experience would fulfill all purposes.

    The trick is setting up a system of gathering experience that doesn't meet with stagnation. Stagnation can come in many forms: death/cessation, returning to exactly the same state as some past point without being aware of it (looping), or any path that will inevitably lead to those states. Entropy is an obvious block to seeking experience as an ultimate goal -- but if it is totally unavoidable, then the ultimate goal would be maximizing exploration with the resources available.

    Ryan Fenton
  • by Dasher42 ( 514179 ) on Sunday July 23, 2006 @07:22PM (#15767177)
    Ever seen the indie film "Waking Life"? There's a segment where a post-humanist goes on about how predatory relationships will be obsolete in the post-singularity world.

    I saw that and thought of a recent simulation of an evolving ecosystem. Autotrophs, herbivores, predators and parasites all evolved independently in a simulation that simply required growth and survival. I think they are naturally emergent phenomena. You can even explain the existence of defense attorneys and cold-call telephone soliciting this way.
  • by tcc3 ( 958644 ) on Sunday July 23, 2006 @07:37PM (#15767202)
    "That is why this Matrix was redesigned to the peak of your civilization. Or should I say our civilization? Because as soon as we started thinking for you, it became our civilization."
  • Re:Ye gods... (Score:3, Insightful)

    by Fred Ferrigno ( 122319 ) on Sunday July 23, 2006 @07:37PM (#15767203)
    The thing is, there have been several "singularities" in human history: the Agricultural Singularity, the Industrial Singularity, the Computer Singularity, and so on and so forth. Or, to use the term that most historians use - rather than "Singularity", "Revolution."

    My interpretation of the singularity is very different from what they seem to be talking about in the article... err, interview. They're talking about the influence of computers, artificial intelligence and whatnot -- what you might call "The AI Revolution" -- rather than the real singularity.

    The foundation of the technological singularity, as I always understood it, is that new technology (not necessarily AI) increases the pace of further technological development, until development accelerates to infinity. The first part of the conjecture is easy to verify, as witnessed by the revolutions you mention. Humans lived on this earth for about 100,000 years before developing agriculture; after that it was about 9,000 years before the printing press and widespread literacy; 500 years or so till the industrial revolution; maybe 150 years until we had the first computers; and ~50 years until the development of the Internet.

    If we extrapolate this trend (which is what futurists do; a back-of-the-envelope sketch follows below), future technological revolutions will increase in pace, some happening literally overnight, until they all seem to happen at once. That moment is the singularity. What happens after that is the stuff of bad science fiction.

    Personally, I think there's probably an upper limit on the pace of useful technological development. Just because Intel releases a new and faster chip doesn't mean I'm going to buy one before I've gotten the full use out of my current one. And there are certainly physical limits to technology as well: despite hundreds of years of trying, no one's yet managed to turn lead into gold. In the long run, I think the pace of development will slow (and there's some who say it has slowed) and eventually technology will just plateau, but not for a very long time.
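
    To make the extrapolation above concrete, a back-of-the-envelope sketch in Python. The dates are the rough round numbers from the comment, and the constant-ratio assumption is exactly the conjecture in question, not an established fact:

        # Very approximate dates of past "revolutions" (negative = BC):
        # agriculture, printing press, industry, computers, internet.
        years = [-8000, 1450, 1800, 1945, 1990]
        gaps = [b - a for a, b in zip(years, years[1:])]
        print(gaps)  # [9450, 350, 145, 45]

        # If each gap were a constant fraction r < 1 of the one before, the
        # remaining gaps would form a geometric series with a finite sum;
        # that limit point is "the singularity".
        ratios = [g2 / g1 for g1, g2 in zip(gaps, gaps[1:])]
        r = sum(ratios) / len(ratios)              # crude average, about 0.25
        print(years[-1] + gaps[-1] * r / (1 - r))  # about 15 years past 1990

    With these numbers the series converges only about 15 years past the last event, which shows how much work the constant-ratio assumption is doing.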
  • by Humm ( 48472 ) on Sunday July 23, 2006 @07:38PM (#15767207)
    "existing models of the future cease to give reliable or accurate answers"

    The premise of this definition is that models of the future give reliable or accurate answers at present. What are the models they talk about? Special futurist models? Do these really give reliable or accurate answers today? Or do they mean all models of human behaviour, i.e. most models of the social sciences? Supply & demand will no longer determine price?

    If the models are found not to be good predictors of behaviour, they will be modified or replaced. You know... sort of like how it works right now?
    If patterns in human behaviour start changing rapidly because of rapidly evolving superhuman intelligence, then sure, our ability to model that behaviour will go out the window. But then, we won't be doing the modeling; superhuman intelligences will. I don't see why the emergence of superhuman intelligence would have to lead to a singularity.

    I believe the models will cope. Not "existing models", but tomorrow's models.
  • by sgt_doom ( 655561 ) on Sunday July 23, 2006 @07:56PM (#15767252)
    Wow!! Futurists predict... Gosh, that's certainly got a lot of gravity behind it, given the rather obvious fact that "futurists" have been completely wrong in all their predictions to date, except, of course, those who claim -- after the fact -- to have been right... My faith in "futurists'" predictions would be, ah, maybe equal to my faith in Paul Wolfowitz and Richard Perle and their knowledge and predictions about the Iraq war (actually, invasion and occupation).
  • by JetScootr ( 319545 ) on Sunday July 23, 2006 @07:58PM (#15767262) Journal
    was the mastery of fire. There's no way the hominids of the time could understand where it would lead. It didn't look like a singularity because history moved very slowly 200K-500K years ago.
  • by 1steve1 ( 73443 ) on Sunday July 23, 2006 @08:07PM (#15767288) Journal
    I have augmented my intelligence with Wi-Fi, the Laptop and teh Google. Am I considered a post-human? Also will the internet become the collective consciousness? I think not, not with the two tiered internet on its way :P
  • by Nefarious Wheel ( 628136 ) on Sunday July 23, 2006 @08:12PM (#15767302) Journal
    Can you imagine the "text book that anyone can edit" being used in any school...

    You don't think much of anyone, do you?

  • Re:I for one... (Score:3, Insightful)

    by kfg ( 145172 ) * on Sunday July 23, 2006 @08:48PM (#15767400)
    The Laws of Robotics . . .and a great big "OFF" button would be a start.

    Although I now post under my actual initials, in my day I've had two screen aliases. Yours is one of them. It feels kinda weird to reply to it.

    KFG
  • Re:Ye gods... (Score:4, Insightful)

    by apposite ( 113190 ) on Sunday July 23, 2006 @09:56PM (#15767559) Homepage
    In Australia we have a local idiot (Damien Broderick) who enthuses over the singularity and I find it incredibly irritating. I don't have a problem with the concept of a singularity, I DO have a problem with the insistence of some enthusiasts that the singularity is just round the corner. My biggest problem is that most of the pundits don't actually seem to work with technology.

    It is really easy as an observer to sit on the outside and say: "Wow, more neato stuff seems to be coming out faster and faster- why, if I extrapolate it will probably keep coming out faster and faster and we'll get this exponential curve." But that ignores the fact that:

    * The problems get harder
    * Technological adoption is generally limited by the speed at which society can absorb it, not by the technology
    * We've never found a silver bullet

    By which I mean:

    The problems get harder: Einstein may have been a genius -- but we have our share of geniuses today. We almost certainly have many more geniuses actively involved in science (and physics research) than ever before -- and they are well resourced (not fantastically, but OK). But they aren't producing Einstein-like breakthrough physics, because it is damn hard to improve on what we have. We know the current models have holes but we haven't worked out how to fix them -- and not for want of trying.

    The same applies to lots of technical problems -- both the technical research and the translation of that research into real-world products. Batteries and fusion power both have enormous commercial incentives, but somehow we haven't found the answer yet. We HAVE made improvements, but the simple truth is: these are hard problems.

    See also the cost of electronics foundries [wikipedia.org] -- around a billion $US and climbing by roughly an order of magnitude with each successive generation. That is where the bleeding edge of real-world technology rests, and it isn't cheap and it is just unbelievably tricky.

    Technological adoption is generally limited by the speed at which society can absorb it, not by the availability of technology: Science can in theory race ahead of everyday use, but in practice it usually has to be supported by technology. Leaving aside silver-bullet technologies (like AI -- see below), scientific research needs to be translated into technologies that everyday people can use. And technology that everyday people use needs to be adopted, which means it needs to be understood and accepted. That isn't a formula for a singularity.

    In theory a small population could make a 'huge breakthrough' and race ahead, leaving the rest of the world's population bewildered by the change, but every indication is that the big problems need big resources to address. And even more resources to translate into actual out-of-the-lab usage (see the electronics foundries link above).

    We do see some impressive stuff (like Google) which catches our attention and is really useful, but this is a tool that society adopts at its own rate. And Google is successful because it DOESN'T baffle and bewilder. It empowers the everyday person. That is pretty characteristic of successful technology.

    We've never found a silver bullet: Science fiction stories often have a bit of hidden magic -- the AI, fusion power, teleportation (aka wormhole gates, star drives, etc...) that definitively solves some problem (problem solving, energy, transport to the stars) with no big side effects. That is great for science fiction, but in the real world we don't do this (I won't say absolutely, but I can't think of a real-life silver bullet). Everything is a careful trade-off; the really big problems don't just go away.

    The big one is thinking: for all that computers help us do work they don't do what we would consider 'intelligent' things. Or when they do (like pattern recognition in breast cancer X-Rays) they are so limited in their scope that we st
  • by WittyName ( 615844 ) on Sunday July 23, 2006 @11:51PM (#15767819)
    > The singularity can't happen because intelligence has limits. The hypothetical machine that makes itself ever smarter doesn't make sense.

    Where is the limit? 200 IQ? 1000 IQ?

    Even then, the hypothetical AI has advantages over us. It can examine its own code (subconscious?), so it can optimize slow, inefficient routines. Maybe it could even optimize its architecture via a custom instruction set, or maybe even the base process: silicon to quantum, or biotech.

    It would also have a much larger range of I/O choices, as well as a larger number of channels.

    As well as non-fuzzy long-term memory.

    Postulate this:
        1) the AI starts at 100 IQ
        2) every year it can think some percent faster
        3) larger amount/variety of input

    Questions:
    1) would it not give better-informed answers, faster, year after year?
    2) This would be more intelligent, right?

    Even if there is a cap at 200 IQ, if it keeps getting faster it can evaluate more possible breakthrough ideas per unit of time. Maybe limited by boredom?
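
    Point 2 by itself is just compound growth; a quick sketch (Python; the 30%-a-year figure is an arbitrary assumption):

        speed = 1.0          # thinking speed relative to the 100-IQ baseline
        ideas_per_year = 10  # arbitrary count of candidate ideas evaluated
        for year in range(1, 11):
            speed *= 1.30    # "every year it can think some percent faster"
            print(year, round(speed, 2), round(ideas_per_year * speed))

    After ten years the machine runs at roughly 13.8x baseline even though its "IQ" never changed -- the difference between a cap on intelligence and a cap on throughput.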
  • by aepervius ( 535155 ) on Sunday July 23, 2006 @11:55PM (#15767833)
    Keep in mind that such an AI would probably not be a world project, but rather a single country's. Let us imagine it is China or the US. Most probably that country would implement friendliness toward THEM rather than toward humans globally. Now the parent post begins to make a lot of frightening sense: "They are against us. We can't convince them to join us or be friendly to us. They need to be eliminated as a threat. Change nuke targeting to those countries. Countdown to launch 10, 9, 8..."
  • by snowwrestler ( 896305 ) on Monday July 24, 2006 @12:15AM (#15767866)
    From a 15th century monk's perspective, today's curve is vertical. Of course to us it's clearly not. Thus the flaw of the hand-wringing over "the singularity" is illustrated--it suffers from the classic error of attempting to evaluate the future in the context of today. Of course when we get to the future, we'll be in the future too--so it doesn't matter what we think now.

    Ever hear of the generation gap? The youth of today are different from us--they've been raised from birth in a world of ubiquitous networked computing and ambient findability. (see? I can throw around stupid buzzwords too.) Talk of "The Singularity" is not much different from complaining that your kids spend all their time texting. It's making explicit the fact that you can't imagine keeping up as you age. Well duh. We won't be running the show in 2050--our kids and their kids will.
  • by Daniel the Great ( 845799 ) on Monday July 24, 2006 @12:54AM (#15767939)
    ... but I think biomimicry [biomimicry.net] is where it's at.

    I have to disagree with you there. Consider the biggest world-changing inventions so far - The car, the airplane, the printing press, the computer, networking, the wheel - none of these are substantially based on biological mechanisms.

    The path that evolution has taken over millions of years has led to some amazingly complex and beautiful solutions to survival. But the environment that technological systems operate in now is very different, and the time spans are compressed to hundreds and even tens of years.

    Since there is currently no Strong AI (that we know of) the jury is out as to how it will happen. But the chances of it closely mimicking a biological mechanism are about the same as for the previous inventions.

  • by sinewalker ( 686056 ) on Monday July 24, 2006 @01:07AM (#15767966) Homepage
    True. I'm not a Hofstadter apologist (he hardly needs one, and I'm certainly unqualified!) but I think this prediction should also be placed in its context. Hofstadter was talking about the application of artificial reasoning to beating human chess players. The current chess champion systems aren't really reasoning, more like cheating: they spend endless cycles projecting moves forward in the problem space and then apply some heuristics in selecting the next move (sketched below). This is quite different from the lateral thinking and high-level pattern analysis that a human chess master applies, and makes best use of the computer's strength: high-speed drudgery work.

    In that light, I would say that so far the prediction holds true: no chess master has been beaten by a computer program that applies reasoning instead of dumb search and heuristics. Also, no machine has matched the three names composing the title of the book, and likely can't for a while.

    However, I'm not sure that this single prediction about chess accurately reflects the thrust of GEB anyway. Hofstadter appears to me to spend a great deal of GEB explaining what reasoning actually is, and how it should be possible to mechanise it. The prediction about chess doesn't jibe with the rest of the book as I remember it. Perhaps I should look up the quote and then I'll understand?
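
    For reference, the "projecting moves forward and applying heuristics" described above is plain minimax search. A generic sketch in Python; State and its legal_moves, apply, and heuristic_score methods are hypothetical stand-ins for a real engine:

        def minimax(state, depth, maximizing):
            # Brute-force look-ahead: no reasoning, just search plus a
            # hand-written scoring rule at the leaves.
            moves = state.legal_moves()
            if depth == 0 or not moves:
                return state.heuristic_score()
            scores = [minimax(state.apply(m), depth - 1, not maximizing)
                      for m in moves]
            return max(scores) if maximizing else min(scores)

    All of the "chess intelligence" lives in heuristic_score and in how deep the hardware lets the search go.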
  • by baby_robots ( 990618 ) on Monday July 24, 2006 @01:37AM (#15768011)
    There was a time when it was popular among chemists to believe that every chemical compound possible had already been synthesized by nature. This has been all but disproven in the chemical literature by many novel synthetic chemicals.

    While evolutionary mechanics are beautiful for creating a streamlined and efficient system, they have their limits. Biological organisms are hindered by lack of resources. While the things they do with carbon, nitrogen, and oxygen are unrivaled by any modern synthetic chemical techniques, there are many reactions that are all but impossible in biological systems due to the need for catalysts made from rare metals or extreme temperatures. Nature can't work with carbon nanotubes because it does not have any to work with.

    So what I am trying to say is that evolutionary systems are limited by the starting basis set, and expanding beyond that is impossible without an outside source.
  • Re:I for one... (Score:5, Insightful)

    by AGMW ( 594303 ) on Monday July 24, 2006 @04:18AM (#15768202) Homepage
    Now everything is poised to strip us of religion ...

    You say that like it might be a bad thing.

    Religion is all well and good when it is a personal thing, and maybe OK when you are following the teachings of people (or things) long gone, but once it forms into clumps or groups of people -- and it would seem especially once these groups start following the teachings of people who are alive now -- we start getting problems. It's the high priests, the living leaders of religions, who decide they need to spread the word of their god at the point of their followers' swords, and that's when the trouble starts!

  • by Angostura ( 703910 ) on Monday July 24, 2006 @04:35AM (#15768234)
    More amusingly, the summary gives the impression that existing models of the future actually provide accurate, meaningful answers.

    To which I feel compelled to reply "Bwhuahahahaha"
  • B.S. (Score:3, Insightful)

    by RKBA ( 622932 ) * on Monday July 24, 2006 @04:59AM (#15768269)
    I would rather have a historian predict the future than a self-appointed "Futurist."

    On the other hand, their proposed "technological singularity" has served well as the theme of a great many science fiction novels. ;-)

  • by fbjon ( 692006 ) on Monday July 24, 2006 @05:05AM (#15768276) Homepage Journal
    Yes, but the model they're using ("I imagine this could possibly happen...") is already horribly failure-prone in the first place.
  • Re:Since when ? (Score:3, Insightful)

    by teslar ( 706653 ) on Monday July 24, 2006 @06:32AM (#15768373)
    Never assume that all discoveries are predicted before they are "discovered."
    Quite right. As someone (I forget who) once said, great discoveries are not marked by the word "Eureka" but rather by "Hmmmm, that's funny...."
  • Re:I for one... (Score:4, Insightful)

    by moeinvt ( 851793 ) on Monday July 24, 2006 @07:51AM (#15768500)
    "The thread was assuming that a super AI was formed, and that they would rule over us . . . "

    Maybe it's better to be ruled by artificial intelligence than by the natural stupidity that rules over us now.

  • Plant Wheel (Score:3, Insightful)

    by Tony ( 765 ) on Monday July 24, 2006 @08:35AM (#15768664) Journal
    There are no creatures anywhere in nature which use wheels. Nor, as far as I know, plants.

    One word, my friend:

    Tumbleweeds.
  • by Gulik ( 179693 ) on Monday July 24, 2006 @09:37AM (#15769104)
    From a 15th century monk's perspective, today's curve is vertical. Of course to us it's clearly not.

    That's really not what's under discussion here -- I'm not more intelligent than a 15th-century monk. Putting that monk in the modern world would cause severe culture shock because of the disconnect between the world and his existing frames of reference. He'd have to run like mad to try to catch up, because he didn't have his whole life to become used to it, but a bright person could probably manage it.

    What the futurists are talking about is a different level of intelligence. A person (machine, augmented human, whatever) who has more basic potential than a human, in the way a human has more basic potential than a cat. Someone for whom advanced calculus solutions are as intuitively obvious and immediate as "2+2" is for you. Someone who remembers anything they've ever seen or heard the way you can remember what someone just said to you a moment ago. Someone who can picture deformations of multi-dimensional topographies as easily as you can imagine a checkerboard folding in the middle. And even those examples are pretty poor, coming as they are from an average human intelligence -- probably only the first step along the path these guys are trying to think about.
  • by alexgieg ( 948359 ) <alexgieg@gmail.com> on Monday July 24, 2006 @10:32AM (#15769484) Homepage
    Computers operate from logic, be it simple boolean logic or the highly abstracted contemporary mathematical logic in its many forms (heuristic, fuzzy, even paraconsistent) that in the end gets translated into boolean anyway. Humans, on the other hand, do logic as one among many functions, and those functions aren't themselves logical.

    Of course you can try to emulate the non-logical functions inside a logical framework, but by doing so the machine gets trapped inside a kind of "Gödel paradox", forever unable to explain itself for lack of sufficient axioms ("sufficient" meaning "infinite"). Self-consciousness is then literally impossible.

    This isn't as bad as it seems. It only means that machines, no matter how advanced, are and will always be extensions of human faculties. In other words, we are their consciousness, in the exact same sense that we're the consciousness "behind" our hands and feet. Or, if you like to see it this way, machines and humans are already a single thing, as they have always been, since the instant our first ancestor decided to throw his first rock.

    The day humanity ends is the day all machines die. Some of them can of course keep working after that, more or less as some of our body organs sometimes stay working after our brain dies. But death is already there, unavoidable, only waiting for the power source to shut down. Death is the only real human-machine "singularity", that point after which we know nothing about. Any other is mere fiction.
  • Faster and faster (Score:3, Insightful)

    by airship ( 242862 ) on Monday July 24, 2006 @11:22AM (#15769851) Homepage
    Post-humanism is like a snowball. As it rolls, it gets bigger and faster.

    I'll use myself as an example. I wore glasses from the 5th grade on. Six years ago, after 40 years of wearing glasses, I had cataract surgery that replaced my damaged lenses with plastic ones. (Complete with warranty cards, I might add; the future is weird.) I've had diabetes for 25 years. For the first 10, I treated it with diet. For the next 10, with pills. For most of the next 5, I injected a form of insulin that was created by RNA-modified bacteria in vats. (For the previous 60 years, insulin had been taken from the harvested pancreases of slaughtered cattle.) For the last couple of months, I have been injecting tiny amounts of a new drug that was developed because a molecular biologist noticed that the molecular structure of a key insulin-regulating hormone was strikingly similar to that of gila monster venom.

    I take an additional 6 drugs that aid in further controlling my diabetes, control my asthma, keep my arthritis from crippling me, or act as preventatives for high blood pressure and heart disease.

    I am now 54 years old. In the Stone Age, I would have died before I was 20. Even in the early 20th century, I would have been lucky to make it to 30.

    We are very close to extending the human lifespan by one year every year. Don't think we Baby Boomers are going to get out of your way, kiddies. We're here for the long haul. :)
  • Re:I for one... (Score:3, Insightful)

    by giblfiz ( 125533 ) on Monday July 24, 2006 @01:52PM (#15771042)
    As David Brin would say: Shame on you for thinking that there was a golden age in the past. The only golden age we will ever have is one that we build.

    In the past men polluted as aggressively as they could. There was no thought at all given to protecting the planet. People, if anything, are much, much cleaner now. The key difference is that we are slowly but surely running out of space. We are not worse polluters than our ancestors; we are just being held to an effectively higher standard. (Don't get me wrong, I think it is vital that we meet it.) Oh, and G.M. crops are about as far away from pollution as you can get. Don't be such a neophobe; if you don't want us all to starve, you're going to have to suck it up and accept some G.M. crops. It's another case of higher populations changing the standards.

    As far as the stripping of man's values goes, I don't think you are looking at the horrors of history quite carefully enough. Man has been cruel and brutal for almost his entire history. It is only very recently that democracy, the abolition of slavery, or the emancipation of women has occurred. Torture was considered a de facto standard for basically all of human history. We have come a long way, and I think we are still on an upward trend.

    I will be the first to admit that we are in a local valley. Things are worse in a lot of ways than they were 5-10 years ago from the perspective of cultural progress. But if you think that this is the beginning of the end, you are being overly pessimistic and melodramatic. Sure things are bad, and I bet they are going to get a little worse in the next two years or so, but then they will start to get better.

    My god, do I see the seeds of a bright future being planted today. A future of liberty, equality and trust. Technology is starting to enable some really incredible community tools. People are waking up and seeing that they need to play a part in the way the environment is handled. And we really are all getting smarter.

    The trend is still up! It's just the moment which is down. Honestly, the only thing that scares me is the mass retirement of the baby boomers, but hopefully that won't hit us too hard.
