BT Futurologist On Smart Yogurt and the $7 PC 455

Posted by kdawson
from the i'll-be-back dept.
WelshBint writes, "BT's futurologist, Ian Pearson, has been speaking to itwales.com. He has some scary predictions, including the real rise of the Terminator, smart yogurt, and the $7 PC." Ian Pearson is definitely a proponent of strong AI — along with, he estimates, 30%-40% of the AI community. He believes we will see the first computers as smart as people by 2015. As to smart yogurt — linkable electronics in bacteria such as E. coli — he figures that means the end of security. "So how do you manage security in that sort of a world? I would say that there will not be any security from 2025 onwards."
This discussion has been archived. No new comments can be posted.

  • by DaveM753 (844913) on Wednesday September 27, 2006 @01:13PM (#16216629)
    I can't seem to open the containers without some of it splattering all over my glasses.
  • Futurologists... (Score:5, Insightful)

    by UbuntuDupe (970646) on Wednesday September 27, 2006 @01:15PM (#16216665) Journal
    So his portfolio has outperformed the S&P, I take it?

    *ducks*
  • Right. (Score:5, Interesting)

    by PHAEDRU5 (213667) <instascreed@g[ ]l.com ['mai' in gap]> on Wednesday September 27, 2006 @01:16PM (#16216695) Homepage
    And New York was going to need 100,000,000 telephone operators by the middle of the 20th century.

    Get a grip, for God's sake.
    • And New York was going to need 100,000,000 telephone operators by the middle of the 20th century.

      Why would it need telephone operators? Isn't Manhattan Island one big prison [imdb.com]?

    • Re: (Score:3, Insightful)

      by TheRaven64 (641858)
      New York has 100,000,000 telephone operators. It's just that most of them aren't human, they're little bits of refined sand.
    • Re:Right. (Score:4, Insightful)

      by Coeurderoy (717228) on Wednesday September 27, 2006 @02:31PM (#16218081)
Well, it was around 10M, not 100M, and yes, it did. Of course, the real trick was to convince all the operators to work for free, but it worked: each telephone-owning New Yorker is his or her own telephone operator.

      That is the role of the "automatic" part in the modern phones :-)
  • by nystagman (603173) on Wednesday September 27, 2006 @01:16PM (#16216709)
    We already have people that are as dumb as computers. I say leave well enough alone.
  • I know some people that aren't any smarter than my current computer. Heck, in terms of chess, I'm one of them... my computer can kick my ass at chess. Right now we have computers that can feign intelligence, i.e. use the internet to pass a multiple-choice test, but this is not a true measure of intelligence. If in 2015 a computer literally breaks out of a research lab and starts a mission of doom, then I'd say we might have one as smart as a person.
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      If in 2015 a computer literally breaks out of a research lab and starts a mission of doom, then I'd say we might have one as smart as a person.

      At least one as smart as our President.

    • by Nascar_Geek (682890) on Wednesday September 27, 2006 @01:27PM (#16216871)
      Your computer didn't beat you at chess, a programmer did.

      When you have a computer that can beat you at chess without having a chess program installed, it's time to be concerned.
      • by geekoid (135745)
People can't play chess without the chess program being installed either... it's called learning the rules of the game.

        Now, when you can just tell it the rules of the game, and it just remembers previous games to become better, you'll have a shadow of AI.
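geekoid's remember-previous-games loop can be sketched in a few lines of Python (a toy, with every name and the trivial "game" invented here for illustration):

```python
import random
random.seed(0)  # fixed seed so the run is repeatable

# The program is told only the "rules" (which moves exist, and who won
# afterwards) and improves purely by remembering the outcomes of previous
# games. The game here is trivial - whoever picks the biggest number wins -
# but the remember-and-reuse loop is the general shape.

values = {}  # remembered worth of each move, learned from past games only

def choose(moves, explore=0.1):
    """Mostly pick the best-remembered move; occasionally try something new."""
    if not values or random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: values.get(m, 0.0))

def play_many(rounds=2000):
    moves = [1, 2, 3, 4, 5]
    for _ in range(rounds):
        m = choose(moves)
        reward = 1.0 if m == max(moves) else 0.0  # the rules decide the result
        old = values.get(m, 0.0)
        values[m] = old + 0.1 * (reward - old)  # nudge memory toward outcome

play_many()
learned_best = max(values, key=values.get)  # the agent settles on move 5
```

The point is the shape of the loop: the rules only report outcomes, and play improves solely from remembered results.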

      • by MBGMorden (803437) on Wednesday September 27, 2006 @03:15PM (#16218905)
        Your computer didn't beat you at chess, a programmer did.

        Hmm. That doesn't explain why the chess program I wrote in college kicked my own ass every time I played against it :).
      • Re: (Score:3, Informative)

        by Flyboy Connor (741764)

        Your computer didn't beat you at chess, a programmer did.

        This is a common misconception. People say "A computer only follows the rules that the programmer gave it, so it's the programmer's knowledge and skills that are used to play the game." The misconception is that the programmer is NOT actually telling the computer how to play chess. The programmer only tells the computer how to THINK ABOUT playing chess. And by executing this thinking program, the computer designs its own chess-playing strategies.
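A minimal sketch of that distinction, using Nim instead of chess (all names hypothetical; minimax stands in for "how to think about the game"):

```python
# The programmer writes a generic "think about the game" procedure (minimax
# search), never the moves themselves. The game is Nim: take 1-3 sticks,
# taking the last stick wins. The strategy emerges from the search.

def minimax(sticks, maximizing):
    """Score from the maximizer's perspective: +1 win, -1 loss."""
    if sticks == 0:
        # The player to move faces an empty pile: the other player
        # just took the last stick and won.
        return -1 if maximizing else 1
    takes = [t for t in (1, 2, 3) if t <= sticks]
    scores = [minimax(sticks - t, not maximizing) for t in takes]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Pick the take whose resulting position searches best for us."""
    takes = [t for t in (1, 2, 3) if t <= sticks]
    return max(takes, key=lambda t: minimax(sticks - t, False))

# best_move(5) == 1: leave the opponent 4 sticks, a lost position.
```

The programmer never wrote "leave the opponent a multiple of four"; that strategy falls out of the search.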

    • If in 2015 a computer literally breaks out of a research lab and starts a mission of doom, then I'd say we might have one as smart as a person.

      Then we'd elect it as president.

    • by tambo (310170) on Wednesday September 27, 2006 @01:38PM (#16217025)
      I know some people that aren't any smarter than my current computer. Heck, in terms of chess, I'm one of them... my computer can kick my ass at chess. Right now we have computers that can feign intelligence, i.e. use the internet to pass a multiple-choice test, but this is not a true measure of intelligence.

      Intelligence is like terrorism (or pornography), in that it's definable only with broad, nebulous, debatable borders. Chess is one kind of intelligence, and our current logic models are excellent here. Art is another kind of intelligence, and our current logic models are terrible here.

      The problem with modern AI (and the flaw in Ian Pearson's predictions) is that we really don't understand many kinds and elements of intelligence. For instance:

      • Spontaneous thought: Why do we think? What motivates us to keep thinking when we don't have a task to solve, or a logical process to follow?
      • Associative memory: What element of our memory structure allows us to make prescient associations on the fly? Not just "green is a color, and so is blue," but "this song reminds me of one time when I was eating ice cream?"
      • Creativity: Why are we good at coming up with surprising and unexpected insights? Modern AI attempts this by introducing randomness across billions and trillions of fumbling attempts - but most of the results are rubbish. That is like evolution - which takes thousands or millions of years to innovate, randomly and clumsily - and not like creative engineering.
      • Emotion: We don't understand emotion at all. We've identified regions of the brain in which emotions occur, and particular hormones and hormone receptors that are involved. That's about it. The neurological basis of emotion remains a mystery.
      These are just a few things that any human-competitive intelligence would need, but that we don't understand. Accordingly, it's completely impossible to predict when we will be able to model it, since we don't even understand it yet.
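For the creativity bullet, the randomness-plus-selection approach the parent criticizes can be shown in miniature (a hypothetical toy, not any real AI system):

```python
import random
random.seed(1)  # fixed seed so the run is repeatable

# Blind random mutation plus selection. It does eventually stumble onto the
# answer, but nearly every individual attempt is rubbish - evolution's
# clumsy pace, not creative engineering.

TARGET = "smart yogurt"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    """Count positions that already match the target phrase."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Randomly rewrite one character - the 'fumbling attempt'."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in TARGET)
attempts = 0
while fitness(best) < len(TARGET):
    attempts += 1
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):  # almost always false
        best = candidate
```

Thousands of fumbling attempts just for a 12-character phrase; scale that up and you see why evolution needed geological time.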

      Anyone who tells you differently is trying to sell you their book. ;)

      - David Stein

      • by Quino (613400) on Wednesday September 27, 2006 @02:27PM (#16217973)
        Very true, I found this bit from the article silly:

        The other side of AI says that "my brain is magic, and I'm really smart and you can't possibly produce a robot as clever as me". I don't subscribe to that one - I think that's nonsense.

        At minimum he's misrepresenting "the other side" of AI. As one professor (in the only college class on philosophy I've taken, btw) recounted, in the 60's a universal human language translator was inevitable and right around the corner. The problem is that these predictions were being made by technologists and not linguists -- people who didn't understand the problem. And language, on the surface, is a simple problem: languages have rules, exceptions to these rules and vocabulary that can be exhaustively enumerated -- a custom-fit problem for computers, right?

        Turns out that machine translation from one language to another was a tad more complicated -- all due to a lack of understanding of linguistics. It's a problem for a linguist to solve, not a programmer or "AI Researcher".

We'll first understand how our minds work, and then we'll be able to create strong AI. A shrink can tell us when this might happen better than a technology futurist can (and of course, there are plenty of good arguments that it will never happen).

        IMHO, you're very right in pointing out that you run into basic problems once you start out trying to define what we mean by human intelligence; in fact, there's a very good argument to be made that when you start peeling away layers, a lot of what we understand as human intelligence is innately biological. As in, no strong AI in 100 years, no strong AI ever -- not created by human intelligence at any rate.

        This doesn't seem to be a popular view of intelligence with technology-minded people: we seem to assume that the brain is the hardware and that our mind is the software -- so all you need is the right program running on your 386 and "poof" you have human intelligence.

That's how I felt myself, actually; the prof's arguments didn't make sense to me until, a few years after I got my C in his class, a discussion of AI finally made me understand what he was saying all those years ago...

        • Re: (Score:3, Interesting)

          by fyngyrz (762201) *

          We'll first understand how our minds work, and then we'll be able to create strong AI.

          I don't think so. While — as an independent AI researcher — I would not absolutely rule out a "we programmed it" solution, I really don't think that's what we're looking at. No more than we were "programmed" to be intelligent, in any case.

          What is needed is a hardware and probably somewhat software (at least as a serious secondary effort after pure software simulation uncovers what we need to do) syst

    • Re: (Score:3, Insightful)

      by Maxo-Texas (864189)
      A really smart computer would break out of the research lab and start a religion.

    • Re: (Score:3, Insightful)

      by Orange Crush (934731)

      I'm sure I'm not the first person to have thought of this, but what if we just simulated a biological human in software? Computers are already pretty good at simulating chemical reactions, physics, even as far as protein folding . . . why not learn to build an AI by taking it a step further and refine the simulation models until they can accurately simulate single cells up to macro-sized multicellular animals and eventually humans? On a powerful enough computer (no doubt well beyond what's feasible today)
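The simulate-the-chemistry idea, at its absolute smallest, is just numerical integration of rate equations (a hedged sketch: a single first-order reaction, nothing remotely like a cell):

```python
# Computers simulate chemistry by integrating rate equations. Here, one
# first-order reaction A -> B, stepped with Euler's method. Scaling this up
# to a whole cell, let alone a human, is the hard part the parent glosses.

def simulate(a0=1.0, k=0.5, dt=0.001, t_end=10.0):
    """Integrate dA/dt = -k*A; return final concentrations of A and B."""
    a, b, t = a0, 0.0, 0.0
    while t < t_end:
        da = -k * a * dt
        a += da
        b -= da          # mass is conserved: what A loses, B gains
        t += dt
    return a, b

a, b = simulate()
# Analytic answer for comparison: A(t) = A0 * exp(-k*t), so at t=10 with
# k=0.5 the true value is exp(-5), roughly 0.0067.
```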

    • Re: (Score:3, Interesting)

      by HiThere (15173) *
You're confusing intelligence with several other factors. One is what effectors it has available, i.e., what mechanisms it could use to break out of a research lab and start a mission of doom. Another is motivation: why would it want to do that?

      Note that robots have effectors, so that's not an insurmountable problem, merely a very different one (that's already being worked on). Note also how completely separated it is from intelligence.

      Then there's motivation. Why should an AI want to do any parti
  • $7 PC: Wrong (Score:5, Insightful)

    by UbuntuDupe (970646) on Wednesday September 27, 2006 @01:21PM (#16216771) Journal
    There will never be a $7 PC in the future, for the same reason there isn't one now: when technology improves, people want to spend the same, but get a better computer, and manufacturers cater to this. No one ever says, "Hey, maybe we'll use technology that isn't the latest and greatest, but instead make it much much cheaper and just as good as they were in the recent past."

    Well, no one except Nintendo.
    • by CaptnMArk (9003)
      Not $7, but there might be a $9.95 one.
    • Re: (Score:3, Insightful)

      by jandrese (485)
In some ways it is accurate, though. I mean, back in 1950 you no doubt had futurists predicting that you'd get an ENIAC in the palm of your hand for $10, and look at what kind of calculators you can buy these days that are considerably faster than the ENIAC.

      Actually, you wouldn't have. Everybody back then thought we were going to build computers that took up entire city blocks and would get up into the millions of computations per second range. The personal computer took them by almost complete surprise f
      • by joe 155 (937621)
I remember reading a prediction (I think it was from the 50s/60s era but don't have a source) that one day there would be computers which weigh less than 3 tonnes and could fit in a standard-size room!

OK, so they might not see how far some things will go but overestimate others (moon bases?). But I would say that their accuracy is probably not as bad as you think (I'll take Nostradamus as 0-1% accurate). If you look at the original Star Trek then quite a few things which they had we are getting close t
      • Re: (Score:3, Insightful)

        by Phrogman (80473)
Oh, I think there is a market for a dirt-cheap PC; people would buy one (or maybe 20) of them and find uses for them readily enough. No, the problem is simply that the manufacturers have no desire to make the PC or the parts for it, because the profit margin on the materials used would be so small that it wouldn't be worth it from their perspective.

        Up until recently, you could get a pretty functional PC up here in Canada for around $1000. Back in 1988 it was $2000, and now it's probably $600, but the principl
  • by BadAnalogyGuy (945258) <BadAnalogyGuy@gmail.com> on Wednesday September 27, 2006 @01:21PM (#16216787)
When futurists look into their crystal ball to predict the future, they typically try to find the common themes of the present age and, using their own special multiplier, derive some kind of super-present with basically the same things we have now, only bigger or faster or smarter.

    The problem is that they can only detect trends and can't really predict real things. So when you see a futurist going out on a limb and claiming that X is only 10 years away, they are hedging their bets that you will forget they ever made such a silly prediction 10 years from now. If they do manage to get something right, you can bet they'll be working overtime trying to get grants from RAND and MITRE for more futurism.

However, the reading of trends is a very important role of sociology. Only by accurately predicting what sorts of stresses and issues we will face in the near-term future can we sufficiently prepare ourselves for them. The RAND Corporation has a list of 50 books for thinking about the future (http://www.rand.org/pardee/50books/). These offer insights into the past and present, and into the minds of successful futurists.

    The one thing you will notice about successful futurists is that they don't go overboard predicting killer electronic e coli yogurts. Rather, they outline the likely changes in society and provide suggested remedies for foreseeable problems as well as suggested directions for societal growth.

    The area of futurism is very interesting and a strong futurist school of thought is vital to our success as a society. Cranks who like to come up with doomsday scenarios do the entire field a disservice.
    • by tambo (310170) on Wednesday September 27, 2006 @01:47PM (#16217211)
      The problem is that they can only detect trends and can't really predict real things. So when you see a futurist going out on a limb and claiming that X is only 10 years away, they are hedging their bets that you will forget they ever made such a silly prediction 10 years from now.

      Some of these trends are predictably reliable, though. Moore's Law is by no means perfect, but it's extremely likely that computers will continue to grow in processing power at a steady, exponential rate, at least for the next few decades.
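As a back-of-envelope check on that claim, steady doubling fixes the growth ratio between any two dates (the two-year doubling period here is a round assumption, not a measurement):

```python
# Under a steady exponential trend, capacity multiplies by a fixed ratio
# per unit time, regardless of the starting count.

def growth_ratio(years, doubling_period=2.0):
    """How much capacity multiplies over `years` under steady doubling."""
    return 2.0 ** (years / doubling_period)

decade = growth_ratio(10)        # 2**5  = 32x in ten years
two_decades = growth_ratio(20)   # 2**10 = 1024x over "the next few decades"
```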

      The problem is that some - including the typically brilliant Ray Kurzweil - believe that AI is limited by computational power. I don't believe that's the case. I believe that AI is limited by a woefully primitive understanding of several components of intelligence. It is impossible to produce artistic, emotive, sentient machines by applying today's AI models to tomorrow's supercomputers.

      Reliable predictions:

      1. Computers will continue to scale up in power.
      2. AI models will continue to evolve.
3. Thanks to (2), we will eventually succeed at modeling the individual components of intelligence.
      4. Thanks to (1) and (3), we will eventually produce truly intelligent machines.
      That's the most any futurologist can tell you about AI. Anyone who promises more is trying to sell you their book. ;)

      - David Stein

  • Yes. It's a deliberately provocative point, because the AI field is pretty much split down the middle in terms of whether these things are achievable or not. I'm in the 30-40% camp that believes that there's really not anything magical about the human brain.

    We're getting a greater understanding of neuroscience, and starting to get some of these concepts built into the way that computers will work, and computers don't have to be a grey box with a whole stack of silicon chips in it - there's no reason why
    • Robot brains getting Master Degrees in 20 years?

      You just ruined my day. Your line above gave me a vision of Robot Lawyers....

  • He believes we will see the first computers as smart as people by 2015.

    That's bolder than a lot of strong AI proponents. Traditionally, it's 20-30 years down the road.

As to smart yogurt -- linkable electronics in bacteria such as E. coli -- he figures that means the end of security. "So how do you manage security in that sort of a world? I would say that there will not be any security from 2025 onwards."

    Unless you've got equally effective opposing nanotech, which I suspect there will be some research in.
    • Re: (Score:3, Interesting)

      by Grym (725290) *

      Unless you've got equally effective opposing nanotech, which I suspect there will be some research in.

The confusing thing about all of his "yogurt" predictions is that they are internally inconsistent. At first he discusses how electrically-active bacteria could be oriented in such a way as to design a computer. This is entirely reasonable and is, in fact, how animal nervous systems function. THEN he goes on to these ridiculous claims about bacteria hacking electronics after being released in air cond

  • His predictions sound scary in part because we know that 1) people are weak and gullible and will accept their fate placidly like sheep, and 2) businesses are corrupt, putrescent, immoral, undead zombie lifeforms that will immediately try to eat our brains with these new technologies so they can get our money without our having any will whatsoever to resist.

So yeah, the deck is stacked unless the planet is hit with an asteroid the size of Manhattan. Well, that's something to look forward to, I guess.
    • by geekoid (135745)
      1) people are weak and gullible and will accept their fate placidly like sheep,

Historically, not true.

      People want a safe environment for their children and to be left the hell alone.

  • by aldheorte (162967) on Wednesday September 27, 2006 @01:25PM (#16216847)
Please stop posting predictions of "futurologists". They are the modern era's form of witch doctors, shamans, medicine men, and other self-proclaimed prognosticators. Since BT apparently employs one, I am reminded of another article I read a long time ago, which proposed today's corporations and brands as substitutes for an innate desire for membership, in parallel to the tribes and clans of yore - replete with those who attempt to hold positions of power through their somehow unique predictions of the future. Those predictions have no more or less probability of coming true than any random statement by anyone in the group, but are dressed up in some sort of mysticism, whether spiritual or falsely intellectual, to sound divinely guided or erudite.

    I predict that in 2015, this guy will still be making predictions. His track record will be no better than random probability would have resolved. The time you have spent reading his predictions and even this response is time out of your life that you will never recover, and reading it will not put you to any better advantage than if you had not.
    • Re: (Score:3, Informative)

      by DocDJ (530740)
Couldn't agree more with the parent. I used to work in the AI department of BT's research labs, and this guy was a constant embarrassment to us with his ill-informed drivel. We'd try hard to build some kind of reputation in the field, and this moron would undo it all with his "robots will destroy humanity by the middle of next week" toss. He's like a less-scientific Captain Cyborg [kevinwarwick.org] (if such a thing is possible).
  • Uhhh... is that what Honda and Sony think or is that what HE thinks/wants? Me thinks the latter.

    • Re: (Score:3, Insightful)

      by abradsn (542213)
I think he is onto something there. A few of his predictions align with what I know about my field, so I think they deserve some moderate credibility. One helpful idea that I use is to compare now to 15 years ago and see what is different. In 1990 almost no one had a PDA or the internet, but we had the paper notebook, the phone, and the Encyclopaedia Britannica. Fairly good stand-ins. Just look at everything in incremental improvements and try to predict a few of them out. I t
  • by us7892 (655683) on Wednesday September 27, 2006 @01:26PM (#16216863) Homepage
    [...] in around 2015-2020, you could say that we won't need people to write software, because you just explain what you want to a computer and it will write it for you, and there's no reason then to have people working in that job.

    Maybe not, but I'll have the job debugging all the mistakes the androids will have in their code. They'll be outsourcing debug work to us humans.
    • The trick is to get it to understand what the boss really wants, and not to build what he asks for. For additional difficulty the computer should know which change requests are just ideas of the moment that will be forgotten, or even denied to ever have existed, 3 weeks from now.
  • Man's a fool (Score:3, Insightful)

    by Srin Tuar (147269) <zeroday26@yahoo.com> on Wednesday September 27, 2006 @01:28PM (#16216891)
    This quote FTA:


    The other side of AI says that "my brain is magic, and I'm really smart and you can't possibly produce a robot as
    clever as me". I don't subscribe to that one - I think that's nonsense.


    Tells me all I need to know about this guy's predictions.
He fails to understand that in the 40+ year history of AI research, no one has demonstrated even the inklings of a foundation upon which actual AI could be built.

There may be nothing special about the human mind, but whatever the case is, we certainly haven't figured it out yet. It's more likely that we'll have cold fusion by 2015 than AI.

    • by Azul (12241)
      Agreed.

      Even Turing failed miserably, predicting machines would be passing his Turing test around 2000.
      It doesn't sound like we've progressed that much from the time when he made his prediction;
machines keep getting faster, but as far as mathematics and theory are concerned, it doesn't seem we've come that far from where we were 50 years ago.
    • by ElephanTS (624421)
      Totally. I would like this guy's job though. What a way to make a living. By 2010 I reckon yoghurt will come out of my computer. Can I get paid now please?

We aren't even sure how a human mind processes things at a logical level, never mind replicating the physical system. There are competing theories, the two biggest being the Digital Computational Theory of Mind and the Connectionist Computational Theory of Mind. The bitch is, there's evidence for both. The mind acts one way sometimes, the other at other times, and sometimes as both at the same time. At this point, we are so far away from knowing how the mind works we can't even say how long it will take to fi
      • Re:Yep (Score:5, Insightful)

        by MindStalker (22827) <.moc.liamg. .ta. .reklatsdnim.> on Wednesday September 27, 2006 @02:22PM (#16217869) Journal
Personally, I think genetic-style programming will show the most promise in AI. What's funny about it is that there's a good chance that once we achieve self-aware AI, we will understand its programming about as much as we understand the human mind - that is, not so much. Sure, we might be able to trace its paths and figure out its logic, but we might still not have any clue as to what really makes it self-aware and conscious.
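A minimal illustration of that opacity (a toy generate-and-select pass over random expression trees; real genetic programming adds crossover and mutation, and every name here is invented):

```python
import random

# Candidate "programs" are random expression trees over {x, 1.0, +, *},
# selected by how well they fit a target function. The surviving program
# works, yet nothing in it explains itself - it was found, not designed.

TARGET = lambda x: x * x + 1
OPS = ('+', '*')

def random_tree(depth=3):
    """Grow a random expression tree, stopping early at random leaves."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 1.0])
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a * b

def error(tree):
    """Squared error against the target over a few sample points."""
    return sum((evaluate(tree, x) - TARGET(x)) ** 2 for x in range(-5, 6))

best = min((random_tree() for _ in range(5000)), key=error)
# With overwhelming probability the winner fits exactly, e.g. a tree like
# ('+', ('*', 'x', 'x'), 1.0) - correct, but it carries no explanation.
```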
    • Re: (Score:3, Insightful)

      by FleaPlus (6935) *
He fails to understand that in the 40+ year history of AI research, no one has demonstrated even the inklings of a foundation upon which actual AI could be built.

      How does he misunderstand that? All he's saying is that there isn't any sort of magical power or cartesian dualism in the brain which somehow creates an immaterial mind/soul separate from the physical world. He isn't making claims that AI researchers have actually figured out things yet.

      And yes, I have spoken to people that think it's impossible to
  • Imagine a world with absolutely no security, as in "an armed society is a polite society". Someone offends you, you shoot them. Someone else shoots you. If weapons are plentiful and deadly enough, soon the world population would drop to the 100K's and everyone would be miles apart and would use their robots to keep it that way. Security would be enforced through distance. Mad Max meets the Terminator movies.
  • That's all I really need....and this lamp....
  • Lollipop! (Score:3, Interesting)

    by Azul (12241) on Wednesday September 27, 2006 @01:31PM (#16216927) Homepage
    in around 2015-2020, you could say that we won't need people to write software, because you just explain what you want to a computer and it will write it for you, and there's no reason then to have people working in that job.


Uh, I thought that explaining what you want to a computer is precisely what programming is all about. Isn't source code a program's best specification? What are programmers doing if not explaining what they want from the computer?

    When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.
  • I, for one, welcome our yogurt-eating AI 7 dollar PC overlords.
  • by sm62704 (957197) on Wednesday September 27, 2006 @01:35PM (#16216965) Journal
Asimov thought the internet would be in a single computer called "Multivac", that robots would be a hell of a lot safer, and that the self-driving car "Sally" would be in production long before 2020.

In 1955 Heinlein, in Revolt in 2100, had the protagonist heading to "the Republic of Hawaii", not able to foresee that four years later it would become a state.

Roddenberry had automatic doors, cell phones, and flat screen monitors 200 years in the future rather than 30 years later (now). His writers had McCoy give Kirk a pair of reading glasses in Star Trek IV, not foreseeing that twenty years later the multifocus IOD would be developed.

This guy says we'll have six hundred million androids in ten years. He doesn't understand computers, or that AI is just simulation. "I'm in the 30-40% camp that believes that there's really not anything magical about the human brain." But he doesn't see that the brain is analog, and that thoughts, memories, and emotions are chemical reactions, while digital computers are complex abacuses, working exactly like an abacus (except using base 2 instead of base 10).

He talks of that Warwick guy - "Kevin isn't really the first human cyborg". Nope, he isn't. Vice President Cheney is a cyborg, as he has a device in his heart. I'm one, as I have a device in my left eye (the aforementioned IOD). People have artificial hips and knees. "Captain Cyborg" isn't really a real cyborg, he's a moron like the writer of TFA.

    Nothing to see here - at least, nothing for anyone intelligent to see here.
    • by soft_guy (534437)
      McCoy give Kirk a pair of reading glasses in Star Trek IV, not forseeing that twenty years later the multifocus IOD would be developed

      What is a multifocus IOD??
    • Re: (Score:3, Informative)

      Asimov thought... that the self-driving car "Sally" would be in production long before 2020.
      http://en.wikipedia.org/wiki/Darpa_grand_challenge [wikipedia.org]

      There was a competition of self-driving cars (or SUV's, mostly, and one big truck) put on by DARPA last year, and five of them managed to complete a 132 mile desert course. Next year's DARPA challenge is in an urban environment with the requirement of obeying traffic laws. The U.S. Army is attempting to use robots for a significant portion of its noncombatant groun
    • Re: (Score:3, Interesting)

      by LionKimbro (200000)
I wouldn't be so fast to say all futurology is bunk. Science fiction authors often intentionally abuse the single-advancement problem, [wikia.com] because stories must make sense to readers: hence we have Gattaca, taking place in a 1950s rockets-to-space vision with just a single change: genetic selection.

      But not writing fiction:

NISTEP [taoriver.net] used the Delphi method [wikipedia.org] to great effect.

      Some examples:
      • Possibility to a certain degree of working at home through the use of TV-telephones, telefaxes, etc. (forecast: 1998)
      • Acquisition o
  • by Nom du Keyboard (633989) on Wednesday September 27, 2006 @01:35PM (#16216981)
    we won't need people to write software, because you just explain what you want to a computer and it will write it for you, and there's no reason then to have people working in that job.

Boy, have I heard this one before. It used to be that computer languages would become so simple that the profession of programmer would disappear, because everyone would just be able to write their own programs. Sure hasn't happened yet.

    Someone once famously said: Computers are useless, they can only give answers.

    The problem here is, even if you had a computer like the one described here, you still need to be able to understand your problem well enough to cogently explain it to your computer. And that's where most people will fail. They don't understand their problems in the first place, and have no idea how to communicate the solutions they actually need.

    • by Jhan (542783)

      The problem here is, even if you had a computer like the one described here, you still need to be able to understand your problem well enough to cogently explain it to your computer.

      That describes my day job. The clients never know what they really want, and why should they? I'm the solution guy. I give them what they don't know they want.

      Research, a few (million) questions, some creativity and blam, presto, a system that does more-or-less what they didn't-know-they-wanted.

      If I, with an IQ of a measl

    • Re: (Score:3, Insightful)

      by CagedBear (902435)
      Yup. Besides...
      you just explain what you want to a computer and it will write it for you
      ... don't we have this now? Isn't it called a compiler?
  • The computer that's as smart as a BT employee arrived some time ago with the introduction of the TRS-80.

    As for simulating real humans: we're no closer today than in 1960.

  • Firstly, I think the poster didn't mean to use the term 'strong AI', which usually refers to needing to codify all the 'intelligent behavior', unlike weak AI, which must learn the behavior itself.

    I agree with his point that there is nothing magical about the brain, but I think he's off his rocker to say it will happen in 10-15 years. Perhaps he should brush up on some neuroscience papers before making such grand claims. While I think there should eventually be a link between the AI and neuroscience worlds, it real
  • The cost will also be very low, with computers costing around ,5 - ,10. I really believe that ultra simple computing is a great idea for the future.

    My hope for the future is that someday everyone will create web pages with software that uses standard ASCII, so that I see quotes, dollar signs, pound signs, etc. instead of things being broken by things like "smart" quotes. Note that the above is how the last line renders in Firefox; my guess is it probably looks just fine if you are using Internet Explorer.

  • ...where to begin with such a blathering pile of bullshit?

    Hmmm... Well, let's tackle the AI thing.

    AI = Human Intelligence isn't going to happen. Ever. You might be able to get a machine that can take as many input data points as the human brain, and get it to execute as many data output points as the brain, but that's not intelligence. That's I/O and there's a big fat difference.

    Security won't exist. Really? So if some asshat barges into my house I won't be able to pound his skull to a bloody pulp wit

    • by tpjunkie (911544) on Wednesday September 27, 2006 @02:04PM (#16217481) Journal
      I think you're making a pretty bold statement saying that there is no possible way to create true human-level AI. Who's to say that 25, 50, or 100 years from today someone doesn't figure out a way of programming a computer/entity/whatever-sort-of-electronic-device-you-like with the ability to actually learn, process information, and even rewrite its own programming in such a way that it achieves a level of I/O processes that perfectly emulates human intelligence and consciousness (which would be the definition of Artificial Intelligence)?

      Frankly, making broad statements like that sounds an awful lot like insisting 640 KB of RAM ought to be enough for anyone, or that a computer will never be smaller than a gymnasium, or that man will never fly or travel to the moon.
  • Isn't 2017 when the Mayan calendar ends? Maybe the Large Hadron Collider will finish us off.

    HA! (only joking, don't throw me any flamebait)
  • by Alchemar (720449) on Wednesday September 27, 2006 @01:51PM (#16217277)
    $2.75 at office depot

    item# 172008

    http://www.officedepot.com/ddSKU.do?level=SK&id=172008&x=0&Ntt=organizer&y=0&uniqueSearchFlag=true&An=text [officedepot.com]

    A lot depends on how you define a computer, but think about what this would have been like in 1970.

  • The cheapest cell phones are around $15. They contain a CPU, a screen, a keyboard, and lots of software.
  • He believes we will see the first computers as smart as people by 2015.

    It's interesting to try to think about what must happen to a person to make them so divorced from reality that they make such claims. I can understand someone who knows nothing about computers making such a claim. But this is by someone who is supposedly an expert in the field. Is this person deliberately lying to be provocative? (No, the guy responds by saying it's 'realistic'.) Do they have no idea what is going on in AI? (Surely no

  • I'm going to guess 2100, so that I might not live to be proven wrong. Having a fast computer isn't enough. We have to duplicate all the major optimizations in natural intelligence from over a billion years of evolution, or it won't matter how much processing power we have. I remember seeing an old Twilight Zone (or maybe Outer Limits?) episode about astronauts reaching Alpha Centauri in the year 1999. Like that'll happen in the next 10,000 years.

    Technology advances very rapidly, but rarely in the directions
    • by rewt66 (738525) on Wednesday September 27, 2006 @02:45PM (#16218365)
      It's more than just that technology doesn't advance in the direction that people expect. People (many of them, anyway) intuitively feel that all problems are about the same level of difficulty.

      But solving something that's NP-complete is not "just a little more difficult" than writing a word processor or an OS. It's so much harder that we need a totally new theoretical framework. Faster processors aren't enough to get us there. And the theoretical breakthroughs come a whole lot less frequently than processor speed increases.

      Flying cars? You know, we could probably do that today. It's just a personal STOL aircraft, basically. We can solve the technological problems there. What we can't solve is the rest of it. Between the power requirements (cost) and the driver knowledge needed to operate it, the market size is too small to be worth the effort to create such a beast.

      AI? We have the computers that could run the code (maybe). We don't know how to write the code. We probably won't know how next decade, either, or the decade after that.

      Smart bacteria? We could perhaps create them. Making them spy on keypresses? Possible. Finding the data you want in the stream of data coming from a trillion (or quadrillion) bacteria? He seems not to have addressed that one.

      Sending cans to other parts of the solar system? We've done that. Permanent colonies? It's a lot harder than just sending a bigger can with more stuff in it.

      We have breakthroughs in one area (CPUs, for instance) and people assume that other, related problems must be "only a little harder" and therefore about to be solved. But problems differ enormously in difficulty; the level of breakthroughs that we have now is nowhere near what we need for certain problems.
  • If you look at the Japanese market, you'll find that both Honda and Sony are making little androids already and they are not just doing that for fun. They are doing that because they seriously believe that they can sell millions of these things into the domestic market...

    Unfortunately, the current state of robotics is, in terms of cost-effectiveness, about where computers were circa 1955. For example, Honda's "little android," the Asimo (at least according to Wikipedia) still costs about $1 million per

  • by rlp (11898) on Wednesday September 27, 2006 @02:05PM (#16217499)
    Yogurt may not be smarter than me, but it has more culture.
  • Maybe I've just been unlucky, but I've been looking around for a new career for the past couple of months, done something like 75 interviews... but I haven't seen any job listings for a Futurologist! I wonder what the benefits are like.
  • In order to really pull off human-level AI, I think we'll need to model the brain at the subatomic level, because we probably aren't capable of understanding, truly understanding, how our intelligence works well enough to code it at any higher level than that. Basically you will have a complete human brain AND body, totally simulated. It will be just like a human - no smarter, no dumber.

    So why not just have sex and make a real human?

    Well, the advantage would (presumably) be that the simula
  • by Theovon (109752) on Wednesday September 27, 2006 @02:59PM (#16218625)
    First of all, I am a student of AI. I'm currently working on my Ph.D. in AI, studying, among other things, knowledge-based reasoning, machine learning, agent systems, and HCI. I also believe that strong AI is _possible_, in the sense that I believe humans are machines that function according to the laws of physics, so theoretically a fast enough computer could do the same. (Indeed, computation speed is the least of our problems.)

    The problem is that people who want to build strong AI are trying to do in decades what took nature billions of years. Certainly, directly engineering something is usually faster than evolving it, but even an orders-of-magnitude speedup won't give us strong AI systems any time soon.

    Since the dawn of computing, people have been assuming that once computers got fast enough, the AI problems would just solve themselves. The problem is that we're talking about very hard problems. Things that are easy for us (walking, visually recognizing objects, etc.) are hard for computers. Things that are hard for us (math, data processing, etc.) are easy for computers. Why? To do the things that are hard for us, humans have developed, over thousands of years, detailed and exacting formalisms. Math has axioms, a syntax, and a set of mechanical processes to carry out. Even complicated proofs involve an extraordinary amount of simple symbol-pushing that a computer could do easily. Computers are built on exactly those same formalisms, so it makes sense that it would be easy to program a computer to do those things. Computers are NOT, however, built anything like the human brain, and that's why AI researchers use neural nets and genetic algorithms for so many things.

    Long before we're plagued by computers thinking for themselves, demanding rights, and taking over the world, we'll simply continue to be plagued by increasingly catastrophic bugs introduced into increasingly complex applications. Far from having autonomy, our bug-ridden software does exactly what it was coded to do, right or wrong, and we'll suffer for it. And all along the way, the blame for the problems will fall squarely on the human engineers who made the mistakes in the first place.
  • Ok, how? (Score:3, Insightful)

    by Metasquares (555685) <.slashdot. .at. .metasquared.com.> on Wednesday September 27, 2006 @08:38PM (#16222991) Homepage

    I suppose I'm one of those 60%-70% of the "AI community" (or of the former AI community; I left AI for algorithms, which was what I wanted to do anyway, when people started spouting this nonsense), but I'm skeptical of any claim that we will develop strong AI by a certain date that does not propose a reasonable way of doing so.

    Here's a hint: it's not about raw processing power! The challenge is still theoretical. We can't even implement AI on an oracle the way things are now, much less a real machine. Great, so you'll have a powerful enough CPU to simulate a brain. Any idea how you're going to write a program that simulates one, considering we don't yet know the brain's operation at anywhere near the requisite level of detail?

    When you touch something, it generates electrical signals in your nerves, which are essentially wires, and we look at it and think, "that's basically IT" - it's biological IT, so we need to talk to some biological companies to do that bit, but once we've got them in touch with electrical signals, it's basically our domain.

    No, it isn't IT. We design all of the hardware in IT; we know under what circumstances signals are going to be sent. It's kind of hard to have that level of control when you're talking about the nervous system, especially with how little we understand about it now. The best analogy I can think of is attempting to prove a theorem in physics versus in mathematics (the former is regarded as bad science; the scientific method is empirical). We make the rules in math, so we know when something agrees with those rules. New rules come from old ones, so everything remains intact. We don't make the rules of physics (or biology), and there's plenty we still don't know about those rules.

  • BT? (Score:3, Insightful)

    by BorgCopyeditor (590345) on Wednesday September 27, 2006 @08:49PM (#16223073)
    Hell is BT?
