BT Futurologist On Smart Yogurt and the $7 PC 455

WelshBint writes, "BT's futurologist, Ian Pearson, has been speaking to itwales.com. He has some scary predictions, including the real rise of the Terminator, smart yogurt, and the $7 PC." Ian Pearson is definitely a proponent of strong AI — along with, he estimates, 30%-40% of the AI community. He believes we will see the first computers as smart as people by 2015. As to smart yogurt — linkable electronics in bacteria such as E. coli — he figures that means the end of security. "So how do you manage security in that sort of a world? I would say that there will not be any security from 2025 onwards."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by tambo ( 310170 ) on Wednesday September 27, 2006 @01:47PM (#16217211)
    The problem is that they can only detect trends and can't really predict specific events. So when you see a futurist going out on a limb and claiming that X is only 10 years away, they're betting that you will forget they ever made such a silly prediction 10 years from now.

    Some of these trends are predictably reliable, though. Moore's Law is by no means perfect, but it's extremely likely that computers will continue to grow in processing power at a steady, exponential rate, at least for the next few decades.
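    The compounding the comment describes can be made concrete with a rough sketch (the two-year doubling period and the starting transistor count are illustrative assumptions, not official figures):

    ```python
    # Rough sketch of Moore's-Law-style growth: a quantity that
    # doubles every ~2 years (period and baseline are assumptions).

    def projected_count(start_count, years, doubling_period=2.0):
        """Project a quantity that doubles every `doubling_period` years."""
        return start_count * 2 ** (years / doubling_period)

    # Starting from a hypothetical 1 billion transistors, two decades
    # of doubling every 2 years is a 2**10 = 1024x increase.
    print(projected_count(1e9, 20))  # -> 1024000000000.0
    ```

    Even modest doublings stack up to three orders of magnitude over twenty years, which is why "steady exponential rate" is such a strong claim.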

    The problem is that some - including the typically brilliant Ray Kurzweil - believe that AI is limited by computational power. I don't believe that's the case. I believe that AI is limited by a woefully primitive understanding of several components of intelligence. It is impossible to produce artistic, emotive, sentient machines by applying today's AI models to tomorrow's supercomputers.

    Reliable predictions:

    1. Computers will continue to scale up in power.
    2. AI models will continue to evolve.
    3. Thanks to (2), we will eventually succeed at modeling the individual components of intelligence.
    4. Thanks to (1) and (3), we will eventually produce truly intelligent machines.
    That's the most any futurologist can tell you about AI. Anyone who promises more is trying to sell you their book. ;)

    - David Stein

  • by StreetStealth ( 980200 ) on Wednesday September 27, 2006 @02:00PM (#16217413) Journal
    If you look at the Japanese market, you'll find that both Honda and Sony are making little androids already and they are not just doing that for fun. They are doing that because they seriously believe that they can sell millions of these things into the domestic market...


    Unfortunately, the current state of robotics is, in terms of cost-effectiveness, about where computers were circa 1955. For example, Honda's "little android," the Asimo (at least according to Wikipedia) still costs about $1 million per unit to produce, and still can't even hold the door for you.

    When they've come down in price by about a factor of 10^3 and can actually hold a door open, the robot future will have arrived. It happened for computers and will happen for robots--it'll just take awhile.
  • by DocDJ ( 530740 ) on Wednesday September 27, 2006 @02:42PM (#16218311)
    Couldn't agree more with the parent. I used to work in the AI department of BT's research labs, and this guy was a constant embarrassment to us with his ill-informed drivel. We'd try hard to build some kind of reputation in the field, and this moron would undo it all with his "robots will destroy humanity by the middle of next week" toss. He's like a less-scientific Captain Cyborg [kevinwarwick.org] (if such a thing is possible).
  • by Doctor Faustus ( 127273 ) <Slashdot.WilliamCleveland@Org> on Wednesday September 27, 2006 @03:03PM (#16218689) Homepage
    Asimov thought... that the self-driving car "Sally" would be in production long before 2020.
    http://en.wikipedia.org/wiki/Darpa_grand_challenge [wikipedia.org]

    There was a competition of self-driving cars (or SUVs, mostly, and one big truck) put on by DARPA last year, and five of them managed to complete a 132 mile desert course. Next year's DARPA challenge is in an urban environment with the requirement of obeying traffic laws. The U.S. Army is attempting to use robots for a significant portion of its noncombatant ground vehicles by 2015.

    I don't think that one is so far off.
  • by tambo ( 310170 ) on Wednesday September 27, 2006 @06:04PM (#16221441)
    But creativity != useful results. On a mass scale, that's precisely the same process human society takes to innovate...

    Oh, I wholeheartedly disagree.

    Modern AI simulates creativity in, essentially, a two-step process:

    1. Randomize some part of a known process.
    2. Carry out the process and evaluate the result.
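    The two-step loop above is essentially random-mutation hill climbing. A minimal sketch (the target string, alphabet, and mutation scheme are toy assumptions for illustration):

    ```python
    import random

    # Step 1: randomize some part of a known process (mutate a string).
    # Step 2: carry it out and evaluate the result (score vs. a target).
    # Keep a mutation only when the evaluation does not get worse.

    TARGET = "invention"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def score(candidate):
        """Evaluate: count positions matching the target."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        """Randomize: replace one character at a random position."""
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    random.seed(0)
    current = "aaaaaaaaa"
    while score(current) < len(TARGET):
        variant = mutate(current)             # step 1: randomize
        if score(variant) >= score(current):  # step 2: evaluate
            current = variant
    print(current)  # -> invention
    ```

    Note the loop has no idea *why* a variant scores well; it just stumbles onto improvements, which is the "serendipity" point made below.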

    Certainly many kinds of inventions are created that way. We have a term for that, and it's not "creative invention" or "engineering" - it's "serendipity."

    There are at least two other ways in which invention happens:

    • Creative invention involves sensing that a variation (randomly, let's say) is not just different, but interesting and potentially useful. You don't just see something new and try it with unexpectedly beneficial results - you predict the benefit before the experiment.
    • Engineering involves logical analysis of a problem, and the assembly of an elegant solution. If your prototype suffers a particular drawback, you don't prepare trillions of random variations and just try them until one works. You analyze the problem, consider alternatives, etc.

    We really have no idea how to model either of these behaviors yet. We've programmed around the problem by having computers "just try everything," which is hideously inefficient.

    - David Stein

  • by Flyboy Connor ( 741764 ) on Thursday September 28, 2006 @03:11AM (#16225507)

    Your computer didn't beat you at chess, a programmer did.

    This is a common misconception. People say, "A computer only follows the rules that the programmer gave it, so it's the programmer's knowledge and skill that are used to play the game." What this misses is that the programmer is NOT actually telling the computer how to play chess. The programmer only tells the computer how to THINK ABOUT playing chess. And by executing this thinking program, the computer devises its own chess-playing strategies.

    Granted, in many cases the programmer helps the computer a bit by making suggestions, like "try to keep your knights in the middle of the board," much like a human teacher would give a student suggestions. But the computer is not, as was the case for chess programs 20 years ago, obliged to follow those suggestions. It only uses them to prioritize which moves to consider first.

    That is why chess programs surprise chess experts. That is why chess programs written by amateur chess players manage to defeat world champions.

    And for those who suggest that the computer only uses brute force to determine the best move, consider that a supercomputer that uses 1,000 top-of-the-line processors, employing the latest and greatest enhancements for alpha-beta search, would need about 20,000 years to play its opening move. Brute force is a start, but real chess intelligence is needed to play a strong game within tournament time.
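    Alpha-beta pruning, the enhancement mentioned above, is what keeps search tractable: it skips branches that provably cannot change the result. A minimal sketch on a toy game tree (nested lists are branches, integers are leaf evaluations; the values are made up):

    ```python
    # Minimal alpha-beta search. Prunes a branch as soon as it is
    # clear the opponent would never allow that line of play.

    def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
        if isinstance(node, int):        # leaf: static evaluation
            return node
        best = float("-inf") if maximizing else float("inf")
        for child in node:
            value = alphabeta(child, alpha, beta, not maximizing)
            if maximizing:
                best = max(best, value)
                alpha = max(alpha, best)
            else:
                best = min(best, value)
                beta = min(beta, best)
            if beta <= alpha:            # prune: this line can't matter
                break
        return best

    tree = [[3, 5], [6, [9, 1]], [1, 2]]
    print(alphabeta(tree))  # -> 6
    ```

    Real engines add move ordering, transposition tables, and evaluation heuristics on top of this skeleton; the pruning alone is what turns "20,000 years" of brute force into something playable in tournament time.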

    Face it, it is the computer which is highly skilled at playing chess, not the programmer.
