BT Futurologist On Smart Yogurt and the $7 PC 455

WelshBint writes, "BT's futurologist, Ian Pearson, has been speaking to itwales.com. He has some scary predictions, including the real rise of the Terminator, smart yogurt, and the $7 PC." Ian Pearson is definitely a proponent of strong AI — along with, he estimates, 30%-40% of the AI community. He believes we will see the first computers as smart as people by 2015. As to smart yogurt — linkable electronics in bacteria such as E. coli — he figures it means the end of security. "So how do you manage security in that sort of a world? I would say that there will not be any security from 2025 onwards."
This discussion has been archived. No new comments can be posted.
  • Flying cars (Score:1, Interesting)

    by Anonymous Coward on Wednesday September 27, 2006 @01:14PM (#16216649)
    What, no flying cars? That's bloody useless.
  • Right. (Score:5, Interesting)

    by PHAEDRU5 ( 213667 ) <instascreed.gmail@com> on Wednesday September 27, 2006 @01:16PM (#16216695) Homepage
    And New York was going to need 100,000,000 telephone operators by the middle of the 20th century.

    Get a grip, for God's sake.
  • by Anonymous Coward on Wednesday September 27, 2006 @01:24PM (#16216831)
    If in 2015 a computer literally breaks out of a research lab and starts a mission of doom, then I'd say we might have one as smart as a person.

    At least one as smart as our President.

  • by aldheorte ( 162967 ) on Wednesday September 27, 2006 @01:25PM (#16216847)
    Please stop posting predictions of "futurologists". They are the modern era's witch doctors, shamans, medicine men, and other self-proclaimed prognosticators. Since BT apparently actually employs one, I am reminded of an article I read long ago which proposed today's corporations and brands as substitutes for an innate desire for membership, in parallel to the tribes and clans of yore. They come replete with those who attempt to hold positions of power through their supposedly unique predictions of the future, predictions that have no more or less probability of coming true than any random statement from anyone in the group, dressed up in some sort of mysticism, whether spiritual or falsely intellectual, to make them sound divinely guided or erudite.

    I predict that in 2015, this guy will still be making predictions. His track record will be no better than random chance would have produced. The time you have spent reading his predictions, and even this response, is time out of your life that you will never recover, and reading it will not put you to any better advantage than if you had not.
  • Lollipop! (Score:3, Interesting)

    by Azul ( 12241 ) on Wednesday September 27, 2006 @01:31PM (#16216927) Homepage
    in around 2015-2020, you could say that we won't need people to write software, because you just explain what you want to a computer and it will write it for you, and there's no reason then to have people working in that job.


    Uh, I thought that explaining what you want to a computer is precisely what programming is all about. Isn't source code a program's best specification? What are programmers doing, if not explaining what they want from the computer? (A small sketch below makes the point concrete.)

    When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.
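    To make the point above concrete, here is a minimal sketch in Python (a hypothetical example, not from the thread): raising the level of abstraction changes how much you must spell out, but the "what I wish done" statement is still source code.

        # Imperative: spell out every step.
        def evens_imperative(numbers):
            result = []
            for n in numbers:
                if n % 2 == 0:
                    result.append(n)
            return result

        # Declarative: closer to "what I wish done", but still a program.
        def evens_declarative(numbers):
            return [n for n in numbers if n % 2 == 0]

        # Both "specifications" agree on what was wanted.
        assert evens_imperative(range(10)) == evens_declarative(range(10))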
  • by Ralph Spoilsport ( 673134 ) on Wednesday September 27, 2006 @01:45PM (#16217155) Journal
    ...where to begin with such a blathering pile of bullshit?

    Hmmm... Well, let's tackle the AI thing.

    AI = Human Intelligence isn't going to happen. Ever. You might be able to get a machine that takes in as many data points as the human brain and puts out as many data points as the brain does, but that's not intelligence. That's I/O, and there's a big fat difference.

    Security won't exist. Really? So if some asshat barges into my house, I won't be able to pound his skull to a bloody pulp with my baseball bat? Ooooh, we're talking computer security? Well, whoever promised computer security in the first place? If it's a transmissible dataset, it can be received, re-routed, intercepted and decoded, given enough time and resources, and that's today. There never WAS any computer security, so his argument is a straw man.

    Thirdly, he didn't say where the energy for all this is going to come from.

    Fact: Kuwait's largest oilfield peaked last November.
    Fact: The Saudis' largest field (Ghawar) is pumping between 30 and 50% seawater. They haven't announced that it is in decline, because it would set off international freak-out alarm bells, but everyone in the know KNOWS that the Saudis are cooking the books and are at or close to peak.
    Fact: Americans continue to consume VAST quantities of energy and piss it away on trivial bullshit, from personal nonsense (like cellphones, Game Boys, Xboxes, rotisserie ovens, etc.) to larger potlatch-level wastes (like Las Vegas), and NONE of it is sustainable. Period.
    Fact: Besides energy rapidly approaching a massive down curve, we are also rapidly approaching the peaking and imminent depletion of our metals. Copper ore grades average 5%. Phosphorus, chromium and magnesium production peaked years ago.

    His unadulterated adulation of Star Trek only serves to underline his chronic case of cranio-rectal inversion.

    Industrial Civilisation is (slowly) drawing to a close. It's not the end, yet, but in about 15 years, we'll be able to see it from there. After that, it is back to the land and farming. Forever. We Are Atlantis.

    RS

  • Re:Lollipop! (Score:2, Interesting)

    by smitty97 ( 995791 ) on Wednesday September 27, 2006 @01:50PM (#16217245)
    I really hate this damn machine,
    I wish that they would sell it.
    It never does just what I want,
    But only what I tell it.
  • by Grym ( 725290 ) * on Wednesday September 27, 2006 @02:12PM (#16217669)
    Unless you've got equally effective opposing nanotech, which I suspect there will be some research in.

    The confusing thing about all of his "yogurt" predictions is that they are internally inconsistent. At first he discusses how electrically-active bacteria could be oriented in such a way as to build a computer. This is entirely reasonable and is, in fact, how animal nervous systems function. THEN he goes on to these ridiculous claims about bacteria hacking electronics after being released into air conditioning systems, or infecting our brains and controlling our thoughts. (And I wish I were exaggerating here...)

    First of all, this is internally inconsistent because removing the bacteria from their computing structures would remove their capacity for computation. Moreover, his claims don't address the fact that these bacteria would still be subject to the same growth demands as regular bacteria. Given that electronic circuitry is generally pretty dry and nutrient-free, exactly how are these bacteria going to control electronics if they can't even survive? (A toy growth model after this comment shows how fast a starving colony dies off.) Also, how could these mind-control bacteria go unnoticed by the human immune system? There are only a few bacteria that are known to pass through the blood-brain barrier, and these ALL result in INFLAMMATION (which stops the functioning) of the tissue. Lastly, even if one designed bacteria that were individually "intelligent," these bacteria would most certainly be unable to survive in the real world, because they would be inherently inefficient and unable to compete with the normal microbial flora.

    I can't speak for his other predictions, but judging by his fundamental misunderstanding about basic microbiology, I'm inclined to believe that they're bunk as well.

    -Grym
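    Grym's growth argument can be put in numbers. Below is a minimal sketch in Python using Monod kinetics, a standard model of nutrient-limited bacterial growth; every constant here is a hypothetical illustration, not data from the article.

        # Monod kinetics: the growth rate saturates with available nutrient.
        def monod_step(population, nutrient, mu_max=1.0, k_s=0.5,
                       yield_coeff=0.5, death=0.1, dt=0.1):
            """Advance population and nutrient by one small time step."""
            mu = mu_max * nutrient / (k_s + nutrient)  # specific growth rate
            growth = mu * population
            population += (growth - death * population) * dt
            nutrient = max(0.0, nutrient - (growth / yield_coeff) * dt)
            return population, nutrient

        # On dry, nutrient-free circuitry (nutrient = 0) the colony can only decay.
        pop, food = 1.0, 0.0
        for _ in range(100):
            pop, food = monod_step(pop, food)
        print(f"population after 100 steps: {pop:.4f}")  # ~0.37 of the start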

  • by skiflyer ( 716312 ) on Wednesday September 27, 2006 @02:55PM (#16218561)
    Probably as long as programming has been a profession, someone has predicted that in 10-15 years, programmers wouldn't be needed anymore.

    And the funny thing is that no one realizes how many times it's happened to different degrees.

    "You'll just describe your problem to the computer"

    Sure, current languages aren't exactly plain-English descriptions, but to someone writing assembly code or laying out punch cards years ago, they'd probably look darn close.

    Languages like Prolog take the concept even further for problems that fall in the right set: feed it rules, ask it questions, get answers (sketched below).

    To me the question isn't whether we can keep approximating (the human language of your choice) better in our programming; it's whether we can do that and still generate machine code that is efficient enough for us.
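    A minimal sketch of that "feed it rules, ask it questions" style, redone in Python rather than Prolog; the family facts and the single grandparent rule are hypothetical examples.

        # Facts are tuples; one fixed rule derives new facts from them.
        facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

        def grandparent(facts):
            """Derive ("grandparent", a, c) whenever a is a parent of some b
            and b is a parent of c."""
            parents = [f for f in facts if f[0] == "parent"]
            return {("grandparent", a, d)
                    for (_, a, b) in parents
                    for (_, c, d) in parents
                    if b == c}

        # "Ask it a question, get an answer."
        print(("grandparent", "alice", "carol") in grandparent(facts))  # True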
  • by HiThere ( 15173 ) * <charleshixsn@ear ... .net minus punct> on Wednesday September 27, 2006 @03:46PM (#16219483)
    You're confusing intelligence with several other factors. One is what effectors it has available, i.e., what mechanisms it could use to "break out of a research lab and start a mission of doom". Another is motivation: why would it want to do that?

    Note that robots have effectors, so that's not an insurmountable problem, merely a very different one (that's already being worked on). Note also how completely separated it is from intelligence.

    Then there's motivation. Why should an AI want to do any particular thing? Being intelligent enough to solve that problem if it wanted to doesn't cause it to want to do so. This is, again, a totally separate problem. Getting the answer to this one correct is vital to human survival. Nearly everyone appears to be ignoring it.

    As to how long we have to get it right... my guess would be decades, but not a large number of them. And there are several different modes of failure. Some will cause the computer to disassemble itself. (This doesn't even require intelligence; it's already happened. Doing it intentionally does.) Some will cause the computer to freeze in mental development. This happens to people too, so I don't consider it unlikely, even when the answer is "almost right". Some will cause the computer to attempt to "take over the world" (for various reasons, though I suspect paranoia covers most of the likely ones).

    Don't try to understand in detail WHY a computer might do something unless you know its motivational structure. If you do, you MIGHT get as close as you can with, say, the leader of a foreign country. If you don't, you may get as close as when you attempt to understand why a social wasp does something. (Note that in both cases you are missing significant clues... you don't have the same sensory apparatus as a wasp, e.g., so you can't know what it's "smelling".)

    Sorry, I know you weren't being serious...but this is something that programmers SHOULD be serious about (at least occasionally).
  • Re:Yep (Score:3, Interesting)

    by suffe ( 72090 ) on Wednesday September 27, 2006 @03:50PM (#16219537) Homepage Journal
    Hopefully it would be able to tell us.
  • by LionKimbro ( 200000 ) on Wednesday September 27, 2006 @04:06PM (#16219793) Homepage
    I wouldn't be so fast to say all futurology is bunk. Science fiction authors often intentionally abuse the single-advancement problem, [wikia.com] because stories must make sense to readers: hence we have Gattaca, taking place in a 1950s rockets-to-space vision with just a single change, genetic selection.

    But not writing fiction:

    NISTEP [taoriver.net] used the Delphi method [wikipedia.org] to great effect.

    Some examples:
    • Possibility to a certain degree of working at home through the use of TV-telephones, telefaxes, etc. (forecast: 1998)
    • Acquisition of observation data from unmanned probes around Uranus, Neptune, Pluto and outside the solar system. (1999)
    • Development of optical communication technology that can realize substantial savings in the use of copper. (1999)
    • Possibility of external fertilization or artificial womb. (2001)
    • Widespread use of heart transplant from human being by resolving problems such as transplant immunity, rejection and donor. (2001)
    • Practical use of rapid-transit railway using iron rail and iron wheel, which can run at 300 km/h. (2006)
    • Development of artificial ear. (2007)


    "So what," I hear you say. Well, "so," these figures are from 1971, 1976 and 1981: We're looking at 20-30 year technical forcasts. The forcasts were specific, useful, and relatively accurate. They included confidence levels. They were 60-70% accurate.

    Just because there are some notoriously bad futurists [taoriver.net] who are very good at getting the press on the line, it doesn't mean the whole field is bunk.

    Personally, I'm just very glad that people have stopped thinking robots are bunk. If you asked anybody in 2000, "Will there be robots?" ...they'd almost universally say, "Not for HUNDREDS of years, if ever!" But there were many futurists who were paying attention, and who knew the answer.

    The general public envisioned the flying cars, [taoriver.net] not the people over at NISTEP. When NISTEP reports were published, who knew about them?

    As for your computerized brains: You might want to check out Blue Column [bluebrainproject.epfl.ch] and Blue Brain. [bluebrainproject.epfl.ch]

    Also, I haven't looked into this too deeply, but from what I've seen, the AI community has recently been flowering again. I have read in many places that they are making renewed progress, getting past the religious wars of the past: They are combining connectionist systems, rule-based systems, genetic systems, and so on. I don't see a good reason to be so pessimistic about it: Brain simulation on the one side, with a clear plan to 2020, and these traditional AI systems continuing to get better results, in a way that makes sense. Ray Kurzweil [kurzweilai.net] wrote a good overview piece, Why We Can Be Confident of Turing Test Capability Within a Quarter Century, [kurzweilai.net] and there are some very good (though very expensive) books on AI at the bookstore.
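    The 60-70% accuracy claim above invites a scoring rule. Here is a minimal sketch in Python of grading confidence-tagged forecasts with a Brier score; the sample forecasts are hypothetical, not NISTEP data.

        def brier(forecasts):
            """Mean squared gap between stated confidence and the outcome
            (0 or 1). 0.0 is perfect; always answering 50% scores 0.25."""
            return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

        # (stated confidence that the forecast comes true by its date, did it?)
        sample = [(0.7, 1), (0.6, 1), (0.8, 0), (0.5, 1), (0.9, 1)]
        print(round(brier(sample), 3))  # 0.23 -- lower is better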
  • by fyngyrz ( 762201 ) * on Wednesday September 27, 2006 @09:46PM (#16223569) Homepage Journal
    We'll first understand how our minds work, and then we'll be able to create strong AI.

    I don't think so. While — as an independent AI researcher — I would not absolutely rule out a "we programmed it" solution, I really don't think that's what we're looking at. No more than we were "programmed" to be intelligent, in any case.

    What is needed is a hardware (and probably, as a serious secondary effort after pure software simulation uncovers what we need to do, software) system that can learn; not something that is complete out of the box. The latter can be created by state replication once one or more AIs have been built. But from my point of view, what you're looking at is an evolutionary process which, when fruitful, will yield something we can teach, and which can teach itself, and which we should be very careful to build in such a way that we can replicate both its hardware and the state of that hardware. (A toy version of such an evolutionary search loop is sketched after this comment.)

    You and your instructor are quite right that we do not understand our own (or animal) intelligence. The error in the subsequent thinking here is, I think, that you are both assuming we need to understand it well, or perfectly, to create it. That does not necessarily follow.

    For my part, I remain very confident that we are well past the point where we can make the right hardware; what we are missing is the right configuration. From a technical standpoint, it does not matter how large, or how slow, the initial success is; once the problem begins to resolve itself, we can apply our usual skills at shrinking and speeding up systems -- or even, making them remote via telepresence, if large is initially unavoidable -- until we have something we're satisfied with, and from there, of course, we can hand the task of making something better off to the AI itself.

    As an interesting side note, there are many interesting knowledge base projects going on right now which, while not likely (in my view, again, this is all IMHO) to yield an actual AI, will be a great resource for an AI; knowledge that can be tapped using straightforward rules and methods.

    I personally think we'll see strong, probably very strong, AI within a decade at most. I'm excited about it; I hope to contribute (my area is associative knowledge and the process of melding emotional and other modifying concepts to knowledge.) But I'll be delighted no matter where the results come from, as long as they come!
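    As a toy version of the evolutionary process described above, here is a minimal hill-climbing sketch in Python: a candidate "configuration" is mutated, scored against a teaching signal, and the better state is kept (and could be replicated exactly). The target string and fitness function are hypothetical stand-ins, not anything from the comment.

        import random

        TARGET = "strong ai"  # hypothetical stand-in for a teaching signal

        def fitness(candidate):
            """How many characters already match the target."""
            return sum(a == b for a, b in zip(candidate, TARGET))

        def mutate(candidate, alphabet="abcdefghijklmnopqrstuvwxyz "):
            """Copy the state, then change one position at random."""
            i = random.randrange(len(candidate))
            return candidate[:i] + random.choice(alphabet) + candidate[i + 1:]

        random.seed(0)
        state = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                        for _ in TARGET)
        while fitness(state) < len(TARGET):
            child = mutate(state)
            if fitness(child) >= fitness(state):  # keep the better replica
                state = child
        print(state)  # converges to "strong ai"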
