BT Futurologist On Smart Yogurt and the $7 PC
WelshBint writes, "BT's futurologist, Ian Pearson, has been speaking to itwales.com. He has some scary predictions, including the real rise of the Terminator, smart yogurt, and the $7 PC." Ian Pearson is definitely a proponent of strong AI — along with, he estimates, 30%-40% of the AI community. He believes we will see the first computers as smart as people by 2015. As to smart yogurt — linkable electronics in bacteria such as E. coli — he figures that means the end of security. "So how do you manage security in that sort of a world? I would say that there will not be any security from 2025 onwards."
Flying cars (Score:1, Interesting)
Right. (Score:5, Interesting)
Get a grip, for God's sake.
Re:Computers as smart as "some" people im sure (Score:2, Interesting)
At least one as smart as our President.
Witch Doctors, Futurologists, and Cranks (Score:5, Interesting)
I predict that in 2015, this guy will still be making predictions. His track record will be no better than chance. The time you have spent reading his predictions, and even this response, is time out of your life that you will never recover, and reading it will not leave you any better off than if you had not.
Lollipop! (Score:3, Interesting)
Uh, I thought that explaining what you want to a computer is precisely what programming is all about. Isn't source code a program's best specification? What are programmers doing if not explaining what they want from the computer?
When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.
Um, this "futurologist" is a moron... (Score:2, Interesting)
Hmmm... Well, let's tackle the AI thing.
AI = Human Intelligence isn't going to happen. Ever. You might be able to get a machine that can take as many input data points as the human brain, and get it to execute as many data output points as the brain, but that's not intelligence. That's I/O, and there's a big fat difference.
Security won't exist. Really? So if some asshat barges into my house I won't be able to pound his skull to a bloody pulp with my baseball bat? Ooooh, we're talking computer security? Well, who ever promised computer security in the first place? If it's a transmissible dataset, it can be received, re-routed, intercepted and decoded, given enough time and resources, and that's today. There never WAS any computer security, so his argument is a straw man.
Thirdly, he didn't say where the energy is going to come for all this.
Fact: Kuwait's largest oilfield peaked last November.
Fact: The Saudis' largest field (Ghawar) is pumping between 30 and 50% seawater. They haven't announced that it is in decline, because it would set off international freak-out alarm bells, but everyone in the general know KNOWS that the Saudis are cooking the books and are at or close to peak.
Fact: Americans continue to consume VAST quantities of energy and piss it away on trivial bullshit - from personal nonsense (like cellphones, Gameboys, Xboxes, rotisserie ovens, etc.) to larger potlatch-level wastes (like Las Vegas) - and NONE of it is sustainable. Period.
Fact: Besides energy rapidly approaching a massive down curve, we also rapidly approach the peaking and imminent depletion of our metals. Copper ore grades average 5%. Phosphorus, chromium and magnesium production peaked years ago.
His unadulterated adulation of Star Trek only serves to underline his chronic case of cranio-rectal inversion.
Industrial Civilisation is (slowly) drawing to a close. It's not the end, yet, but in about 15 years, we'll be able to see it from there. After that, it is back to the land and farming. Forever. We Are Atlantis.
RS
Re:Lollipop! (Score:2, Interesting)
I wish that they would sell it.
It never does just what I want,
But only what I tell it.
Re:Smarter and Smaller. At least one's a good bet. (Score:3, Interesting)
The confusing thing about all of his "yogurt" predictions is that they are internally inconsistent. At first he discusses how electrically active bacteria could be oriented in such a way as to design a computer. This is entirely reasonable and is, in fact, how animal nervous systems function. THEN he goes on to these ridiculous claims about bacteria hacking electronics after being released into air conditioning systems, or infecting our brains and controlling our thoughts. (And I wish I were exaggerating here...)
First of all, this is internally inconsistent because removing the bacteria from their computing structures would remove their capacity for computation. Moreover, his claims don't address the fact that these bacteria would still be subject to the same growth demands as regular bacteria. Given that electronic circuitry is generally pretty dry and nutrient-free, exactly how are these bacteria going to control electronics if they can't even survive? Also, how could these mind-control bacteria go unnoticed by the human immune system? There are only a few bacteria that are known to pass through the blood-brain barrier, and these ALL result in INFLAMMATION of the tissue (which stops it from functioning). Lastly, even if one designed bacteria that were individually "intelligent," these bacteria would most certainly be unable to survive in the real world, because they would be inherently inefficient and unable to compete with the normal microbial flora.
I can't speak for his other predictions, but judging by his fundamental misunderstanding about basic microbiology, I'm inclined to believe that they're bunk as well.
-Grym
Re:This is the KICKER (Score:3, Interesting)
And the funny thing is that no one realizes how many times it's happened to different degrees.
"You'll just describe your problem to the computer"
Sure, current languages aren't exactly plain-English descriptions, but if described to someone writing assembly code or laying out punch cards years ago, they'd probably view them as darn close.
Languages like Prolog take the concept even further for problems that fall in the right set: feed it rules, ask it questions, get answers.
To me the question isn't whether we can keep approximating (the human language of your choice) better in our programming; it's whether we can do it and still generate machine code that is efficient enough for us.
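The rules-and-questions style mentioned above can be sketched in a few lines. This is a toy stand-in (in Python, not actual Prolog), with made-up facts, just to show the declarative flavor: you state relations and one inference rule, then query for derived facts rather than spelling out the control flow.

```python
# Toy illustration of the "feed it rules, ask it questions" style.
# Facts are (relation, subject, object) triples; names are invented.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent(facts):
    """One hand-written rule: grandparent(A, C) :- parent(A, B), parent(B, C)."""
    derived = set()
    for (r1, a, b) in facts:
        for (r2, c, d) in facts:
            if r1 == r2 == "parent" and b == c:
                derived.add(("grandparent", a, d))
    return derived

print(grandparent(facts))  # {("grandparent", "tom", "ann")}
```

A real Prolog system generalizes this: the rule is data too, and the engine searches for any bindings that satisfy a query, which is why it feels closer to "describing the problem" than to programming the solution.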
Re:Computers as smart as "some" people im sure (Score:3, Interesting)
Note that robots have effectors, so that's not an insurmountable problem, merely a very different one (that's already being worked on). Note also how completely separated it is from intelligence.
Then there's motivation. Why should an AI want to do any particular thing? Being intelligent enough to solve that problem if it wanted to doesn't cause it to want to do so. This is, again, a totally separate problem. Getting the answer to this one correct is vital to human survival. Nearly everyone appears to be ignoring it.
As to how long we have to get it right... my guess would be decades, but not a large number of them. And there are several different modes of failure. Some will cause the computer to disassemble itself. (This doesn't even require intelligence, it's already happened, but doing it intentionally does.) Some will cause the computer to freeze in mental development. This happens to people too, so I don't consider it unlikely, even when the answer is "almost right". Some will cause the computer to attempt to "take over the world" (for varying different reasons, but I suspect that paranoia covers most of the likely ones).
Don't try to understand in detail WHY a computer might do something unless you know its motivational structure. If you do, you MIGHT get as close as you can with, say, the leader of a foreign country. If you don't, you may get as close as when you attempt to understand why a social wasp does something. (Note that in both cases you are missing significant clues... you don't have the same sensory apparatus as a wasp, e.g., so you can't know what it's "smelling".)
Sorry, I know you weren't being serious...but this is something that programmers SHOULD be serious about (at least occasionally).
Re:Yep (Score:3, Interesting)
Re:"Futurology" is bunk (Score:3, Interesting)
But not writing fiction:
NISTEP [taoriver.net] used the delphi method [wikipedia.org] to great effect.
Some examples:
"So what," I hear you say. Well, "so," these figures are from 1971, 1976 and 1981: we're looking at 20-30 year technical forecasts. The forecasts were specific, useful, and relatively accurate. They included confidence levels. They were 60-70% accurate.
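What "forecasts with confidence levels" buys you is checkability: you can bucket predictions by their stated confidence and compare against the observed hit rate. A minimal sketch, using invented records rather than NISTEP's actual data:

```python
# Hypothetical forecast records: (stated confidence, came_true) pairs.
# The data below is made up purely to illustrate the scoring idea.
forecasts = [
    (0.7, True), (0.7, True), (0.7, False),   # three calls made at 70% confidence
    (0.9, True), (0.9, True),                 # two calls made at 90% confidence
]

def hit_rate(records, confidence):
    """Observed accuracy of all forecasts made at one stated confidence level."""
    outcomes = [came_true for c, came_true in records if c == confidence]
    return sum(outcomes) / len(outcomes)

print(hit_rate(forecasts, 0.7))  # 2 of 3 came true -> about 0.67
```

A forecaster is well calibrated when each bucket's hit rate roughly matches its stated confidence, which is the sense in which a 60-70% accurate 20-30 year forecast is a real result rather than luck.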
Just because there are some notoriously bad futurists [taoriver.net] who are very good at getting the press on the line, it doesn't mean the whole field is bunk.
Personally, I'm just very glad that people have stopped thinking robots are bunk. If you asked anybody in 2000, "Will there be robots?"
The general public envisioned the flying cars, [taoriver.net] not the people over at NISTEP. When NISTEP reports were published, who knew about them?
As for your computerized brains: You might want to check out Blue Column [bluebrainproject.epfl.ch] and Blue Brain. [bluebrainproject.epfl.ch]
Also, I haven't looked into this too deeply, but from what I've seen, the AI community has recently been flowering again. I have read in many places that they are making renewed progress, getting past the religious wars of the past: They are combining connectionist systems, rule-based systems, genetic systems, and so on. I don't see a good reason to be so pessimistic about it: Brain simulation on the one side, with a clear plan to 2020, and these traditional AI systems continuing to get better results, in a way that makes sense. Ray Kurzweil [kurzweilai.net] wrote a good overview piece, Why We Can Be Confident of Turing Test Capability Within a Quarter Century, [kurzweilai.net] and there are some very good (though very expensive) books on AI at the bookstore.
Re:Computers as smart as "some" people im sure (Score:3, Interesting)
I don't think so. While — as an independent AI researcher — I would not absolutely rule out a "we programmed it" solution, I really don't think that's what we're looking at. No more than we were "programmed" to be intelligent, in any case.
What is needed is a hardware and probably somewhat software (at least as a serious secondary effort after pure software simulation uncovers what we need to do) system that can learn; not something that is complete, out of the box. The latter can be created by state replication once one or more AI have been built; but from my point of view, what you're looking at is an evolutionary process, which when fruitful, will yield something we can teach, and which can teach itself, and which we should be very careful to build in such a way as to be able to replicate both its hardware and the state of its hardware.
You and your instructor are quite right that we do not understand our own (or animal) intelligence. The error in the subsequent thinking here is, I think, that you are both assuming we need to understand it well, or perfectly, to create it. That does not necessarily follow.
For my part, I remain very confident that we are well past the point where we can make the right hardware; what we are missing is the right configuration. From a technical standpoint, it does not matter how large, or how slow, the initial success is; once the problem begins to resolve itself, we can apply our usual skills at shrinking and speeding up systems -- or even, making them remote via telepresence, if large is initially unavoidable -- until we have something we're satisfied with, and from there, of course, we can hand the task of making something better off to the AI itself.
As an interesting side note, there are many interesting knowledge base projects going on right now which, while not likely (in my view, again, this is all IMHO) to yield an actual AI, will be a great resource for an AI; knowledge that can be tapped using straightforward rules and methods.
I personally think we'll see strong, probably very strong, AI within a decade at most. I'm excited about it; I hope to contribute (my area is associative knowledge and the process of melding emotional and other modifying concepts to knowledge.) But I'll be delighted no matter where the results come from, as long as they come!