By your own admission, AI *might* eventually be capable of the kind of "malice that people seem to be afraid of" -- and malicious developers could cause destruction even sooner.
Not the GP, but yep, bad things are possible. Yay!
And the laws of physics clearly predict that strong AI is possible. Or do you consider intelligence to be some kind of supernatural quality?
Invoking "the laws of physics allow it" as an argument that we should actually be worried about something happening here on earth in the near future is pretty slim evidence, no? I mean, the laws of physics allow a LOT of stuff to be possible.
That said, this isn't really about the laws of physics -- it's about basic biological systems here on earth that already have intelligent properties. The evidence that intelligence is physically possible is much more direct than an appeal to physics: people create intelligent life all the time by having babies. The question is how long it will take us humans to figure out a way to build something with certain intelligence properties... and that could be next year, next decade, next century, next millennium....
Also, it is the experts in AI who are predicting that strong AI is possible and will be achieved in a matter of decades. Why would you even come out and pretend that it isn't?
Because the "experts in AI" have a pretty bad track record for predicting advances -- the cynic in me would say probably because many of them get their grants funded by predicting major advances.
Back in the 1950s, the "experts in AI" predicted that a group of 10 smart dudes could get together over a summer and solve all the major problems of AI (like natural language comprehension, true adaptive learning, etc.) in 2 months. More than fifty years later, we're nowhere close to solving most of the problems they identified -- most of our advances are due to better search algorithms, faster hardware, and more data, not to significant advances in true adaptive learning.
Alan Turing, in the same era, predicted that by the year 2000 we'd have machines so fluent in natural language that telling a human from a computer would mean debating which word choices could be substituted in a Shakespearean sonnet. Instead, we get it reported again and again and again that the "Turing test" was "passed" by some idiotic chatbot that dodges hard questions by pretending to be a non-English-speaking teenager acting like a 5-year-old.
How low our "bar" has sunk that we need to have such declarations every year or two to keep proving to ourselves that we have great "AI."
No -- we don't. We've barely squeaked by with any significant advances toward the kinds of goals articulated in the 50s about strong AI.
Now, I'm sure you're all going to talk about Deep Blue and chess. But how do these chess programs win? By doing exhaustive searches far ahead of what humans are capable of and by having exhaustive libraries of games and strategies far greater than any human could hold. I'm not saying these computers aren't significant advances in SOMETHING. But they aren't exhibiting the kind of efficient adaptive intelligence that the original "strong AI" proponents expected when they proposed chess as a worthy goal for AI. It's like comparing someone with a high IQ and advanced math and logic skills who solves a complex problem in 5 steps with another guy who brute-forced the problem on a supercomputer, running quadrillions of simulations until he arrived at the right answer by eliminating every other possibility. Is the latter displaying anything like the "intelligence" of the former?
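To make the "exhaustive search" point concrete, here's a minimal sketch of the core idea -- plain minimax over a tiny made-up game tree. (This is my own illustration, not Deep Blue's actual code; real engines add alpha-beta pruning, evaluation heuristics, and opening/endgame databases, but the principle is the same: enumerate positions far deeper than any human can.)

```python
def minimax(node, maximizing):
    """Exhaustively score a game tree; leaves are integer position scores."""
    if isinstance(node, int):         # leaf: a static evaluation of the position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A toy 2-ply tree: we pick a branch, then the opponent picks the leaf
# that's worst for us within it.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, True))  # -> 3: branch [3, 5] guarantees a score of at least 3
```

No concept formation, no learning -- just enumeration and bookkeeping, which is exactly the contrast being drawn above.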
A similar thing with Watson. Natural language processing admittedly has made big strides in the past few years, but mostly because we've finally given up on the models of language and linguistic cognition that all those "experts in AI" insisted were the solution for decades. Instead, when you use Google to do something like translation, it guesses solely on the basis of huge databases... it doesn't "understand" language. Hell, we're still working on getting simple language processing to identify the antecedent of a pronoun in a sentence, let alone grasp the meaning of a full sentence in natural language, or paragraphs, or larger contexts. But throw a big enough database at something, and limit the forms of questions you can ask it, and it will be able to do some awesome things. "Intelligence" in the sense of true adaptive learning, concept formation, or efficient understanding of the kind humans (or even many animals) have? No way.
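Here's a toy illustration of what "guessing from a huge database" means (my own example with made-up data, not Google's actual pipeline): pick, for each source word, whichever target phrase was seen most often in a bilingual corpus. There is no model of meaning anywhere in it.

```python
from collections import Counter

# Pretend these pairs were harvested from a large French-English corpus.
observed = [
    ("chat", "cat"), ("chat", "cat"), ("chat", "chat room"),
    ("noir", "black"), ("noir", "dark"), ("noir", "black"),
]

# Build a table mapping each source word to its most frequent translation.
counts = {}
for src, tgt in observed:
    counts.setdefault(src, Counter())[tgt] += 1
table = {src: c.most_common(1)[0][0] for src, c in counts.items()}

def translate(words):
    """Word-by-word frequency lookup; unknown words pass through unchanged."""
    return " ".join(table.get(w, w) for w in words)

print(translate(["chat", "noir"]))  # -> "cat black"
```

Note it emits "cat black" rather than "black cat" -- it can't even reorder adjectives, because nothing in the system knows what an adjective is. Frequency, not understanding.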
But go back and read the kind of crap predictions that "experts" made after Deep Blue a couple decades ago. Compare them with what happened. Compare what the "neural net" aficionados have been saying since the 1980s with what actually happened. Compare what the AI "cognitive science" weirdos have been talking about since the 1970s.
The "experts in AI" have always had predictions that were a crapshoot.
So, why would *I* even "come out and pretend that" strong AI may not be possible in the next few decades? Because, based on empirical evidence, the "experts" have often had unrealistic expectations in this area.
Are you saying that people have no right to worry about problems that aren't likely to happen for 20 years? Is that the cutoff date?
I'd say we should worry about pressing problems, and/or longer-term problems where we have a pretty good idea they might actually happen sometime soon. Strong AI will probably happen at some point in the future; I'm with you there. But I frankly have no idea whether it's gonna be here in a year or in a thousand years. Based on the acceleration of technology, I'd guess we'll be somewhere close in the next couple centuries, but I don't really know... even after following a lot of aspects of AI over the past decades.
And despite all of these crazy warnings, I think we'll have plenty of time to ramp up to the point where we want to start worrying. When you can show me a computer that has the cognitive skills and adaptive learning capabilities of a 5-year-old, then we start worrying. Maybe even a 2-year-old. Right now, we have machines that are really good at certain kinds of tasks -- but a machine that can brute-force its way past a grandmaster at chess isn't much more "intelligent" than a toaster.
When that machine that beats a grandmaster at chess can also figure out by itself how to build a toaster and serve me breakfast... then I'll be worried.