Frankly, given the fuck-ups with the government right now (the F-35 jet, the Obamacare web site, etc., and that's just the recent stuff), I have trouble believing it is capable of producing something so effective. They might be willing to try, but whatever "secret project" you've "uncovered" probably only works in very specific conditions, if at all.
(Bermuda Triangle)
Oh.
most of the human race will have more leisure time
That's what they said 100 years ago. The human race will have more leisure time! And yet now we are more overworked, overstressed, and overburdened than ever before. We work harder and harder to fit into a repressive world economy that has grown beyond the control of the majority of humanity. We are locked in a cycle of supplication and apathy, unable to affect our own destinies and only able to hope that the life we are given is not too terribly painful.
If the robots come, they will not be interested in suppressing that majority. We are already under the control of a massive machine. Perhaps the rich and powerful should fear that intelligent machines will come to take the reins.
Don't piss on JavaScript. Sure, the standard library is terrible and poor cross-browser compatibility makes it impossible to do anything interesting in a browser without shims, but purely as a language the whole "class = function = object" idea is truly magnificent in its own way, especially with the implementation of anonymous functions and closures. The ability to override "this" also provides many useful metaprogramming tools; just recently I used it to load an external library into its own independent global scope (since it is not well behaved and I don't want it messing with the existing global scope). I always find pleasure in writing JavaScript when the task is narrow enough and I've got everything I need. And just so you know, I have used C, C++, C#, Java, Ruby, Lisp (Scheme), PHP, and JavaScript.
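A minimal sketch of the "this"-override trick described above (the library source here is invented for illustration; it is not the actual misbehaving library in question):

```javascript
// Overriding "this" with Function.prototype.call.
function greet() { return "hi, " + this.name; }
console.log(greet.call({ name: "closure" })); // "hi, closure"

// Hypothetical library source that writes to what it assumes is the
// global object via "this":
var librarySource = "this.answer = 42;";

// Evaluate it with a private sandbox object bound as "this", so its
// assignments land in the sandbox instead of the real global scope.
var sandbox = {};
new Function(librarySource).call(sandbox);
console.log(sandbox.answer);           // 42
console.log(typeof globalThis.answer); // "undefined" -- global untouched
```

Note this only contains code that cooperates by going through `this`; a library that assigns to bare undeclared identifiers still leaks into the real global scope, which is why heavier sandboxing (iframes, Node's `vm` module) exists.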
(more on-topic, I can't speak towards Perl, but PHP can be done right and when it is it can be maintained by anyone, although most of "anyone" will probably write you a horrible kludgy mess instead)
If you want to be driven by above-average drivers only, you can request a higher-rated driver from Uber (and pay more per mile) or — if Uber's vetting process seems insufficiently rigorous to you — go for a different company altogether. But don't try to impose it on the rest of us.
This statement, I think, is the defining difference the Internet will make in public policy. It used to be that if you wanted higher quality, you had to find a quality brand you could trust, and if the market didn't favor much competition for whatever reason, a quality brand just wouldn't exist without government intervention. After all, why would a rational profit-seeking corporation do anything right if it put them at a cost disadvantage against other corporations already doing quite well by doing it wrong? So we got ourselves lots of government regulation to force companies to provide a quality product.
But now, with the Internet, a brand like Uber can effectively sell us the quality we're willing to pay for. The taxi market is traditionally so monopolistic that the only way to make good quality available was to legally require it from everyone. But the Internet makes that obsolete. What follows is a "15-round fight" not just over Uber, but over every industry touched by the Internet. The worst part is that many people will fight for obsolete leftist/rightist ideologies in which they are already emotionally invested, even though the issues were never that simple anyway.
Now that commercial-grade engines are available free-as-in-beer
Free-as-in-FREE-beer! It doesn't make sense the way you say it! Beer is not automatically free!
It's possible that the game programming camp is setting the children up with a point-and-click game dev engine
There's no better point-and-click game dev engine for learning than Scratch; if that's what they used, expect the kids coming out to have learned some of the logical skills involved in computer science. Syntax is easy if you've already got a good grasp of splitting up complicated problems into smaller ones, breaking smaller problems down into math and logic structures, and integrating many smaller components into a larger system. All of that is the core of computer programming, and it's what you learn in Scratch without worrying about syntax.
We have a lot of subconscious mental faculties that are beyond even our most complex computers. One big one is still the ability to make intelligent conversation.
This remains true no matter what some marketers at the University of Reading have thrown together at the last minute. It's a shame we still have such a gullible population when it comes to computers. Oh well. Time to go program a GUI in Visual Basic to track some IP addresses...
Today we have computer navigation, plain-language database queries, and speech processing such as Siri. AI? No. Just elaborate table lookup.
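What "elaborate table lookup" means here, as a toy sketch (the patterns and canned replies are invented for illustration; real assistants use far bigger tables and statistical matching, but the shape is the same):

```javascript
// A toy "conversation" engine: match the input against a table of
// regex patterns and return the canned reply for the first hit.
// No comprehension anywhere -- just lookup.
var table = [
  { pattern: /weather/i, reply: "It is 72\u00B0F and sunny." },
  { pattern: /time/i,    reply: "It is 9:41 AM." },
  { pattern: /./,        reply: "Sorry, I didn't get that." } // fallback
];

function respond(input) {
  for (var i = 0; i < table.length; i++) {
    if (table[i].pattern.test(input)) return table[i].reply;
  }
}

console.log(respond("What's the weather like?")); // canned weather line
console.log(respond("Review my English paper?")); // falls through to fallback
```

Anything the table doesn't anticipate falls straight through to the fallback, which is exactly the failure mode described below when Siri is asked a question outside her tables.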
You've got the beginnings of a well-known thought experiment called the Chinese Room:
Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
So the person running the commands manually, operating essentially the same as the computer's processor, has passed the Turing test. However, neither the human being nor the "computer" really understands Chinese.
What you are implying about Siri et al. is that human beings operate differently. But how do you know that human intelligence isn't just a series of elaborate table lookups? While the stated purpose of the Chinese Room thought experiment is, according to Wikipedia, "to challenge the claim that it is possible for a digital computer running a program to have a 'mind' and 'consciousness' in the same sense that people do", it actually demonstrates something else: as long as you define human "consciousness" as going beyond mere computation, it is impossible to test that it exists. But since all we can observe of a human are its inputs and outputs, that is the only basis upon which we can compare human "intelligence" to AI. The point of the Turing test is to measure whether the inputs result in similarly intelligent outputs.
If AI research has taught us anything, it's that humans are much more intelligent than we thought we were. We have a lot of subconscious mental faculties that are beyond even our most complex computers. One big one is still the ability to make intelligent conversation. Siri may be able to understand some requests well enough to deliver the desired response, but a lot of the time her level of comprehension is below that of a four-year-old.
I do think, however, that if a computer could fool a dog into believing it was also a dog 100% of the time, then it would have the intelligence of a dog, with a caveat. The dog being fooled would need to understand the philosophical nature of the test and also understand how the computer is likely to fail. Otherwise it would be like asking Siri to provide feedback for an English paper; she just does not understand the question being asked.
The question is really more about "are this AI's actions indistinguishable from a being we know is intelligent". If the test administrator is qualified to judge that, and the test is run enough for the results to be statistically significant, it's perfectly reasonable to suggest that because the actions are indistinguishable, so too must be the level of intelligence behind them.
Please don't blame this on computer scientists. This story was almost certainly generated by marketing types trying to line up with an anniversary of some kind. I'm not exactly sure which, because TFA for the original story says something about "60th anniversary of Turing's death" and "created in 2001". And the 30% is clearly a lie to make up for their failure to even reach the 59% at which Cleverbot was already tested.
"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"