The other cost of the S in HTTPS is the difficulty of obtaining and using certificates that browsers recognize without bothering the user. That's why the Let's Encrypt project is trying to make them free and easy to get.
Yes, you understand exactly!
But climbing higher in the tree will never get you to the moon. Programs that do better than humans in one particular area will not develop to the point that they have general intelligence. They'll be idiot savants, great at one specific thing to the point of being better than any human (like playing chess or Jeopardy, driving a car, performing surgery, or even writing a symphony), but a complete idiot at everything else.
I also think these programs will never get as good as the best humans at certain activities, like doing significant novel scientific research, proving hard math theorems, doing general programming, or translating languages. Certain activities do require general intelligence, not just one narrow specialty.
Uh, well how do you incrementally add 1 to "thousands" and wind up at "tens of thousands" at some point? Randomly?
Or did you mean count up to 2 billion, at which point you report billions and billions served and stop incrementing?
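The 2 billion figure lines up with the limit of a signed 32-bit integer, which maxes out at 2,147,483,647. Here's a minimal sketch of a counter that saturates at that limit instead of wrapping around (the clamping behavior is a hypothetical implementation choice for illustration, not anything described in the thread):

```python
# A signed 32-bit counter tops out at 2**31 - 1 = 2,147,483,647,
# i.e. roughly 2.1 billion.
INT32_MAX = 2**31 - 1

def increment_clamped(count):
    """Add 1 to a 32-bit signed counter, saturating at INT32_MAX."""
    return count + 1 if count < INT32_MAX else INT32_MAX

print(increment_clamped(41))         # 42
print(increment_clamped(INT32_MAX))  # stays at 2147483647
```

A counter built this way would indeed just stop incrementing once it hit "billions and billions served."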
I think the situation is worse than that. Not only do we lack anything approaching a decent understanding of how actual intelligence works, but it's probably far too complicated for a human to understand. Perhaps we could construct a computer system that could in some sense "understand" how the brain works, and it could design a better brain. That better brain could in turn build a better computer system, ad singularity. Actually, I had never thought of that approach before.
But does an agent have to understand how to make a thing before that thing can come into existence? Biological and AI research has shown that this is not the case.
As someone with a recent graduate degree in computer science and a fair amount of experience applying AI techniques, let me offer my take on the matter. What is referred to as "artificial intelligence" today will never, ever result in an agent that we could consider intelligent by the standards of human intelligence. Nearly all AI research is focused on solving very narrow problems. To give one example, Watson is no more than a sophisticated search engine. It's barely more intelligent than Google's search engine. It has no understanding of the queries it gets or the results it gives. It merely gives the illusion of understanding because it often returns results that we might expect from a human who does understand. One quote that sums this up is "Believing that writing these types of programs will bring us closer to real artificial intelligence is like believing that someone climbing a tree is making progress toward reaching the moon."
The small percentage of AI research that really is trying to make something that can truly think and understand has had very limited results. The Cyc project, for example, might be the most successful of these projects but it's also been widely criticized. One might say that it understands what it is processing in some way, but its results are not that impressive. If you have a specific problem domain, it's much more effective to use standard machine learning and natural language processing techniques, which generally do some kind of simple number crunching to perform a statistical analysis.
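To illustrate the kind of "simple number crunching" those standard techniques do, here's a minimal sketch of a bag-of-words naive Bayes classifier; the training documents and labels are invented toy data, and this is just statistical word counting, with no understanding of the text at all:

```python
from collections import Counter
from math import log

# Toy training data (invented for illustration): label -> example documents.
train = {
    "pos": ["great movie loved it", "loved the acting great fun"],
    "neg": ["terrible movie hated it", "boring and terrible acting"],
}

# Per-class word counts and the shared vocabulary.
counts = {label: Counter(w for doc in docs for w in doc.split())
          for label, docs in train.items()}
vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Pick the label with the highest log-probability under a
    bag-of-words model with add-one (Laplace) smoothing."""
    n_docs = sum(len(docs) for docs in train.values())
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        score = log(len(train[label]) / n_docs)  # class prior
        for w in text.split():
            score += log((c[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("loved this great film"))  # -> pos
```

The classifier "works" on its toy domain purely because of word-frequency statistics, which is exactly the point: effective on a narrow problem, with no comprehension behind it.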
Will we someday create an AI that's truly intelligent? I think so, and I have written up how I think it can be done, but I think we'll have cheap fusion power, interstellar space travel, and nanobots that perform apparent magic before we have AI that can do what an average human can do. I'm not worried about the singularity happening in my lifetime.
The sooner all the animals are extinct, the sooner we'll find their money. - Ed Bluestone