I interpret the treatment of Google's search results as speech as recognizing that Google can choose how to tell people what information it has indexed on various websites. In other words, when Google crawls the web, builds an index, and then lets you search against that index, it can return the results to you however it wants. If you don't like it, you can build your own web-crawling, indexing, and search engines and have at it.
As such, Google is just telling people what is out there on the web, not claiming that what is there is true or false. You could even view Google as helping folks when libel is out there, because without Google it might be hard for the injured party to even find the libel.
I didn't state my position well. I'm trying to make the claim that the API is nothing more than an interface, regardless of which side of the interface one implements. Unless there is something patentable in the structure or operation of the interface (because of invention), or copyrightable in its expression (because of some kind of original expression of ideas), the interface itself shouldn't be, and can't be, protected by law from use by someone else. In software, the notion of "calling" a subroutine might be claimed to establish a difference between using the interface as the caller versus using it as the callee. In the case of a printer/ink-cartridge (or toner) interface, which side is the caller and which is the callee may be a matter of opinion and ultimately irrelevant. I would argue that the printer actually calls upon the ink cartridge to supply the ink, rather than the ink cartridge calling upon the printer to do something with the ink, making the situation exactly analogous to software from a requestor/servicer point of view; yet copyright can't be, and hasn't been, used to prevent third parties from "duplicating" that interface in their ink cartridges. The claim with respect to software APIs is, I believe, that duplicating the API on the callee side is a copyright violation, whereas duplicating it on the caller side is not (or else no one could write software that used that API to call for a service without violating copyright).
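The caller/callee distinction above can be sketched in code. This is a minimal illustration with invented names (it is not anyone's actual printer API): the "API" is just the agreed-upon signature, the printer is the caller, and a third party duplicating the callee side simply provides another implementation of the same signature.

```python
class OriginalCartridge:
    """The original vendor's implementation: the callee side of the interface."""
    def get_ink(self, color):
        return f"original {color} ink"

class ThirdPartyCartridge:
    """An independent reimplementation of the same interface --
    the callee side "duplicated" by a third party."""
    def get_ink(self, color):
        return f"third-party {color} ink"

def printer_print(cartridge, color):
    """The printer is the caller: it invokes get_ink() on whatever
    cartridge honors the interface, without caring who built it."""
    ink = cartridge.get_ink(color)
    return f"printing with {ink}"
```

Either cartridge works interchangeably in `printer_print`, because all the caller depends on is the interface itself, not any particular implementation of it.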
My argument is that since the API is nothing more than a minimal description of the interface, copyright can't be used to prevent duplication of the interface for software any more than it can be used to prevent duplication of the interface for an ink or toner cartridge. The fact that in the case of software the written description of the interface IS the interface is immaterial, because it is information alone, without a minimum of original creativity. A Supreme Court decision (http://en.wikipedia.org/wiki/Feist_v._Rural) established, for example, that a phone directory was not copyrightable because it didn't contain a minimum amount of original creativity. Even under the previous doctrine for copyright (sweat of the brow) I wouldn't think that an API was copyrightable, because there isn't likely to be significant time and energy invested in the API (assuming that the API isn't some brilliant creative piece of work).
It seems that if APIs can be placed under copyright, then all interfaces can be placed under copyright.
AT&T could have copyrighted the telephone interface and prevented people from buying non-AT&T phones to connect to the AT&T network. Laser printer manufacturers could stop coming up with DMCA-based attempts to wipe out toner cloners - just copyright the interface. Automobile manufacturers could wipe out the whole aftermarket parts market - just copyright the interface.
I don't think Robert Goddard http://en.wikipedia.org/wiki/Robert_H._Goddard was a Nazi.
I know that Robert Goddard's time came before the "Space Race". I just want to make the point that we Americans didn't just have our prize from WWII, Wernher von Braun, to inform us about rocketry.
In the interests of full disclosure, I was born and raised in Massachusetts, which may explain my more immediate familiarity with Robert Goddard.
The acronym STEM breaks down into Science, Technology, Engineering, and Mathematics; and this is a good way of categorizing the different kinds of learning that lie behind really knowing computer science and other STEM disciplines.
A "computer science" degree at an accredited 4-year college should cover topics related to computer science in all four of these areas:
Science: The science behind the technology, including chemistry and physics necessary to implement the technology that is used to build computational devices
Technology: Hardware and software technologies, including: logic gates, CPUs, memory (primary and secondary), communications interfaces, operating systems, compilers, databases, programming languages, etc.
Engineering: techniques for analyzing problems and engineering solutions to those problems (using software, typically)
Mathematics: binary math, formal logic, formal language theory, etc.
All of these together provide the grounding to enable the student, after graduation, to go forth and do good things. As with all disciplines, the better prepared the student, the more deliberate the graduate's approach to solving problems and being productive.
I have an acquaintance who was (by her own admission and grades) fairly poor at her CS studies, but who was exposed to lots of core computer science STEM material in college. She is now an excellent practitioner/manager in a software development position. She is often frustrated by the plethora of "programmers" (both young and old) who weren't exposed to the full range of computer science material. They don't know how to think about different ways to solve problems, or what the machine is doing, or how their code is translated into lower-level instructions, and this limits their ability. Using tools at a higher level of abstraction is absolutely essential for achieving modern rates of productivity in programming (e.g., Java instead of assembler), but doing so without an understanding and appreciation of what your tools are doing for you is like running a race with blinders on.
I'm sympathetic to some of the ideals of the Tea Party. I believe that the 2nd amendment describes a pre-existing individual right to keep and bear arms for private defense as well as to maintain the security of a free state. I tend to vote conservative. I was aghast at some of the provisions of the Patriot Act and other similar legislation when they were proposed and stunned that they were voted into law.
I am certainly a person, although you may claim that I'm not a "true Tea Party person" because I'm not rabidly foaming at the mouth in support of the entire platform. I was opposed to those programs well before 2009.
I think actual people's beliefs are more nuanced than the overly broad "left wing" and "right wing" labels would suggest, and that there are a number of "right wing" folks who are very concerned about privacy and freedom, and not willing to trade them away for an illusory security benefit.
I had responsibility for a corporate data network a number of years ago, when cross-country link speeds were substantially lower than they are now. We had about 80 sites distributed across the US. As part of our cost-recovery strategy, we charged a flat rate, based on the number of "subscribers" (network users) at each location, to put a location on the network. The CIO asked me to develop a traffic-based charge instead of, or in addition to, the subscriber charge. We analyzed the situation as follows:
If the source of traffic is charged for providing data to the network, it will limit the services that the source chooses to provide, even if those services are very beneficial to the rest of the network.
If the sink for the traffic is charged for the data it receives from the network, it can cause the sink to be charged for data it didn't request or cause that site to stop using services that create a better overall result for the corporation as a whole.
Locations subscribe to the network because they want access to services and because they want to be able to provide services. Charging by traffic would force providers and consumers into a level of analysis and complexity that would ultimately limit the usefulness of the network, and stifle creativity and growth. On top of that, adding cost-accounting to the network based on traffic would add about 30% to the cost of operating the network.
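The cost comparison above can be made concrete with a small sketch. The rates and costs here are invented for illustration; only the roughly 30% accounting overhead comes from our analysis.

```python
# Hypothetical figures for illustration only.
RATE_PER_SUBSCRIBER = 100.0   # flat monthly charge per network user
NETWORK_COST = 50_000.0       # monthly cost of operating the network
ACCOUNTING_OVERHEAD = 0.30    # ~30% added cost of traffic-based accounting

def flat_rate_charges(subscribers_by_site):
    """Flat-rate model: each site pays per subscriber, and the
    total cost to recover stays at NETWORK_COST."""
    return {site: n * RATE_PER_SUBSCRIBER
            for site, n in subscribers_by_site.items()}

def traffic_based_cost():
    """Traffic-based model: the accounting machinery itself inflates
    the total cost that must be recovered from the sites."""
    return NETWORK_COST * (1 + ACCOUNTING_OVERHEAD)
```

The point of the sketch: before a single byte is billed, the traffic-based model has already raised the pool of cost to be recovered, while the flat-rate model keeps both the cost and the billing simple.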
All kinds of "unfairness" exist in the network world. Our more distant locations thought it very unfair that they had to pay big bucks for a lower speed connection than our customers located at a corporate hub site, even though our actual cost to connect those customers was several times what we charged them, for example. Because we were a corporation, we could decide that the cost of the network was a pooled cost that benefited everyone, and that the best cost-model for the corporation's benefit was the flat-rate subscriber cost regardless of distance.
The commercial world is a little different from the corporate world, because the sources/sinks are more polarized (I suspect Netflix puts more traffic onto the network than it pulls off). But the arguments seem similar in nature. Consumers on the network are there specifically because they expect to get traffic from providers. Verizon would have a much harder time selling its services (especially higher download rates) if the rest of the Internet didn't have firms providing data that Verizon's consumer customers wanted to download. The value of the network is in the sourcing and sinking of the traffic flows combined. Focusing the accounting on just one direction of flow ignores that value.
Where there's a will, there's a relative.