Oreskes argues that scientists failed us, and in a very particular way: They failed us by being too conservative. Scientists today know full well that the "95 percent confidence limit" is merely a convention, not a law of the universe. Nonetheless, this convention, the historian suggests, leads scientists to be far too cautious, far too easily disrupted by the doubt-mongering of denialists, and far too unwilling to shout from the rooftops what they all knew was happening. "Western scientists built an intellectual culture based on the premise that it was worse to fool oneself into believing in something that did not exist than not to believe in something that did."
Why target scientists in particular in this book? Simply because a distant future historian would target scientists too, says Oreskes. "If you think about historians who write about the collapse of the Roman Empire, or the collapse of the Mayans or the Incans, it's always about trying to understand all of the factors that contributed," Oreskes says. "So we felt that we had to say something about scientists."
An update pipeline, backed by a company with a good development methodology, is the best insurance against long-standing unplugged security holes. Look at all of the terrible, abandoned consumer routers full of security holes, for instance.
That said, before many folks are willing to welcome such companies and their products into their homes, those companies need to earn our trust.
You *do* have ultimate control. You can elect not to buy the product, go with a competitor, or use an entirely different class of product.
This is why privacy and property rights must legally extend to data hosted in cloud services.
The private companies that offer cloud-based services are not what worry me. There are a lot of sound economic reasons (see: the devops movement) why this kind of product architecture (a physical product coupled with always-on connectivity and a remote cloud-hosted service) makes a whole lot of sense. There are a lot of market incentives for these companies to clearly delineate what they will and will not use the data (and sensors) for. Moreover, there can be a large degree of diversity among the various single-function cloud services one uses (even if Nest was recently acquired by Google). People care about their privacy, but they also balance it against the utility these kinds of products offer. I have a Nest Protect, and I'm comfortable trusting it a lot more than a regular standalone detector. People *consent* to the introduction of such technology into their lives, with the entirely reasonable expectation of benefit.
Another great example is the Tesla Model S, which is so dependent on cloud services that it comes with a bundled 3G modem and data plan.
However, governments see the concentrated user data in data-centers on their soil as entirely too delicious to ignore. Not only does the immediately visible claim of increased security ("we could have caught the terrorists!") tend to outweigh the more general argument for individual property and privacy rights in the political sphere, but institutional incentives on the part of powerful government agencies and their contractors to grow their mandate mean that they'll heavily lobby for such intrusions.
I think most of us geeks grew up terrified of the very idea of the Orwellian Telescreen. However, it's not the technology that's evil (many of us have plenty of devices with a camera integrated with a display), but the threat of its use without consent.