An update pipeline, backed by a company with a good development methodology, is the best insurance against long-standing unpatched security holes. Look at all of the terrible, abandoned consumer routers full of security holes, for instance.
That said, before many folks are willing to welcome such companies and their products into their homes, those companies need to earn our trust.
You *do* have ultimate control. You can elect not to buy the product, go with a competitor, or opt for an entirely different class of product.
This is why privacy and property rights must be properly extended, in law, to data hosted in cloud services.
The private companies that offer cloud-based services are not what worry me. There are a lot of sound economic reasons (see: the devops movement) why this kind of product architecture (a physical product coupled with always-on connectivity and a remote cloud-hosted service) makes a whole lot of sense. There are strong market incentives for these companies to clearly delineate what they will and will not use the data (and sensors) for. Moreover, there can be a large degree of diversity among the various single-function cloud services one uses (even if Nest was recently acquired by Google). People care about their privacy, but they also balance it against the utility these kinds of products offer. I have a Nest Protect, and I'm comfortable trusting it a lot more than a regular standalone detector. Thus, people *consent* to the introduction of such technology into their lives, with the entirely reasonable expectation of benefit.
Another great example is the Tesla Model S, which is so dependent on cloud services that it comes with a bundled 3G modem and data plan.
However, governments see the concentrated user data in data centers on their soil as entirely too delicious to ignore. Not only does the immediately visible claim of increased security ("we could have caught the terrorists!") tend to outweigh the more general argument for individual property and privacy rights in the political sphere, but institutional incentives on the part of powerful government agencies and their contractors to grow their mandate mean that they'll lobby heavily for such intrusions.
I think most of us geeks grew up terrified of the very idea of the Orwellian Telescreen. However, it's not the technology that's evil (many of us have plenty of devices with a camera integrated with a display), but the threat of its use without consent.
Of course, if government is declared responsible for nutrition, then naturally it must also be responsible for the effects thereof.
This is a significant reason why state control must always beget more state control: regulators must make at least an ostensible attempt to correct the unintended effects of a given intervention. The domain of responsibility becomes effectively unbounded.
Devising a complex system by means of patches ad infinitum can work (see: the Linux kernel), but only if that system's usage is constrained by voluntary choice.
Sadly, this means that folks with a given expertise (say, medical) will say things like they do in TFA: the short-sighted view that governments should generally increase, or at least maintain, spending in order to avoid the expected bad effects of backing out of a responsibility.
There can be no substitute for individual responsibility.
Machines have less problems. I'd like to be a machine. -- Andy Warhol