Comment Re:updates, updates, ... (Score 3, Interesting)
They stop being secure.
They were never secure.
Is a system with no security defects known by anyone secure?
This might appear to be a philosophical maundering like "If a tree falls and no one is around to hear it, does it make a sound?", but it's not. It's a very serious question, with real implications... and the answer is yes.
Consider FakeID, a serious vulnerability in the Android app signing infrastructure that allowed any app to claim to be from any provider -- including claiming to be signed by the OEM in order to obtain system privileges. The bug was introduced in 2.2.1 in 2010 and existed in all versions of Android until it was fixed in 2014.
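To make the nature of the flaw concrete, here's a toy sketch (not Android's actual code; all names are illustrative) of the essential mistake reported for FakeID: the chain validator trusted a certificate's *claimed* issuer without cryptographically verifying that the issuer actually signed it.

```python
# Toy model of the FakeID-style flaw. Identifiers are hypothetical;
# `signature_valid` stands in for a real cryptographic signature check.
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    issuer: str            # who the cert *claims* signed it
    signature_valid: bool  # did the claimed issuer actually sign it?

def grants_privs_vulnerable(chain, trusted_subject):
    # Flawed: accepts the chain if any cert merely *claims* the
    # trusted issuer, without verifying the signature.
    return any(c.issuer == trusted_subject for c in chain)

def grants_privs_fixed(chain, trusted_subject):
    # Correct: the claimed issuer must also have genuinely signed the cert.
    return any(c.issuer == trusted_subject and c.signature_valid
               for c in chain)

# A malicious app fabricates a cert naming the OEM key as its issuer:
fake = Cert(subject="evil.app", issuer="OEM Release Key",
            signature_valid=False)

print(grants_privs_vulnerable([fake], "OEM Release Key"))  # True  -- exploited
print(grants_privs_fixed([fake], "OEM Release Key"))       # False -- rejected
```

The point of the sketch is that the fabricated certificate costs the attacker nothing: naming an issuer is free, and only the signature check (absent in the vulnerable path) ties the claim to an actual key.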
But no one knew.
Once the flaw was revealed, Google was easily able to go back and examine the certificates of all apps uploaded to Google Play during the entire period between when the vulnerability was introduced and when it was fixed. Google also examined the contents of other app stores and non-store app repositories. There wasn't a single instance of an app with a faked certificate chain, anywhere, until the public disclosure of the bug. Snowden's documents gave no hint that the NSA knew of it. Hacking Team's archives had no hint of it. If anyone, anywhere, knew of the bug, they were incredibly circumspect and careful with their usage. The more reasonable conclusion is that no one knew.
During those four years, all Android devices were vulnerable to this serious security hole, but none of them were exploited. So, the Android app signing architecture was effectively secure even while it was technically broken.
And as I said above, this is not just a semantic quibble. If your definition of "secure" (under some defined threat model) assumes that the system must have no security defects, then no system of any complexity ever has been or ever will be secure. Security is only meaningful within a defined context, and that context includes the knowledge of the adversaries. Of course, we can never know what the attackers do or do not know, but we can reasonably suppose that if they don't use a devastatingly effective attack it's because they don't know about it. This means that systems actually do get less secure over time, unless they're maintained.
(Note, BTW, that the mere existence of a known security defect also doesn't necessarily make a system insecure. That, too, depends on the threat model and on whatever other mitigations may be in place. Take the libstagefright bug, for example: 90% of Android devices in the world are running Ice Cream Sandwich or higher and have ASLR enabled, which makes a bug like the one in libstagefright very hard to exploit.)