Yes, same error, but you missed it. The fundamental problem is that truly secure, non-centralized key verification is HARD. If the bank publishes their GPG key, why would you trust it?
Tools for managing one's trust network barely exist. This problem is not isolated to GPG. This problem is so difficult that the more commonly used protocols, HTTPS and S/MIME, solve it effectively by ignoring it and replacing it with a system in which individuals have little or no control over their trust network. Marlinspike has participated in efforts to improve the trust network for HTTPS, but makes the same error, as use of his tools requires one to trust him.
The case in the article seems like an example of this kind of problem with the systemd team. Instead of working with one of the prominent bootloaders to get the UEFI trust chain worked out, they just adopted an infrequently updated, nonstandard (and, by the sound of it, buggy) bootloader and ran with it. This has the effect of abandoning all the work already put in by the prominent bootloaders to get corner cases working. It's a shortcut so systemd can add a bullet to their feature list, but it provides the feature in such a way that it is buggy for many use cases.
I don't object so much to replacing sysv init, but the systemd team appears to have a tendency toward repeatedly reinventing the wheel badly just to get things done faster, and being kinda rude about it, and that makes one a bit uneasy. Though I'm honestly unsure whether this is just sensationalized coverage of a few unusual cases or more typical behavior.
The other option is that Microsoft could acknowledge reality: they are not fixing things fast enough to resist targeted attacks. MS's statement about it "not being seen in the wild" demonstrates that they don't understand the current state of exploits. Google's hypothetical attacker is one who will go to great lengths to keep an exploit from being observed in use, specifically so that MS won't fix it. A monthly update schedule is also a huge liability against such an attacker, since they know exactly how long their window of opportunity stays open. MS is stuck in the old model in which an exploit is not important unless it has been seen in the wild. While that is all well and good for preventing worms from spreading (and therefore protecting MS's image), it is not good enough to protect your company's data from a targeted attacker who can buy or discover a zero-day vulnerability. That is reality.
Another way to look at it is that people using MS stuff have chosen interoperability over security. Thus the longer patch testing cycle, and the once-a-month updates. Therefore they shouldn't be surprised when it is demonstrated that... they chose interoperability over security.
So the model for search, as well as for gmail, was the user trading their privacy for a service. Thus "built on the concept of invading privacy". I think this is a much more even trade on the search side - I'm not averse to reporting to google which of their search results I looked at for a given query before I left the page. That provides better search. But I think one can make an argument that even offering a service in which you scan the user's email to market things to them is inherently evil. If you found out the IT staff at your company were trawling through email for anything interesting, you would fire them in a second. Then it just went downhill from there. Though Big Brother Facebook beat Big Brother Google in the race to the bottom.
The fact that the vendor you purchased your device from (Verizon) actively discourages third-party updates is between you and them. In most cases you can jailbreak your device and install CyanogenMod, which is pretty similar to what you describe. The status of vendor-supplied updates has been discussed since the inception of Android. Google has mostly made the situation better compared to before Android, since updates for many devices are now controlled by the hardware vendor instead of the network provider. When you purchased your device, you chose to get something from a vendor (Verizon) who is well known to be hostile to its customers. Don't complain that google didn't save your bacon. You could have bought a Google Nexus 7, which is still getting updates, though the latest makes the old ones too slow to use. (In fact they did save your bacon, because you could just root your device to install CyanogenMod. Except that it appears that Verizon patched the hole that was being used to root it! Wow, that's hostile.)
In the case of Windows, you probably purchased your machine from someone like Dell (not Comcast, which would be the closest analog of Verizon in the PC world), and it at least purported to have software from a separate vendor, Microsoft. Verizon, by locking the bootloader, actively prevents you from using system software from another vendor.
I think typically each file would be encrypted with a separate symmetric key. Then you can choose who is able to decrypt it by attaching a header containing that key encrypted to each recipient's public key. Then all you have to do to revoke someone is remove one of the encrypted keys, not re-upload the whole file.
As far as I know, asymmetric encryption is never used the way you describe in practice. It is too slow. It is used to encrypt a key for a symmetric cipher, which is then used to encrypt the actual data. And that "combining your private key and their public key" statement is nonsense. Your private key is useless for securing information originating from you, since your public key is, well, public. It is useful for authenticating that information came from you, which is independent of the recipient.
This is all setting aside the fact that once a party has access to some data, "revoking" that access has a sort of squishy meaning, because they can just keep a copy of whatever they retrieved before.