The certificate system is badly broken on a couple of levels. Most obvious and relevant to the OP is that there are 650 root CAs that can issue certs, including some state-run CAs operated by governments with potentially conflicting political interests or poor human rights records.
It is useful to think about what we use SSL certs for:
1) Establishing an encrypted link between our network client and a remote server to foil eavesdropping and surveillance.
2) Verifying that the remote server is who we believe it to be.
Problem 1 is by far the most important, so much more important than number 2 that number 2 is almost irrelevant, and fundamental flaws with feature 2 in the current CA system make even trying to enforce verification almost pointless. Most users have no idea what SSL verification actually means or what any of the cryptic (no pun intended) and increasingly annoying alerts warning of "unvalidated certs" mean anyway.
What I find most annoying is that the extraordinary protective value of SSL encrypted communication is systematically undermined by browsers like Firefox in an intrinsically useless effort to convince users to care about verification. I have never, not once, ever not clicked through the warnings on a web site to access it. And even though I often access web sites from areas that are suspected of occasionally attempting to infiltrate dissident organizations with MITM attacks, I still have yet to see a legit MITM attack in the wild myself. But I do know for sure that without SSL encryption my passwords would be compromised - how many of us get spam from friends with Yahoo accounts? Yahoo still does not SSL encrypt login by default and so accounts are regularly compromised by spammers. Encryption really matters and is really important to keeping communication secure. Anything that adds friction to encryption should be rejected.
Self-signed certs and community certs (like CACert.com) should be accepted without any warnings that might slow down a user at all so that every website, even non-commercial or personal ones have no disincentive to adding encryption. HTTPSEverywhere. Routers should be configured to block non-SSL traffic (and HTML email, but that's another rant. Get off my lawn.).
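The distinction between the two features can be made concrete in a few lines of Python: a client-side TLS context that still negotiates full encryption but skips CA verification, which is exactly what accepting a self-signed cert amounts to. This is a sketch of the trade-off being argued for, not a recommendation for sites where identity actually matters; the function name is mine.

```python
import ssl

# Build a client TLS context that still encrypts the channel end-to-end
# but does not require the server's cert to chain to one of the ~650
# trusted root CAs -- i.e., it will happily accept a self-signed cert.
# Feature 1 (encryption) is preserved; feature 2 (verification) is dropped.
def encrypted_but_unverified_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # skip hostname verification
    ctx.verify_mode = ssl.CERT_NONE  # skip CA-chain verification
    return ctx

ctx = encrypted_but_unverified_context()
```

Wrapping a socket with this context still gives you ciphersuite negotiation and an encrypted stream; the only thing lost is the (arguably near-worthless) identity check.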
Verification is unsolvable with SSL certs for a couple of reasons: some due to the current model, some due to reasonable human behavior, some due to relatively legitimate law-enforcement concerns:
Obviously the OP makes clear that the current model is badly broken because the vast majority of issuing companies have every reason to minimize the cost of providing a cert, which means cutting operational costs and increasing the risk of human error. Even at a well-run notary, human error is likely to occur, especially as notaries in different countries, speaking different languages, can issue certs for companies in any other location. Certificate issuance by commercial entities is fail. Because registrar certs are trusted by default, a single error can compromise anyone in the world. One mistake, everybody is at risk. Pinning does not actually reduce this risk in advance, though rapid response to discovered breaches can limit the damage.
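Pinning, mentioned above, amounts to nothing more than comparing a hash of the server's DER-encoded certificate against a value obtained out of band earlier; a minimal sketch (the helper name and the idea of a locally stored pin are my assumptions):

```python
import hashlib

# Compare the SHA-256 fingerprint of a server's DER-encoded certificate
# against a pin recorded earlier, out of band. A mismatch means either a
# legitimate cert rotation or a MITM -- pinning can't tell you which,
# which is why it limits damage after a breach is discovered rather than
# preventing mis-issuance in advance.
def pin_matches(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    return fingerprint == pinned_sha256_hex.lower()
```

In practice the DER bytes would come from `ssl.SSLSocket.getpeercert(binary_form=True)` on a live connection.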
But even if issuance were fixed, it wouldn't necessarily help. Most people would happily click through to www.bankomerica.com without thinking twice. Indeed, since companies may have purchased almost every spelling variation and pointed them all toward their "most reasonable" domain name, it isn't unreasonable to do so. If bankomerica.com asked for a cert in Tashkent, would they (or even should they) be denied? No: green bar, wrong site. And it isn't practical to typo-test every legit URL against every possible fake; the vast majority of users would never even notice if their usual bank site came up unencrypted (no cert at all). This user-behavior limitation fundamentally obviates the value of certs for identifying sites. But even typo-misdirection is assuming too much: all of my phishing spam uses brand names in anchor text leading to completely random URLs, rarely even reflective of the cover story, and the volume suggests this is a perfectly viable attack. This user problem is mostly an issue for average users and below, but (hopefully) less so for dissidents or political activists in democracy-challenged environments that may be subject to MITM attacks, because (one hopes) they might actually pay attention to cert errors or use Perspectives or Crossbear. User education can help, but in the end you can't really solve the stupid user problem.
If people will send bank details to Nigeria to assist in the transfer of millions to help a nationally abandoned astronaut expatriate his back pay, there is no way to educate them on the difference between https://www.bankofamerica.com/ and http://www.bankomerica.com/. The only viable solution is distributed trust as implemented by GPG (explicit chain of trust) or Perspectives (wisdom of the masses); both of these seem infinitely more reliable than trusting any certificate registry, whether national or commercial, and both escape the cert mafia by obviating the need for a central authority and the overhead it entails.
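The bankomerica case can even be caught mechanically with stdlib string similarity; a toy sketch (the 0.8 threshold and the function name are arbitrary assumptions, and no browser works this way today):

```python
from difflib import SequenceMatcher

# Flag a domain that is suspiciously similar to -- but not the same as --
# a known legitimate domain, e.g. a one-letter typo squat. Near-identical
# strings score close to 1.0; unrelated domains score much lower.
def is_lookalike(candidate: str, legit: str, threshold: float = 0.8) -> bool:
    if candidate == legit:
        return False  # the real site is not a lookalike of itself
    return SequenceMatcher(None, candidate, legit).ratio() >= threshold
```

Here `is_lookalike("bankomerica.com", "bankofamerica.com")` trips the threshold while an unrelated domain does not; of course this only works against the typo-squat case, not the random-URL-in-anchortext case described above.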
Further, law enforcement makes plausible arguments for requiring invisible access to communication. Ignoring the understandable preference for push-button access without review, and presuming that sufficient legal barriers are in place to ensure such capabilities protect the innocent and are only used for good, it is not rational to believe that law enforcement will elect to give up on demanding lawful intercept capabilities. Such intercept is currently enabled by law enforcement certificates which permit authorized MITM attacks to capture encrypted data without tipping off the target of the investigation. Of course, if the US has the tool, every other country wants it too. Sooner or later, even with the best vetting, there is a regime change and control of such tools falls into nefarious hands (much like any data you entrust to a cloud service will sooner or later be sold off in an asset auction to whoever can scrape some residual value out of your data under whatever terms they like, but that too is a different rant). Thus it is not reasonable for activists in democracy-challenged environments to assume that SSL certs are a secure way to ensure their data is not being read. Changing the model from intrinsic, automatic trust of authority to a web-of-trust model would substantially mitigate the risk of lawful intercept certs falling into the wrong hands, by making such certs useless or far harder to implement (LE would have to go to specific sites to get either a cert copy or to directly gather decrypted traffic, which would tend to favor US-based LE over foreign entities that might have a harder time convincing a US-based company to give up user data, though big cloud players with an international presence don't have a choice about this).
There is no perfect answer to verification because remote authentication is Really Hard. You have to trust someone, and the current model is to trust all or most of the random, faceless, profit- or nefarious-motive-driven certificate authorities. Where verification cannot be quickly made and is essential to security, out-of-band verification is the only effective mechanism. Sadly, the effort to prop up verification has come at the compromise of encryption, most recently Gmail rejecting self-signed certs for POP. That's insanely stupid. False security is being promoted at the expense of real security.
That's the thing about clouds, they're always changing. If you want consistent, reliable webmail, run roundcube on your own server and stop gifting Google your data.
If you want to retain ownership of your data, host it on your own server.
My data centers are all so small they'd be lost in the caverns of the likes of Google or FB, but in applications where ownership of the data is important (and this should apply to sovereign governments, most companies, and even most small businesses), availing oneself of an external data hosting or processing service is giving away the farm.
There are a variety of security concerns unique to the "cloud" environment that should worry anyone who has some liability or risk associated with unintended exposure of their data: from other users of the same physical hardware, from the typically faceless third-party employees operating it, and from joining a collective target. A counterargument is that cloud vendors tend to be expert at security and are likely to have more resources to stay current and be vigilant than any single client of theirs, simply as a matter of scale. But, as Dropbox's password fiasco proves, this assumption is not always true. Or, perhaps more accurately, a statistical reduction in the likelihood of execution risk is not an elimination of that risk, and the consequences of false assumptions can be severe when the failure is of a central repository.
One is safe in using a "cloud" service (such a fluffy marketing term for "third party hosted IT") for data that is intrinsically public, such as this forum or a Facebook post. For a company's HR database, not so much. For a government to have a "cloud computing strategy to lower costs" is very sad. The OP references a statistic that is driven in large part by Google and Facebook, services of such massive scale that vertical integration into the hardware makes sense. It is not intrinsically a refutation of owned and operated hardware. That these vertical integrations have grown to such scale as to rank as major hardware vendors in their own right is impressive, but not in and of itself a "tide" against enterprise hardware. That the vendors of enterprise hardware would seek to own a piece of the emerging market for low cost, low atomic reliability (mitigated by macro reliability) compute systems isn't an abdication of more proven product lines, rather a reasonable foray into new product lines.
The OP finds the data supportive of a popular meme: that cloud computing will replace enterprise computing. This may be true if Zuckerberg's "no privacy" jihad is extended to "no secrets" as well, but as long as companies and governments have secrets and people value privacy, there will be a market for owned and operated hardware since he that owns the hardware owns the data (and when you host your data with a third party, you implicitly trust every employee there). While it is in theory possible to secure remotely hosted data through encryption (and perhaps even to allow remote processing of fully encrypted data), the overhead of securing one's secrets against the third party's prying eyes (and those of their other customers) significantly undermines any touted (but generally unproven) cost savings of "cloud computing."
I use GPG/OpenPGP for some mail and "secure" web mail for other applications. I do not use third-party web mail (such as Gmail) because I can't control the dissemination or privacy (or longevity) of my mail, and while my life is generally boring enough to fit within Eric Schmidt's idea of privacy ("If you have something that you don't want anyone [someone] to know, maybe you shouldn't be doing it in the first place [at least not through a Google property]."), I occasionally write a personal opinion of someone I wouldn't want them to be able to Google later, or share a business detail that could be economically damaging or embarrassing (or is subject to NDA), and Gmail and all other web mail services are effectively public.
I've used PGP (and eventually GPG) since about '94 and my keyring has about 20 people on it: more than 1 new key a year! Alas, 25% of those keys expired in the late 90s. My address book has about 1500 entries. Why so few keys? As the OP pointed out, it isn't all that difficult.
The answer for me is that the model for encouraging encryption has to be more like S/WAN than like GPG. I'd love to turn on "encrypt everything" and forget it, but I'd get an error message for 99% of my correspondents, so obviously that isn't going to happen. So I set my prefs to reply to encrypted messages with encryption, which is fine, but it means I rarely (almost never) initiate an encrypted thread.
What I'd like is an opportunistic encryption mode where any message to an address in my keyring is encrypted by default. Any message to anyone I don't have a key for gets a nice little sig-file note evangelizing encryption instead.
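The opportunistic mode described above is really just a per-recipient branch at send time; a minimal sketch, where the keyring is modeled as a plain set of addresses and the evangelizing footer text is my invention:

```python
# Opportunistic-encryption dispatch: encrypt to anyone whose public key
# we hold; otherwise send plaintext with a short note evangelizing
# encryption. `keyring` is modeled as the set of addresses with known keys.
EVANGELIZE = "\n--\nThis mail could have been encrypted. Ask me about GPG."

def prepare_outgoing(recipient: str, body: str, keyring: set) -> dict:
    if recipient in keyring:
        return {"to": recipient, "body": body, "encrypt": True}
    return {"to": recipient, "body": body + EVANGELIZE, "encrypt": False}
```

A mail client implementing this would hand the `encrypt: True` messages to its GPG backend and send the rest as-is; no correspondent ever sees an error, which is the whole point.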
One annoying problem is that encrypted mail is not searchable. To solve that, I want my client to extract a keyword list on decryption then upload that keyword list to (my own) server as an unencrypted header to enable searching (implemented, of course, with a stop list for words you wouldn't want to appear in the clear even out of context or perhaps particularly out of context).
For the truly paranoid, this list could be a hash list, though an attacker could still fairly effectively fish for keywords by hashing a dictionary; it would nonetheless provide some security and reduce the easy availability of information. In fact, all headers could be hashed and still generally be searchable (except maybe date ranges).
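The hashed-keyword index sketched above gets meaningfully stronger if the hash is keyed: with an HMAC under a per-user secret, an attacker who steals the index can't run the dictionary-hashing attack without also stealing the key. A sketch, where the stop list, key handling, and function names are illustrative assumptions:

```python
import hmac
import hashlib

STOP = {"the", "a", "and", "password"}  # words never indexed, even hashed

# Build a searchable index over a decrypted message: each surviving
# keyword is replaced by its HMAC-SHA256 under a per-user secret key, so
# the server stores only opaque tokens. Searching means hashing the query
# word with the same key and looking the token up.
def index_tokens(text: str, key: bytes) -> set:
    words = {w.lower().strip(".,") for w in text.split()}
    return {hmac.new(key, w.encode(), hashlib.sha256).hexdigest()
            for w in words - STOP}

def search_token(query: str, key: bytes) -> str:
    return hmac.new(key, query.lower().encode(), hashlib.sha256).hexdigest()
```

The stop list still matters even with keyed hashing: a word you never want associated with a message, in the clear or otherwise, should simply never be indexed.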
I also want my server to store my public key and encrypt all incoming mail with it. Of course it is already transported in the clear, but it makes my server less vulnerable. Once the mail has had an index extracted and the body encrypted, someone cracking into my IMAP server would, at least, not find a historical trove of clear-text data. And my friends without keys would get annoying sig files evangelizing encryption.
If you put your data in the cloud, you put it in the hands of not just the US government, but every government the cloud company does business with. And also in the hands of every underpaid employee in the company; and while some companies may claim otherwise, their claims are unverifiable and unenforceable. "Cloud" services have their place - it is for data that is intrinsically public and ephemeral. Nobody should ever trust any cloud service with data that is proprietary or private or irreplaceable.
Most obviously, the "free" services are predicated on exploiting the value of their users as product to customers that are not the users. The model makes sense in some cases, for example a forum, where the shared public content is willingly co-produced by users of the forum, exchanging their content-creation efforts for use of the forum itself, the forum exploiting that content to attract eyeballs to advertisers that pay the bills.
While there are strong logical reasons why cloud services are intrinsically untrustable (ultimately, he who owns the hardware owns the data), a simple thought experiment proves the folly: how hard is it to bribe an employee of a cloud service to give you inappropriate access to someone's data? Do you think you couldn't find one employee in one company somewhere? While one may be able to find companies that are currently resistant to easy attacks, cloud companies come and go like the clouds they're named for.
At best, the loss of yet another fleeting cloud service means only the loss of the associated data, plus whatever codependent business line the customer bet on it: a serial risk stacked on the success of the cloud company itself.
The premise of handing your proprietary data to another person for remote, invisible processing and care is fundamentally flawed. Your interests are not aligned and their interests will evolve and ultimately diverge or fail.
Foreign companies (and US as well) are well advised to be wary of cloud services.
I've carried a Tom Bihn Brain Bag with their laptop sleeves in it since 2007. I carried two laptops with it (T60 + Dell M40) for a long time, now a W500 + Sony NX5 + accessories and a crapload of other goodies. It has been on about 500, maybe 600 flights with me, well over 1,000,000 air miles, was strapped to a pallet in a CJ in Afghanistan (and offloaded at the wrong FOB, where it spent the night and finally got back to me awfully dusty), bounced around Iraq, and accompanied me to other difficult, sometimes less than gracious environments without any failures. The zippers are tight and, with an occasional NikWax, have kept the contents dust-free and dry.
My only complaint is that the Freudian Slip doesn't organize enough stuff. I wanted to make a rigid MOLLE-style insert for the front pocket to strap sacks of cables and crap to and keep organized, and I still keep an eye out for semi-rigid containers for delicate things, but so far nothing has been smashed inside, the straps and zippers work like new, and there's no real fraying. The waist strap on mine has been a vestigial annoyance, but newer models have removable ones.
The only system failures are that the sternum straps disappeared one by one, but my GF has a later edition of the same bag and gave me hers since they interfere with her anatomy and the updated ones work better, no problems since. She's had hers for almost as long and almost as many miles and pretty much the same difficult travel schedule with no problems at all.
If it ever fails, I'll get another. It would be really cool if they had a ballistic Spectra option, and it would be very cool if there were an easy option to lock the zippers.
Nobody would ever, ever put an explosive device or weapon on a child if we decided that children were too precious to scan.
So there could not possibly be a problem with systematically allowing a certain class of people through security unscanned.
So, anybody help you find anything yet?
I only know of BRL-CAD that would be suitable for defining geometry that you could actually fabricate (as opposed to geometry for pretty pictures).
It has hit