The problem is that requiring HTTPS doesn't make sites more secure. It prevents an attacker who can't obtain a legitimate SSL certificate for the domain from running a mid-transit MITM attack, nothing more. The biggest problems seem to be a) phishing attacks that convince the user to visit a rogue site, eliminating the need for a MITM at all, b) local system compromises (client- or server-side) that have access to the cleartext traffic and don't need a MITM, and c) rogue CAs that issue certificates to parties not authorized for the domain, which allows a mid-transit MITM even with HTTPS. The first two can't be mitigated by anything other than smarter users (HAH!), and mitigating the third requires massive changes to certificates so it's possible to determine whether a certificate belongs to a given site without depending on anything in the certificate itself and without depending on the CA having validated the recipient.
Doesn't that depend on the configuration and purpose? If the HTTP server's running on my own machine and the URL is "http://localhost/...", am I automatically insecure because I can't get an SSL certificate for "localhost"? And how would an attacker not already on my machine exploit this?
If I can't test the full capabilities of a Web site because the browser won't let me, I'm going to have to switch browsers and relegate Firefox to testing-only just like IE is currently.
So, how exactly do they propose to recover from a compromise of these kinds of systems where it's impossible to change the authentication data? And these systems will be compromised, history has taught us that. At least with a password or a certificate carried in a two-factor dongle I can change/reissue it and what the crooks have is no longer valid. I don't like systems whose failure mode in the event of a compromise is catastrophic.
The article misses one point in its analogy to paying for promotion: who's being paid. When I pay a store for special placement, I'm paying the store for special placement of my stuff on its shelves. That's fine; they're the store's shelves and the store is free to handle them however it chooses. But suppose that, instead of placement on the store's shelves, I'm paying the store for special placement in the customer's pantry? Once I pay the store, they'll send people to customers' homes to put my products front and center in the customers' pantries even if the customers didn't buy them. And if that leaves a customer without enough space for what they did buy, tough luck: what the store put there is locked down so only the store can move it, and they won't. That's not fine. They're not the store's shelves, and nobody's paying the customer for special placement on theirs.
Ah, but the argument might be that it's not the customer's line, it belongs to the ISP. If so, then exactly what is that bill the customer's being sent every month for? We already have situations like this. If I'm renting an apartment, the landlord still holds the title, but it's my apartment as long as I'm paying the rent, and the landlord isn't free to do anything to it he pleases any time he pleases. If I'm making payments on a car loan, the bank holds title to the car, but it's still my car; as long as I'm making the payments the bank can't just come in and borrow it any time they please, or have it repainted to a color they like, or anything like that. In the same way, the customer's paying for Internet access, and as long as they pay the bill every month it's their Internet access; the ISP doesn't have an unrestricted right to decide how chunks of it must be used. The exception is when, as with the boxes that disable a car if payments aren't made on time, it's made completely clear up front that this is being done and why, and it serves a reasonable purpose. Using that box after a payment has been missed is one thing, but if the finance company tried to claim a right to use it when they merely think a payment might be missed soon, even though payments are still current, the courts would reject that as unreasonable even if the contract tried to allow it.
It may work out for candidates, though. Right now the company tends to start low and let the candidate name a higher figure, then go back and forth ending up somewhere in the middle. If their initial offer's too low the candidate will just name something higher, and unless the candidate's really cocky the company stands a good chance of getting them for less than they were willing to offer. With no negotiation the company knows there may well be competing offers out there so if they make their offer too low the candidate, knowing they can't negotiate, will probably walk away. Where before the company had an incentive to low-ball the offer and negotiate up, now they have an incentive to offer the most they'd be willing to pay this candidate to minimize the chance of losing the candidate to a competing offer.
NB: this is also why companies try to get the candidate to give an expected salary first, knowing that that sets an upper limit and the candidate is caught between asking for as much as possible and keeping the salary down so the company doesn't decide it's more than they'll consider.
I'd rather vendors worked the same way, give me their best price and I'll tell them whether it's within my budget or not. But then I'm a tech, not a salesman, I prefer to minimize the rigamarole so I can get back to doing productive work.
Well, we already have seamless transfer of public keys. That's the whole point of the PGP keyservers, after all. As for revocation, your argument fails to take compromises into account. The ability to revoke a key is what lets me handle the case where someone's broken into my computer and gotten hold of my private key. If I couldn't revoke my key, they could impersonate me forever using the stolen private key. Expiration serves a similar purpose, limiting the timeframe when a stolen key could be useful even absent a revocation. Properly done, expiration is handled before it happens by distributing a new key signed both by itself and by the old key. Since the attacker doesn't have the old private key (if he did, it would already have been revoked), he can't forge the old signature, and if both the old and new signatures are valid the new key can't have been created by an attacker and is clean. Both expiration and revocation become even more critical when I'm dealing with people I don't know directly, and let's face it, we very rarely communicate only with a small circle of people we know personally.
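The rollover scheme described above can be modeled as a small trust check. This is a toy sketch, not real cryptography: `Key`, `rollover_ok`, and the signature representation are all invented for illustration, with a "signature" reduced to a record of which key ID appears as a signer on the new key.

```python
from dataclasses import dataclass, field

@dataclass
class Key:
    key_id: str
    revoked: bool = False
    # key_ids whose signatures appear on this key (toy stand-in for real sigs)
    signed_by: set = field(default_factory=set)

def rollover_ok(old: Key, new: Key) -> bool:
    """Accept `new` as the successor of `old` only if it carries both a
    self-signature and a signature from the old key, and the old key was
    never revoked (a revoked key can no longer vouch for anything)."""
    if old.revoked:
        return False
    return new.key_id in new.signed_by and old.key_id in new.signed_by

old = Key("OLDKEY", signed_by={"OLDKEY"})
new = Key("NEWKEY", signed_by={"NEWKEY", "OLDKEY"})
print(rollover_ok(old, new))     # True: cross-signed by the old key

forged = Key("EVILKEY", signed_by={"EVILKEY"})
print(rollover_ok(old, forged))  # False: no old-key signature, so an
                                 # attacker without the old key loses
```

The point of the check is exactly the argument above: an attacker who never held the old private key can't produce the old-key signature, so a self-signed impostor key fails the rollover test.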
And no, the CA system isn't inherently less vulnerable than self-signing alone. Self-signing without some additional authentication leaves you trusting the word of a malicious party about their identity, and they're highly unlikely to tell you the truth about that. That's why a self-signed PGP key by itself can't be trusted (unless you got it directly from its owner over a secure channel); you need additional signatures from trusted parties to affirm its authenticity. The problem is that the certificate system itself only permits one signature on a certificate/key. PGP had it right by permitting an arbitrary number of signatures on a key. If I require at least 3 different root CAs to vouch for a certificate, it becomes much, much harder for any party to compromise things. In part that's because it takes more effort to compromise 3 root CAs, but it's also because it makes revoking a root CA certificate much less of a problem. Right now revoking a root CA certificate instantly invalidates every single certificate issued by that CA. Allowing multiple signatures would mean it only invalidates those certificates where that CA was the last remaining trusted CA signing the certificate. So if my certificate were signed by Equifax, Experian and Verisign and it was found Verisign had given their root key to the government, my certificate would still be valid after Verisign's root certificate was forcibly untrusted, because I've still got 2 trusted CAs vouching for it. I'd only be in trouble if Equifax and Experian had both already had their root certificates untrusted and I'd failed to get additional signatures from other CAs before Verisign went.
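The multiple-signature rule above boils down to a set intersection. This is a hypothetical sketch: real X.509 has no such mechanism, and `cert_trusted` and the CA names are just illustrations of the idea.

```python
def cert_trusted(cert_signers: set, trusted_cas: set, threshold: int = 1) -> bool:
    """Hypothetical multi-signature rule: a certificate stays valid as
    long as at least `threshold` of the CAs that signed it are still
    trusted (threshold=1 means it only dies with its last trusted CA)."""
    return len(cert_signers & trusted_cas) >= threshold

trusted = {"Equifax", "Experian", "Verisign"}
my_cert_signers = {"Equifax", "Experian", "Verisign"}

# At issuance I required three vouching CAs:
print(cert_trusted(my_cert_signers, trusted, threshold=3))  # True

# Verisign's root gets forcibly untrusted:
trusted.discard("Verisign")

# The certificate survives, since two trusted CAs still vouch for it:
print(cert_trusted(my_cert_signers, trusted))  # True
```

The asymmetry is deliberate: a high bar (3 signers) when first accepting the certificate, but revocation of one CA only kills certificates for which that CA was the last trusted signer.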
This is what certificate pinning was made for. If the browser knows what certificates the site ought to be using, it can simply refuse to connect to anything in the site's domain that isn't using one of those expected certificates. This doesn't even require CA-issued certificates; self-signed ones would be equally secure, except that browsers complain about them. Note that this is just a slightly more permissive form of the server authentication built into the SSL protocol.
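A minimal sketch of what that pin check amounts to. All names here are illustrative and the certificate bytes are fake; the fingerprint-set approach is one common way pinning is done, not the only one.

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    # SHA-256 over the raw certificate bytes, a common way to identify a cert
    return hashlib.sha256(cert_der).hexdigest()

def connection_allowed(cert_der: bytes, pinned: set) -> bool:
    """Refuse any certificate whose fingerprint isn't in the pinned set.
    Works identically for CA-issued and self-signed certificates."""
    return fingerprint(cert_der) in pinned

# Pin recorded from a known-good visit (fake bytes for the example):
site_cert = b"...DER bytes of example.com's real certificate..."
pins = {fingerprint(site_cert)}

print(connection_allowed(site_cert, pins))              # True
print(connection_allowed(b"MITM's substitute cert", pins))  # False
```

Since the check compares against fingerprints the client already holds, a rogue CA issuing a "valid" certificate for the domain buys the attacker nothing: the substitute cert's fingerprint isn't pinned.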
You could; that's essentially doing what they do to get distilled liquors, from the other side (distillation takes the alcohol and removes water, you're taking the water and adding alcohol). The problem is that above a certain concentration alcohol starts absorbing water from the air. That's one of the two reasons it's so hard to get pure alcohol for use as a laboratory solvent (the other being that ethanol and water form an azeotrope, so ordinary distillation tops out around 95-96%). You could use Palcohol to mix concentrated alcohol, but frankly it'll be easier and cheaper to buy stronger stuff ready-made from your neighborhood liquor store. You can get 95% ABV neutral spirits under trade names such as Everclear, Gem Clear and Golden Grain Alcohol, and that's more concentrated than anything you could mix from Palcohol without a lab and a strong background in organic chemistry.
How about making part of the browser installation a check for whether DNT's been set one way or the other, and if it hasn't then prompt the user for how they want it set? It's one dialog during the first installation with a track/do-not-track answer (with no default button so just pressing Enter without thinking won't do anything), and then there's no ambiguity whatsoever about whether the DNT status is the user's choice or not.
My plan for billing data is to put the whole thing on a separate off-line system dedicated to the job. The customer-facing system for updating billing information won't have complete information; credit-card numbers and such will be masked (assuming we need them at all, since as much as possible I plan to offload that to services that do payments for a living). Customer updates will be split: the masked data will update the customer-facing system, while the complete copy will go through a write-only interface to the back-end systems after being encrypted with a solid public-key system, so any encryption keys an attacker could get hold of on the front end won't permit decryption even if they intercept the change data in transit.
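The masking step for the customer-facing copy might look something like this. The format, and keeping only the last four digits, are my assumptions for illustration, not part of the plan above.

```python
def mask_card_number(pan: str) -> str:
    """Return the card number with everything but the last four digits
    masked, suitable for storage on the customer-facing system."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1234"))  # ************1234
```

The key property is that the full number never needs to exist on the customer-facing side at all: the mask is computed once at intake, and only the masked form is stored there.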
Out front, the database will be handled by a back-end Web service, so the front-ends that handle Web browser requests won't have direct access to the database. All requests get session-authenticated, so a compromise of a front end doesn't give unlimited access to the back end. And the whole system's designed so any front-end or back-end node that's compromised can be instantly killed off without causing problems for the overall system. If I can get the time to re-image a node from a clean image low enough, I should be able to buy enough time to identify the attack and block it by blowing the attackers' footholds out from underneath them. Engineer the interfaces so I can do security updates to the nodes on the fly without disrupting things, and all that should make life utterly miserable for anyone trying to get access to the data. DoS is a separate matter, but there are solutions for that I can use too.
"I'm a denizen of a.s.r, of course I'm paranoid. The question is am I paranoid enough?"
That's why I said "or a replacement". At the least, the ISS can serve as a construction shack while assembling that replacement, and as a source of parts and refined/processed raw materials to expand its replacement. The replacement may never even be truly separate: it may start as new modules attached to the ISS, and once those new modules have enough space the original ISS modules would be disconnected and cannibalized.
And the ISS will help how, exactly? The entire ISS came from the Earth's surface. Unless you have a really fancy plan to do asteroid/lunar mining, that's where all future materials will ultimately come from too.
Yep, it did. And yep, we will need asteroid or lunar mining of some sort to get raw materials. Like I said, we can't sustain orbital manufacturing and construction while lifting the majority of the materials and supplies from the surface, which means we'd better stop dismissing lunar and asteroid mining and such as sci-fi dreams and start figuring out how to make them work. As far as the ISS, it helps because it's there. A city doesn't just appear full-blown, and neither does orbital infrastructure. The ISS is a structure already in orbit you can expand to house more people, so that your workforce for the next step can have a place to stay in orbit rather than commuting to and from the surface all the time. It may be in low orbit, but the biggest fuel cost and the biggest constraints on weight and size aren't in getting from low orbit to high orbit, they're in getting from the surface to low orbit. And ultimately it'll end up being recycled into raw materials or basic parts for something else once it's no longer needed (for instance if the solar panels are still in working order they can be disconnected and attached to something else that needs more power capacity).
No, it's not going to be easy or simple. Colonizing North America wasn't easy or simple either, but we did it. And as for Star Trek having ruined a generation's sanity, all it did was encourage them to set a goal and then figure out how to go about getting there. Though I'll admit that attitude does seem kind of insane to the couch potatoes. Not really my problem though, my entire career my motto's been "They don't pay me to not get the job done." and the older I get the less reason I see to change it.
If the US wants to go to Mars for more than a single short mission, it's going to need the ISS or a replacement. We'll need to be able to build ships in orbit so they aren't limited by the constraints of the first hundred or so miles of the trip (lifting the ship up from the surface to Earth orbit), that's the only way we'll be able to build them large enough for the crew, supplies and equipment needed for a mission of more than a week or two. And if we want this to be a sustained thing, sending more than a couple-three missions, we're going to need to be able to build ships without shipping the majority of their components up from surface.
We can already see the parallels from large historical construction projects in the US. For Hoover Dam they didn't ship the concrete in from the nearest cities and they didn't have the workers commuting between the dam site and those cities. They set up the cement plant on-site to make the concrete from local materials and a town sprang up at the site to house and supply the workforce. For resources (silver, gold, timber, cattle, oil, etc.) it's worked the same way, people moved to where they were needed and the facilities and infrastructure to house, support and feed those people grew with the population. Because frankly you just can't run an oil field in Texas with all your workers and suppliers back in New Orleans.
The major problem with C++ is that its popularity means there's more crap code written in it by bad programmers than in any other language. But, to borrow from a quote, a bad programmer can write bad C++ in any language. I've had plenty of experience with bad programmers and bad code, and the problems rarely stemmed from the language used. They usually stem from the programmer not understanding the language or the environment, and from an all-too-common mule-headed desire to design their part of the software the way they want it to work rather than in a way that fits with the rest of the software. Languages where this isn't a problem are typically new enough that only one "right way" to do things has ever been taught. C++ is old enough that a variety of approaches have built up over time, leading to the problem.
As for C++ being so popular, that's because well-written C++ can beat most other languages in performance. I've learned one thing over the decades: good engineering matters a great deal to a developer, but from the business side it's irrelevant. Business cares that the software gets the correct results and runs fast enough. It could be the worst Rube-Goldbergesque contraption under the hood, but as long as it gave the right results and performed like a Formula 1 car they'd be ecstatic. C++ makes it easy to achieve that in the complex software common in commercial environments.