I don't get it, it looks like a horrible scooter, a horrible briefcase, and a horrible travel case. It probably weighs a ton to carry, and it costs $6,000?
This looks like an idea better left in a cartoon. It's not compatible with the real world.
the majority of police are good people with a genuine desire to do good in the world
So are the majority of "criminals".
My apologies, the second to last paragraph should read "in order to use SPDY or HTTP2 even for "HTTP" requests"...
The extra "HTTPS" is nonsensical in this context and should not be there.
This is the same question as what to do with "HTTP" (not HTTPS) requests when transported over HTTP2 (which is supposed to be all TLS) and SPDY (which is already all TLS, and which HTTP2 is based on). Usually it's framed in the context of "do we need to authenticate and verify TLS certificates when the user didn't originally request HTTPS?"
Some people are of the opinion that "TLS is TLS, and if you can't 100% trust it, there's no point." And I can see the logic in that. Obviously that should always be the case when you've explicitly requested an HTTPS connection, and ideally, at some point in the future, it would be nice for that to be true of all network connections, all the time.
But when you step back, you have to realize that those connections are currently completely unencrypted and untrusted - they're HTTP, not HTTPS. And the march to encryption is slow. The majority of websites have no TLS capability at all, maybe as many as 20% of the remainder are self-signed, and quite a lot of the rest may have certs which don't match the domain being requested. (The same is no doubt true of apps, mobile or otherwise.) And the latter problem, particularly, is quite difficult to solve for technical reasons in a lot of cases critical to the orderly and economical operation of the internet, such as CDNs.
This goes beyond the usual lament that sites will need to pay $100+ per year to get a cert - that's not really the problem, though from my experience most site owners will have to be dragged kicking and screaming before they bother to install a cert and get HTTPS running properly. Even if a cert is installed, most of them want to redirect back to HTTP at any opportunity.
Besides performance, cost, and administrative hassle, the big problem is the royal pain that it can be to take care of all the issues of trusted certs across hosting providers, CDNs, lead generation partners, etc. That's because in a lot of cases, those providers are hosting assets under a variety of domains - sometimes hundreds or thousands of domains - on single shared servers (or many copies of shared servers), each with a single IP address shared among the various domains. It's shared hosting all over again, this time writ large across global CDNs and the like. Even with your own hosting provider, you might face the same problem on development and staging environments even if not on production, making testing difficult. And while they're working on the problem, so far HTTPS does not play well with shared hosting. (On top of that, a lot of ad networks don't support HTTPS at all, so they introduce the mixed content problem into your pages. If your site depends on ads, you might not be able to serve them over HTTPS connections, which is why some sites offer HTTPS only to paying customers.)
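To make the shared-hosting problem concrete, here's a minimal sketch of the hostname check a browser performs against a certificate's names. All the domain names below are hypothetical, and `fnmatch` is looser than real TLS wildcard rules (a real wildcard covers only a single label), but it's close enough to show why one cert on a shared IP breaks validation for every other domain served from that box.

```python
# Illustrative sketch, not the actual TLS stack: why shared hosting and CDNs
# break certificate validation when many domains sit behind one IP address.
import fnmatch

def hostname_matches(cert_sans, requested_host):
    """Mimic the browser's check: the requested hostname must match one of
    the certificate's subjectAltName entries. (Real TLS wildcard matching
    covers a single label only; fnmatch is looser, used here for brevity.)"""
    return any(fnmatch.fnmatch(requested_host, san) for san in cert_sans)

# A shared CDN edge presents one certificate for its own (hypothetical) domain...
shared_cert_sans = ["*.cdn-provider.example", "cdn-provider.example"]

# ...but serves assets for hundreds of customer domains from the same IP.
print(hostname_matches(shared_cert_sans, "edge1.cdn-provider.example"))  # True
print(hostname_matches(shared_cert_sans, "www.customer-site.example"))   # False: cert error
```

Every customer domain on that shared server would need its own validated cert presented on the same IP, which is exactly the part that shared hosting has historically not handled well.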
The whole idea of SPDY or HTTP2 being "TLS-only" is laudable, to gain opportunistic encryption even when the user didn't request HTTPS. But by so thoroughly breaking sites with mixed content or untrusted certificates (either expired or self-signed or for the wrong hostname or whatever), I'm of the opinion that all it's doing is delaying the adoption of TLS for websites. Rather than going "oh well, to get HTTP2, we'll have to fix this", most sites, faced with the hassle and resulting broken pages, will drag their heels adding HTTPS or enabling HTTP2, forcing downgrades to HTTP 1 for many years to come.
Encryption absolutists portray the question in simple terms: why would you not want to trust your encrypted connection? You'll be vulnerable to man-in-the-middle attacks, therefore connections should always be authenticated and verified. But the real question is: when users haven't specifically requested HTTPS, is it better for those connections to mostly remain COMPLETELY unencrypted and untrusted (which is even more susceptible to MITM), trusting only the few that are encrypted (even though the user can't see that they're encrypted or trusted)? Or for a larger proportion of them to be encrypted, but not necessarily always trusted in the face of potential MITM attacks? Considering that untrusted encrypted connections at least protect against PASSIVE surveillance, I think there's some merit to the latter argument.
Also, when you have something like SPDY, what are you going to do if there is a certificate error or mixed content? Most of the point of using SPDY is to speed things up, while opportunistically providing TLS in the background, without the user's knowledge; if the user didn't request HTTPS, they aren't expecting the connection to be secure, private, or trusted. On most sites they'd be getting regular HTTP, in some browsers even that site would be, and in general they'll never know which it is. So if there is a cert problem, what do you do? Break their webpage? That's just rude, stupid, and frankly, a broken protocol. The user didn't ask you to use SPDY or TLS, they just want to see the page. On a non-SPDY browser they'd just be getting HTTP, and nothing would be broken. Maybe decide you can't trust the connection, back the whole thing out, and try again over port 80? That's just as bad: first, you're going from partial privacy to a complete lack of privacy (and losing your pipelining and header compression in the process), so it's not like it's "better" to have done that; you're just making things worse. And second, you're slowing things down by having to stop partway through the TLS handshake (or maybe even later in loading the page) and go back to re-request the whole thing as HTTP, when the point was to speed things up. That's just plain broken. So really, the ONLY rational option when SPDY encounters mixed content or a certificate problem in the course of serving an "HTTP" request (not "HTTPS") is to just load the content anyway and not complain, even though it's trying to use TLS all the time.
It was doing the TLS opportunistically, invisibly, in the background, and without being asked to, and the alternative is completely unencrypted HTTP, so if it fails the right thing to do is say "oh well, I tried to opportunistically encrypt, but it's not perfect, I'll just continue even though it's not trusted" - NOT to moralistically tell the user they can't see their webpage, or to throw out the baby with the bathwater in a snit and tell them to try again over HTTP (which is exactly what they were originally requesting, but you've wasted their time in the meantime).
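The decision rule being argued for here can be sketched in a few lines. This is a hypothetical illustration of the policy, not any real browser's API: validation failures hard-fail only when the user explicitly asked for HTTPS; an opportunistic upgrade of a plain "http://" request just proceeds, because the baseline the user requested was cleartext anyway.

```python
# Hedged sketch of the fallback policy described above. The function name and
# return values are illustrative; no real browser exposes this interface.

def handle_opportunistic_tls(user_scheme, cert_valid):
    """Decide how to proceed after a TLS handshake for a given request."""
    if user_scheme == "https":
        # Explicit HTTPS: the user asked for a trusted connection,
        # so a validation failure must be a hard error.
        return "load" if cert_valid else "error"
    # Plain "http" that was upgraded behind the user's back: an
    # untrusted-but-encrypted channel still beats cleartext, and falling
    # back to port 80 throws away the handshake and slows everything down.
    return "load"

print(handle_opportunistic_tls("https", False))  # -> "error"
print(handle_opportunistic_tls("http", False))   # -> "load": don't break the page
```

Note the asymmetry: the policy never weakens an explicit HTTPS request; it only declines to punish a request that was never promised security in the first place.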
The problem being, by requiring sites to be 100% perfect in order to use SPDY or HTTP2 even for "HTTP" requests, many of them will choose to remain with unencrypted HTTP 1.1 instead, and how is that better? (And by the way, this choice often rests with the browser vendors, who may or may not choose to support protocol options for unvalidated TLS connections.)
Now what this means for "trusted" proxies is kind of an open question. In some cases I guess it could be a preferable alternative to not validating certificates at all or falling back to HTTP, so to the extent it avoids either of those scenarios, it might be a good thing. But since it won't necessarily solve some TLS certificate problems, I don't know if it will make much difference. Either browsers will support unvalidated TLS for background encryption of SPDY or HTTP2 "HTTP" connections (in which case there would be no need for trusted proxies at all), or else most sites might still resist HTTPS/SPDY/HTTP2 as long as possible, making it kind of irrelevant for them.
No, not at all, and I'm fairly sure Comcast has not been.
Previously, Netflix had to go through middlemen to get to Comcast (Cogent, as well as Level 3 and others). They already had to pay those middlemen, and the connections they were getting to Comcast were increasingly congested, probably because the transit providers didn't want to pay for peering even though they were sending far more traffic in one direction than the other, so the other end didn't want to invest in additional infrastructure to handle that increased one-way traffic. This is typical, has been standard practice for the life of the Internet, and has nothing to do with "Comcast vs Netflix" or "net neutrality", etc. Peering agreements are supposed to assume roughly equal traffic in both directions from both parties; otherwise, the party causing the imbalance is expected to pay.
Now, Netflix are paying Comcast directly to cut out the middleman and get better, less-congested, direct connections. This means they don't have to pay the other transit providers for the traffic they'll now be sending directly to Comcast, AND it seems their payments to Comcast will be less than what they were paying Cogent et al for the same bandwidth.
So for Netflix, this is win-win: they can cut their bandwidth bill AND get better performance and less congestion streaming movies to Comcast customers. What's the problem?
Net neutrality is a real concern, but this particular case is not an example of it.
Net neutrality is a real issue, but this is not an example of it; it's just Internet infrastructure working as it always has and as it's intended to.
Previously, Netflix did not have a direct peering arrangement with Comcast, so they paid Cogent and others for transit to Comcast.
Now, they have arranged to directly connect their network to Comcast (which was NOT the case before), and, since they are not supplying the roughly equal traffic in both directions typical of "no-pay" peering agreements, they have agreed to pay Comcast for this arrangement.
What they are paying Comcast for direct peering appears to be LESS than what they were paying Cogent et al previously for transit to Comcast... And they have a more direct, and presumably better performing, set of connections now.
This is a win-win for everyone, and has nothing to do with net neutrality. It's a simple arrangement to implement more direct and lower-cost traffic relaying.
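The "roughly equal traffic" condition above can be sketched as a simple ratio test. The 2:1 threshold here is a commonly cited example from published peering policies, not Comcast's actual figure, and the traffic numbers are made up purely for illustration.

```python
# Hypothetical sketch of a settlement-free peering test. Real peering
# policies vary; the threshold and traffic figures are illustrative only.

def settlement_free(sent_gbps, received_gbps, max_ratio=2.0):
    """Settlement-free peering typically requires the traffic in each
    direction to stay within some ratio; the heavier sender otherwise pays."""
    hi, lo = max(sent_gbps, received_gbps), min(sent_gbps, received_gbps)
    return lo > 0 and hi / lo <= max_ratio

print(settlement_free(10, 9))   # roughly balanced -> True
print(settlement_free(100, 3))  # heavily one-way video traffic -> False
```

A video streamer's traffic profile fails a test like this almost by definition, which is why a paid arrangement rather than settlement-free peering is the unsurprising outcome.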
First, emissions per capita is a worthless measure. The only one that makes sense is CO2 per $GDP.
I guess the $GDPs are the only thing that matters? That's nice to know, it won't be a problem once all the people of the world are gone, leaving only the $GDPs behind to enjoy the mess.
The US is destroying democracy because in practice voting anywhere outside of the US is useless.
Voting inside the US is useless too. Do you seriously think the US is still a functioning democracy at the federal level?
As a cord-cutter, I simply decided I have no interest in watching anything that I'm getting snubbed from. I'm too busy anyway, it's a great excuse not to be watching TV. If it eventually shows up on Netflix, I might eventually watch it (but in the case of the Olympics, probably not). Otherwise, I don't care and it might as well not be happening. I didn't watch the Super Bowl, and I won't be watching the Olympics, and frankly, despite years of religiously thinking I always needed to watch major events like these, I don't miss either in the slightest. (Or the Grammys/Oscars, etc.)
If I'm curious about the ads from the Super Bowl, I can watch them on YouTube, assuming I don't get bludgeoned with them over and over for the next year anyway. (So far I don't care enough to even look, but isn't it sad that the commercial advertisements are like 10x more interesting than the actual event? Oh wait, the ads ARE the event, the whole reason they want you to watch, like with all television; the Super Bowl is just the only place where that's made blatantly obvious...)