Comment Re:workshop (Score 1) 229
There are, however, conceivable reasons why they might not want to (or be able to) give that $5 to Valve.
I'd say the main purpose is to encrypt more stuff, and "not throwing a wobbly when you see a self-signed cert" is just a part of that. (Since you can't just turn off cert warnings and be done with it; you need some way to enable encryption without enabling authentication.)
It's not just for forms, or whatever "submit" was supposed to mean. All HTTP requests to the site except for the first one (per session? I'm not sure how long these headers are cached for) will go over TLS.
No, we created it to make it actually possible to do unauthenticated encryption with self-signed certificates on public websites. Currently, nobody uses self-signed certs because of the invalid cert warnings.
<meta> tags or HTTP headers are sent after the SSL negotiation, so neither of them can change the negotiation behavior. (Putting text on the page telling people to ignore the warning doesn't work either, because they'd need to ignore the warning just to see the text.) The only way a new header is going to work is if you use http:// for the first request, and then include a header that tells the browser it can pull the same pages over TLS, but without doing authenticity checks on the certificate.
Which is pretty much how this Alt-Svc header works.
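For the curious, the flow looks roughly like this. The server answers the plain http:// request with an Alt-Svc header (the port and ma lifetime here are made-up illustrative values, but the syntax follows RFC 7838):

```
GET / HTTP/1.1
Host: random-site.com

HTTP/1.1 200 OK
Alt-Svc: h2=":443"; ma=3600
```

Until the ma= lifetime expires, the browser can fetch the same http:// URLs over a TLS connection to that port instead, and because the origin is still plain http://, it doesn't have to throw the invalid-cert warning for a self-signed certificate.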
Valid certificate not required. In particular this means you can use self-signed certs without a big massive warning.
Obviously a valid certificate via https:// is better, but if your choice is between a self-signed cert that throws a big warning and unsecured http://, you're going to choose the latter. Alt-Svc adds the option of delivering your http:// site over an encrypted connection.
(Nitpicker's corner: yes, the connection will be unauthenticated, which yes, means an active MITM can still read the contents. An active MITM is harder to pull off than passive sniffing, is obviously more evil, and is detectable, which makes this better than unsecured HTTP even if you don't get 100% perfect protection with it.)
If you're at the point where you can insert arbitrary HTTP headers into a connection, you don't really need to insert a header that causes the client to make requests from one of your own servers in order to sniff the data in the connection. Just sniff the connection.
It's the same third party that lets you have random-site.com rather than an IP, so you're stuck with them anyway.
So, with the third party out of the equation, how does one know that the security certificate you receive from random-site.com is the one that random-site.com intended you to receive?
By comparing the fingerprint with the list of valid fingerprints for the site, as published by the site via DANE.
Of course, browsers refuse to implement that...
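For reference, those published fingerprints live in DNS TLSA records, which look something like this (hypothetical record for random-site.com; the hash value is a placeholder):

```
; usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256)
_443._tcp.random-site.com. IN TLSA 3 1 1 <sha256-of-the-site's-public-key>
```

The record itself is protected by DNSSEC, which is what keeps the chain trustworthy without a CA.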
I don't know that. It should be perfectly possible to make a machine that can drive as well as, or better than, a human can. Have we managed to make that already? I don't know, but from the info Google have been publishing, it actually looks like we have, or are pretty damn close.
Just because it's a machine doesn't automatically mean that it sucks at making decisions. Humans are machines too, and we let them drive.
If it would boot your Linux distro, it'd also boot whatever malware was trying to trojan Windows, and that's exactly what they're trying to avoid.
No it's not. Malware is the excuse, much like child porn or terrorists are the excuse for internet filtering (and more or less anything else you want to force through as a law these days).
The real goal is to make it as hard as possible to switch away from Windows.
I might be guessing wrong here, but I'm thinking the primary intention of these new TLDs was to earn ICANN shitloads of money. It costs $185,000 just to apply for one, and $25,000/year to keep it.
Every Fortune 500 company doing the same thing would be a dream come true for them.
Perhaps... thus turning a lack of capacity into a profit center for the ISP. They're bad enough at having enough capacity as it is, without giving them a profit incentive to make their connections as bad as possible.
Because bandwidth works differently to those.
For electricity, water and gas, every bit of it you consume has to be produced somewhere and then shipped to you. This isn't true for bandwidth; bandwidth is produced on a constant basis at every link in the internet and is then thrown away if it's not consumed immediately. As a result, any bandwidth used at off-peak times has zero impact on the production cost, because you're using bandwidth that would've had to be thrown away anyway.
Yeah, it looks like the protocol involves sending a UDP packet to 239.255.255.250 port 1900, and waiting for any devices to send a packet back. The return packets will come from the devices' unicast address rather than the discovery multicast address, so you can't rely on normal state tracking to let the return packets in automatically.
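That protocol is SSDP (part of UPnP). A minimal sketch of the same discovery dance in Python (the search target and timeout are my own choices; Firefox's actual implementation lives in SimpleServiceDiscovery.jsm, not here):

```python
import socket

MCAST_GRP, MCAST_PORT = "239.255.255.250", 1900

def build_msearch(st="ssdp:all", mx=2):
    """Build an SSDP M-SEARCH request, the multicast discovery packet."""
    return ("M-SEARCH * HTTP/1.1\r\n"
            f"HOST: {MCAST_GRP}:{MCAST_PORT}\r\n"
            'MAN: "ssdp:discover"\r\n'
            f"MX: {mx}\r\n"    # devices wait up to MX seconds before replying
            f"ST: {st}\r\n"    # search target; "ssdp:all" asks everything to answer
            "\r\n").encode("ascii")

def discover(timeout=3):
    """Multicast the request, then collect whatever replies come back."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (MCAST_GRP, MCAST_PORT))
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            # Replies arrive from each device's own unicast IP, not from the
            # multicast group we sent to -- which is why a stateful firewall's
            # normal connection tracking won't match them automatically.
            found.append((addr[0], data.split(b"\r\n")[0]))
    except socket.timeout:
        pass
    return found
```

Run discover() on a LAN with UPnP devices and you'll see each one announce itself from its own address, which is exactly the firewall-unfriendly behavior described above.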
The bug history is a bit convoluted, because there are a lot of them and this code originally landed for Mobile before being ported to desktop. There's Bug 1090535... the actual discovery code lives in SimpleServiceDiscovery.jsm.
Nope. Bug 1054959: it's searching your network for Roku or Chromecast devices so you can fling videos and tabs to them.
No... maybe. It depends.
Amdahl's law is in full force here. There comes a point where increasing the bandwidth of an internet connection doesn't make pages load faster, because the page load time is dominated by the time spent setting up connections and requests (i.e. by latency). Each TCP connection needs a handshake (one round trip), and then each HTTP request adds another round trip. On top of that, every new connection goes through TCP slow start, so it runs below full speed for a few more round trips. Keep-alive helps a bit by reusing TCP connections, but 74% of HTTP connections only handle a single transaction, so it doesn't help a great deal.
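To put rough numbers on that (the figures are made up but plausible, and the model is my own simplification that ignores TLS setup and slow start):

```python
def fetch_time(n_requests, rtt_ms, bandwidth_mbps, bytes_per_resource):
    """Crude page-load estimate for serial HTTP/1.1 requests, in ms."""
    # raw transfer time: total bits divided by bits-per-millisecond
    transfer_ms = n_requests * bytes_per_resource * 8 / (bandwidth_mbps * 1000)
    # one RTT for the TCP handshake, plus one RTT per request (no pipelining)
    rtt_cost_ms = (1 + n_requests) * rtt_ms
    return transfer_ms + rtt_cost_ms

# 50 resources of 20 kB each over a 100 ms RTT link (typical mobile):
slow = fetch_time(50, 100, 10, 20_000)    # 10 Mbit/s
fast = fetch_time(50, 100, 100, 20_000)   # 10x the bandwidth
```

Ten times the bandwidth only shaves off about 12% here (5900 ms down to 5180 ms), because the 51 round trips at 100 ms each dominate. That's the Amdahl's-law point.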
Oh, and by the way: not everybody's connection is like yours, especially over mobile networks.
Mobile networks (and, yes, satellite) tend to have high latency, so round-trips are even more of the problem there. Also... when people shop for internet connections, they tend to concentrate on the megabits, and not give a damn about any other quality metrics. So that's what ISPs tend to concentrate on too. You'll see them announce 5x faster speeds, XYZ megabits!!, yet they don't even monitor latency on their lines. And even if your ISP had 0ms latency, there's still the latency from them to the final server (Amdahl's law rearing its ugly head again).
Given all that, I think I'm justified in saying that the main problem with page loading times isn't the amount of data but the number of round-trips required to fetch it. Reducing the amount of data is less important than reducing the number of, or impact of, the round-trips involved. And that's the main problem that HTTP/2 is trying to address with its fancy binary multiplexing.
(Now, if your connection is a 56k modem with 2ms latency, then feel free to ignore me. HTTP/2 isn't going to help you much.)
"Ninety percent of baseball is half mental." -- Yogi Berra