
Comment Re:Opposite? (Score 1) 42

I'd say the main purpose is to encrypt more stuff, and "not throwing a wobbly when you see a self-signed cert" is just a part of that. (Since you can't just turn off cert warnings and be done with it; you need some way to enable encryption without enabling authentication.)

It's not just for forms, or whatever "submit" was supposed to mean. All HTTP requests to the site except for the first one (per session? I'm not sure how long these headers are cached for) will go over TLS.
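(If I'm reading the Alt-Svc spec right, the caching is controlled by the server itself: the header carries an ma (max-age) parameter giving the number of seconds the browser may keep using the alternative service. A hypothetical example, with made-up values:

    Alt-Svc: h2=":443"; ma=86400

would let the browser keep sending that origin's requests over TLS on port 443 for a day before it needs to see the header again.)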

Comment Re:Opposite? (Score 1) 42

No, we created it to make it actually possible to do unauthenticated encryption with self-signed certificates on public websites. Currently, nobody uses self-signed certs because of the invalid cert warnings.

<meta> tags or HTTP headers are sent after the SSL negotiation, so neither of them can change the negotiation behavior. (Putting text on the page telling people to ignore the warning doesn't work either, because they'd need to ignore the warning just to see the text.) The only way a new header is going to work is if you use http:// for the first request, and then include a header that tells the browser it can pull the same pages over TLS, but without doing authenticity checks on the certificate.

Which is pretty much how this Alt-Svc header works.
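Roughly, the shape of the exchange looks like this (hostname and values invented for illustration, not a real transcript):

    GET / HTTP/1.1                 <- ordinary http:// request
    Host: example.com

    HTTP/1.1 200 OK
    Alt-Svc: h2=":443"; ma=3600    <- "this origin also speaks h2 over TLS on port 443"

After that, the browser can open a TLS connection to port 443 and send its later http:// requests for that origin over it. Since the URLs are still http://, the certificate doesn't have to pass the usual CA validation, so a self-signed cert works without triggering the warning.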

Comment Re:Opposite? (Score 2) 42

Valid certificate not required. In particular this means you can use self-signed certs without a big massive warning.

Obviously a valid certificate via https:// is better, but if your choice is between a self-signed cert that throws a big warning and unsecured http://, you're going to choose the latter. Alt-Svc adds the option of delivering your http:// site over an encrypted connection.

(Nitpicker's corner: yes, the connection will be unauthenticated, which yes, means an active MITM can still read the contents. An active MITM is harder to pull off than passive sniffing, is obviously more evil, and is detectable, which makes this better than unsecured HTTP even if you don't get 100% perfect protection with it.)

Comment Re:Good. +1 for Google. (Score 1) 176

So, with the third party out of the equation, how does one know that the security certificate you receive from random-site.com is the one that random-site.com intended you to receive?

By comparing the fingerprint with the list of valid fingerprints for the site, as published by the site via DANE.

Of course, browsers refuse to implement that...
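(For the curious: DANE publishes those fingerprints as TLSA records in DNS, protected by DNSSEC. A rough sketch of what one might look like for the hypothetical random-site.com, with the hash left as a placeholder:

    _443._tcp.random-site.com. IN TLSA 3 1 1 <sha-256 of the server's public key>

The 3 1 1 means "trust exactly this key for this service" (DANE-EE, SubjectPublicKeyInfo, SHA-256), so no CA has to be involved at all.)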

Comment Re:Bulls... since when will self driving cars have (Score 4, Insightful) 451

I don't know that. It should be perfectly possible to make a machine that can drive as well as, or better than, a human can. Have we managed to make that already? I don't know, but from the info Google have been publishing, it actually looks like we have, or are pretty damn close.

Just because it's a machine doesn't automatically mean that it sucks at making decisions. Humans are machines too, and we let them drive.

Comment Re:Now if they will sell them without MS Windows (Score 0, Troll) 161

If it would boot your Linux distro, it'd also boot whatever malware was trying to trojan Windows, and that's exactly what they're trying to avoid.

No it's not. Malware is the excuse, much like child porn or terrorists are the excuse for internet filtering (and more or less anything else you want to force through as a law these days).

The real goal is to make it as hard as possible to switch away from Windows.

Comment Re:Get ready for metered service (Score 1) 631

Because bandwidth works differently to those.

For electricity, water and gas, every bit of it you consume has to be produced somewhere and then shipped to you. This isn't true for bandwidth; bandwidth is produced on a constant basis at every link in the internet and is then thrown away if it's not consumed immediately. As a result, any bandwidth used at off-peak times has zero impact on the production cost, because you're using bandwidth that would've had to be thrown away anyway.

Comment Re:Firewall through the Firewall? (Score 1) 147

Yeah, it looks like the protocol involves sending a UDP packet to 239.255.255.250 port 1900, and waiting for any devices to send a packet back. The return packets will come from the devices' unicast address rather than the discovery multicast address, so you can't rely on normal state tracking to let the return packets in automatically.
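A minimal sketch of that exchange in Python (not Firefox's actual code, just the bare SSDP protocol) makes the firewall problem easy to see: the query goes out to the multicast address, but every reply comes back from a device's own unicast address.

    # Minimal SSDP discovery sketch: send an M-SEARCH to the multicast
    # group, then listen for unicast replies from whatever devices answer.
    import socket

    MSEARCH = (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: 239.255.255.250:1900\r\n"
        'MAN: "ssdp:discover"\r\n'
        "MX: 2\r\n"
        "ST: ssdp:all\r\n"
        "\r\n"
    ).encode()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(MSEARCH, ("239.255.255.250", 1900))

    try:
        while True:
            data, addr = sock.recvfrom(65507)
            # addr is the device's unicast IP:port, not 239.255.255.250,
            # which is why plain "reply to what I sent" state tracking
            # on a firewall won't match these packets.
            print(addr, data.split(b"\r\n", 1)[0].decode(errors="replace"))
    except socket.timeout:
        pass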

The bugs are a bit convoluted, because there are a lot of them and this code originally landed for Mobile before being ported to desktop. There's Bug 1090535... the actual discovery code lives in SimpleServiceDiscovery.jsm.

Comment Re:Great if optimizing the wrong thing is your thi (Score 3, Interesting) 171

No... maybe. It depends.

Amdahl's law is in full force here. There comes a point where increasing the bandwidth of an internet connection doesn't make pages load faster, because the page load time is dominated by the time spent setting up connections and requests (i.e. the latency). Each TCP connection needs to do a TCP handshake (one round trip), and then each HTTP request adds another round trip. On top of that, every new connection starts out in TCP slow start, so it stays slow for a few more round trips. Keep-alive helps a bit by reusing TCP connections, but 74% of HTTP connections only handle a single transaction, so it doesn't help a great deal.
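To put some rough numbers on it (made up, but plausible): say the round-trip time is 100 ms and a page needs 20 resources spread over 6 fresh HTTP/1.1 connections, ignoring slow start and server think time:

    per connection:  1 RTT    TCP handshake
                   + 3-4 RTT  its share of the 20 requests (20 / 6, in series)
                   = 4-5 RTT  ~ 400-500 ms of waiting, with the 6 connections in parallel

That 400-500 ms is the same whether the line is 10 megabits or 100, which is the whole point.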

Oh, and by the way: not everybody's connection is like yours, especially over mobile networks.

Mobile networks (and, yes, satellite) tend to have high latency, so round-trips are even more of the problem there. Also... when people shop for internet connections, they tend to concentrate on the megabits, and not give a damn about any other quality metrics. So that's what ISPs tend to concentrate on too. You'll see them announce 5x faster speeds, XYZ megabits!!, yet they don't even monitor latency on their lines. And even if your ISP had 0ms latency, there's still the latency from them to the final server (Amdahl's law rearing its ugly head again).

Given all that, I think I'm justified in saying that the main problem with page loading times isn't the amount of data but the number of round-trips required to fetch it. Reducing the amount of data is less important than reducing the number of, or impact of, the round-trips involved. And that's the main problem that HTTP/2 is trying to address with its fancy binary multiplexing.

(Now, if your connection is a 56k modem with 2ms latency, then feel free to ignore me. HTTP/2 isn't going to help you much.)
