Comment Re:Who'll spit on my burger?! (Score 1) 870

A Burger King near me has had one of these ordering kiosks for several years. I always use it, and I see some other people use it, but overall it's probably faster to let the human cashier handle the order, and most people don't bother with it. (It's got too many "newbie-friendly" voice prompts and delays and swooshing animations and whatnot, not to mention the frequent prompts of "would you like to add [X] to your order?".)

Comment Re:An overview, IMHO: (Score 1) 516

You're mostly correct, except that, relative to cost of living, quite possibly a majority of the population has been getting POORER when it comes to hourly earnings.

This is usually covered up by using mean income instead of median, by using household income (not taking into account the number of hours worked by a "household") instead of income per hour worked, etc.
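To illustrate the mean-vs-median point with made-up numbers (the figures below are purely hypothetical, not real wage data): a single high earner drags the mean well above what the typical worker actually sees.

```python
import statistics

# Hypothetical hourly earnings ($) for ten workers; one high earner.
incomes = [30, 32, 35, 38, 40, 42, 45, 50, 60, 400]

print(statistics.mean(incomes))    # pulled way up by the outlier
print(statistics.median(incomes))  # what the typical worker earns
```

Reporting the mean here makes "the average worker" look nearly twice as well off as the median worker actually is, which is exactly the cover-up described above.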

Comment Re:Greenspan's right (Score 1) 516

OK, got it. And I totally agree. Exactly right that while software engineers may be the upper-middle of the income spectrum among ordinary wage earners, they're still closer to the bottom compared to the actual high end of the spectrum. But they're a convenient target for those wishing to deflect attention from the real problem.

Comment Re:Greenspan's right (Score 4, Insightful) 516

I disagree. I think we would fix a ton of other problems that are strongly resistant to other solutions if just about the only thing we did focus on was income inequality. The problem is, right now, we don't do anything at all about income inequality except allow it to get worse, as it has every year for the last 40 years.

Comment Re:Won't do any good. (Score 1) 264

Maybe, but a lot of it is probably the police thinking twice and reserving the situations where they "lose the footage" for rare occasions. They can still abuse it, sure, but it's not worth abusing most of the time; it looks better, and is easier, to just do what they're supposed to in most cases. If they have to explain why the footage is missing, that's awkward, and there are questions. The questions might not leave the department, but it's still a pain to even be asked.

Comment Re:This is the "http?" question with HTTP2/SPDY (Score 1) 177

Also, I could point out that requiring validation of TLS certificates for SPDY/HTTP2 prevents actual shared hosting from opportunistically encrypting all the zillions of sites they host, which would be trivial right now (chances are they DO have a certificate installed... in the ISP's name... but not for every site they host). While this wouldn't allow real trusted "HTTPS" connections, it would allow a LOT of sites to suddenly be using encryption routinely without either the site owners or the end users even knowing it. All the hosting provider would need to do is enable SPDY, or later HTTP2, on their servers, and it would start opportunistically encrypting all the hosted sites using the hosting provider's certificate.
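As a rough sketch of what that would look like on the provider's side (the certificate paths and catch-all vhost below are hypothetical, and the exact directive depends on the server version):

```nginx
# Hypothetical shared-hosting catch-all vhost: one provider certificate
# covering every hosted domain. The transport gets encrypted, but the
# cert only matches the provider's own name, so connections to the
# hosted sites are unauthenticated (opportunistic, not trusted HTTPS).
server {
    listen 443 ssl http2;        # "spdy" instead of "http2" on older nginx
    server_name _;               # catch-all for all hosted domains

    ssl_certificate     /etc/ssl/provider.example.crt;  # provider's own cert
    ssl_certificate_key /etc/ssl/provider.example.key;
}
```

The point is that this is one config change for the provider, not per-site work by thousands of site owners.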

Comment This is the "http?" question with HTTP2/SPDY (Score 1) 177

This is the same question as what to do with "HTTP" (not HTTPS) requests when transported over HTTP2 (which is supposed to be all TLS) and SPDY (which is already all TLS, and which HTTP2 is based on). Usually it's framed in the context of "do we need to authenticate and verify TLS certificates when the user didn't originally request HTTPS?"

Some people are of the opinion that "TLS is TLS, and if you can't 100% trust it, there's no point." And I can see the logic in that. Obviously that should always be the case when you've explicitly requested an HTTPS connection, and ideally, at some point in the future, it would be nice for that to be the case for all network connections, all the time.

But when you step back, you have to realize that those connections are currently completely unencrypted and untrusted - they're HTTP, not HTTPS. And the march to encryption is slow. The majority of websites have no TLS encryption capability at all, maybe as many as 20% of the remainder are self-signed, and quite a lot of the rest may have certs which don't match the domain being requested. (The same is no doubt true of apps, mobile or otherwise.) And the latter problem, particularly, is quite difficult to solve for technical reasons in a lot of cases critical to the orderly and economical operation of the internet, such as CDNs.

This goes beyond the usual lament that sites will need to pay $100+ per year to get a cert - that's not really the problem, though in my experience most site owners will have to be dragged kicking and screaming before they bother to install a cert and get HTTPS running properly. Even if a cert is installed, most of them want to redirect back to HTTP at any opportunity.

Besides performance, cost, and administrative hassle, the big problem is the royal pain that it can be to take care of all the issues of trusted certs across hosting providers, CDNs, lead generation partners, etc. That's because in a lot of cases, those providers are hosting assets under a variety of domains - sometimes hundreds or thousands of domains - on single shared servers (or many copies of shared servers), each with a single IP address shared among the various domains. It's shared hosting all over again, this time writ large across global CDNs and the like. Even with your own hosting provider, you might face the same problem on development and staging environments even if not on production, making testing difficult. And while they're working on the problem, so far HTTPS does not play well with shared hosting. (On top of that, a lot of ad networks don't support HTTPS at all, so they introduce the mixed content problem into your pages. If your site depends on ads, you might not be able to serve them over HTTPS connections, which is why some sites offer HTTPS only to paying customers.)

The whole idea of SPDY or HTTP2 being "TLS-only" is laudable, to gain opportunistic encryption even when the user didn't request HTTPS. But by so thoroughly breaking sites with mixed content or untrusted certificates (either expired or self-signed or for the wrong hostname or whatever), I'm of the opinion that all it's doing is delaying the adoption of TLS for websites. Rather than going "oh well, to get HTTP2, we'll have to fix this", most sites, faced with the hassle and resulting broken pages, will drag their heels adding HTTPS or enabling HTTP2, forcing downgrades to HTTP 1 for many years to come.

Encryption absolutists portray the question in simple terms: why would you not want to trust your encrypted connection? You'll be vulnerable to man in the middle attacks, therefore connections should always be authenticated and verified. But the real question, when users haven't specifically requested HTTPS, is this: is it better for those connections to remain mostly COMPLETELY unencrypted and untrusted (which is even more susceptible to MITM), trusting only the few that are encrypted (even though the user can't see that they're encrypted or trusted)? Or for a larger proportion of them to be encrypted, but not necessarily always trusted in the face of potential MITM attacks? Considering that untrusted connections at least protect against PASSIVE surveillance, I kind of think there's some merit to the latter argument.
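In Python's `ssl` module terms, the two trust models being contrasted look roughly like this (a minimal sketch; `strict` and `opportunistic` are just labels for the two policies):

```python
import ssl

# Strict policy: what an explicit HTTPS request should get.
# The default context requires a valid, hostname-matching certificate.
strict = ssl.create_default_context()
# strict.verify_mode is ssl.CERT_REQUIRED; strict.check_hostname is True

# Opportunistic policy: the channel is still encrypted (defeating
# passive surveillance), but authentication is skipped, so an active
# MITM remains possible; this is the trade-off described above.
opportunistic = ssl.create_default_context()
opportunistic.check_hostname = False        # must be disabled before verify_mode
opportunistic.verify_mode = ssl.CERT_NONE   # accept self-signed/mismatched certs
```

Note the ordering: `check_hostname` has to be turned off before `verify_mode` can be set to `CERT_NONE`, or the `ssl` module raises an error.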

Also, when you have something like SPDY, what do you do if there is a certificate error or mixed content? Most of the point of using SPDY is to speed things up while opportunistically providing TLS in the background, without the user's knowledge. If the user didn't request HTTPS, they aren't expecting the connection to be secure, private, or trusted; on most sites it would be regular HTTP anyway, and in general they'll never know whether it's encrypted.

So if there is a cert problem, what do you do? Break their webpage? That's rude, stupid, and frankly a broken protocol. The user didn't ask you to use SPDY or TLS; they just want to see the page. On a non-SPDY browser they'd just be getting HTTP, and nothing would be broken. Maybe decide you can't trust the connection, back the whole thing out, and try again over port 80? That's just as bad. First, you're going from partial privacy to a complete lack of privacy (and losing your pipelining and header compression in the process), so you've only made things worse. Second, you're slowing things down by stopping partway through the TLS handshake (or maybe even later, while loading the page) and re-requesting the whole thing as HTTP, when the point was to speed things up. That's just plain broken.

So really, the ONLY rational option when SPDY encounters mixed content or a certificate problem in the course of serving an "HTTP" request (not "HTTPS") is to just load the content anyway and not complain, even though it's trying to use TLS all the time. It was doing the TLS opportunistically, invisibly, in the background, without being asked to, and the alternative is completely unencrypted HTTP. So if it fails, the right thing to do is say "oh well, I tried to opportunistically encrypt, but it's not perfect, I'll just continue even though it's not trusted" - NOT to moralistically tell the user they can't see their webpage, or to throw out the baby with the bathwater in a snit and tell them to try again over HTTP (which is exactly what they were originally requesting, except you've wasted their time in the meantime).
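The decision rule being argued for here fits in a few lines (a hypothetical helper, not any real browser's code; the return strings are just labels):

```python
def handle_cert_problem(user_requested_https: bool) -> str:
    """Sketch of the argument above: only an explicit HTTPS request
    justifies breaking the page when opportunistic TLS hits a
    certificate error or mixed content."""
    if user_requested_https:
        return "hard-fail"     # the user asked for security; honor it strictly
    return "serve-anyway"      # TLS was opportunistic; don't punish the user
```

Crucially, there is no "fall back to port 80" branch: as argued above, backing out and retrying over plain HTTP only loses the partial privacy already gained and wastes time.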

The problem is that, by requiring sites to be 100% perfect in order to use HTTPS, SPDY, or HTTP2 even for "HTTP" requests, many of them will choose to remain with unencrypted HTTP 1.1 instead, and how is that better? (And by the way, this choice often rests with the browser vendors, who may or may not choose to support protocol options for unvalidated TLS connections.)

Now what this means for "trusted" proxies is kind of an open question. In some cases I guess it could be a preferable alternative to not validating certificates at all or falling back to HTTP, so to the extent it avoids either of those scenarios, it might be a good thing. But since it won't necessarily solve some TLS certificate problems, I don't know if it will make much difference. Either browsers will support unvalidated TLS for background encryption of SPDY or HTTP2 "HTTP" connections (in which case there would be no need for trusted proxies at all), or else most sites might still resist HTTPS/SPDY/HTTP2 as long as possible, making it kind of irrelevant for them.
