Comment Untrustworthy != Useless (Score 1) 175

If Yahoo ends up holding the private keys, then it's completely untrustworthy and useless.

Let's hypothesize that Yahoo does this the worst way possible, so we can play to everyone's fears. Let's say the users aren't even going to have the key on their machines ever, and instead, Yahoo explicitly announces they have your private key, and their server will do all the decryption and signing for you (your machine won't even be doing it in Javascript), and they're under US jurisdiction and therefore subject to CALEA and NSLs, and furthermore, just to make things worse, let's say they even publicly admit that they would happily provide keys to any government that asks, without even a warrant or sternly-worded letter. But when you ask 'em if they really mean every government, "even Russia?" they reply with "no comment," so you're not sure they're really publicly admitting everyone to whom they'll give the key.

There. Did I cover all the bases? Did I leave anyone's pet fear out?

Sorry, let's add a few more things. Let's say Yahoo's CEO is a Scientologist, all their network admins are required to be either Holocaust Deniers or Creationists, and every employee is required to have at least 25% of their investments in MPAA companies. The receptionists all have iPhones, the corporate mission is that the next president of the USA must have either Clinton or Bush as their last name, and henceforth all their web ads will be for either Amway or Herbalife. All the interns are spies for Google and Microsoft and Chinese industries, except for a few who are spies for Mossad, FSB, or Al-Qaeda. The head janitor is being blackmailed by two unknown parties for his participation in a kiddie porn network, the top sysadmin hasn't heard about Heartbleed yet, the top programmer (who bears the title "Grand Wizard" on his business card) doesn't believe in comments, their implementation of OpenPGP uses a 1938 Luftwaffe cipher as its entropy source for generating session keys, and the company weather station's thermometer was installed on a south-facing patio that gets direct sun all day long.

You may possibly harbor doubts about trusting this company. Yet in that situation, switching to Yahoo email would still be more secure than what most people have right now, with plaintext email. So how's that "useless"?

Comment Re:Awesome!! (Score 1) 175

Now all I have to do is get my father, my mother, my sister, my half-sister, my grandmother, my wife, and my assorted friends to learn what PGP is and how to read the emails I send them.

You jest, but don't you see how popular webmail providers adding insecure PGP implementations to their platforms would be a pretty good first step to doing exactly what you say?

Comment Re:It's a TRAP! (Score 4, Insightful) 175

Where did it say in there that users would hand over private keys to a third party?

It's implied by the fact that it's webmail. Does your browser have an OpenPGP library? Does it check all the Javascript that it downloads and executes against some repository's whitelist? You have to assume the key isn't handled safely unless you can answer yes to both questions. And a lot of webmail users expect the server to be able to search their mail, which is obviously impossible unless the server can read it, so the unsafeness doesn't stem solely from potential trickery.
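
For contrast, here's roughly what genuinely client-side key handling would look like. This is only a minimal sketch, assuming an OpenPGP.js-style library (v5 API) and made-up user details; the point is that key generation and encryption happen in the browser, and only the public key and the armored ciphertext ever need to touch the server.

import * as openpgp from 'openpgp';

async function demo() {
  // The key pair is generated in the browser; the private key never leaves this machine.
  const { privateKey, publicKey } = await openpgp.generateKey({
    type: 'ecc',
    curve: 'curve25519',
    userIDs: [{ name: 'Alice', email: 'alice@example.com' }], // hypothetical user
    passphrase: 'a long secret passphrase',
  });

  // Encrypt to a public key (here our own, for demonstration); only the
  // armored ciphertext would be handed to the webmail server.
  const ciphertext = await openpgp.encrypt({
    message: await openpgp.createMessage({ text: 'hello, world' }),
    encryptionKeys: await openpgp.readKey({ armoredKey: publicKey }),
  });

  console.log(ciphertext); // "-----BEGIN PGP MESSAGE----- ..."
}

None of which helps if the server can silently swap that script for one that uploads privateKey, which is exactly why the whitelist question above matters.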

That said, the more interesting question is what social effect this might have. Even "bad" use of OpenPGP could start conditioning more people to being familiar with, tolerating, expecting PGP. Get into a better frame of mind, and better habits can come later. And with good habits, some security could eventually emerge. The security wouldn't be there for Yahoo webmail users, and yet some users might end up having Yahoo webmail to thank for it.

And let's face it, the barriers to secure communication are almost entirely social; we choose to have insecure communications. Anyone who is working on that problem is working on The Problem.

Comment Re:Huh? (Score 1) 406

There are over 30,000 deaths in the US alone in automobile accidents; even supposing automated vehicles cut that number by 90%, 3,000 multi-million dollar settlements every year would destroy the automobile industry in the US.

3,000 multi-million dollar settlements sounds like a lot of money, but the 30,000 multi-million dollar settlements that we're already paying insurance premiums to cover is even more. Yet the system is apparently economically viable even in 2014, when the costs are ten times higher. A scenario where the accident rate is a tenth is a scenario where insurance costs a tenth, so the total cost of a vehicle is somewhat less. This would be good for the auto industry, not bad.

If you tell someone they have a choice of two cars, one where they pay $70/month to State Farm (called "careless human's liability insurance"), and another where they pay $7/month to Ford (called "careful AI's liability insurance fee", because you're not buying insurance from Ford's AI, but rather, funding its insurance), that second one is more likely to result in a car purchase.
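
To make the back-of-envelope arithmetic explicit, here's a sketch; all the figures are the illustrative numbers from this thread, not real actuarial data.

// Illustrative numbers only, taken from the comment above.
const settlementsPerYearToday = 30_000;   // roughly one per US traffic death per year
const remainingFraction = 0.1;            // i.e., a hypothetical 90% cut in accidents

const settlementsPerYearAutomated = settlementsPerYearToday * remainingFraction; // 3,000

// If total liability scales with the settlement count, per-vehicle premiums scale the same way.
const monthlyPremiumHumanDriver = 70;     // "$70/month to State Farm"
const monthlyPremiumAutomated = monthlyPremiumHumanDriver * remainingFraction; // $7/month

console.log(settlementsPerYearAutomated, monthlyPremiumAutomated); // 3000 7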

Comment Re:Perhaps they can ask Google to forget that page (Score 1) 273

There would have to be a "work under this title" (something copyrightable) which becomes accessible by putting in the fuse. If plugging in the fuse causes their copyrighted AC-available icon to show up on the dashboard, for example, then it'd be a DMCA violation to plug in the fuse without their authorization. Also, it might become illegal to manufacture, traffic in, or sell fuses without Chrysler's authorization, but that's subjective and subject to judges' whims (how they decide to interpret your fuse's primary purpose, its commercially significant uses, Chrysler's marketing, etc.).

But if all it does is enable the air conditioner (if there's no copyrighted work protected by it), then it's not a DMCA violation.

This wouldn't ever happen, though. Suppose you made your own copyrighted work and also had it become accessible only by plugging in the exact same sort of fuse. If you became "commercially significant" enough, then Chrysler's own fuse sales to their own customers would become illegal (devices that circumvent your DRM). It's for this reason that all DRM schemes need to be trade secrets or patented, to keep different copyright holders from using each other's schemes (or at least keep 'em from doing it without a contract to cooperate). That's why no one would really use a fuse as DRM. It's not that they'd worry about their customers "hacking"; it's that they'd need to worry about someone (anyone!) coming along and suddenly making their own business illegal.

Comment Re:Reads like a "Modest Proposal" to me (Score 1) 282

I think the reasoning is fine, because of these words: "...if the behaviour which is currently criminal is to remain criminal..."

Your example is a simple crime, where the victim had an experience related to the crime (so there's a body to be found by the police, or a surviving victim who says "ouch, someone shot me"). They are talking about certain types of crimes where neither the victim nor anyone closely watching the victim would ever have any idea that a crime happened. All the evidence is completely disconnected from the victim.

I publish a magnet link. You read it, and use it to acquire a file. Someone who isn't there and sees absolutely no effect on their life is defined as a victim because the action is "currently criminal." Maybe it's because they hold a copyright on the contents of the file, or because the file contains a picture of them without clothes (taken by hidden camera when they were 17 years and 364 days old), or because the file contains some other information related to them.

You can't detect these kinds of things.

The House of Lords is saying that if these are going to remain crimes, then the laws should be enforced, and if we ass/u/me that getting laws enforced is far more valuable to our society than liberty, efficiency, etc then it's important that the watchers know about every transaction that is happening and who is involved. They need to know that I transmitted information to you (and who both of us are) and what that information was. Until they have all that information, they can't even begin to guess whether or not a crime occurred. Maybe the file contained a picture of my dog rather than a 17-year-old human, and they need to know who took the dog picture and that I sent it to you, so that they know it wasn't a copyright violation.

Of course it's absurd, but that's because the premise is absurd. Their reaction to it is quite rational. But that's my point: it almost looks like (especially in the paragraph that I quoted) they might be calling the bluff, pointing out the inevitable consequences of having externally undetectable things be crimes. If they weren't that clever and didn't mean to do that, too bad, but even if it's an accident, they did it.

It's not an accident, though. Look at it (emphasis mine): "if it's to remain criminal" (see the wiggle room there?) and "currently criminal" and "there is little point in [doing this] at the same time [as doing that]" and "difficult question."

I'm not saying this is ingenious, but it really is fairly well-crafted.

Comment Reads like a "Modest Proposal" to me (Score 1) 282

The techdirt article quotes this delicious excerpt:

From our perspective in the United Kingdom, if the behaviour which is currently criminal is to remain criminal and also capable of prosecution, we consider that it would be proportionate to require the operators of websites first to establish the identity of people opening accounts but that it is also proportionate to allow people thereafter to use websites using pseudonyms or anonymously. There is little point in criminalising certain behaviour and at the same time legitimately making that same behaviour impossible to detect. We recognise that this is a difficult question, especially as it relates to jurisdiction and enforcement.

I can't even say I really disagree with that reasoning. Can't you see how there are two completely different ways to reach a conclusion from that paragraph?

Comment Why use a public CA for an internal server? (Score 4, Insightful) 92

Who are these people, that would give a damn about this change?

You don't need an intermediary not-you authority for this job. And in fact, using one can only decrease the security, even in the best-case scenario. Even the worst, most incompetent company in the world would make a better CA for its internal servers than the best, most trustworthy public CA would.

Comment Re:Is there an SWA Twitter police? (Score 1) 928

Whoa there. This was no mere bad judgement call. Having him thrown off the plane was over-the-top malicious, totally beyond what I ever expect from anyone who is "having a bad day." I sincerely believe such a person really shouldn't be in any sort of position where they might have that amount of power over other people.

Put a hundred random people in the same sort of bad-day position, and I don't expect one of them to behave like this one did. This one is truly exceptional, and does not merely "have bad days." This is the kind of person whose news stories are usually headlined something like "gunman kills five then self."

I might be willing to excuse them if, say, their psychiatrist were to explain how this was anomalous for their character and that their medication was defective, or something like that. OTOH that can be handled in their lawsuit against the medication manufacturer, and then this psycho will never need a job where they exercise power over other people again.

Comment Please let me explain this (Score 1, Funny) 928

I happen to be the executive who works at Southwest and made the decision, upon seeing the tweet, to call the gate and have him kicked off. Please allow me to explain my decision.

I work in the PR department, and managing publicity is my job. When I saw the tweet, I realized it was bad publicity. I don't like my company getting bad publicity, and I seek to avoid it, or replace it with good publicity.

So I threw our tweeting customer off, thereby solving the bad publicity problem! See? Now do you get it?

...

(Why is everyone looking at me like I'm an idiot?)

Comment Re:Let's sell child porn to The Netherlands (Score 1) 109

..the sale is criminalized in The Netherlands.

My point is that the court's recent decision suggests the above is an outdated, quaint law which no longer reflects the society that The People wish to have, nor the new way of thinking about responsibility and the relationship between demand and the victimizing acts which serve that demand.

Thus, I'm sure the Dutch people will soon be revising their kiddie porn laws. Huh? Whaddya mean, "no?" Why not? ;-)

Comment Re:Why do we bother? (Score 1) 109

Look, just install the telescreens in our homes already.

Be patient. We're still in the voluntary phase of that, right now. If enough people say no to the unauditable smartphones and smart TVs, we can eventually get to compulsory installation, but for right now, what's the hurry? People are doing it without even being told to.

Comment Let's sell child porn to The Netherlands (Score 2) 109

Though we'll face some risks from our own governments, it's a relief to know that the Dutch government would have no problem with me selling kiddie porn (as long as it was made in America) to Dutch citizens. "No crime happened here, within our jurisdiction," they'd say.

In fact, the Dutch government should tolerate our new businesses even more than this NSA thing, since the victims (wherever their rights were violated) won't even be Dutch citizens. No Netherlander will have any reason to say their government let them down.

Comment Re:New SSL root certificate authority (Score 1) 129

Thanks for the insult. It hardly stung.

Unless you worked at Netscape in the mid-1990s, no insult was intended.

All I meant is that by the very early 1990s, we (and by "we" I mean people smarter than me; I was clueless at the time) had a pretty good idea that CAs wouldn't work well outside of real power hierarchies (e.g. corporate intranets). But then a few years later the web browser people came along and adopted X.509's crap, blowing off the more recent PKI improvements, in spite of the fact that it looked like it wouldn't work well for situations like the WWW.

Unsurprisingly, it didn't work well. Organizing certificate trust differently from how real people handle trust 1) allows bad CAs to do real damage, and 2) undermines people's confidence in the system.

A very nice way of saying this is that, in hindsight, the predicted problems are turning out to be more important than we thought most people would care about. ;-) It's almost as though now (no fair! you changed the requirements!!) people want SSL to be secure.

Keeping the same organization but with new faceless unaccountable trust-em-completely-or-not-at-all root CAs won't fix the problem. Having "root CAs" is the problem, and PRZ solved it, over 20 years ago.

I expect you to start the project shortly.

It's a little late to start, but I do happen to still be running an awful lot of applications (web browser being the most important one) which aren't using it yet.
