
Comment Re:Cat blog (Score 1) 148

But, but... That doesn't make any sense!
Using HTTP, the connection isn't encrypted in either direction. If they can see the original request, they can also see the original response, so why not just cache that?

It's an absolutely crazy implementation, I agree (particularly speaking as someone implementing something which analyzes HTTP downloads right now). It's not caching, but some sort of content analysis; my guess, and it is only a guess, is that it's intended as a workaround to copyright. Genuine caching is OK, for cacheable content, but I don't think this use would be covered by that copyright exemption: by fetching their own copy from the server like a regular web spider, they're no longer "making a copy" of the user's download. The other possibility is bandwidth: being a major ISP, it might be easier to intercept only the requests in-line, then queue them up for spidering by a separate system; intercepting the downloaded content as well would mean forcing all traffic through the analysis system in realtime.

Mine just hashes and logs the objects as they get fetched. Of course, I'm doing it in the firewall, with the user's knowledge and consent. I just remembered, though, a friend who works for an anti-malware vendor mentioned to me that their security proxy does the same bizarre duplication rather than scanning in transit, which IIRC screwed up music streaming services, so presumably there's a good reason for that. (Weird, because if I were shipping malware, I'd find that all too trivial to circumvent by serving different content to the client and the scanner.)
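To illustrate, here's roughly what in-transit hashing looks like (a minimal Python sketch; the URL, log format and field names are all made up):

```python
import hashlib
import datetime

def log_object(url: str, body: bytes, log: list) -> str:
    """Hash a fetched HTTP object and record it - no need to re-fetch it later."""
    digest = hashlib.sha256(body).hexdigest()
    log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "url": url,
        "size": len(body),
        "sha256": digest,
    })
    return digest

log = []
h = log_object("http://example.com/stuff", b"<html>cat pictures</html>", log)
print(h[:16], log[0]["size"])
```

The point is that everything needed is already in hand as the bytes flow past; there's no reason to generate a second request.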

Comment Untrustworthy != Useless (Score 1) 175

If Yahoo ends up holding the private keys, then it's completely untrustworthy and useless.

Let's hypothesize that Yahoo does this the worst way possible, so we can play to everyone's fears. Let's say the users aren't even going to have the key on their machines ever, and instead, Yahoo explicitly announces they have your private key, and their server will do all the decryption and signing for you (your machine won't even be doing it in Javascript), and they're under US jurisdiction and therefore subject to CALEA and NSLs, and furthermore, just to make things worse, let's say they even publicly admit that they would happily provide keys to any government who asks, without even a warrant or sternly-worded letter. But when you ask 'em if they really mean every government, "even Russia?", they reply with "no comment", so you're not sure they're really publicly admitting everyone to whom they'll give the key.

There. Did I cover all the bases? Did I leave anyone's pet fear out?

Sorry, let's add a few more things. Let's say Yahoo's CEO is a Scientologist, all their network admins are required to be either Holocaust Deniers or Creationists, and every employee is required to have at least 25% of their investments in MPAA companies. The receptionists all have iPhones, the corporate mission is that the next president of the USA must have either Clinton or Bush as their last name, and henceforth all their web ads will be for either Amway or Herbalife. All the interns are spies for Google and Microsoft and Chinese industries, except for a few who are spies for Mossad, FSB, or Al-Qaeda. The head janitor is being blackmailed by two unknown parties for his participation in a kiddie porn network, and the top sysadmin hasn't heard about Heartbleed yet, the top programmer (who bears the title "Grand Wizard" on his business card) doesn't believe in comments, their implementation of OpenPGP uses a 1938 Luftwaffe cipher as its entropy source for generating session keys, and the company weather station's thermometer was installed on a south-facing patio that gets direct sun all day long.

You may possibly harbor doubts about trusting this company. Yet in that situation, switching to Yahoo email would be more secure than what most people have right now, with plaintext email. So how's that "useless?"

Comment Re:Awesome!! (Score 1) 175

Now all I have to do is get my father, my mother, my sister, my half-sister, my grandmother, my wife, and my assorted friends to learn what PGP is and how to read the emails I send them.

You jest, but don't you see how popular webmail providers adding insecure PGP implementations to their platforms would be a pretty good first step to doing exactly what you say?

Comment Re:It's a TRAP! (Score 4, Insightful) 175

Where did it say in there that users would hand over private keys to a third party?

It's implied by the fact that it's webmail. Does your browser have an OpenPGP library? Does it check all the Javascript that it downloads and executes against some repository's whitelist? You have to assume the key isn't handled safely, unless you can answer yes to these questions. And a lot of webmail users expect the server to be able to search their mail, which is obviously impossible unless the server can read it - so the unsafeness doesn't stem just from potential trickery.
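For what it's worth, the "check the Javascript against a whitelist" idea does exist in a limited form: Subresource Integrity lets a page pin a script to a hash. A quick sketch of computing such a pin (illustrative only - and note SRI doesn't help when the server you distrust is the one serving the page itself):

```python
import base64
import hashlib

def sri_hash(script: bytes) -> str:
    """Compute a Subresource Integrity value (sha384, base64) for a script,
    the format used by an HTML integrity="..." attribute."""
    digest = hashlib.sha384(script).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

js = b"console.log('openpgp stub');"  # stand-in for a real crypto library
print(sri_hash(js))
```

The browser only enforces the pin the page author wrote - which is exactly why a hostile server remains the unsolved problem here.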

That said, the more interesting question is what social effect this might have. Even "bad" use of OpenPGP could start conditioning more people to being familiar with, tolerating, expecting PGP. Get into a better frame of mind, and better habits can come later. And with good habits, some security could eventually emerge. The security wouldn't be there for Yahoo webmail users, and yet some users might end up having Yahoo webmail to thank for it.

And let's face it, the barriers to secure communication are almost entirely social; we choose to have insecure communications. Anyone who is working on that problem is working on The Problem.

Comment Backward (Score 2) 72

Conversely, I seem to find (in the UK at least) that cheaper hotels and shops are more likely to have free WiFi, while pricier hotels and bigger chains seem to be more likely to charge for it. The poshest one I've spent any time in - part of the same chain as the Savoy in London - charges crazy prices (and has lousy mobile reception), though it's a rock-solid signal throughout the large building; a much cheaper hotel nearby just had a WiFi access point on ADSL somewhere, with no password, for anyone to use.

A question of attitude I suppose: a small hotel thinks £20 or so a month is a trivial investment to make guests happier, like having newspapers in reception; a bigger chain sees it as spending millions across the chain to roll out a service which should generate revenue.

Comment Re:Cat blog (Score 4, Informative) 148

Still, HTTPS would at least prevent your ISP from monitoring your browsing activity.

That's part of it - a valuable enough part in itself, IMO; at least one UK ISP, TalkTalk, has started duplicating HTTP requests made by their customers: so, if you request http://example.com/stuff on one of their lines, 30 seconds later they'll go and request the same URL themselves for monitoring purposes. Obviously, enabling SSL prevents this kind of gratuitous stupidity - and the previous incarnation of such snooping, Phorm. If enough websites enable SSL, ISPs will no longer be able to monitor customer behavior that closely: all they will see are SSL flows to and from IP addresses, plus whatever DNS queries you make to their servers, if any. (Use encrypted connections to OpenDNS or similar, and your ISP will only ever see IP addresses and traffic volume - exactly as it should be IMO!)
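If you wanted to detect this kind of replay yourself, one approach (a rough sketch, with made-up log data) is to serve a unique throwaway URL, request it once, then watch your access log for a second hit from a different IP shortly afterwards:

```python
from datetime import datetime, timedelta

def find_replays(hits, window_seconds=60):
    """Group access-log hits by URL; flag URLs fetched again from a
    different client IP shortly after the original request."""
    by_url = {}
    for t, ip, url in hits:
        by_url.setdefault(url, []).append((t, ip))
    replays = []
    for url, entries in by_url.items():
        entries.sort()
        first_time, first_ip = entries[0]
        for t, ip in entries[1:]:
            if ip != first_ip and t - first_time <= timedelta(seconds=window_seconds):
                replays.append(url)
                break
    return replays

hits = [
    (datetime(2014, 8, 1, 12, 0, 0), "198.51.100.7", "/token/abc123"),
    (datetime(2014, 8, 1, 12, 0, 30), "203.0.113.9", "/token/abc123"),  # second fetch, new IP
    (datetime(2014, 8, 1, 12, 0, 5), "198.51.100.7", "/index.html"),
]
print(find_replays(hits))  # ['/token/abc123']
```

A unique token URL guarantees nobody else would ever have requested it legitimately, so any second hit is a snooper.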

Comment Re:Huh? (Score 1) 406

There are over 30,000 deaths in the US alone in automobile accidents; even supposing automated vehicles cut that number by 90%, 3,000 multi-million dollar settlements every year would destroy the automobile industry in the US.

3,000 multi-million dollar settlements sounds like a lot of money, but the 30,000 multi-million dollar settlements that we're already paying insurance premiums to cover are even more. Yet the system is apparently economically viable even in 2014, when the costs are ten times higher. A scenario where the accident rate is a tenth is a scenario where insurance costs a tenth, so the total cost of a vehicle is somewhat less. This would be good for the auto industry, not bad.

If you tell someone they have a choice of two cars, one where they pay $70/month to State Farm (called "careless human's liability insurance"), and another where they pay $7/month to Ford (called "careful AI's liability insurance fee", because you're not buying insurance from Ford's AI, but rather, funding its insurance), that second one is more likely to result in a car purchase.
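The back-of-envelope arithmetic (all the figures here are assumed for illustration - average settlement size, number of insured drivers):

```python
accidents_per_year = 30_000
avg_settlement = 2_000_000   # assumed "multi-million dollar" average
insured_drivers = 200_000_000  # rough assumption for the US pool

def annual_liability_per_driver(accidents, settlement, insured):
    """Total liability cost spread evenly across the insured pool
    (ignores overhead, profit, and non-fatal claims)."""
    return accidents * settlement / insured

human = annual_liability_per_driver(accidents_per_year, avg_settlement, insured_drivers)
ai = annual_liability_per_driver(accidents_per_year // 10, avg_settlement, insured_drivers)
print(f"human-driven: ${human:.2f}/yr, automated: ${ai:.2f}/yr")
# human-driven: $300.00/yr, automated: $30.00/yr
```

However you tune the assumed numbers, cutting the numerator by 90% cuts the per-driver cost by 90%.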

Comment Re:Useless (Score 1) 177

So, I agree with you that simply predicting reverse/affirm at 70% accuracy may be easy, but predicting 68000 individual justice votes with similar accuracy might be a significantly greater challenge.

In fact, it looks like very much the same challenge: with most decisions being unanimous reversals, it seems only a small minority of those individual votes are votes to affirm the lower court decision. So, just as 'return "reverse";' is a 70+% accurate predictor of the overall court ruling in each case, the very same predictor will be somewhere around 70% accurate for each individual justice, for exactly the same reason. (For that matter, if I took a six-sided die and marked two sides "affirm" and the rest "reverse", I'd have a somewhat less accurate predictor giving much less obvious predictions: with a 70% reversal rate it would be right a bit under 60% of the time, with incorrect predictions split between unexpected reversals and unexpected affirmations.)
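Putting numbers on the die predictor (the 70% base rate is assumed from the reversal statistics above):

```python
def expected_accuracy(base_reverse_rate, predict_reverse_prob):
    """Accuracy of a predictor that says 'reverse' with a fixed probability,
    independent of anything about the case."""
    r, p = base_reverse_rate, predict_reverse_prob
    # right when it predicts reverse and the court reverses,
    # or predicts affirm and the court affirms
    return r * p + (1 - r) * (1 - p)

print(expected_accuracy(0.7, 1.0))              # 0.7  (always 'reverse')
print(round(expected_accuracy(0.7, 4 / 6), 3))  # 0.567 (the marked die)
```

Note the constant predictor is actually the best you can do with zero information about the case: any randomization away from the majority answer only costs you accuracy.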

This is the statistical problem with trying to measure/predict any unlikely (or indeed any very likely) event. I can build a "bomb detector" for screening airline luggage, for example, which is 99.99% accurate in real-world tests. How? Well, much less than 0.01% of actual airline luggage contains a bomb ... so a flashing green LED marked "no bomb present" will in fact be correct in almost every single case. It's also completely worthless, of course! (Sadly, at least two people have put exactly that business model into practice and made a considerable amount of money selling fake bomb detectors for use in places like Iraq - one of them got a seven year jail sentence for it last year in England.)
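To put numbers on the "flashing green LED" detector (figures assumed):

```python
def detector_stats(n_bags, n_bombs, predict_positive=False):
    """Confusion-matrix stats for a 'detector' that gives the same
    answer for every bag."""
    if predict_positive:
        tp, fp, tn, fn = n_bombs, n_bags - n_bombs, 0, 0
    else:
        tp, fp, tn, fn = 0, 0, n_bags - n_bombs, n_bombs
    accuracy = (tp + tn) / n_bags
    recall = tp / n_bombs if n_bombs else float("nan")
    return accuracy, recall

acc, rec = detector_stats(n_bags=1_000_000, n_bombs=1)
print(f"accuracy={acc:.4%}, recall={rec:.0%}")
# accuracy=99.9999%, recall=0%
```

Accuracy is the wrong metric for rare events; recall (how many real bombs it catches) is what matters, and here it's zero.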

With blood transfusions, I understand there's now a two-stage test used to screen for things like HIV. The first test is quick, easy, and quite often wrong: as I recall, most of the positive readings it gives turn out to be false positives. What matters, though, is that the negative results are very, very unlikely to be false negatives: you can be confident the blood is indeed clean. Then, you can use a more elaborate test to determine which of the few positives were correct - by eliminating the majority of samples, it's much easier to focus on the remainder. Much the way airport security should be done: quickly weed out the 90-99% of people/bags who definitely aren't a threat, then you have far more resources to focus on the much smaller number of possible threats.
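The same base-rate effect, via Bayes' rule (the prevalence, sensitivity and false-positive figures here are purely illustrative, not real HIV screening numbers):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive screening result is a true positive."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Assume 1-in-10,000 prevalence, a screen that catches 99.9% of true
# cases but flags 1% of clean samples.
ppv = positive_predictive_value(0.0001, 0.999, 0.99)
print(f"{ppv:.1%} of positives are real")  # 1.0% of positives are real
```

So roughly 99 out of 100 positives are false alarms, yet a negative is almost certainly genuine - which is exactly what makes the cheap test useful as a first-stage filter.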

Come to think of it, the very first CPU branch predictors used exactly this technique: they assumed that no conditional branch would ever be taken. Since most conditional branches aren't, that "prediction" was actually right most of the time. (The Pentium 4 is much more sophisticated, storing thousands of records about when branches are taken and not taken - hence it "only" gets it wrong about one time in nine.)
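A toy comparison of the two approaches (the branch traces are made up, and the 2-bit saturating counter is the classic textbook dynamic scheme, not the Pentium 4's actual design):

```python
def static_not_taken_accuracy(trace):
    """The earliest predictors: always guess 'not taken'."""
    return trace.count(False) / len(trace)

def two_bit_accuracy(trace):
    """A single 2-bit saturating counter: states 0-1 predict 'not taken',
    states 2-3 predict 'taken'; it takes two misses to flip direction."""
    state, correct = 0, 0
    for taken in trace:
        if (state >= 2) == taken:
            correct += 1
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct / len(trace)

# Made-up traces: a rarely-taken branch (e.g. an error check) and a
# 10-iteration loop's backward branch.
rare_branch = ([False] * 9 + [True]) * 100
loop_branch = ([True] * 9 + [False]) * 100

for name, trace in [("rare-branch", rare_branch), ("loop", loop_branch)]:
    print(name, static_not_taken_accuracy(trace), two_bit_accuracy(trace))
```

The static guess is fine on branches that match its assumption and terrible on loops; the counter adapts to either, which is the whole point of keeping history.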

Now, I'd like to think the predictor in question is more sophisticated than this - but to know that, we'd need a better statistical test than those quoted, which amount to "it's nearly as accurate as a static predictor based on no information about the case at all"! Did it predict the big controversial decisions more accurately than less significant ones, for example? (Unlikely, of course, otherwise they wouldn't have been so controversial.)

Comment Re:No towers in range? (Score 1) 127

Usually, a terrestrial phone doesn't need to do anything much to "look" for a tower, besides keeping its receiver turned on. Towers emit beacons, and if you don't hear the beacon, there's no point in you sending anything - you won't receive a reply because you don't even hear the tower's beacon.

True - the problem AIUI is that "just" keeping the receiver turned on constantly consumes a significant amount of power in itself. Once synced with a tower, the phone can turn off the receiver, knowing that it has, say, 789ms until the next beacon it needs to check for; while it's still searching, it has to listen constantly. Worse, it doesn't know what frequency a tower might appear on - so until it finds one, it will be constantly sweeping all the different frequency bands a tower could be using - on a modern handset, cycling between at least three different modes (GSM, 3G and LTE), each on several different frequency bands. Also, because of the possibility of roaming, it may be hitting other networks and then checking whether or not it can use them ("Hi network 4439, I'm from network 4494, can I connect? No? Kthxbye")
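Rough duty-cycle arithmetic (the 789ms interval is from above; the listen-window length is my guess, purely for illustration):

```python
def receiver_duty_cycle(beacon_interval_ms=789, listen_window_ms=5):
    """Fraction of time the radio must be powered when synced to a tower:
    wake briefly for each scheduled beacon, sleep the rest of the interval.
    While searching, by contrast, the radio is on 100% of the time."""
    return listen_window_ms / beacon_interval_ms

synced = receiver_duty_cycle()
print(f"synced: radio on {synced:.2%} of the time; searching: 100%")
```

Even with generous assumptions, that's a couple of orders of magnitude difference in receiver-on time - which is why a phone with no signal drains so much faster.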

Comment Request to remove or alter content (Score 2) 81

I can't imagine that absolutely none of the requests were verifiable facts. {like a mis-typed date}

That wouldn't come under "right to be forgotten" though, a simple edit or correction request would address that.

The whole notion of a "right" to prohibit someone else from making a factually accurate statement on one website about the content of another site seems utterly absurd to me. Removing the destination page itself could perhaps be excused in some cases ... but to accept that the owner of a page making a statement about somebody has a right to keep it, even if it's out of date, then turn round and gag the likes of Google from making current factual statements about that page? Every "judge" supporting that nonsense needs to be unemployed ASAP.

Comment Re:Perhaps they can ask Google to forget that page (Score 1) 273

There would have to be a "work under this title" (something copyrightable) which becomes accessible by putting in the fuse. If plugging in the fuse causes their copyrighted AC-available icon to show up on the dashboard, for example, then it'd be a DMCA violation to plug in the fuse without their authorization. Also, it might become illegal to manufacture, sell, or traffic in fuses without Chrysler's authorization, but that's subjective and subject to judges' whims (how they decide to interpret your fuse's primary purpose, commercially significant uses, Chrysler's marketing, etc).

But if all it does is enable the air conditioner (if there's no copyrighted work protected by it), then it's not a DMCA violation.

This wouldn't ever happen, though. Suppose you made your own copyrighted work and also had it become accessible only by plugging in the exact same sort of fuse. If you became "commercially significant" enough, then Chrysler's own fuse sales to their own customers would become illegal (devices that circumvent your DRM). It's for this reason that all DRM schemes need to be trade secrets or patented, to keep different copyright holders from using each other's schemes (or at least keep 'em from doing it without a contract to cooperate). That's why no one would really use a fuse as DRM. It's not that they'd worry about their customers "hacking," but because they'd need to worry about someone (anyone!) coming along and suddenly making their own business illegal.

Comment Re:Only geeks... (Score 1) 125

How is that any different than swinging a load around with a crane? People will just have to be careful and realize the suits can be dangerous if misused.

I think the dexterity is the key here. Yes, a crane can lift 10+ tonnes at the touch of a button/lever - once someone has attached the hook to the object. You can't just reach down and pick something up with a crane, except in very carefully controlled circumstances (like shipping containers lined up on a dockyard). Imagine a suit like this in rescue situations, though: lifting lots of chunks of rubble off trapped survivors, clearing blocked paths. A crane could lift the weight easily, but can't pick chunks of rubble up; a bulldozer or excavator could move it all, but would kill the people trapped underneath. Also, in those situations there is often a lot of dust etc around - and filter masks don't fit well with the physical exertion of lifting and moving heavy debris.

Also, like the previous comment says, I imagine they'll scale up to heavier weights and other features in future (adding power tools, for example).

Comment Re:Its all in the gmail terms of use ... (Score 1) 790

I'd call cropping the image a trivial tweak. How you dealing with that?

That's a good point - unfortunately, it's not one that can easily be addressed algorithmically, because you stray into the much more abstract question of "what is porn?" (or, in this case, what is an "illegal image"). If I were to take a 1 megapixel illegal image and slice it into 100 tiles, how many of those tiles would themselves contain illegal imagery? Identifying a file as being the top-left corner of "known child abuse image #515345" isn't actually conclusive, because that bit of the picture may be innocuous in itself.

In the context of my work, I'd be logging that the offender in question had downloaded a 533k JPEG from a certain URL on dodgy-site.com, so the parole guys can skim through looking for anything suspicious: the domain name, or what search engine terms led to it, will probably be informative enough in itself. Hash matching is a quick and easy check to automate, but far from the only thing that will be checked: Facebook usage, for example ("Now, Mr Sex Offender, why exactly do you have a Facebook account claiming to be a 13 year old girl sending out friend requests...?") Fortunately, it's not a case of gathering proof for a prosecution, it's a much broader goal of assessing behaviour and compliance.
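The cropping problem in a nutshell: exact hashes match nothing but exact files (toy bytes standing in for an image here; this fragility is why perceptual-hashing schemes such as PhotoDNA exist):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"\x89PNG...pretend this is image data..." * 100
cropped = original[10:-10]  # even a tiny crop yields an entirely different byte string

print(sha256_hex(original) == sha256_hex(cropped))  # False
```

Any crop, re-encode, or single-pixel tweak defeats an exact-hash lookup, which is why the hash check can only ever be one signal among several.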
