Comment Re:Cat blog (Score 1) 148

But, but... That doesn't make any sense!
Using HTTP, the connection isn't encrypted in either direction. If they can see the original request, they can also see the original response, so why not just cache that?

It's an absolutely crazy implementation, I agree (particularly speaking as someone implementing something which analyzes HTTP downloads right now). It's not caching, but some sort of content analysis; my guess, and it is only a guess, is that it's intended as a workaround to copyright. Genuine caching of cacheable content is fine under the usual copyright exemption, but I don't think this kind of analysis would be covered by it: by fetching their own copy from the server like a regular web spider, they avoid "making a copy" of the content they intercepted. The other possibility is bandwidth: being a major ISP, it might be easier to intercept only the requests in-line and queue them up for spidering by a separate system; intercepting the downloaded content as well would mean forcing all traffic through the analysis system in real time.

Mine just hashes and logs the objects as they get fetched. Of course, I'm doing it in the firewall, with the user's knowledge and consent. I just remembered, though, that a friend who works for an anti-malware vendor mentioned their security proxy does the same bizarre duplication rather than scanning in transit, which IIRC screwed up music streaming services, so presumably there's a good reason for it. (Weird, because if I were shipping malware, I'd find that all too trivial to circumvent by serving different content to the client and the scanner.)
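
For illustration, this is roughly what "hash and log as fetched" boils down to - a minimal Python sketch, not my actual firewall code, assuming some proxy or firewall hook hands over the reassembled response body (the function and log names are made up):

    import hashlib
    import logging

    logging.basicConfig(filename="http_objects.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    def log_http_object(url, payload):
        """Record the URL, size and SHA-256 digest of a response body seen in transit."""
        digest = hashlib.sha256(payload).hexdigest()
        logging.info("url=%s bytes=%d sha256=%s", url, len(payload), digest)
        return digest

The point being that nothing gets re-fetched: the hash is taken from the bytes the user actually downloaded.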

Comment Backward (Score 2) 72

Conversely, I seem to find (in the UK at least) that cheaper hotels and shops are more likely to have free WiFi, while pricier hotels and bigger chains seem more likely to charge for it. The poshest one I've spent any time in - part of the same chain as the Savoy in London - charges crazy prices (and has lousy mobile reception), though it's a rock-solid signal throughout the large building; a much cheaper hotel nearby just had a WiFi access point on ADSL somewhere, with no password, for anyone to use.

A question of attitude I suppose: a small hotel thinks £20 or so a month is a trivial investment to make guests happier, like having newspapers in reception; a bigger chain sees it as spending millions across the chain to roll out a service which should generate revenue.

Comment Re:Cat blog (Score 4, Informative) 148

Still, HTTPS would at least prevent your ISP from monitoring your browsing activity.

That's part of it - a valuable enough part in itself, IMO. At least one UK ISP, TalkTalk, has started duplicating HTTP requests made by their customers: if you request http://example.com/stuff on one of their lines, 30 seconds later they'll go and request the same URL themselves for monitoring purposes. Obviously, enabling SSL prevents this kind of gratuitous stupidity - and the previous incarnation of such snooping, Phorm. If enough websites enable SSL, ISPs will no longer have the ability to monitor customer behaviour that closely: all they will see are SSL flows to and from IP addresses, plus whatever DNS queries you make to their servers, if any. (Use encrypted connections to OpenDNS or similar, and your ISP will only ever see IP addresses and traffic volume - exactly as it should be, IMO!)

Comment Re:Useless (Score 1) 177

So, I agree with you that simply predicting reverse/affirm at 70% accuracy may be easy, but predicting 68000 individual justice votes with similar accuracy might be a significantly greater challenge.

In fact, it looks like very much the same challenge: with most decisions being unanimous reversals, it seems only a small minority of those individual votes are votes to affirm the lower court decision. So, just as 'return "reverse";' is a 70+% accurate predictor of the overall court ruling in each case, the very same predictor will be somewhere around 70% accurate for each individual justice, for exactly the same reason. (For that matter, if I took a six-sided die and marked two sides "affirm" and the rest "reverse", I'd have a less accurate predictor giving much less obvious predictions: against a 70% reversal rate it would be right only about 57% of the time, with the incorrect predictions split between unexpected reversals and unexpected affirmations.)
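
A quick back-of-the-envelope simulation makes that arithmetic concrete - a Python sketch using the 70% reversal rate from above (everything else here is illustrative):

    import random

    random.seed(1)
    p_reverse = 0.70                       # assumed rate of individual votes to reverse
    votes = ["reverse" if random.random() < p_reverse else "affirm"
             for _ in range(68000)]

    # Static predictor: always say "reverse".
    static_acc = sum(v == "reverse" for v in votes) / len(votes)

    # Weighted die: four sides "reverse", two sides "affirm", ignoring the case entirely.
    die = ["reverse"] * 4 + ["affirm"] * 2
    die_acc = sum(random.choice(die) == v for v in votes) / len(votes)

    print(f"always-reverse accuracy: {static_acc:.1%}")   # ~70%
    print(f"weighted-die accuracy:   {die_acc:.1%}")      # ~57%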

This is the statistical problem with trying to measure/predict any unlikely (or indeed any very likely) event. I can build a "bomb detector" for screening airline luggage, for example, which is 99.99% accurate in real-world tests. How? Well, much less than 0.01% of actual airline luggage contains a bomb ... so a flashing green LED marked "no bomb present" will in fact be correct in almost every single case. It's also completely worthless, of course! (Sadly, at least two people have put exactly that business model into practice and made a considerable amount of money selling fake bomb detectors for use in places like Iraq - one of them got a seven year jail sentence for it last year in England.)

With blood transfusions, I understand there's now a two-stage test used to screen for things like HIV. The first test is quick, easy, and quite often wrong: as I recall, most of the positive readings it gives turn out to be false positives. What matters, though, is that the negative results are very, very unlikely to be false negatives: you can be confident that blood is indeed clean. Then you can use a more elaborate test to determine which of the few positives were correct - by eliminating the majority of samples, it's much easier to focus on the remainder. Much the way airport security should be done: quickly weed out the 90-99% of people/bags who definitely aren't a threat, then you have far more resources to focus on the much smaller number of possible threats.
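
The same base-rate arithmetic, with made-up numbers rather than real HIV-test figures, shows why the two-stage approach works:

    prevalence  = 0.001    # assume 1 in 1000 samples is actually infected
    sensitivity = 0.999    # first test almost never gives a false negative (assumed)
    specificity = 0.98     # but it wrongly flags 2% of clean samples (assumed)

    p_flagged = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    p_true_given_flag  = prevalence * sensitivity / p_flagged
    p_true_given_clear = prevalence * (1 - sensitivity) / (1 - p_flagged)

    print(f"flagged for the second test:   {p_flagged:.1%}")            # ~2.1% of samples
    print(f"flagged and actually positive: {p_true_given_flag:.1%}")    # ~4.8% - mostly false alarms
    print(f"'clear' but actually positive: {p_true_given_clear:.4%}")   # ~0.0001% - a safe negative

Most of the first-stage positives are wrong, but a negative is almost certainly genuine - which is exactly the property you want from a cheap screening step.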

Come to think of it, the very first CPU branch predictors used exactly this technique: they assumed that no conditional branch would ever be taken. Since most conditional branches aren't, that "prediction" was actually right most of the time. (The Pentium 4 is much more sophisticated, storing thousands of records about which branches are taken and not taken - hence it "only" gets it wrong about one time in nine.)
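
As a toy illustration of the difference, here's the static "predict not taken" scheme next to the textbook 2-bit saturating counter (real hardware like the Pentium 4 is far more elaborate; the branch patterns below are invented):

    def accuracy(history, predictor):
        """Run a predictor over a list of outcomes (True = branch taken)."""
        counter, correct = 0, 0             # 2-bit counter state, one branch for simplicity
        for taken in history:
            correct += (predictor(counter) == taken)
            counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
        return correct / len(history)

    static  = lambda counter: False          # always predict "not taken"
    dynamic = lambda counter: counter >= 2   # predict "taken" after seeing it taken twice

    rare_branch = ([False] * 9 + [True]) * 100   # e.g. an error check, rarely taken
    loop_branch = ([True] * 9 + [False]) * 100   # a loop back-edge, almost always taken

    for name, hist in (("rarely taken", rare_branch), ("loop back-edge", loop_branch)):
        print(f"{name}: static {accuracy(hist, static):.0%}, "
              f"2-bit counter {accuracy(hist, dynamic):.0%}")

The static scheme only wins where branches really are rarely taken; the counter adapts to loop-style branches too, which is broadly where dynamic predictors earn their keep.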

Now, I'd like to think the predictor in question is more sophisticated than this - but to know that, we'd need a better statistical test than those quoted, which amount to "it's nearly as accurate as a static predictor based on no information about the case at all"! Did it predict the big controversial decisions more accurately than less significant ones, for example? (Unlikely, of course, otherwise they wouldn't have been so controversial.)

Comment Re:No towers in range? (Score 1) 127

Usually, a terrestrial phone doesn't need to do anything much to "look" for a tower, besides keeping its receiver turned on. Towers emit beacons, and if you don't hear the beacon, there's no point in you sending anything - you won't receive a reply because you don't even hear the tower's beacon.

True - the problem AIUI is that "just" keeping the receiver turned on constantly consumes a significant amount of power in itself. Once synced with a tower, the phone can turn the receiver off, knowing it has, say, 789ms until the next beacon it needs to check for; while it's still searching for a tower, it needs to be listening constantly. Worse, it doesn't know what frequency a tower might appear on, so until it actually finds one it will be sweeping all the different frequency bands a tower could be using - on a modern handset, cycling between at least three different modes (GSM, 3G and LTE), each on several different frequency bands. Also, because of the possibility of roaming, it may be hitting other networks and then checking whether or not it can use them ("Hi network 4439, I'm from network 4494, can I connect? No? Kthxbye").
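
Some rough duty-cycle arithmetic (the 789ms figure is from above; the wake-up time is an assumed round number, so treat the ratio as order-of-magnitude only):

    paging_cycle_ms   = 789    # interval between beacons the phone has to check
    wake_per_cycle_ms = 10     # assumed receiver-on time per scheduled wake-up

    synced_duty    = wake_per_cycle_ms / paging_cycle_ms   # camped on a tower
    searching_duty = 1.0                                   # sweeping bands with no tower found

    print(f"synced duty cycle:    {synced_duty:.1%}")      # ~1.3%
    print(f"searching duty cycle: {searching_duty:.0%}")   # 100%
    print(f"receiver-on time:     ~{searching_duty / synced_duty:.0f}x worse while searching")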

Comment Request to remove or alter content (Score 2) 81

I can't imagine that absolutely none of the requests were verifiable facts. {like a mis-typed date}

That wouldn't come under "right to be forgotten", though; a simple edit or correction request would address that.

The whole notion of a "right" to prohibit someone else from making a factually accurate statement on one website about the content of another site seems utterly absurd to me. Removing the destination page itself could perhaps be excused in some cases ... but to accept that the owner of a page making a statement about somebody has a right to keep it up, even if it's out of date, and then turn round and gag the likes of Google from making current factual statements about that page? Every "judge" supporting that nonsense needs to be unemployed ASAP.

Comment Re:Only geeks... (Score 1) 125

How is that any different than swinging a load around with a crane? People will just have to be careful and realize the suits can be dangerous if misused.

I think the dexterity is the key here. Yes, a crane can lift 10+ tonnes at the touch of a button/lever - once someone has attached the hook to the object. You can't just reach down and pick something up with a crane, except in very carefully controlled circumstances (like shipping containers lined up on a dockyard). Imagine a suit like this in rescue situations, though: lifting lots of chunks of rubble off trapped survivors, clearing blocked paths. A crane could lift the weight easily, but can't pick chunks of rubble up; a bulldozer or excavator could move it all, but would kill the people trapped underneath. Also, in those situations there is often a lot of dust etc around - and filter masks don't fit well with the physical exertion of lifting and moving heavy debris.

Also, like the previous comment says, I imagine they'll scale up to heavier weights and other features in future (adding power tools, for example).

Comment Re:Its all in the gmail terms of use ... (Score 1) 790

I'd call cropping the image a trivial tweak. How are you dealing with that?

That's a good point - unfortunately, it's not one that can easily be addressed algorithmically, because you stray into the much more abstract question of "what is porn?" (or, in this case, what is an "illegal image"). If I were to take a 1 megapixel illegal image and slice it into 100 tiles, how many of those tiles would themselves contain illegal imagery? Identifying a file as being the top-left corner of "known child abuse image #515345" isn't actually conclusive, because that bit of the picture may be innocuous in itself.
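
To make the tiling point concrete, here's a hedged sketch using Pillow: slice an image into a 10x10 grid and hash each tile. A hit on one tile only tells you it came from a known image, not what that tile depicts (the filename is hypothetical):

    import hashlib
    from PIL import Image

    def tile_hashes(path, grid=10):
        """Return a SHA-256 digest for each tile of a grid x grid split of the image."""
        img = Image.open(path)
        w, h = img.size
        tw, th = w // grid, h // grid
        hashes = {}
        for row in range(grid):
            for col in range(grid):
                box = (col * tw, row * th, (col + 1) * tw, (row + 1) * th)
                hashes[(row, col)] = hashlib.sha256(img.crop(box).tobytes()).hexdigest()
        return hashes

    # tile_hashes("known_image.jpg")[(0, 0)] identifies the top-left corner of that image,
    # but says nothing about whether that corner itself shows anything illegal.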

In the context of my work, I'd be logging that the offender in question had downloaded a 533k JPEG from a certain URL on dodgy-site.com, so the parole guys can skim through looking for anything suspicious: the domain name, or the search engine terms that led to it, will probably be informative enough in itself. Hash matching is a quick and easy check to automate, but far from the only thing that will be checked: Facebook usage, for example ("Now, Mr Sex Offender, why exactly do you have a Facebook account claiming to be a 13 year old girl sending out friend requests...?"). Fortunately, it's not a case of gathering proof for a prosecution; it's the much broader goal of assessing behaviour and compliance.

Comment Re:Its all in the gmail terms of use ... (Score 5, Informative) 790

That means only the most incompetent pedos aren't already randomly tweaking their jpgs - the smart ones are doing it in the EXIF section so it won't even change the picture.

The smart implementations probably hash the image payload excluding EXIF, for exactly that reason - maybe downsample and reduce the colorspace too, so trivial tweaks won't have that effect any more.

(In fact, the implementation I'm working with right now for exactly this purpose - I have a small research project underway with the police in Scotland as part of their Offender Management work - just hashes whole HTTP payloads for the moment, although refining this is on the drawing board for later.)
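
As a sketch of the "hash the pixels, not the file" idea - a generic average hash, not any vendor's actual algorithm, using Pillow. Decoding the image discards EXIF, and downsampling to an 8x8 greyscale thumbnail means re-saves and metadata edits leave the hash unchanged:

    from PIL import Image

    def average_hash(path, size=8):
        """64-bit perceptual hash: downsample to greyscale, threshold at the mean brightness."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for px in pixels:
            bits = (bits << 1) | (px >= mean)
        return bits

    # Copies differing only in EXIF (or mildly re-compressed) should match exactly or
    # within a few bits; compare with bin(h1 ^ h2).count("1").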

I do find this very disturbing in principle though. Is absolutely everything in your mailbox entirely innocent? I have, for example, a list of various Microsoft product keys in mine. As it happens, those are legitimate - all issued to me by Microsoft via MSDN subscription, then I stuck them all in a spreadsheet to keep track of which key was in use for what - but would Google or the police know that just from looking at the list? They might turn up with a warrant looking for the piracy ring I'm obviously running, just because Google got nosy and went vigilante!

This isn't the first time, though; I recall a malware researcher getting rather upset after Google started eating samples from his Inbox - even when they were inside password-protected ZIP files. I can see that they mean well, but to me that crosses a line.

Comment Re:Who has the market share? (Score 2) 336

Since we're talking about desktop market shares here, Linux's number isn't that far off.

I wonder about that, actually: I'm quite sure Linux users are much more likely to be running the likes of NoScript and various ad-blockers than Windows users are - and anyone who blocks whatever analytics script this survey uses will be ignored completely, skewing figures away from their platform. Maybe it's not a large proportion, but I'm sure it will be a factor there.

The scary thing is that Vista actually gained users. The interesting gap is how desktop versus mobile usage compares: how would the IE/Chrome/Safari shares look across all form factors? (Bearing in mind that mobile users on the Chrome rendering engine are all on Linux kernels, probably dwarfing the Linux desktop users.)

Comment Re:Exploited procedural loophole (Score 1) 419

The two times I've had in-store card referrals (high-value transactions: the first time was buying a P3 laptop, which was quite high-end in those days; the second was furnishing a new apartment after moving to Houston), I'm pretty sure it was the issuing bank ultimately handling the call - I can't imagine the bank would have handed over the personal information they were asking for as a security check (past unlisted contact details, previous transactions etc.) to the merchant services provider. I suspect the call may have been transferred to the bank, though, rather than placed to them directly.

I had a similar issue this year with British Telecom working on a broadband fault. The service manager wanted to speak directly to the field engineer working on the fault (different divisions: the engineer was BT Openreach, the manager BT Wholesale) - but the Openreach guy said he couldn't call the Wholesale one directly. So the Wholesale one called my number and asked to speak to him ...

Comment Re:Illegal and Dangerous? (Score 1) 200

I say try because in a battle between a jet engine with the power to push 400 tons of steel into the sky VS a drone I'm going to put my money on the jet engine lasting long enough for them to turn around and land again.

You might want to rethink that after being reminded of jet airliners being brought down by birds - not an ounce of metallic content, just a few pounds of meat and soft lightweight bones - or the 747 which almost crashed after all four engines failed from ingesting volcanic ash. (Fortunately, they were high enough to glide for over a hundred miles and happened to be relatively near an airport; while preparing to ditch in the ocean they managed to restart the engines, buying just enough time to limp to the nearest runway - although all four engines were damaged beyond repair.)

For that matter, the French Concorde which crashed in 2000 was destroyed by a single thin strip of metal, 17 inches long and just over an inch wide, less than four ounces: essentially, a slightly larger than average metal ruler. It didn't even go into an engine, it just burst a tire - violently enough that the ten pound lump of rubber ruptured the wing and number 5 fuel tank, causing the crash which killed everyone on board.

That was a single 4 oz strip of metal hitting a tire. A pound of bolts or nails will destroy an engine - and so would a metal drone of that size.

Comment Re:Annoying. (Score 1) 347

A business called "BT Wholesale / aka OpenReach"

Actually, BT Wholesale is a separate unit from Openreach. Openreach manages the 'final mile' services: all the copper wire, the local exchange buildings, and some but not all of the equipment in there. A few UK ISPs build their services on top of Openreach's products directly: TalkTalk and Sky, for example, went and installed their own DSLAMs in those exchange buildings, paying Openreach to connect the copper wires to them. BT Wholesale also takes those Openreach products, adds in their own national backbone and offers a service to other ISPs: they'll install a fast fibre backbone link to the ISP's premises/facilities, and connect the customers through that to the ISP.

This can cause problems; my own ISP is a BT Wholesale customer, so when I had a fault earlier this year they had to report it to BT Wholesale, who passed it on to Openreach to deal with. Openreach came out and tested their bit - my phone line, and the VDSL equipment on each end - found nothing wrong there, and closed the fault. After six visits, BT Wholesale (or rather, BT TSOps and the Adhara Ops team at Adastral Park, where the fault eventually ended up being escalated) finally found the problem was on their own backbone: a faulty router was corrupting traffic between certain IP addresses, one of which happened to be a core router at my ISP.

I agree with the overall approach, though, of having a separate and regulated entity run just the local-loop portion. (In practice, Openreach is still part of BT - hence I got a sales pitch from at least one of the six Openreach engineers about BT Retail being a better option. That's against all the rules - Openreach are officially supposed to be neutral - but could that neutrality ever really happen in practice while they're still the same company?)

Comment Re:About time! (Score 2) 306

Others such as Eli Lily or the UK Gov Dept of Pensions really don't need so many addresses

Someone in the UK government pointed that out recently - it turns out that "Dept of Pensions" allocation is actually used across most of the government as some sort of VPN extranet with various external contractors. Apparently, since they all use assorted (and overlapping) RFC1918 blocks internally, they can't all be VPNed into any single RFC1918 block: they needed a globally unique block for that purpose.
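
That clash is easy to demonstrate with Python's ipaddress module (the prefixes here are invented for illustration):

    import ipaddress

    contractor_a = ipaddress.ip_network("10.0.0.0/16")   # one contractor's internal range
    contractor_b = ipaddress.ip_network("10.0.0.0/20")   # another picked overlapping space
    extranet     = ipaddress.ip_network("51.64.0.0/16")  # hypothetical globally unique block

    print(contractor_a.overlaps(contractor_b))   # True  - can't both route into one RFC1918 hub
    print(contractor_a.overlaps(extranet))       # False - unique space sidesteps the clash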

British Telecom uses the 30.0.0.0/8 block for managing all their customer modems - that block is actually allocated to the US DoD, but it isn't routed on the public Internet anyway, so there's nothing to stop you using it internally yourself as long as you never need to communicate with another network playing the same trick. Better than wasting an entire /8 of global address space just on internal administrative systems - or a /9, like Comcast grabbed back in 2010.

My inner geek - who cares about efficiency - would love to see all the legacy blocks revoked. I'm sure the DoD could quite easily use 10/8 instead of 30/8 for their non-routed block; the universities could easily fit in a /16 instead of a /8, or smaller with a bit of NAT. Still, we should be moving to IPv6 instead now: give each university and ISP a /48, or a /32 for big complex networks needing multiple layers. I just have a nasty feeling we're in for a long period of CGNAT spreading instead: where we currently have ISPs that don't offer static IP addresses, in a few years they'll be refusing to issue anything other than a NATted 100.64/10 address.
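
A quick sanity check of those sizes with the same ipaddress module (the IPv6 prefix is the documentation range, standing in for a real ISP allocation):

    import ipaddress

    isp_block = ipaddress.ip_network("2001:db8::/32")
    print(sum(1 for _ in isp_block.subnets(new_prefix=48)))   # 65536 /48 sites per /32

    cgnat = ipaddress.ip_network("100.64.0.0/10")             # RFC 6598 shared address space
    print(cgnat.num_addresses)                                # 4194304 addresses to share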
