
Comment Fourth amendment searches and warrants (Score 1) 71

Lots of 4A searches do not require warrants -- searches incident to arrest, custodial searches, searches with consent, and probably more. The warrant requirement only kicks in when a warrantless search would be "unreasonable" (that is, would violate a reasonable expectation of privacy, and such expectations are narrower than most non-lawyers believe).

Comment Re:What took them so long? (Score 1) 212

The single-session CD is supposed to come from an unsecure network. What good will putting it back there do an attacker?

I am not thinking small, I am thinking rationally. You are assuming an "insan[e]" attacker, which is rather silly. I'm not claiming that single-session CDs will make a system unbreachable, or that you should try to make it so. My claim is simply that using single-session CDs (in a controlled, hygienic way) makes the cost to breach a system much higher than the alternatives that were suggested (network and USB) -- not that single-session CDs are always the right solution.

Comment Re: What took them so long? (Score 1) 212

Do you have any idea what the error rate for manual data entry is? Typically about 0.5% of entries will be wrong. Retyping information is a very error-prone process.

Do you have any idea that there are well-known good practices for checking entered data before committing to it? And that most people would want to apply that kind of check before kicking off a production run of just about anything, regardless of how the order was sent to the system?
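One such practice can be sketched as double-entry verification plus a plausibility check (a hypothetical helper; the function name, quantity field, and range limits are my own illustration, not anything from the original system):

```python
def validate_order(first_entry: str, second_entry: str,
                   min_qty: int = 1, max_qty: int = 10_000) -> int:
    """Double-entry check for a manually keyed quantity.

    Accept the order only if both entries match exactly and the
    value falls inside a plausible range; raise ValueError otherwise.
    """
    if first_entry.strip() != second_entry.strip():
        raise ValueError("entries do not match; re-key the order")
    qty = int(first_entry)
    if not (min_qty <= qty <= max_qty):
        raise ValueError(f"quantity {qty} is outside the plausible range")
    return qty
```

With a check like this in front of the production system, a 0.5% per-entry typo rate has to happen the same way twice before it gets through.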

What is it about this topic that makes people forget basic engineering practices?

Comment Re:What took them so long? (Score 1) 212

If you have an air-gapped system, you don't let people plug either random USB devices or random Ethernet devices into it. You help enforce this by disabling USB ports, MAC-locking switch or router ports, making it clear that only specific authorized people can import data, and making sure those authorized few use hygienic practices. It's IT security, not brain surgery.

Comment Re: What took them so long? (Score 2) 212

Sure... if.

1) If you can define the protocol to be simple enough, and
2) if you can be sure that only the intended application will process the data stream on the secure side, and
3) if you actually test that application enough to be confident it is secure, and
4) if you can ensure that sensitive information will not (improperly) leak back down the other direction, and
5) if you use it often enough to pay for that development cost, and
6) if you can resist the pressure to add features or "generality" to the protocol that makes it more costly to ensure secure processing...

then maybe such a protocol makes sense. Maybe somebody somewhere has satisfied all those ifs, but I would suspect not. For your simplified example, it is probably cheaper -- and just as secure -- to have an operator enter the dozen or so keystrokes to order "produce x amount of class y steel" than to design, build, install and support a more automated method. Human involvement has the added bonus of (nominally) intelligent oversight of the intended behavior for the day.

Comment Re:What took them so long? (Score 1) 212

The point of an air gap is to make data transfers much more controlled. Some can be crossed regularly (with appropriate control), and some should not. One should only adopt any security measure after a cost-benefit analysis. The depth and rigor of that analysis should be determined by the expected costs (ongoing/operational) and potential costs (from a successful exploit).

Thus, I said "If I really wanted to reduce exposure", not "Everybody should do this to reduce exposure". If the productivity costs are very high, you had better impose enough oversight to deter or catch any policy violations... or choose a security policy besides "air gap". My basic points stand: much more software regularly talks to a network than regularly reads from CDs, and the protocols involved are much more complex for network communications; and USB sits in between those two.

FWIW, industrial control instructions can be made much more regular than arbitrary data, making it easier to detect a compromise before it reaches its ultimate target. For example, if the usual file size is 1 MB, you had better have a good reason for it to suddenly be 3 MB. If you are really paranoid, you might have a format checker or sanitizer to act like a very application-specific antivirus.
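A file-level checker of the kind described might look like this (a minimal sketch: the size limit, line format, and field names are hypothetical, assuming the transferred instructions are a fixed-format ASCII text file):

```python
import os
import re

MAX_SIZE = 2 * 1024 * 1024  # ~2 MB: generous headroom over a usual ~1 MB file
LINE_RE = re.compile(r"^[A-Z]{3}\d{4},\d{1,6}$")  # e.g. "STL0042,2500"

def check_transfer_file(path: str) -> None:
    """Reject files that deviate from the expected regular format."""
    if os.path.getsize(path) > MAX_SIZE:
        raise ValueError("file unexpectedly large; refusing transfer")
    with open(path, encoding="ascii") as f:  # non-ASCII bytes also fail here
        for lineno, line in enumerate(f, 1):
            if not LINE_RE.match(line.rstrip("\n")):
                raise ValueError(f"line {lineno} does not match expected format")
```

Because the legitimate traffic is so regular, almost any payload an attacker smuggles across will trip either the size check or the per-line format check.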

Comment Re:Sometimes 'air gap' is impossible (Score 2) 212

What compels the management to hook the control network up to the Internet? If a vendor told me that their safety-impinging product needed Internet access to run -- for a license check or for any other reason -- I would tell them to go pound sand, and I'd be happy to take my business to a competitor. If Internet access is not mandatory, you are describing "sometimes an air gap is inconvenient", not "sometimes an air gap is impossible".

Comment Re:"sophisticated social engineering techniques" (Score 1) 212

There are techniques like "Hello my name is Solicitor Darren White, my client has just deceased and left you a sum of $1,000,000,000 (ONE BEEELLION DOLLARS)...". There are also techniques like "Registration is now open for [industry-relevant convention], please visit [malware-infected site] to sign up so you can keep up with new developments." Beyond that are very individualized attempts to gain the target's confidence, perhaps involving apparently independent contacts -- persona A contacts the target over a job board, persona B uses some of that information to ask for a supplier reference, eventually culminating in executable code delivered directly to the target in hopes that it will bypass virus checks and be executed on a sufficiently privileged computer. More sophisticated social engineering techniques will usually be more narrowly tailored and more costly for the attacker to use.

Comment Re:What took them so long? (Score 1) 212

On the one hand, you have to worry about security holes in the USB driver and file system.

On the other hand, you have to worry about security holes in every piece of software that talks to the network.

If I really wanted to reduce exposure for a network, I would probably use single-session CDs to cross the air gap, and make sure to pack any extra space with random data.
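Packing the extra space could be sketched like this (a hypothetical helper; the padded image would then be burned to a finalized single-session disc, leaving no writable space for anyone to hide data in later):

```python
import os

CD_CAPACITY = 700 * 1024 * 1024  # nominal 700 MB data CD

def pad_image(src: str, dst: str, capacity: int = CD_CAPACITY) -> None:
    """Copy a disc image and fill the remaining capacity with random
    bytes, so 'empty' space cannot conceal added data unnoticed."""
    size = os.path.getsize(src)
    if size > capacity:
        raise ValueError("image already exceeds disc capacity")
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.write(fin.read())
        remaining = capacity - size
        while remaining > 0:
            chunk = min(remaining, 1024 * 1024)  # pad in 1 MB chunks
            fout.write(os.urandom(chunk))
            remaining -= chunk
```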

Comment Re:Established science CANNOT BE QUESTIONED! (Score 2) 719

Which people do you think I am describing? There certainly are a lot of weirdo extremists in the environmental-activist camp, but I wasn't really thinking about them. If you want me to ignore the weirdo extremists on that side, will you ignore the weirdo extremists on the other side? More significantly, will media and activists stop focusing on the (conveniently distracting) anti-AGW weirdo extremists so that we can pay more attention to what actually can and should be done?

What specific steps do the reasoned thinkers recommend as "what actually needs to be done"? Last I heard, European countries were revising or just rolling back climate agreements because (a) they realized they couldn't achieve their goals without reducing their quality of life, (b) they realized the system was being gamed, and/or (c) they wanted to keep up with the countries who didn't sign up to those agreements.

Comment Re:Established science CANNOT BE QUESTIONED! (Score 3, Insightful) 719

Lots of people believe in ghosts. Lots of people also believe in people who "think[] that human activities have no impact on climate change". There's about as much hard evidence in one of these beliefs as in the other.

When climate alarmists stop pretending that the dispute is over whether humans influence climate at all, rather than over the degree of human influence and how much different countries should spend to mitigate anthropogenic climate change (or other kinds!), they might start to get traction with skeptics. It would also help if they started acting like the situation is as bad as they claim it is.

I know that when I used an electric sous vide cooker to make pork chops for dinner last night, it was worse for the climate than if I ate raw vegetables, and better than if I grilled a slab of steak over a bonfire. I know that living in the suburbs emits more greenhouse gases than living in a tiny apartment in a big city. I am thoroughly unconvinced that forcing most people to live like the alarmists claim we should (but usually don't live themselves) will yield the claimed benefits, or be worth the costs even if the benefits would be as claimed.

Comment Re:Clickbait (Score 1) 130

I called it cheating because they violated one of the prime rules of AI (train on a data set that is more or less representative of the data set you will test with) and one of the prime rules of statistics (do not apply a priori statistical analysis when you iterate with feedback based on the quantity you estimated). Their test images are intentionally very different from the training images, which is one of the first things an undergraduate course on AI will warn about. They also use what are essentially a priori estimates after repeatedly tweaking the inputs to push those estimates to extremes, which is identified as taboo in decent undergraduate courses on statistics. Both of those are intentional violations of good practices that make the results look worse for the neural networks.

I can't tell from their paper what they mean by "99% confidence". Unless the DNN has max-pooling layers very near the output, none or many of the output units might have high activation levels for a given input. (It sounds like they had classes with low typical activation levels, and did not try to evolve fooling images for those classes.) If that happens -- say, "wheel" gets a score of 0.99, "lizard" gets 0.90, "dog" gets 0.80, and everything else is near zero -- then it is inappropriate to say that the network decided it was a wheel with 99% certainty. You would usually say that the network recognized the image as a wheel, but note it as an ambiguous result.
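The ambiguity can be illustrated with a toy sketch (not the paper's actual network): independent sigmoid outputs allow several classes to score high at once, while a softmax forces the scores to compete and sum to 1. The class names and activation values below are invented to match the example in the text.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Raw output-unit activations for "wheel", "lizard", "dog", "other"
logits = [4.6, 2.2, 1.4, -6.0]

# Independent sigmoids: several classes score high simultaneously,
# so a 0.99 for "wheel" does not mean 99% certainty overall.
print([round(sigmoid(x), 2) for x in logits])   # [0.99, 0.9, 0.8, 0.0]

# Softmax: the same activations, normalized to compete for probability mass.
print([round(p, 2) for p in softmax(logits)])   # [0.88, 0.08, 0.04, 0.0]
```

In the sigmoid reading, calling this "99% confidence in wheel" ignores the equally strong "lizard" and "dog" activations; the honest summary is "wheel, but ambiguous".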

Comment Re:Clickbait (Score 1) 130

Why was my characterization of their approach "hardly fair"? Someone -- either the researchers or their press people -- decided to hype it as finding a general failing in DNNs (or "AI" as a whole). The failure mode is known, and their particular failure modes are tailored to one particular network (rather than even just one training set). I think the "hardly fair" part is the original hyperbole, and my response is perfectly appropriate to that. The research is not at all what it is sold as.

Don't multi-class identification networks typically have independent output ANNs, so that several can have high scores? I assumed, perhaps incorrectly, that the 99+% measures they cited were cases where only one output class had a high score, and the rest were low. If they were effectively using single-class identifiers, either in fact or by considering only the maximum score in a multi-class identifier, that makes their findings even less notable.

Comment Re:Clickbait (Score 1) 130

The researchers also basically cheated by "training" their distractor images on a fixed neural network. People have known for decades that a fixed/known neural network is easy to fool; what varies is exactly how you can fool it. The only novel finding here is their method for finding images that fool DNNs in practice -- but the chances are overwhelmingly high that a different DNN, trained on the same training set, would not make the same mistake (and perhaps not make any mistake, by assigning a low probability for all classes). It is a useful reminder for some security analyses, but not a useful indictment of AI or DNNs as a whole.
