Comment Re:Computer License (Score 1) 321

I would agree, except that most users live in outright denial, rarely (if ever) learn from their mistakes, and frequently prefer to ignore their suffering until the harm is truly excessive.

Better critical-thinking techniques need to be taught in school, along with practices that counteract cognitive dissonance.

Further, there need to be recognized groups that have the authority to mentor those who aren't clued up.

Comment Re:What is the actual risk? (Score 1) 321

If someone decided to stand on the curb for a long time, they'd probably be reported for suspicious activity. Casing a place is a very common precursor to a break-in. I see no reason for the monitoring of a private webcam to be treated any differently in that regard.

A more likely scenario would be for a criminal to drive past at night, see the car gone, and then check the internal cameras of the house for any activity to determine if it's easy to rob. If there's no baby, there's likely no babysitter either. It's just wardriving with intent.

A third scenario is that the criminals have something equivalent to packet sniffing for speech. Back in the old pre-common-SSL days, it was common enough for a hostile packet sniffer to log packets that contained a field in credit card number format. You didn't have to break in to get the personal data; you just grabbed it as it went by. Likewise, you wouldn't sit there waiting for interesting tidbits of information - you'd simply have your zombie botnet collect interesting-looking sound snippets. It doesn't have to recognize the words, just the patterns. We know for certain the security services had that capability in 2003 as part of Echelon and Moonpenny, and probably as far back as the late 1990s. It would be gross incompetence on the part of anyone dealing with IT security to blithely assume it hasn't reached the cybercriminal domain.
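
To make the "grab it as it goes by" point concrete, here's a rough Python sketch of the kind of field-format matching such a sniffer would have done: scan captured payload text for digit runs shaped like card numbers and keep only the ones that pass the Luhn checksum. The sample payload and field names are made up purely for illustration; nothing here comes from a real capture.

    import re

    def luhn_ok(digits: str) -> bool:
        # Standard Luhn checksum: double every second digit from the right.
        total = 0
        for i, ch in enumerate(reversed(digits)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    # 13-16 digits, optionally separated by spaces or hyphens.
    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def card_shaped_fields(payload: str):
        for match in CARD_RE.finditer(payload):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_ok(digits):
                yield digits

    if __name__ == "__main__":
        sample = "name=alice&card=4111 1111 1111 1111&exp=12/99"  # made-up payload
        print(list(card_shaped_fields(sample)))  # ['4111111111111111']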

Hell, just the fact that the intelligence services can sniff for interesting data is a serious risk these days. Both British and American authorities have done some ethically questionable undercover work that (at best) bordered on the criminal. And they're some of the better ones. Blatant criminal endangerment, blackmail and other corrupt practices are widespread.

Comment Re:Place the blame where it belongs (Score 1) 321

I could build a device that is, by default, secure against remote intrusion. That's easy. I haven't, because the NSA wants to ban public encryption and GCHQ wants to declare all secure devices terrorist command-and-control centres. I'd rather not be a target for a Hellfire missile, thank you very much.

But if I can do it, anyone with half a wit and a credit card can. It's not hard. It's not cheap, but it's not hard.

Such a device ought to be mandatory on eCommerce systems, and a minimal version ought to be mandatory on all networked appliances (fridges, toasters, cameras, air conditioning, nuclear reactors...) - that it isn't IS gross incompetence. That the security agencies want to prohibit such technology is gross negligence.

Comment Re:missing from the Scorecard (Score 2) 96

IPSec and SK/IP are usable by ordinary people, and since all applications can work over those, all applications can have secure and usable cryptography.

That's not the problem. The problem is that if it's not used by a critical mass of people, it doesn't do any good. Until people are forced, kicking and screaming, to stop broadcasting every private thought to the entire world, nothing will happen. I'll see you on the 6Bone before I'll see the average Joe so much as clicking a button in their own interest.

Comment Re:Would love to see how I2P-Bote fares. (Score 2) 96

Agreed. Better to fix IPSec and have every packet encrypted - with keys when possible, opportunistically as fall-back - when communicating with any other computer for anything.

One of the advantages of IPSec is that absolutely everything is encrypted. Thus, any packet sniffer out there (be it a credit card thief, the NSA - who may also be credit card thieves - or anyone else) can't look for context to decide which packets to grab. There is no context. That means having to decrypt absolutely everything, including DNS lookups, spam emails, everything. Since keys expire frequently, the value of the data has to be extraordinary to be worth the cost of the effort.

The main disadvantage of IPSec is that it doesn't replace the unencrypted channel for the user, it's a distinct channel. That's bad. You don't want a trojan sneaking onto the computer and simply echoing all the juicy gossip over the plain wire.

The second disadvantage is that it's a very heavy protocol. Sun's SK/IP was lighter; it might be worth investigating why it was dropped and whether it would be a better choice.

The final disadvantage is that most implementations use crypto functions that are no longer regarded as secure, or are horribly slow. Not that that matters much, since to get it to override the user-visible open channels you'll have to rewrite it anyway.

Comment Re:We have to assume everything is compromised (Score 4, Interesting) 96

The first requirement is that more than two-thirds of those involved in the audit must not be compromised - with f compromised participants you need at least 3f + 1 in total, the bound from the Byzantine Generals Problem - for provably correct information to be transmitted to/from the head of the development team (who must also not be compromised).
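
For concreteness, here's a small Python sketch of that quorum arithmetic (the classic oral-messages result, no signed messages assumed; the function names are just my own illustration):

    def tolerable_traitors(n: int) -> int:
        # Largest number of compromised auditors an n-member audit can absorb.
        return (n - 1) // 3

    def auditors_needed(f: int) -> int:
        # Smallest audit that still reaches agreement with f traitors.
        return 3 * f + 1

    if __name__ == "__main__":
        for n in (4, 7, 10):
            print(n, "auditors tolerate", tolerable_traitors(n), "traitor(s)")
        print("to survive 2 traitors you need", auditors_needed(2), "auditors")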

The second requirement is that the audit not be done directly. In the case of seL4, the proof was done mathematically. In the case of extreme programming, development is done by producing test harnesses (which play much the same role as the mathematical proof) that the code must satisfy in order to pass inspection. Code itself is often very difficult to validate by inspection; inspecting the reasoning/logic is much cleaner, and it's easier to prove that the inspection is itself correct. A toy example of such a harness is below.
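
The module and its functions (seal/open_sealed) here are hypothetical stand-ins invented for illustration; the point is that this kind of test is what the auditors read and sign off on. The placeholder XOR "cipher" exists only so the harness can run - it is emphatically not secure, and a real implementation would replace it.

    import unittest
    from secrets import token_bytes

    def seal(key: bytes, plaintext: bytes) -> bytes:
        # Placeholder so the harness can run; XOR is NOT a real cipher.
        return bytes(p ^ k for p, k in zip(plaintext, key))

    def open_sealed(key: bytes, ciphertext: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    class SealSpec(unittest.TestCase):
        # This class is the auditable specification, not the implementation.
        def test_roundtrip(self):
            key = token_bytes(32)
            msg = b"lunch at noon, usual place"
            self.assertEqual(open_sealed(key, seal(key, msg)), msg)

        def test_ciphertext_differs_from_plaintext(self):
            key = token_bytes(32)
            msg = b"lunch at noon, usual place"
            self.assertNotEqual(seal(key, msg), msg)

    if __name__ == "__main__":
        unittest.main()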

The third requirement is that you must be able to establish that "traitor code" within the system, provided it is sufficiently small, cannot compromise security. In other words, there should be no single point of security failure, where a traitor module could transmit sufficient data to compromise the entire system. Obviously, there can always be sufficient traitor modules to betray the secrets between Alice and Bob, and there is no way to prove you have eliminated all of them. What you have to prove, however, is only that your detection threshold for such code is below the minimum number of modules a third party would need in order to intercept Alice's lunch plans with Bob. Anything below that threshold is unimportant.

This doesn't require you to use lots of duplicate code. It requires only that no block of code ever gets access to both clear text and any form of network or storage device. Ever. Clear-text handling code should be able to read data, process it and hand it directly over to the next module. Nothing more. Then it doesn't matter what else it tries to do; it can't do anything toxic. Ideally, you'd write such code in its own totally isolated process that is loaded and run by the main program. If it's a distinct process, ideally under a non-privileged user, you can lock it down. Give it the absolute minimum rights to do what you specify and nothing more. It shouldn't have network access of any kind, for example, since it has no business touching any network.
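
As a rough illustration (my own Unix-only sketch with made-up names, not anyone's production design), here's what that looks like in Python: the parent re-runs itself as a worker, the worker locks itself down so it can't open any new file or socket, and all it can do is read clear text on stdin and hand the result back on stdout.

    import os
    import resource
    import subprocess
    import sys

    def lock_down():
        # Keep only stdin/stdout/stderr: any later attempt to open a file or
        # socket fails, so clear text can't reach a disk or the network.
        resource.setrlimit(resource.RLIMIT_NOFILE, (3, 3))
        # If launched as root, drop to an unprivileged uid (65534 is "nobody"
        # on many Linux systems - an assumption, adjust for your platform).
        if os.getuid() == 0:
            os.setgid(65534)
            os.setuid(65534)

    def worker():
        lock_down()
        # The only things the handler may do: read, transform, hand back.
        data = sys.stdin.buffer.read()
        sys.stdout.buffer.write(data.upper())  # stand-in for real processing

    if __name__ == "__main__":
        if "--worker" in sys.argv:
            worker()
        else:
            proc = subprocess.run(
                [sys.executable, __file__, "--worker"],
                input=b"alice's lunch plans with bob",
                capture_output=True,
            )
            print(proc.stdout.decode())

Combine that with whatever mandatory access control your platform offers (SELinux, seccomp, jails and so on) and a traitor module inside the handler has nowhere to send anything.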

Because no clear text escapes that container, even via leakage over the heap or stack, it doesn't matter what has been added to the network code. There's nothing sensitive that can be leaked to third parties at that point, if the cryptography is good.

Now, as previously noted, all this code is being audited by formal or semi-formal methods that have, themselves, been audited. This is still necessary, because the firewall isn't perfect. It's good, but a rootkit or hypervisor can see into the memory of multiple processes and can thus cross-contaminate without ever altering the code itself. The audit won't stop that, but it'll stop any code being added that assists in such a process.

Now, can you stop a third-party hypervisor at all? No. You cannot. That's what makes the NSA and GCHQ bleating so infuriating. If they were actually competent, they wouldn't care what software you used; they could obtain anything they wanted in the clear anyway. It betrays severe incompetence, and if there's anything more annoying than a spy agency conducting industrial espionage and moral supervision of the citizens of a country, it's a hopelessly incompetent spy agency conducting (largely successful) industrial espionage and moral supervision of the citizens of a country... whilst asking for assistance in doing so.

To get much more secure, to actually block software running outside the OS itself, you need far better security than you can achieve in software. With software, there is always something that can look in from outside. And if it can look in, it can both intercept and inject at every point in the code. Nothing, not even the data stream, can be assured. To go further, you must abandon plug-and-pray commodity hardware. If you want guaranteed intercept-proof technology, you have to understand the Orange Book. Not just the software but the hardware as well MUST be verified to B2 standards or above. If you're paranoid, start at A1 and work your way up through the ceiling. If you're going to do that, seL4 is the obvious starting point, although you could probably debug BareMetal OS to the required security level.

Comment Re: Kinetic Kill Vehicle (Score 1) 139

A dark satellite made from an ultrablack material or using a stealthy geometry simply isn't going to be seen. By anyone.

Ion engines are slow, but they don't give off any tell-tale glare.

The southern hemisphere has very little in the way of monitoring - a satellite traversing any great circle other than an equatorial one will be difficult or impossible to track.

This is not a likely threat; on a scale from one to ten, the seriousness is sqrt(-1). Nonetheless, it's not zero.

Comment Re: are you sure? (Score 2) 139

That may be true in theory, but Iran succeeded in hijacking a US drone via a GPS attack. Thus, whatever authentication exists is not actually in use. The US, for reasons known only to itself, hates encryption. Any encryption. By anyone. Including itself. For much of the war in Afghanistan, drone camera signals were unencrypted and omnidirectional, leading to video footage being circulated. Slashdot covered the issue in the early days of the war.

If the US military are too stupid to encrypt drone GPS systems and drone video feeds in an open war, they can't be trusted to do anything right.

Comment Re: Err - no. (Score 2) 139

The FBI wants to ban private encryption, essentially banning eCommerce, eBanking, UNIX, foreign languages, medical implants, Boolean operators...

The mere fact that the director could state this in public and not be fired by the time he'd finished speaking is all the proof you need that Americans - and indeed any post-Babbage civilizations - are expendable in the eyes of the civil (uncouth?) service.

Which should be no surprise. The difference in social influences, culture and thus attitude between the paranoid schizophrenic survivalists and the paranoid schizophrenic security agency staff is pretty much nil.

Comment Re: Err - no. (Score 2) 139

And the government always obeys the law? Further, if the facility exists, anyone can turn the jitter back on. It's no different from what we've been saying about backdoors - once they exist, anyone can use them. There are also risks of social engineering attacks against those running the satellites. And, since no software is perfect (and no radiation hardening is perfect), the satellites may spontaneously add jitter, enable encryption (with a gibberish key), or simply activate their steering jets, putting them on an incorrect and/or elliptical orbit and screwing up calculations (i.e. physical jitter).

This ignores the solar storm/jamming/gamma-ray burst/space junk collision class of issues, as they're discussed elsewhere and aren't really pertinent to the jitter question.
