Comment Re:Can't see how... (Score 1) 164

It's one thing if you've made a conscientious and competent effort to build a secure product, and you provide security updates for a reasonable support period afterward. The point isn't to punish vendors for not being perfect; responsibility for an attack ultimately lies with the attacker, after all, and the vendor is a victim too.

Something like an open telnet port with a hard-coded password, though, is gross negligence. Heartbleed might not be the device vendor's fault, but not providing a firmware update to fix it, for devices that haven't reached a reasonable end-of-life date, is gross negligence. Continuing to ship something like Debian 3, which reached end-of-life and stopped getting security updates more than a decade ago, is gross negligence.

That's the sort of thing that vendors ought to be held liable for. Gross negligence in the security of your product makes you an (unwitting) contributor to the attack, not an innocent victim.

Getting updates actually installed on devices, after they're released by the vendor, is tricky. It may be a good idea to have the device just update itself automatically, though that opens a different can of worms relating to forced updates and people's control over the devices they own. But if the owner chooses not to install a security update within some reasonable time period after it's released, maybe the owner should be liable for some portion of the damage when the device ends up participating in an attack.

Comment Re:Can't see how... (Score 1) 164

Can't see how a national government can fix this

By making manufacturers liable for damage done by their insecure devices.

Insecure software is an externality: the manufacturer creates the vulnerability, but the customer (or the whole public) bears the cost when it's exploited. Free-market competition is good at optimizing for minimum cost, but by default, externalities aren't included in the cost being optimized. That's why you get cheap, insecure devices.

If manufacturers are held liable for damage done by security flaws in their devices, that cost is no longer external. The manufacturer bears the cost of its own insecurity, and has an incentive to reduce that cost. Security becomes cost-effective, and competition will reward the manufacturers who do it the best.

The government doesn't have to mandate that devices be secure. It doesn't have to verify that devices are secure. It just has to make the manufacturer liable when a device is insecure, and the market can do the rest.

(This will, however, generally raise the price of devices. The cost of security gets transferred more directly to the customer, instead of foisted onto the public.)

Comment Re:I think it's wrong, they're killing i386 not i6 (Score 1) 378

"i386" is still the name that Debian and its derivatives (like Ubuntu) use for the 32-bit x86 platform, regardless of the specific chip. Debian actually dropped support for pre-686 CPUs a few months ago, and had required at least 586 for several years prior, but the overall architecture is still called "i386", because that's what it's always been called, and there's no real benefit (and lots of inconvenience) in changing it. Same reason why 64-bit x86 is called "amd64" even though Intel implements it too.

This Ubuntu proposal is about dropping 32-bit x86 entirely, not just certain old chips.
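
As a quick illustration (assuming a Debian or Ubuntu system), the architecture name and the actual CPU are reported separately:

    # Debian's name for the architecture: prints "i386" on any 32-bit x86 install,
    # no matter how old or new the chip is
    dpkg --print-architecture

    # What the kernel actually sees, e.g. "i686" on a modern 32-bit CPU
    uname -m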

Comment Re:In Other News: People Hate Change (Score 2) 293

The best solution on offer is to use SCRIPTING in the initfs to mount the RAID volume before systemd gets to run. Yes, SCRIPTING.

You can use systemd and I'll stick to scripts.

Just not in your initramfs, I guess?

Really, though, distros use sophisticated scripts in initramfs anyway, which should handle this sort of thing. Mounting the root filesystem is initramfs's job, not /sbin/init's. My root filesystem is on LVM on top of dm-crypt on top of bcache on top of RAID1, and Debian makes it work just by running "update-initramfs -u" -- which happens automatically whenever a kernel package is installed or upgraded. What you're describing sounds like more of a distro thing than a systemd thing.
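
If you want to see what your initramfs is actually doing, here's a rough sketch for Debian/Ubuntu with initramfs-tools (run as root; the image path depends on your kernel version):

    # Rebuild the initramfs for the running kernel after changing the storage stack
    update-initramfs -u

    # List what ended up inside it, and check that the RAID/LVM/crypto helpers are there
    lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'mdadm|lvm|cryptsetup'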

Comment Re:Logging in as root momentarily (Score 1) 250

I've been running Debian for more than a decade and I never log in as root. I use su to get a root shell, or to run an individual command as root — the same way you'd use sudo, except that you type the root password instead of your own password. And, like with sudo, that's one root shell or command in a terminal window, while everything else is a normal user login session. There's no good reason to have your whole desktop session running as root.

These days, the Debian installer also supports setting up sudo the way Ubuntu does, instead of having a root password. But I prefer to have a separate password for the root account, so that if someone learns my login password there's still another barrier to root access.
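
For anyone who hasn't used it that way, a minimal sketch of the su equivalents of everyday sudo usage (you're prompted for the root password rather than your own):

    # Interactive root shell, with root's environment
    su -

    # Run a single command as root, then drop back to your normal user
    su -c 'apt-get update'

    # The sudo equivalent of the above, for comparison
    sudo apt-get update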

Comment Re:I been wondering (Score 1) 213

All the device does is allow them to locate a specific cellphone.

And we're not talking about situations where a warrant is needed, since they're not violating anyone's right to privacy.

Many people feel that your location is private, as long as you're not in a public place.

Also, it's not just the "target" phone: as I understand it, a stingray appears as a cell tower to all the phones near it. So it's catching people who aren't even suspected of a crime, and may lead to dropped calls when the phones try to switch to a stronger signal from a "tower" that isn't actually part of the phone system.

Comment Re:Stop calling it a court. (Score 1) 165

The thing that sets the FISA court apart from any other judge issuing warrants is that the evidence shows they act purely as a rubber stamp. Any court or judge who has never denied a warrant after having seen thousands of them is suspect.

"Never denied a warrant" is hyperbole, but the court does have a very high acceptance rate. That's a little misleading, though: the Wikipedia page mentions that the 99% acceptance rate only reflects "final" submissions, and that many requests have been changed or dropped before that point based on informal advice from a judge that the request was unlikely to be approved. Also, the NSA knows what the FISA court's rules are, and can avoid submitting requests in the first place that are unlikely to make it through. So it's not 99% of "whatever the NSA wants", it's 99% of things that the NSA thought were likely to be approved even after informal feedback from a judge. That's a very different beast.

It's valid to be concerned about the FISA court approving things it shouldn't. (In particular, I think the court overstepped its constitutional authority in approving the bulk phone metadata collection.) But the 99% approval rate doesn't support a claim that the court is a rubber stamp; it's a misleading statistic if used that way.

Comment Re:Stop calling it a court. (Score 1) 165

Furthermore, how does the foreign intelligence court have jurisdiction on matters involving domestic surveillance?

The FISA court's job is to issue warrants for surveillance of suspected foreign agents (e.g. spies, terrorists) within the US. Americans' privacy rights are protected by the Fourth Amendment, so the warrant is necessary. (Foreigners don't have Fourth Amendment protection, so no warrant is needed for the US to spy on them.)

Comment Re:Stop calling it a court. (Score 1) 165

In a court of law, issues are argued by two sides before a neutral magistrate.

I've seen this same point made in a few other places too. Maybe I'm missing something, and I'm certainly not a lawyer, but I don't think it holds water.

In a trial, the case is argued by two sides. But other things happen in courts besides trials — such as warrant requests. Those don't use the adversarial process AFAIK.

FISA aside: if the police suspect you have stolen property in your house and want to search your house to find it, they go to a judge and explain why they think you have stolen property. If the judge agrees that it's a reasonable suspicion, he or she issues a search warrant. You're not notified of this, and you don't get to come in and defend yourself. Probably the first you hear of it is when the police show up at your door with the warrant in hand. If they arrest you and charge you with a crime, then you get a trial where you can defend yourself against the charges. But for the search, it's the judge's job alone to weigh the evidence against your privacy rights.

The FISA court issues search warrants; no one is on trial there. You don't get to defend yourself in FISA court, but how is it any different from a normal court in that regard?

Comment Re:So, UEFI is a good thing now? (Score 5, Interesting) 471

First of all, UEFI is more than Secure Boot. UEFI has been standard on PCs for the past few years, and on Macs ever since they switched to x86. Secure Boot is just a feature of some newer UEFI implementations.

Second, Secure Boot is a legitimate security feature that helps to protect against boot-time malware. There's nothing inherently evil about it. The controversy is over who should have the power to decide which OS is considered trustworthy and allowed to boot: the owner of the computer, or the vendor of the OS that came preinstalled on the computer?

Naturally, you don't want to buy a computer that doesn't let you choose which OS you trust. But if you have a computer that does give you that choice, why not take advantage of it? Seems to me that it's good to have hardware vendors see increased demand for machines that support securely booting the OS of your choice, as opposed to those where you just have to disable Secure Boot entirely if you want to run something other than Windows.
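
On a distro that boots through shim, you can check the state and enroll your own signing key without turning the feature off. A sketch, assuming mokutil is installed and using a made-up key file name:

    # See whether Secure Boot is currently enabled
    mokutil --sb-state

    # Queue your own certificate for enrollment in the Machine Owner Key list;
    # you confirm it from the MokManager screen at the next reboot
    mokutil --import my-signing-key.der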

Comment Re:Java sandboxing helped in this case (Score 1) 127

Not quite.

First, sandboxing in Android isn't done at the Java level; it's done at the OS level, by running each app under a different UID and letting the kernel enforce what that UID is (and isn't) allowed to do. It's the same system that prevents different users on a "conventional" Linux system from accessing each other's private files. This is why Android apps can load and run native code (via JNI) without needing any special security permission or exemption. Native code is still in the sandbox.
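
You can see the per-app UIDs for yourself with adb (a rough sketch; the exact ps flags vary between Android versions, and the package name is just an example):

    # App processes each run as their own Linux user, typically named u0_aNN
    adb shell ps -A | grep u0_a

    # Show the UID that was assigned to a particular package at install time
    adb shell dumpsys package com.example.app | grep -i userid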

Second, the real danger in this flaw isn't malicious apps tricking the user; it's malicious apps tricking other apps. Android's permissions system includes a feature called "signature-level permissions", which allows apps signed by the same publisher to grant each other permissions that aren't available to apps signed by other publishers. This bug means that a malicious app can pretend to be signed by Company X in order to gain signature-level permissions to interact with actual Company X apps in privileged ways. Depending on the app, this may allow access to sensitive data.
