Comment Re:We don't need "backdoors" (Score 1) 259

Put simply, there exist plenty of systems and techniques that don't depend on a third-party who could possibly grant access to secure communications. These systems aren't going to disappear. Why would terrorists or other criminals use a system that could be monitored by authorities when secure alternatives exist? Why would ordinary people?

That's a really easy answer -- terrorists use these simple platforms for the same reason normal people do: because they're easy to use. Obviously a lot of our techniques and capabilities have been laid bare, but people use things like WhatsApp, iMessage, and Telegram because they're easy. It's the same reason that ordinary people -- and terrorists -- don't use Ello instead of Facebook, or ProtonMail instead of Gmail. And when people switch to more complicated, non-turnkey encryption solutions -- no matter how "simple" the more savvy may think them -- they make mistakes that can render their communications security measures vulnerable to defeat.

If the choice were between (easy & insecure) and (hard & secure), you'd have a point, but there are plenty of easy ways to have secure communication: for example, OTR-over-(any IM protocol) is about as simple as it gets (it's literally a one-click thing, and can be set to go secure automatically with no user interaction), doesn't depend on a provider for keys, and works with any IM network. If someone can install an executable file, they can install and use OTR.

Sure, it doesn't conceal metadata, but most (all?) IM networks leak metadata as well. XMPP-over-Tor-hidden-service can help mask that, and isn't really complicated for the users ("Open Tor, click 'Connect' and wait for the green light, then open your IM client.").
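
To give a rough idea of how little is involved on the wire side, here's a minimal Python sketch of routing a TCP connection through Tor's local SOCKS proxy using the PySocks module. The Tor daemon is assumed to be running locally on its default SOCKS port (9050), and the .onion address is a made-up placeholder, not a real service.

    # Route an IM/XMPP connection through Tor's local SOCKS5 proxy.
    import socks  # PySocks

    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050)

    # Hypothetical XMPP hidden service on the standard client port.
    s.connect(("examplexmppservice.onion", 5222))
    s.sendall(b"<?xml version='1.0'?>")  # opening bytes of an XMPP stream
    s.close()

In practice the IM client does all of this for you once you point it at 127.0.0.1:9050 as a SOCKS proxy.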

Tox is another option: anonymous, distributed, and with no single point of failure. It's as easy to use as any other IM client.

Even if secure communications weren't as easy as non-secure methods, there are plenty of easy-to-follow guides on how to set up and use secure methods. It's hardly rocket science, and those methods aren't going away, so there's no reason to expect that bad guys who are motivated to keep their communications private will avoid them simply because they may be slightly more difficult.

I'm not saying that the vendors and cloud providers ALWAYS can provide assistance; but sometimes they can, given a particular target (device, email address, etc.), and they can do so in a way that comports with the rule of law in a free society, doesn't require creating backdoors in encryption, and doesn't require "weakening" their products. And of course, it would be good if we were able to leverage certain things against legitimate foreign intelligence targets without the entire world -- and thus our enemies -- knowing exactly what we are doing and how to avoid it. Secrecy is required for the successful conduct of intelligence operations, even in free societies.

Sure, a company could do that (and several do), but there's certainly a lot of interest from users in having secure systems (devices, accounts, etc.) that cannot be remotely unlocked or decrypted by the company or authorities (see Apple). Considering how badly the US Government abused its position of power and authority through massive, warrantless surveillance of people, hacking and snooping on corporate networks, doing shady things like parallel construction, and generally violating everyone's trust, it should come as no surprise that there's some pushback from users and industry.

Statistically, the risk posed by terrorists is so low as to not be a concern in my day-to-day life. I'm in far graver danger from occasionally eating hamburgers or riding a bike than I am from terrorists. Considering that "free societies" are hardly permanent things, and that a major event or political upset can dramatically change the nature of government, I'm more worried about granting even the most trustworthy government (which the US Government is not) powers that groups like the Stasi or KGB could only dream of in exchange for the dubious assurance that (a) it's necessary for them to stop bad guys and (b) they won't abuse that power.

Your mileage may vary, of course.

Comment Re:We don't need "backdoors" (Score 1) 259

Sure. One hypothetical example:

The communication has to be decrypted somewhere; the endpoint(s) can be exploited in various ways. That can be done now. US vendors could, in theory, at least partially aid that process on a device-by-device basis, within clear and specific legal authorities, without doing anything like key escrow, wholesale weakening of encryption, or similar changes to the software or devices themselves.

What if the endpoints aren't accessible to the vendors?

For example, one could easily exchange encrypted emails with a correspondent and not decrypt the messages on an internet-connected system. A Raspberry Pi is cheap and can easily act as a secure, offline system: the sender writes their sensitive messages on the offline system, encrypts them, transfers the encrypted messages to a USB stick, uses the USB stick to carry the message to an internet-connected computer, then emails the encrypted message to the recipient, who does the reverse.

Sure, it's an extra step compared to en/decrypting the message on the internet-connected system, but a relatively minor one.
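
For the curious, the "encrypt on the offline box, carry the ciphertext on a USB stick" step is roughly this, sketched in Python with the python-gnupg wrapper around a local GnuPG install. The recipient address and file paths are hypothetical, and the recipient's public key is assumed to already be in the offline machine's keyring.

    import gnupg

    gpg = gnupg.GPG(gnupghome="/home/pi/.gnupg")  # keyring on the offline Pi

    message = "Sensitive text written on the offline machine."
    encrypted = gpg.encrypt(message, ["friend@example.org"])

    if encrypted.ok:
        # Write the ASCII-armored ciphertext to the USB stick for sneakernet transfer.
        with open("/media/usb/outgoing.asc", "w") as f:
            f.write(str(encrypted))
    else:
        print("Encryption failed:", encrypted.status)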

Short of compromising the firmware on the USB stick (which is possible, albeit non-trivial) or doing something extreme such as TEMPEST-type monitoring of the offline system, how would one compromise such a system? Vendors would be unable to do anything to help authorities.

Alternatively, things like OTR can be overlaid on common protocols like XMPP, AIM, etc., but the keys are managed by the endpoints. The OTR software neither depends on nor communicates with any "vendor" who could assist authorities. The same goes for other security software like GnuPG (which is developed in Germany, outside of US jurisdiction).

Put simply, there exist plenty of systems and techniques that don't depend on a third-party who could possibly grant access to secure communications. These systems aren't going to disappear. Why would terrorists or other criminals use a system that could be monitored by authorities when secure alternatives exist? Why would ordinary people?

Comment Re:Self Signed (Score 1) 95

I can't believe people would trust anything other than self signed certificates.

That's ridiculous. Anything you do with a self-signed certificate can also be done with a CA cert, including certificate/public key pinning. What you really mean is that you can't believe users trust browsers that don't do public key pinning.

Exactly. On my not-particularly-interesting sites, the CA-issued cert is used merely to (a) not show warnings in browsers and (b) offer an independent check on the legitimacy of my domain.

To prevent spoofing, I use DNSSEC+DANE to identify which certificates should be presented, as well as HSTS to ensure future visits use TLS. I also use HPKP (HTTP Public Key Pinning).
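
For anyone wondering what the HPKP pin value actually is: it's just the base64 of the SHA-256 hash of the certificate's SubjectPublicKeyInfo. A quick Python sketch using the "cryptography" library (the certificate path is a placeholder, and in practice you'd pin at least one backup key as well):

    import base64
    import hashlib

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    with open("/etc/ssl/certs/example.org.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # pin-sha256 = base64(SHA-256(DER-encoded SubjectPublicKeyInfo))
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

    print('Public-Key-Pins: pin-sha256="%s"; max-age=5184000' % pin)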

All basic stuff that should be used by pretty much every secure site. Alas, it's not widely used. Too bad, really.

Comment Re:!education (Score 1) 93

Huh? What? Cannot play videos? Damn. I missed that memo. I've been using Raspberry Pis as an in-car DVR for my kids for a few years now. Never had a problem. Straight from MythTV-recorded MPEG-4 onto a USB stick to playing on the Raspberry Pi. Just plug in the USB stick and give the kids a wireless mouse.

Proprietary bootloader? RaspBMC was so easy to set up, I'm afraid I never really noticed.

As someone with a toddler and a bunch of Pi2s, that sounds like a really nifty thing to do. Do you have any information about how you built it, what components are used, etc.?

Comment Re: S/MIME (Score 1) 83

Apologies: I misread the earlier comment. My comment about StartSSL generating a private key for the user applies only to SSL/TLS certs (where users can, as I mentioned, skip that and submit their own CSR).

When one generates a client certificate such as those used for S/MIME, the key generation takes place entirely in the browser using the keygen tag -- the private key is stored locally and only the public key is sent to the server for signing.

Put simply, StartSSL (and other CAs around the world) are happy to issue certificates identifying you as you, but none of them AFAIK generate the private key themselves. Maybe some internal corporate CA systems do, but I'm not aware of any commercial ones that generate private keys for client certs.

Comment Re: S/MIME (Score 1) 83

Not necessary. Startcom, a company in Israel, is happy to generate and store a key that you can use to certify that you are you, for free. I think this also demonstrates the insane brokenness of the certificate authority system.

Sure, they offer the option (by default, which is annoying) for them to generate a private key for you (they claim not to store it) but you're welcome to generate your own private key and CSR and submit it for signing -- that way they never see your private key.
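
If you want to go that route, generating the key and CSR yourself is a few lines with the Python "cryptography" library (OpenSSL on the command line works just as well); the email address and output filenames below are placeholders:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # The key is generated and kept locally; the CA only ever sees the CSR.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.EMAIL_ADDRESS, "you@example.org")]))
        .sign(key, hashes.SHA256())
    )

    with open("client.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("client.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))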

Comment Re:Why not both? (Score 4, Insightful) 239

AC has far lower transmission losses over long distances

Does it? I was always under the impression that AC was used for long-distance transmission because it could easily be stepped up to very high voltages with transformers, while efficient DC-to-DC conversion was not possible until relatively recently. For the same power transmitted, resistive losses are lower at higher voltages, since power lost to heat goes as I^2*R and lower currents can be used.

However, modern solid-state DC-to-DC converters are extremely efficient and can step DC up to very high voltages, so HVDC lines benefit from the same lower resistive losses. HVDC also benefits from not having to deal with inductive or capacitive losses in the cable.

In short, as far as I know, the key to minimizing losses in transmission lines is to use high voltages, not any inherent advantage of AC.
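
A quick back-of-the-envelope illustration in Python (made-up numbers, ignoring power factor and everything except I^2*R):

    line_resistance = 10.0    # ohms, total for the run
    power_delivered = 500e6   # watts (500 MW)

    for voltage in (132e3, 400e3, 800e3):    # volts
        current = power_delivered / voltage  # amps
        loss = current ** 2 * line_resistance
        print("%4.0f kV: I = %6.0f A, loss = %5.1f MW" % (voltage / 1e3, current, loss / 1e6))

Going from 132 kV to 800 kV cuts the current by a factor of about 6 and the resistive loss by a factor of about 37, regardless of whether the line carries AC or DC.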

Comment Re:We should do what GPS does (Score 2) 233

I recently took a private tour of the time and frequency lab at METAS (the Swiss Federal Institute of Metrology) and got to observe their atomic clocks, ask the people there some questions, etc.

The scientist in charge of the lab wishes everyone would use TAI for time distribution. TAI has no leap seconds and differs from GPS time by a constant 19 seconds. If TAI were used, computers would never have to worry about leap seconds internally, and things would be greatly simplified.

Computers don't care what time is used internally, and it's easy for computers to get a table of leap seconds and use that data to display UTC to users so the displayed time matches solar time.
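
As a sketch of how simple that lookup is, here's a Python toy version; the table is truncated to the last few entries and is only illustrative (as of 2015, TAI - UTC = 36 seconds):

    import datetime

    # (UTC date the offset took effect, TAI - UTC in seconds) -- last few entries only
    LEAP_TABLE = [
        (datetime.datetime(2009, 1, 1), 34),
        (datetime.datetime(2012, 7, 1), 35),
        (datetime.datetime(2015, 7, 1), 36),
    ]

    def tai_to_utc(tai):
        """Convert an internal TAI timestamp to UTC for display."""
        offset = 0
        for effective, seconds in LEAP_TABLE:
            if tai >= effective:
                offset = seconds
        return tai - datetime.timedelta(seconds=offset)

    print(tai_to_utc(datetime.datetime(2015, 12, 25, 12, 0, 36)))  # 2015-12-25 12:00:00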

Comment Oldie but goodie (Score 1) 558

My desktop at home has the following:

Intel Core 2 Quad Q6600 @2.4GHz
8GB DDR2 RAM (MB can hold 16GB, but DDR2 is bloody expensive now)
Gigabyte GA-EP45-UD3R motherboard
Nvidia GeForce GTX 550 Ti
Crucial M550 1TB SSD (boot disk, most applications)
Mixed SATA hard disks, from 750GB to 4TB (games, photos, backups, etc.)

Nothing special, but other than the graphics card and hard disks I've found no real need to upgrade the rest of the system. I built it back in 2007 with my then-girlfriend (now wife) and it just keeps on trucking along, plays modern games with no issues, etc.

Comment Re:This makes me worry. (Score 2) 111

Rather than take the time to understand their perimeter and the data it exposes, they want to "protect" everything with HTTPS. Which probably doesn't make sense for static, non-interactive services.

Perhaps, but it also helps protect against content injection or manipulation (e.g. ad injection by shady ISPs), snooping by third parties (e.g. hotel or coffee-shop networks), etc.

Honestly, there's very little reason *not* to encrypt data these days.

Comment Re:Airtel got caught, what about others? (Score 2) 134

How many people routinely check the source of their own web page through different connections to look for such injections? If some major US cell network or ISP did this, how likely is it that they would be caught? Would https stop them from messing around with injections?

So long as the injector can't issue SSL certs that the user will trust, yes, https will stop such injections.

If the injector *can* issue SSL certs that the user will trust (e.g. the ISP requires users to install their local CA, or they somehow have a global wildcard from a trusted CA), all bets are off -- the injector can impersonate and inject content into any https-secured site.
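
One cheap way to spot that kind of interception is to compare the certificate you're served on different connections. A small Python sketch (the hostname is a placeholder): fetch the cert and print its SHA-256 fingerprint, then repeat over Tor, a VPN, or another network and compare.

    import hashlib
    import ssl

    host = "example.org"  # placeholder
    pem = ssl.get_server_certificate((host, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    print(host, "SHA-256:", hashlib.sha256(der).hexdigest())

A persistent mismatch between connections would suggest someone is terminating TLS in the middle.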

Comment Re:Why (Score 1) 138

... have Facebook encrypt email it sends to you ...

This doesn't prove who sent the message. A message must be encrypted with the receiver's public key and encrypted again with the sender's private key. Once again, all security depends on the integrity of the public-key server. Such servers can't prevent man-in-the-middle attacks.

In addition to encrypting messages to your public key, Facebook also digitally signs the messages using their private key and rotates the signing subkey every few months.

The fingerprint of their primary key (which is used to sign the signing subkeys) is available on their HTTPS-secured announcement page.

Additionally, all outgoing emails from Facebook are DKIM-signed, adding further assurance that it's from them.

Sure, it's *possible* that an HTTPS connection may be MITMed and DKIM records spoofed, but that requires an active attacker and significantly increases the risk of the attacker being discovered. You could use Tor, a VPN, or a proxy from a different computer to verify that the HTTPS certificate, DKIM public keys, and PGP fingerprint match what you see on your normal internet connection, and thus have more assurance that the information is authentic.
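
As an example of that kind of cross-check, here's a rough Python sketch (using the python-gnupg wrapper) of verifying that a signed notification really comes from a key fingerprint published on the HTTPS page. The fingerprint and filename are placeholders, and the signer's public key is assumed to already be imported into the local keyring; since Facebook signs with a rotating subkey, in practice you'd import their published key and let GnuPG tie the subkey to it, but the idea is the same.

    import gnupg

    EXPECTED_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"  # from the HTTPS page

    gpg = gnupg.GPG()
    with open("notification.eml.asc", "rb") as f:
        verified = gpg.verify_file(f)

    # verified.fingerprint is the key (or signing subkey) that made the signature.
    if verified.valid and verified.fingerprint == EXPECTED_FPR:
        print("Signature checks out against the published fingerprint.")
    else:
        print("Signature missing, invalid, or from an unexpected key:", verified.status)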

Comment Re:how can we trust facebook? (Score 2) 138

That's not how it works. Facebook isn't letting you use PGP to encrypt user-to-user messages.

They're letting you upload your *public* key to your profile with the option to have Facebook encrypt any automated notification messages it sends to your email. This way those notification messages are protected from snooping as they traverse the internet between Facebook and your email server, while they are stored on the mail server, etc.

Comment Re:You still have to submit it (Score 1) 138

So how are you securely getting the email message to facebook to start with? I see an SSL connection that could easily have a "man in the middle" thing going on...

Facebook is encrypting automated notification messages (e.g. "[Friend name] posted new photos. Click here to see them." or "[Friend name] sent you a message on Facebook. Login to read it.") that it sends to your email account. Messages sent within Facebook are still unencrypted; only the notification message sent to your non-Facebook email would be encrypted.
