
Comment Re:No hacking required... (Score 1) 286

Actually, there is no EEPROM in the SoC. The first-stage ROM firmware is a true mask ROM, and the rest is loaded from external NAND flash. It's impractical to put EEPROM on the same chip as a modern high-end SoC: it would be cost-prohibitive and slow to develop, because EEPROM needs special processing steps that regular CMOS chips don't. You'll never find EEPROM/Flash on a leading-edge, high-end process; it's always older stuff. This is why eFuses and other OTP technologies are used (some of them need no special processing steps), and why just about any decently powerful device has a little 8-pin flash chip next to the main SoC to hold the firmware. You only get embedded flash with low-end microcontrollers.

Some (particularly older) OTP chips are just EPROM (one "E") - the kind you erase with UV light - without the UV window. EEPROM is actually UV-erasable too, and one of the things often done to reset security "fuses" in EEPROM-based microcontrollers is to apply UV light in the right spot. Chip designers end up using shield metal above the bits, sometimes not very successfully (I recall one such chip was hacked by putting the light at an angle to get in under the upper metal shield). But this is the realm of lower-end microcontrollers with embedded EEPROM/Flash.

Comment Re:Didn't (Score 1) 286

Currently this is true, but with the oncoming invention and use of quantum computing, a key-recovery attack on 256-bit AES will become trivial.

Nope. Even assuming practical QC is coming, Grover's algorithm only halves the effective key size of symmetric ciphers: 256-bit AES becomes as strong as 128-bit AES. You then don't need a Universe's worth of time, just the entire power output of the Sun for a few seconds (under impossibly ideal circumstances). Still not going to happen. And that's assuming Landauer's principle applies the same way to qubits, which I'm not even sure it does; qubits might be more expensive to handle, energy-wise.
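
The halving is easy to see in a few lines of Python. This is a toy illustration of the security-level bookkeeping, not a quantum attack:

```python
def effective_symmetric_bits(key_bits: int) -> int:
    """Grover's algorithm gives a quadratic speedup over classical
    brute force, so an n-bit symmetric key takes ~2^(n/2) quantum
    search iterations: the effective security level is halved."""
    return key_bits // 2

# 256-bit AES under Grover is only as hard as 128-bit AES classically,
# which is still far beyond any conceivable computation.
assert effective_symmetric_bits(256) == 128
quantum_ops = 2 ** effective_symmetric_bits(256)
print(f"Grover iterations for AES-256: ~{quantum_ops:.3e}")
```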

QC breaks (currently in use) asymmetric crypto. It doesn't break symmetric crypto, only weakens it.

Even with 128-bit keys, keep in mind that the largest symmetric key ever broken was a 64-bit key, and that took a large distributed computing project (~70k hosts). For QC to break a single 128-bit key (64-bit equivalent difficulty under QC), we'd need quantum computing power equivalent to that. That's probably half a century away; QC is in its absolute infancy. And that's for a single key. By then we'll all be using 256-bit crypto for everything and it'll be completely moot. I use 128-bit FDE at home for my most important data and I don't feel the least bit insecure. I might switch to 256-bit in a couple of years when I upgrade my boxes again, and then I'll be set for eternity (unless some catastrophic flaw is discovered in AES).

I'm curious though, why would you just erase the key after 10 attempts. Surely they could just add a full 13-pass erase of all the data, and reset the phone back to factory settings.

The battery wouldn't last long enough for a 13-pass erase of the data. The whole point of FDE with an erasable key is that if you erase the key you don't have to do an actual data wipe. In practice, wiping the key is as good as wiping the data. Breaking that kind of crypto is outside the threat model, and if you can do that, then there are many other things you can do that would break security in other ways. Assuming an attacker can't break AES-256 is perfectly reasonable.
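
The crypto-erase idea can be sketched in Python. The "cipher" here is a toy keystream built from SHA-256 purely for illustration (a real FDE implementation would use AES-XTS or similar); the point is the key management:

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

disk_key = os.urandom(32)   # random FDE key, lives in erasable storage
plaintext = b"user data partition contents"
ciphertext = xor(plaintext, keystream(disk_key, len(plaintext)))

# Normal operation: key present, data decrypts.
assert xor(ciphertext, keystream(disk_key, len(ciphertext))) == plaintext

# "Wipe": destroy only the 32-byte key. The ciphertext is still sitting
# on flash, but without the key it's indistinguishable from noise, so
# erasing 32 bytes is as good as a multi-pass wipe of the whole disk.
disk_key = None
```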

Comment Re:Didn't (Score 1) 286

Or the NSA slipped a back door into the hardware and/or software allowing them access without needing the encryption key.

Unlikely, since Apple designed the chip and it's not manufactured in the US, and Apple controls the software end to end (it's signed).

It's also quite possible the NSA purposefully created a (known only to them) weakness in AES and how it generates "random" numbers to greatly reduce the key space they would need to search.

Unlikely, since AES, née Rijndael, was designed by two Belgian cryptographers, has no "magic" unexplained numbers (unlike the Dual-EC-DRBG "random" number generator, which we know the NSA backdoored, or the ECDSA curves, which we suspect they might have), and has been extensively cryptanalyzed. AES doesn't "generate" any random numbers; it's a block cipher.

The NSA isn't some all-powerful entity. They're a bunch of sneaky bastards, but assuming they have backdoors in anything and everything is excessive application of a tinfoil hat. Snowden said so himself: good crypto works. And Apple are a bunch of paranoid bastards.

Comment Re:Didn't (Score 1) 286

Actually, the encryption key that would be erased is the data partition's full-disk encryption key (which is not unlocked/decrypted by the PIN; it's unlocked internally using the phone's UID key). So even though your PIN only protects user data at a higher level, using a separate key (not metadata, and not all files on the phone), once your 10 attempts are gone, the entire data partition's lower-level FDE key is wiped, and all of it, data and metadata, is as good as gone.
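
A hypothetical sketch of that two-level hierarchy (names and derivations are illustrative stand-ins, not Apple's actual scheme):

```python
import hashlib
import hmac
import os

def kdf(secret: bytes, label: bytes) -> bytes:
    # Stand-in KDF: HMAC-SHA256 of a label under a secret
    return hmac.new(secret, label, hashlib.sha256).digest()

uid_key = os.urandom(32)                       # fused into the SoC, never leaves it
fde_key = kdf(uid_key, b"data-partition-fde")  # lower level: covers ALL data and metadata
pin = b"1234"
class_key = kdf(kdf(uid_key, b"pin-tangling") + pin, b"user-file-class")  # higher level

# The 10-attempt wipe destroys the FDE key material, so even files that
# were never protected by the PIN-derived class key become unrecoverable.
assert fde_key != class_key
```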

Personally, I think it's perfectly fair to say that a key-recovery attack on 256-bit AES is impossible. Modulo future cryptographic breaks (which are unpredictable), with currently known attacks, you need > 2^254 operations to perform key recovery on 256-bit AES. Assuming that happens at room temperature, Landauer's principle and some back-of-the-envelope math say you'd need the entire power output of every single star in the Milky Way for about as long as the age of the Universe just to count that high, never mind actually trying an AES decryption operation per count. At some point it's just silly to keep talking about things like brute force being "possible but impractical" for certain key sizes. It's impossible, and saying otherwise will just confuse people who don't grasp the ridiculousness of the numbers involved.
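
The back-of-the-envelope math is easy to reproduce (star count and other astronomical figures are rough assumptions):

```python
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K
T = 300.0                       # room temperature, K
per_op = k_B * T * math.log(2)  # Landauer limit: minimum energy to erase one bit

counter_energy = (2 ** 254) * per_op   # energy just to count to 2^254

stars = 1e11                    # rough star count of the Milky Way
solar_luminosity = 3.828e26     # W
universe_age = 4.35e17          # s (~13.8 billion years)
galaxy_budget = stars * solar_luminosity * universe_age

print(f"energy to count to 2^254: {counter_energy:.2e} J")
print(f"whole-galaxy output over the age of the Universe: {galaxy_budget:.2e} J")
# Counting alone already exceeds the galaxy's lifetime energy budget.
assert counter_energy > galaxy_budget
```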

Comment Re:Didn't (Score 1) 286

RAM-resident firmware is still firmware. Ever used a Linux machine? Ever looked in /lib/firmware? All of those are firmware files to be loaded into RAM on various devices that require RAM-resident firmware to run.

Originally I actually used the words "software" and "firmware" interchangeably in the article, because the distinction is pretty much moot with devices like the iPhone, which blur the line between embedded devices and general-purpose computers; but I changed them all to "firmware" for consistency, to avoid confusing anyone who doesn't understand the lack of distinction in this context. The old meaning of "firmware", in the sense of "something programmed into a ROM", stopped applying once we got devices with rewritable memory like EEPROM and flash. Now it just means "software for an embedded device" (usually excluding things like apps and other add-ons). It doesn't matter what kind of memory it's stored on. There are devices out there that download their firmware from the Internet every time they boot up. It's still firmware.

If you want to be technically pedantic, what the FBI wants is a custom signed restore ramdisk (and associated iBEC and iBSS to boot it) that can be loaded from DFU mode. My article deliberately avoids going into pointless minutiae about the iPhone's boot process to keep it accessible to a wider audience.

Comment Re:No hacking required... (Score 1) 286

Presumably the UID is written to a memory cell on the SoC using links that open (like a fuse) when a high current is passed through (like the old PROM memories used to).

Ah, this is where it gets fun. There are actually quite a few OTP storage technologies. Fuses, like the ones you mention, are one. They're not necessarily on top, though (indeed, they'd usually be on the lower, finer-pitch layers, since the whole point of a fuse is that it has to be thin), so to read them you'd still need to strip off metallization layers, but that's just a matter of a controlled acid bath. It's not really about burning/melting the fuse like a traditional macroscopic one: what actually happens is accelerated electromigration of the metal trace due to excessive current density. So it's not driven primarily by temperature, there's no need for the fuse to be on top, and no material is emitted (it's just scattered somewhat outward as the metal migrates). You'd probably need a scanning electron microscope at the densities used in modern chips, but even I have access to one of those, so that's not a huge deal (turns out secondhand SEMs are cheap these days).

However, these days antifuses are common. Those work, broadly speaking, by causing a short circuit across the gate oxide of a transistor using excessive voltage, or some similar technique. You can't really read those out trivially, because the change is buried in a thin layer somewhere. Can you come up with a process that would make them visible to an SEM? Maybe. This is actually something I'm interested in researching, personally. But it's far from trivial (and I'm relatively clueless about silicon design).

I have no idea what technology Apple used in their SoC, though they're paranoid enough about security that they probably chose something hard to read out.

Comment Re:No hacking required... (Score 1) 286

Those unique keys are probably recorded at the time of manufacture and saved to a DB (against the serial number of the phone or board).

According to Apple, the UID key is generated during manufacturing and not recorded anywhere except on the device itself.

I'd expect the software would filter out touches less than 10ms or so.

Chinese PIN cracking devices for older versions of iOS (exploiting PIN attempt counter flaws that no longer exist) did it via USB; I think the phone accepts USB HID input or something dumb like that. However, the retry time is dominated by the reboot required after every rollback. So you get 4-5 tries in a few seconds, then 90 or so seconds of waiting for the phone to reboot. The NAND reset can be instantaneous (with a decently designed emulator), but you still need to reboot the phone. Indeed, as I mention in the blog post, this is practical for 4-digit PINs (days) and 5-digit PINs (a month or so), and gets annoying for 6-digit PINs (closer to a year; still useful if you really want the data, but not as much).
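
Under those figures (4 tries per rollback, ~90 s reboot; both assumptions), the worst-case numbers work out roughly like this:

```python
TRIES_PER_CYCLE = 4    # PIN attempts before rolling back the counter
CYCLE_SECONDS = 95     # a few seconds of tries plus ~90 s of reboot

def worst_case_days(pin_digits: int) -> float:
    """Worst-case time to exhaust the whole PIN space, in days."""
    cycles = 10 ** pin_digits / TRIES_PER_CYCLE
    return cycles * CYCLE_SECONDS / 86400

for digits in (4, 5, 6):
    print(f"{digits}-digit PIN: ~{worst_case_days(digits):.0f} days worst case")
```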

Comment Re:No hacking required... (Score 1) 286

The NV memory part is also encrypted, with a key derived from a unique key fused into the CPU SoC (one too long to be bruteforceable). To do the attack as you describe, they'd have to take the plastic off the SoC (not the NV part; you can just pull that off the board and read it), and then use a FIB workstation to modify the metal routing and read off the fused UID key, to be able to decrypt the external memory and attempt a PIN bruteforce. I explained this and other attacks here. That attack is technically possible, but unlikely, as it has a high chance of failure and it's very expensive.

What they're likely actually doing is not that. They're probably just reading off the NV (NAND flash) memory chip, attaching an emulator to the phone in its place, performing 4-5 PIN tries using the phone itself, then rolling back the emulated memory contents and trying again. This doesn't require any silicon-level hacking, just desoldering one chip and soldering in a (custom, but not terribly hard to develop) NAND emulator instead.
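
The attack loop itself is trivial; all the difficulty is in the hardware. Here's a toy simulation with a mocked phone and NAND emulator (hypothetical API, purely illustrative):

```python
class ToyNAND:
    """Stands in for the emulated NAND flash holding the attempt counter."""
    def __init__(self):
        self.counter = 0
    def snapshot(self):
        return self.counter
    def restore(self, snap):
        self.counter = snap          # the rollback: counter goes back to zero

class ToyPhone:
    """Stands in for the phone; the real one enforces the 10-try wipe."""
    def __init__(self, nand, secret_pin):
        self.nand, self.secret_pin = nand, secret_pin
    def try_pin(self, pin):
        self.nand.counter += 1       # the counter lives in (emulated) NAND
        return pin == self.secret_pin
    def reboot(self):
        pass                         # ~90 s penalty on real hardware

def bruteforce_pin(phone, nand, tries_per_rollback=4):
    snapshot = nand.snapshot()       # golden image, counter at zero
    for pin in (f"{n:04d}" for n in range(10000)):
        if phone.try_pin(pin):
            return pin
        if nand.counter >= tries_per_rollback:
            nand.restore(snapshot)   # roll back well before the wipe threshold
            phone.reboot()
    return None

nand = ToyNAND()
phone = ToyPhone(nand, "0042")
assert bruteforce_pin(phone, nand) == "0042"
assert nand.counter < 10             # the wipe never triggers
```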

Comment Re:No device is secure and they may never be so. (Score 1) 286

You got the "magical black box" part right, but you got the rest wrong.

All you have to do is use a passphrase (not a PIN) long enough to not be bruteforceable. Building a 100% secure device that limits the number of attempts at guessing an insecure PIN is impossible. Building a 100% secure device that protects your data using a secure passphrase is trivial: just use good encryption at rest.

Putting data in the cloud, at best, does nothing for you security-wise and, at worst, makes it that much easier to get to. It doesn't matter whether your data is in the cloud or on your phone. What matters is that it is encrypted with strong crypto and that only you know the key. Then, as long as the crypto isn't broken, your data is safe. No practical crypto is guaranteed to never be circumvented, but properly implemented modern crypto algorithms stand a good chance of never being broken in a practical manner. Only time will tell.
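
The gap between a bruteforceable PIN and a passphrase "long enough" is easy to quantify (assuming 95 printable ASCII characters and the common 7776-word diceware list):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Bits of entropy for a uniformly random string over the alphabet
    return length * math.log2(alphabet_size)

print(f"6-digit PIN:         {entropy_bits(10, 6):.1f} bits")   # trivially bruteforceable
print(f"12-char passphrase:  {entropy_bits(95, 12):.1f} bits")  # beyond brute force
print(f"6 diceware words:    {entropy_bits(7776, 6):.1f} bits")
```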

If you want a phone secure against data extraction after being seized, you have two decent options: get an iPhone, or get an Android Nexus phone (anything else is probably not trustworthy, if only because most other manufacturers suck at security). The Nexus line has better data security at rest (it uses full disk encryption), while the iPhone line only encrypts most, but not all, data, and no metadata. In both cases, if you make sure the phone is powered down before it falls into the hands of an attacker, there is just about nothing they can do to get at your data.

Incidentally, we're talking about symmetric crypto here, not asymmetric crypto - quantum computing can implement a practical attack against current common asymmetric crypto algorithms, but not against symmetric crypto.

Comment Re:It's a 5C (Score 1) 286

And Apple also knows the Secure Enclave can be by-passed too, by anybody who has the firmware signing key.

It is also vulnerable to exactly the same external memory replay attack that non-Secure-Enclave-equipped phones are vulnerable to (i.e. the Secure Enclave is completely irrelevant to what is currently the easiest, most likely way the FBI got into the phone). I explained how all the pieces fit together in this blog post.

Which is real simple to do. Put the Secure Enclave firmware in ROM, so it can't be upgraded.

That's not the solution. Apple needs to be able to update the Secure Enclave firmware too; it's too complex to reasonably bake into a ROM forever. What they need to do, which I also explained in that article, is two things: tie user encryption keys to the hash of the firmware running on the SEP (so that a malicious firmware update renders user data inaccessible), and harden anti-replay protection with a secure anti-rollback counter (either using authenticated external memory or burying the EEPROM inside the main SoC package).
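
The first of those two fixes can be sketched as a key derivation (hypothetical and illustrative only; not Apple's actual design):

```python
import hashlib
import hmac

def derive_user_key(uid_key: bytes, firmware_image: bytes, passcode: bytes) -> bytes:
    """Mix a measurement of the running SEP firmware into the user KDF."""
    fw_hash = hashlib.sha256(firmware_image).digest()
    # Because the firmware hash is a KDF input, a modified (even
    # Apple-signed) firmware derives a *different* key and therefore
    # cannot decrypt existing user data.
    return hmac.new(uid_key, fw_hash + passcode, hashlib.sha256).digest()

uid = b"\x00" * 32   # placeholder for the fused, per-device UID key
good = derive_user_key(uid, b"legitimate SEP firmware", b"1234")
evil = derive_user_key(uid, b"bruteforce-friendly SEP firmware", b"1234")
assert good != evil  # the malicious update renders data inaccessible
```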

Comment Re:It's a 5C (Score 1) 286

Your source is an ex-Apple engineer who worked on iPhone security: https://twitter.com/JohnHedge/...

The Secure Enclave doesn't have "firmware updates" because it doesn't have nonvolatile firmware memory. Its firmware is loaded on every boot, and is part of the overall firmware of the phone. The Secure Enclave has no control over what firmware runs on it, other than ensuring that it is signed by Apple, and it has no persistence of its own: it's a completely stateless CPU that depends on external EEPROM and flash memory that can be externally tampered with and rolled back/replayed.

Comment Re:Didn't (Score 5, Interesting) 286

Of course they hacked the phone.

There is a very easy, very reasonable trick that is guaranteed to work to get the data out of that phone with minimal risk (assuming it has a 4-digit PIN). It's not a mistake, it's not a bug, it's not something anyone had to "discover". It's simply an attack outside the threat model that Apple used when designing that particular iPhone (and, with minor differences, all currently released iPhones). I have no doubt Apple knows full well it will work, and knew it would work when they designed the phone (it's blatantly obvious, and Apple's security engineers aren't idiots); protecting against it is just not trivial (it cannot be solved in software, it requires support hardware), so, to date, they've chosen not to. In fact, they added a minor roadblock against it on newer phones (but only a minor one that can also be bypassed, because doing better is Hard(TM) and costs money), which demonstrates they are fully aware of it. I explained how it works here (search for "replay attack"). I'm not the first one to mention this approach.

Making an iPhone secure against all physical attacks is impossible. If your PIN is bruteforceable (as is the case here), then security relies on the PIN attempt counter. An attacker with physical possession of the phone can always find a way in. Apple just has to decide how much effort (and money) they want to put into making that harder. The current bar is at approximately the "a couple of experienced hardware/software hackers and a couple thousand dollars in R&D costs" level. With some more money/effort they could raise it to the "a crazy dude like Chris Tarnovsky and a medium-budget silicon hacking lab" level. It's not going to reach the "no one will practically be able to do it" level without making the iPhone into a tamper-resistant hardware security module with physical defenses (i.e. not something likely to fit in your pocket).

It still baffles me why everyone is so concerned about how the FBI got in, when we know an easy way in already.

Comment Re:Shitty standard (Score 2) 193

Because this has nothing to do with link speed. SATA doesn't deliver power over the data cable, and nobody wants to put a SATA (or USB) transceiver in USB power bricks. The resistor is used to signal current supply capability between "dumb" devices. USB devices already do intelligent negotiation of current capability and speed when the other end is a host and not a wall charger.

Comment No surprise (Score 4, Insightful) 85

Medical and healthcare companies consistently seem to have *no idea* whatsoever about security, and *no idea* that they actually need to hire someone who knows security.

Anything with a computer in it needs to take security into account. If you're putting code into your product and don't know security, and aren't hiring someone who does, you're doing it wrong. Medical devices, cars, even Bluetooth toilets. If it communicates with the outside world, or is exposed to users who aren't authorized full control over the device, it needs security. If you don't provide it, your product is a ticking time bomb, waiting for a researcher (if you're lucky) or a malicious attacker (if you aren't) to notice the lack of security. This will keep happening until everyone gets the message.
