
Comment Re:more of the same (Score 1) 29

Exactly. So they are now at 133 qubits. After about half a century of research. Know how many they need to attack current RSA? Well, RSA-2048 (already on the short side) would need about 7000 qubits, and they would need to go through a long and complex calculation. And given that qubit numbers do not scale up exponentially (more like the inverse), I guess current RSA is at risk of being broken by a QC around the year 100,000 or later.

As someone for whom this stuff is not academic or an armchair issue, I don't think we can afford to be so blasé. Yes, progress is slow, but it has been accelerating, and whether it's by a long series of incremental improvements or (more likely) a series of incremental improvements mixed with one or two breakthroughs that provide leaps ahead, we have to assume that QC will become a threat to classical asymmetric cryptography.

Not RSA. No one who knows what they're doing uses RSA any more. Good cryptographers and cryptographic security engineers treat use of RSA as a "code smell" in new designs. Though I suppose that practical QCs big enough to break common elliptic curve cryptography could potentially reinvigorate use of RSA.

But we can't rely on RSA to save us, because QCs could potentially get big enough and good enough to threaten RSA, too. Luckily, the available suite of post-quantum algorithms is looking better and better. Not for all use cases, yet. At present, the main area we need to start shifting to PQC is firmware signing, because the public keys must be embedded in hardware that's hard (or impossible) to modify, and because many security-critical devices being manufactured now will be in use for a decade or three in the future.

Comment Re:Probably for the DRM (Score 1) 50

Are you sure?

The move to VMs won't make DRM stronger. It's already implemented in the TEE (Trusted Execution Environment), so breaking it requires finding and exploiting a vulnerability in that constrained and isolated environment. There is other TEE attack surface, but it's small; DRM is the biggest attack surface there.

The primary security benefit of the VM move is to protect the other security components in the TEE from DRM vulnerabilities. The DRM implementations have a long history of vulnerabilities, and exploiting one TEE component often provides a springboard to attack other TEE components, or the non-secure world (TrustZone has access to all physical RAM, so if you pop the TEE OS you have pwned the entire device, and getting remote code execution in one trusted app likely gives you the TEE OS).

The DICE architecture will make remotely identifying unpatched DRM implementations more reliable. This is and will be true whether DRM moves to a VM or stays in the TEE, because DICE attests to the TEE state. Indeed, attesting to the TEE state is the primary reason for DICE; using it for VMs was an afterthought. Without DICE, there are still ways to strongly identify the DRM version, but they're slightly less reliable.

TL;DR, I'd say that DRM is getting harder to defeat in any persistent way, but that is a side-effect of the DICE strategy, not a result of AVF.

Personally, my opinion is that DRM on mobile devices is stupid anyway. There are so many ways for video to be pirated that pouring this much effort into closing just one of them is a waste. And, of course, the "analog hole" seems unlikely ever to be defeated, not until movies are delivered directly into hardware in our brains. But content owners insist on the foolishness.

Comment Re:Probably for the DRM (Score 1) 50

which means it would be possible for us to allow user-signed bootloaders

Just to clarify, I mean loading our own public keys that we can ask the bootloader to trust, not signing our own firmware.

Ah! Pixel has always supported that. If you want to sign your own system images, buy a Pixel and enjoy.

Android verified boot has four states, based on device configuration and the result of signature verification:

Green: The system was verified by the OEM keys.
Yellow: The system was verified by user-installed keys.
Orange: The system was not verified because the bootloader is unlocked.
Red: Verification failed. The device refuses to boot in this state.

All OEMs are required to support red and green states. Some additionally support unlockable bootloaders, and therefore orange state. Pixel also supports yellow. It's possible that some other OEMs do, too.
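For the curious, here's roughly how that state is surfaced once the device has booted: the bootloader passes the color to the OS, which exposes it as the ro.boot.verifiedbootstate property. A minimal sketch (shelling out to getprop, since the SystemProperties class isn't a public API; note that a device in red state generally won't boot far enough for you to ask):

```kotlin
// Read the verified-boot "color" ("green", "yellow" or "orange") on a booted
// Android device by querying the ro.boot.verifiedbootstate property.
fun verifiedBootState(): String {
    val proc = ProcessBuilder("getprop", "ro.boot.verifiedbootstate")
        .redirectErrorStream(true)
        .start()
    return proc.inputStream.bufferedReader().readText().trim()
}
```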

In case anyone other than you and me is reading this (because I'm sure you already understand what I'm going to say): the value of yellow over orange is that when your device is in orange state (bootloader unlocked), anyone who obtains access to your device can flash other software to it, software that potentially bypasses all system-enforced security protections[*]. If you lock your bootloader after installing your own signing keys, you then know that your device will refuse to run anything not signed by either you (yellow mode) or the OEM (green mode).

TCG DICE, BTW, provides a solution, because the DICE attestation describes the entire software stack and is rooted in the CPU's boot ROM.

Interesting, I hadn't heard of that. My interest here is mainly in device integrity, for protecting systems at the company I work for (doing security). We're the kind of company that has no problem rolling our own stuff instead of vendor solutions when they're inadequate, so we've rolled our own attestation framework.

As a consumer, though... meh. I really hate giving apps such tight control over device state. Especially banking apps, which literally tick every last "security" and "distrust anything abnormal" checkbox you guys give them, which does nothing but annoy me as a user and, realistically speaking, does basically nothing to improve their security posture. Meanwhile, they all have it in their heads that SMS-based 2FA is a good idea and that insane password complexity rules are actually useful. Total peak PHB syndrome. We even have to follow security practices that NIST itself says are a bad idea, specifically because we have contracts with other companies that require them anyway. /rant

I agree with every word of that rant. The Android devrel team is trying to educate app developers, to help them understand when they should and should not use device attestation. It's hard, though. There's a strong tendency for people who don't understand anything about security to believe they need to Turn On All The Security Things, even when it gives them absolutely no value. This doesn't change the fact that there are some legitimate needs, of course.

And it's going away there, slowly. Not because big bad tech wants to take away the toys, but because security assurance in a networked world inherently requires being able to trust the integrity of the device you're talking to, and the classic PC architecture provides no way to do that. Hence TPMs, DICE, etc.

I can't comment on DICE, but the TPM, as it is now, doesn't completely lock you down unless you, the user, opt in. Being able to escrow your own AES keys behind PCR state and a PIN is nice, yet most people here both have zero appreciation for that (while also claiming to be security experts) and spout off stupid conspiracy theories about it that make it plainly obvious they haven't the first clue how it works. And the TPM, being modular, really isn't the ideal DRM tool, at least not in its current implementation. App developers may as well not even bother validating the lower-numbered PCRs, especially the firmware hashes, in attestation quotes.

It's not the TPM that enables user opt-in, it's the software (firmware + OS). At present, most PC system software allows you to choose whether to use the TPM and reject unauthorized software... but Microsoft would definitely like to change that, for reasons both good (security) and bad (lock-in). Will they be able to? I don't know.
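To make the parent's "escrow your own AES keys behind PCR state and a PIN" concrete: in TPM 2.0 terms that's a sealed object whose policy combines TPM2_PolicyPCR with TPM2_PolicyAuthValue, so the TPM only releases the key if the measured boot state matches what was recorded at seal time and the PIN is supplied. A sketch using a hypothetical wrapper interface (not a real TSS API):

```kotlin
// Hypothetical wrapper -- not a real TSS API -- sketching the TPM 2.0 policy
// idea: the TPM releases the sealed key only if the selected PCRs match the
// values recorded at seal time (TPM2_PolicyPCR) AND the caller supplies the
// right PIN (TPM2_PolicyAuthValue).
interface TpmSealer {
    fun seal(secret: ByteArray, pcrSelection: IntArray, pin: CharArray): ByteArray
    fun unseal(blob: ByteArray, pin: CharArray): ByteArray  // fails if PCRs changed or PIN is wrong
}

// Which PCRs to bind to (0 and 7 are common picks, covering firmware code and
// Secure Boot policy) is exactly the judgment call discussed above.
fun storeDiskKey(tpm: TpmSealer, diskKey: ByteArray, pin: CharArray): ByteArray =
    tpm.seal(diskKey, intArrayOf(0, 7), pin)
```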

[*] Probably your data is safe even in orange mode[**]. The attacker can replace the boot, system and vendor partitions with whatever they like, but they cannot modify the TEE, and all of your data is encrypted with keys managed by the TEE and unlocked with your lockscreen password (we call it the Lock Screen Knowledge Factor, LSKF, for obscure reasons). Your LSKF is low-entropy, so that doesn't seem like it's worth much, but the TEE is also responsible for verifying the LSKF and it implements brute force mitigation, in the form of enforced delays after failed attempts. The delay function isn't as aggressive as I think it should be, but it would take ~13 years to search the space of 4-digit PINs, and anything with higher entropy is safe more or less forever. Attackers would get more value by spending their time trying to find vulnerabilities in the TEEs than they would brute-forcing your LSKF (unless it's really bad).
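Back-of-envelope, using only the numbers above: searching all 10^4 four-digit PINs in ~13 years implies an average enforced delay of roughly

\[
\frac{13 \times 3.15\times10^{7}\ \text{s}}{10^{4}\ \text{PINs}} \approx 4.1\times10^{4}\ \text{s} \approx 11\ \text{hours per guess},
\]

which is why anything with meaningfully more entropy than a 4-digit PIN is out of practical reach.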

[**] Except for the Evil Maid attack. If the attacker can get your device, flash software that sends them all your data, then give the device back to you without you noticing, then the data encryption and TEE security do nothing to help. Signing your firmware and locking your bootloader -- yellow mode -- will block this attack. [***]

[***] Except for the Replacement Device Attack, which also works against green mode devices. If the attacker steals your device and replaces it with one that looks identical but grabs your LSKF and sends it to the attacker, you're hosed. This attack is one that we often consider in Android threat modeling, BTW, not so much because we think it's common but because it defines an upper bound on the cost of attacks. If anyone is proposing features that would block attacks that are more expensive, riskier (to the attacker) and less scalable than the Replacement Device Attack, there is no value in implementing them until someone devises a way to mitigate the Replacement Device Attack, which seems impossible.

Comment Re:Commercial sabatier(-like) reactors? (Score 1) 49

If you then co-locate a natural gas storage/liquefaction plant, plus a conventional gas fired electrical plant (the CO2 output of which becomes an input for the methane reactor...), that gives you quite a bit of redundancy.

I'm not sure burning methane to generate energy + CO2, which you then capture and convert back into methane, makes sense. That would just be using methane as a storage mechanism for energy generated some other way (e.g. solar). That's fine, but it seems likely to be a lot less efficient than other energy storage mechanisms.

Creating methane from energy plus atmospheric CO2, or CO2 captured from some other industrial process, might make a lot of sense, though. That methane could then be piped elsewhere to be burned in furnaces, or rockets, or whatever. If all of the input energy is from renewables, it could create truly "green" methane. I suspect for most uses of natural gas it will make more sense to just replace gas appliances with electric ones, but for cases where that doesn't work, having carbon-neutral methane would be great.
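For reference, the Sabatier reaction the subject line refers to combines captured CO2 with hydrogen (from electrolysis, in the fully "green" version) and is exothermic:

\[
\mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O} \qquad (\Delta H \approx -165\ \mathrm{kJ/mol})
\]

Most of the input energy goes into making the hydrogen, which is where the round-trip losses come from.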

Comment Re:Probably for the DRM (Score 1) 50

If we did it, though, it would have to be done in such a way that we can guarantee user mods can't cause device attestation to lie about the state of the device.

Doesn't the bootloader already do this? Sure, the kernel could drop whatever into the PCRs (or whatever you guys use, I've only done development against the only hardware attestation mechanism that has an open specification, namely TPM) after that point, but other than that you can at least attest the base state. With TPM at least, the kernel can't alter the PCRs that hash the firmware.

Though what would be nice is if you guys could have a certification program for third-party AOSP variants like GrapheneOS that doesn't prevent stuff like Netflix from working. (The way GrapheneOS is designed doesn't even make it practical to break Netflix DRM.) Also PCI transactions.

At present, the Android bootloader is the root of trust for device lock state and system partition integrity. It is literally the software component that verifies all of that. We don't use TPMs, but if we did, the Android bootloader would be the component that fed the device state and system hash into the PCR. Assuming no change in the architecture, if you can write your own bootloader you can make the device lie about any of this.
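To make "fed the device state and system hash into the PCR" concrete: a PCR can't be set directly, only extended, and each extend hashes the new measurement into the running value, so the final value commits to the whole sequence. A toy model of that operation (not Android code; the stage names are made up for illustration):

```kotlin
import java.security.MessageDigest

// Toy model of a TPM PCR "extend": the new PCR value is the hash of the
// old value concatenated with the new measurement.
fun extend(pcr: ByteArray, measurement: ByteArray): ByteArray =
    MessageDigest.getInstance("SHA-256").digest(pcr + measurement)

fun main() {
    var pcr = ByteArray(32)  // PCRs start out all zeros
    // Illustrative stages only -- not what any real bootloader measures.
    for (stage in listOf("bootloader", "device lock state", "boot.img hash")) {
        pcr = extend(pcr, stage.toByteArray())
    }
    println(pcr.joinToString("") { "%02x".format(it) })
}
```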

TCG DICE, BTW, provides a solution, because the DICE attestation describes the entire software stack and is rooted in the CPU's boot ROM. That's what I was referring to when I said there's work in progress that would make it possible. With DICE, the device attestation could not lie about the integrity/identity of the bootloader, which means it would be possible for us to allow user-signed bootloaders -- or any other firmware component. However, if you actually ran your own code, app developers would get the signal from us that this is a modified device, and many would choose not to trust it, just like with rooting today.
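For anyone unfamiliar with DICE, the layering idea is roughly this: each stage derives the next stage's secret (the Compound Device Identifier, CDI) from its own secret plus a hash of the code it's about to hand control to, starting from a secret fused into the chip. The sketch below is a simplification (plain HMAC-SHA256, made-up stage names), not the Open DICE profile:

```kotlin
import java.security.MessageDigest
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

fun sha256(data: ByteArray): ByteArray =
    MessageDigest.getInstance("SHA-256").digest(data)

// Derive the next stage's CDI from the current CDI and a hash of the next
// stage's code.
fun deriveNextCdi(currentCdi: ByteArray, nextStageCode: ByteArray): ByteArray =
    Mac.getInstance("HmacSHA256").run {
        init(SecretKeySpec(currentCdi, "HmacSHA256"))
        doFinal(sha256(nextStageCode))
    }

fun main() {
    // Placeholder standing in for the Unique Device Secret burned into the chip.
    var cdi = sha256("unique-device-secret".toByteArray())
    for (stage in listOf("bootloader", "TEE OS", "pVM firmware", "kernel")) {
        cdi = deriveNextCdi(cdi, stage.toByteArray())
    }
    // Any change to any earlier stage changes this value, and the keys and
    // certificates derived from it, which is what the attestation reports.
    println(cdi.joinToString("") { "%02x".format(it) })
}
```

Because every later CDI depends on every earlier measurement, a user-supplied bootloader shows up unavoidably in the attestation, which is what makes the "allow it, and let relying parties decide" model workable.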

As for why not... because it would be a huge amount of work to do it even on Google's own devices, and there is no return for that investment. It would make nerds like you and me happy, but would do absolutely nothing for Google's bottom line, or for any OEM's bottom line. The slice of the market that cares is too tiny.

This is what I think a lot of people are afraid of. Right now we only get this in just one architecture: PCs.

And it's going away there, slowly. Not because big bad tech wants to take away the toys, but because security assurance in a networked world inherently requires being able to trust the integrity of the device you're talking to, and the classic PC architecture provides no way to do that. Hence TPMs, DICE, etc.

How many of them even support user-signed system images? Pixel is the only one, AFAIK.

That's what I think bothers a lot of people about Android as a whole -- it leads to the whole feeling of "so this really isn't my phone after all." Though admittedly there's no good alternative, which sucks.

I get that. The other side is that you also want to run code that you didn't write on your devices (e.g. apps), and the developers of that code also have an interest in knowing whether they can trust your device. All of the stuff we do that makes you feel like it's not really your phone is precisely to enable them to trust that their code will execute correctly on your device.

I suppose that could be useful. I've never heard any demand for that feature, though. Is there a patchset that implements it?

A patchset I'm not certain about; I think LineageOS might do it, but I don't know. I do know about this:

https://play.google.com/store/...

I talk to the Lineage guys regularly; I'll ask them.

Comment Re:Rooting, too (Score 1) 50

I'm the TL of the Android HW-backed security team

Since you would be pretty knowledgeable on the topic, could this be used to still allow rooting, while offering a secure environment for the apps that really need it?

Maybe in the long run?

It could eventually be possible for apps to create their own VMs that rely only on known-trustworthy code, enabling the wider Android system to be less trusted. But that would require a lot of work, and I don't think enabling rooting is a use case that would motivate the investment. It's not that the Android team opposes rooting; we don't. But there's just not much motivation to expend a lot of effort to support it, since it's of interest to a very small group of users relative to the Android user population.

One thing that would help is if the rooting/modding community got more involved in Android development, being willing to build and support patches that either add features which wouldn't need root at all if integrated into the core system, or that build infrastructure to make rooting more isolated and safer. I know it seems like Google is a huge company with a hundred thousand engineers and should be able to do anything, but all of those engineers are busy and there are still things that make sense to do but aren't high priority.

Comment Re:Containers (Score 1) 50

AFAICS they simply assume that during the early parts of booting the Android kernel is secure; it loads the KVM low-visor into EL2, and after that it can't mess with it any more even if it gets compromised down the line.

That makes perfect sense and is a lot simpler than I thought. Thanks!

And, yes, the assumption is definitely that the system is secure until the "low-visor" (I like that term) has been loaded into EL2. If the attacker can compromise anything before that, the game is over. Which does make the VMs potentially less secure than TrustZone apps, because the TCB is much larger. And I expect the really critical security stuff will actually stay in TZ, where it will benefit from having all the rest moved out of TZ.
