which means it would be possible for us to allow user-signed bootloaders
Just to clarify, I mean loading our own public keys that we can ask the bootloader to trust, not signing our own firmware.
Ah! Pixel has always supported that. If you want to sign your own system images, buy a Pixel and enjoy.
Android verified boot has four states, based on device configuration and the result of signature verification:
Green: The system was verified by the OEM keys.
Yellow: The system was verified by user-installed keys.
Orange: The system was not verified because the bootloader is unlocked.
Red: Verification failed. The device refuses to boot in this state.
All OEMs are required to support red and green states. Some additionally support unlockable bootloaders, and therefore orange state. Pixel also supports yellow. It's possible that some other OEMs do, too.
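If you want to check which state a device actually booted in, the bootloader reports it through a system property. A minimal sketch, assuming adb is on your PATH and a device is attached (ro.boot.verifiedbootstate is the property AVB-based devices use for this):

    # Query a connected device's verified boot state over adb.
    import subprocess

    def verified_boot_state() -> str:
        out = subprocess.run(
            ["adb", "shell", "getprop", "ro.boot.verifiedbootstate"],
            capture_output=True, text=True, check=True,
        )
        # One of "green", "yellow", "orange"; you won't observe "red"
        # this way, since a device in red state refuses to boot at all.
        return out.stdout.strip()

    print(verified_boot_state())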
In case anyone other than you and me is reading this (because I'm sure you already understand what I'm going to say): the value of yellow over orange is that when your device is in orange state (bootloader unlocked), anyone who obtains access to your device can flash other software to it, software that potentially bypasses all system-enforced security protections[*]. If you lock your bootloader after installing your own signing keys, you then know that your device will refuse to run anything not signed by either you (yellow mode) or the OEM (green mode).
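For anyone who wants to actually do this on a Pixel, the setup looks roughly like the sketch below. Assumptions: the bootloader is already unlocked, avbtool (from AOSP) and fastboot are on your PATH, your images are signed with mykey.pem, and pkmd.bin is the exported public-key blob; avb_custom_key is the Pixel partition that holds the user key.

    # Sketch: install a user signing key on a Pixel and re-lock (yellow mode).
    import subprocess

    STEPS = [
        # Export the AVB public-key blob from your signing key.
        ["avbtool", "extract_public_key", "--key", "mykey.pem",
         "--output", "pkmd.bin"],
        # Ask the bootloader to trust that key (Pixel-specific partition).
        ["fastboot", "flash", "avb_custom_key", "pkmd.bin"],
        # Re-lock; the device now boots yellow for images signed by mykey.pem.
        ["fastboot", "flashing", "lock"],
    ]

    for step in STEPS:
        subprocess.run(step, check=True)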
TCG DICE, BTW, provides a solution, because the DICE attestation describes the entire software stack and is rooted in the CPU's boot ROM.
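The core trick, roughly: each boot stage derives the next stage's secret from its own secret plus a measurement of the code it's about to hand off to, starting from a Unique Device Secret only the boot ROM can read. A toy illustration (the real TCG spec uses specific KDF constructions and derives per-stage key pairs, not bare HMAC):

    # Toy version of DICE's CDI (Compound Device Identifier) chaining.
    import hashlib
    import hmac

    def next_cdi(current_cdi: bytes, next_stage_code: bytes) -> bytes:
        measurement = hashlib.sha256(next_stage_code).digest()
        return hmac.new(current_cdi, measurement, hashlib.sha256).digest()

    # UDS: the Unique Device Secret fused into the chip, readable only by ROM.
    uds = b"\x00" * 32  # placeholder; on real silicon this never leaves the ROM
    cdi = next_cdi(uds, b"first-mutable-firmware")
    cdi = next_cdi(cdi, b"bootloader-image")
    cdi = next_cdi(cdi, b"os-image")
    # Changing any stage's code changes every later CDI, so a certificate
    # chain derived from these secrets attests the entire software stack.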
Interesting, I hadn't heard of that. My interest here is mainly in device integrity, for protecting systems at the company I work for (I do security). We're the kind of company that has no problem rolling our own stuff instead of vendor solutions when they're inadequate, so we've rolled our own attestation framework.
As a consumer though... meh. I really hate giving apps such tight control over device state. Especially banking apps, which literally tick every last "security" and "distrust anything abnormal" checkbox you guys give them. That does nothing but annoy me as a user and, realistically speaking, does basically nothing to improve their security posture. Meanwhile, they all have it in their heads that SMS-based 2FA is a good idea and that insane password complexity rules are actually useful. Peak PHB syndrome. We even have to follow security practices that NIST itself says are a bad idea, specifically because we have contracts with other companies that require them anyway. /rant
I agree with every word of that rant. The Android devrel team is trying to educate app developers, to help them understand when they should and should not use device attestation. It's hard, though. There's a strong tendency for people who don't understand anything about security to believe they need to Turn On All The Security Things, even when it gives them absolutely no value. This doesn't change the fact that there are some legitimate needs, of course.
And it's going away there, slowly. Not because big bad tech wants to take away the toys, but because security assurance in a networked world inherently requires being able to trust the integrity of the device you're talking to, and the classic PC architecture provides no way to do that. Hence TPMs, DICE, etc.
I can't comment on DICE, but the TPM as it is now doesn't completely lock you down unless you, the user, opt in. Being able to escrow your own AES keys behind PCR state and a PIN is nice, yet most people here have zero appreciation for that (while also claiming to be security experts) and spout off stupid conspiracy theories about it that make it plainly obvious they haven't the first clue how it works. And the TPM, being modular, really isn't the ideal DRM tool, at least not in its current implementation. As for the lower-numbered PCRs, especially the firmware hashes, app developers may as well not bother validating them in attestation quotes.
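For the curious, that escrow flow looks roughly like this with the stock tpm2-tools, driven from Python here just for the annotations. Flag spellings are from the tpm2-tools 4.x era, so treat this as a sketch rather than copy-paste material. It seals aes.key against PCR 7 (the Secure Boot measurement); a PIN would additionally go in as an authValue on the sealed object.

    # Sketch: seal a key file to PCR state with tpm2-tools.
    import subprocess

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    # Primary key under the owner hierarchy.
    run("tpm2_createprimary", "-C", "o", "-c", "primary.ctx")
    # Policy binding the seal to the current PCR 7 value (sha256 bank).
    run("tpm2_createpolicy", "--policy-pcr", "-l", "sha256:7", "-L", "policy.dat")
    # Seal the AES key under that policy.
    run("tpm2_create", "-C", "primary.ctx", "-L", "policy.dat",
        "-i", "aes.key", "-u", "seal.pub", "-r", "seal.priv")
    run("tpm2_load", "-C", "primary.ctx", "-u", "seal.pub", "-r", "seal.priv",
        "-c", "seal.ctx")
    # Unsealing succeeds only while PCR 7 still matches the sealed state.
    run("tpm2_unseal", "-c", "seal.ctx", "-p", "pcr:sha256:7")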
It's not the TPM that enables user opt-in, it's the software (firmware + OS). At present, most PC system software allows you to choose whether to use the TPM and reject unauthorized software... but Microsoft would definitely like to change that, for reasons both good (security) and bad (lock-in). Will they be able to? I don't know.
[*] Your data is probably safe even in orange mode[**]. The attacker can replace the boot, system and vendor partitions with whatever they like, but they cannot modify the TEE, and all of your data is encrypted with keys managed by the TEE and unlocked with your lockscreen password (we call it the Lock Screen Knowledge Factor, LSKF, for obscure reasons). Your LSKF is low-entropy, so that doesn't seem like it's worth much, but the TEE is also responsible for verifying the LSKF and it implements brute-force mitigation, in the form of enforced delays after failed attempts. The delay function isn't as aggressive as I think it should be, but it would take ~13 years to search the space of 4-digit PINs, and anything with higher entropy is safe more or less forever. Attackers would get more value by spending their time trying to find vulnerabilities in the TEEs than by brute-forcing your LSKF (unless it's really bad).
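Back-of-envelope on that ~13 years figure, with made-up steady-state numbers (the real delay schedule starts faster and ramps up; assume it averages out to roughly half a day per guess):

    # Sanity check: exhausting the 4-digit PIN space under enforced delays.
    pin_space = 10 ** 4        # all 4-digit PINs
    delay_hours = 11.5         # assumed average enforced delay per guess
    years = pin_space * delay_hours / (24 * 365)
    print(f"{years:.1f} years")  # ~13.1 years for the full space
    # A random 6-digit PIN at the same rate is ~1300 years; anything
    # alphanumeric is out of reach entirely.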
[**] Except for the Evil Maid attack. If the attacker can get your device, flash software that sends them all your data, then give the device back to you without you noticing, then the data encryption and TEE security do nothing to help. Signing your firmware and locking your bootloader -- yellow mode -- will block this attack. [***]
[***] Except for the Replacement Device Attack, which also works against green mode devices. If the attacker steals your device and replaces it with one that looks identical but grabs your LSKF and sends it to the attacker, you're hosed. This attack is one that we often consider in Android threat modeling, BTW, not so much because we think it's common but because it defines an upper bound on the cost of attacks. If anyone is proposing features that would block attacks that are more expensive, riskier (to the attacker) and less scalable than the Replacement Device Attack, there is no value in implementing them until someone devises a way to mitigate the Replacement Device Attack, which seems impossible.