Matthew Garrett ([personal profile] mjg59) wrote2015-02-16 11:31 am

Intel Boot Guard, Coreboot and user freedom

PC World wrote an article on how the use of Intel Boot Guard by PC manufacturers is making it impossible for end-users to install replacement firmware such as Coreboot on their hardware. It's easy to interpret this as Intel acting to restrict competition in the firmware market, but the reality is actually a little more subtle than that.

UEFI Secure Boot as a specification is still unbroken, which makes attacking the underlying firmware much more attractive. We've seen several presentations at security conferences lately that have demonstrated vulnerabilities that permit modification of the firmware itself. Once you can insert arbitrary code in the firmware, Secure Boot doesn't do a great deal to protect you - the firmware could be modified to boot unsigned code, or even to modify your signed bootloader such that it backdoors the kernel on the fly.

But that's not all. Someone with physical access to your system could reflash it. Even if you're paranoid enough that you X-ray your machine after every border crossing and verify that no additional components have been inserted, modified firmware could still be grabbing your disk encryption passphrase and stashing it somewhere for later examination.

Intel Boot Guard is intended to protect against this scenario. When your CPU starts up, it reads some code out of flash and executes it. With Intel Boot Guard, the CPU verifies a signature on that code before executing it[1]. The hash of the public half of the signing key is flashed into fuses on the CPU. It is the system vendor that owns this key and chooses to flash it into the CPU, not Intel.
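The verification order matters here: the CPU doesn't trust the public key shipped in flash directly, it first checks that key against the hash burned into fuses, and only then uses it to verify the firmware. A minimal sketch of that flow, with all names hypothetical and `signature_valid` standing in for the real cryptographic verification step:

```python
import hashlib

# Hypothetical fused value: the SHA-256 hash of the vendor's public key,
# burned into CPU fuses by the system vendor at manufacturing time.
FUSED_KEY_HASH = hashlib.sha256(b"vendor-public-key").hexdigest()

def verify_boot_block(public_key: bytes, firmware: bytes,
                      signature_valid: bool) -> bool:
    """Sketch of the Boot Guard trust chain: check the supplied public
    key against the fused hash first, then verify the firmware
    signature with it. `signature_valid` is a placeholder for a real
    RSA verification of `firmware`."""
    if hashlib.sha256(public_key).hexdigest() != FUSED_KEY_HASH:
        return False  # key substitution attempt: refuse to boot
    return signature_valid

# A firmware image signed with a different key never even gets its
# signature checked - the key-hash comparison fails first.
print(verify_boot_block(b"attacker-public-key", b"evil.fw", True))  # False
print(verify_boot_block(b"vendor-public-key", b"stock.fw", True))   # True
```

Because the key hash lives in one-time-programmable fuses rather than flash, replacing the firmware image alone buys an attacker nothing; they would also need a signature from the vendor's private key.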

This has genuine security benefits. It's no longer possible for an attacker to simply modify or replace the firmware - they have to find some other way to trick it into executing arbitrary code, and over time these will be closed off. But in the process, the system vendor has prevented the user from being able to make an informed choice to replace their system firmware.

The usual argument here is that in an increasingly hostile environment, opt-in security isn't sufficient - it's the role of the vendor to ensure that users are as protected as possible by default, and in this case all that's sacrificed is the ability for a few hobbyists to replace their system firmware. But this is a false dichotomy - UEFI Secure Boot demonstrated that it was entirely possible to produce a security solution that provided security benefits and still gave the user ultimate control over the code that their machine would execute.

To an extent the market will provide solutions to this. Vendors such as Purism will sell modern hardware without enabling Boot Guard. However, many people will buy hardware without consideration of this feature and only later become aware of what they've given up. It should never be necessary for someone to spend more money to purchase new hardware in order to obtain the freedom to run their choice of software. A future where users are obliged to run proprietary code because they can't afford another laptop is a dystopian one.

Intel should be congratulated for taking steps to make it more difficult for attackers to compromise system firmware, but criticised for doing so in such a way that vendors are forced to choose between security and freedom. The ability to control the software that your system runs is fundamental to Free Software, and we must reject solutions that provide security at the expense of that ability. As an industry we should endeavour to identify solutions that provide both freedom and security and work with vendors to make those solutions available, and as a movement we should be doing a better job of articulating why this freedom is a fundamental part of users being able to place trust in their property.

[1] It's slightly more complicated than that in reality, but the specifics really aren't that interesting.

Re: mistaken

(Anonymous) 2015-06-25 09:22 am (UTC)
CPU fuses require higher confidentiality levels. Semantics are important. These are likely not encoded into fuses, but rather some kind of [secure] flash.

Re: mistaken

(Anonymous) 2015-06-25 09:53 am (UTC)
Fuses are implemented differently; there are usually two types of them, more like programmable logic. In some embodiments, a higher voltage is introduced to set or break them. They're often used to permanently (sometimes semi-permanently) control functional elements. I highly doubt Intel would give vendors access to these.

Flash, however, can be programmed and reprogrammed; it's usually implemented as NAND or NOR memory.

Re: mistaken

(Anonymous) 2015-12-13 11:43 pm (UTC)
Flash requires high voltage for erase and write, but it's usually generated internally by an on-chip charge pump. And often fuses/OTP/whatever you call it are just something similar to flash or EEPROM but unable to perform an ERASE. This makes them one-time (or sometimes several-times) programmable, and generally a one-way ticket.

In the NOR case, you start with a pre-erased 0xFFFF... ("all ones") across the whole block. The WRITE procedure brings the desired bits down, producing the number you want as the result. The only way to get bits back from 0 to 1 is to perform an ERASE. If the hardware can't erase, you can't do it, obviously. This design theoretically allows several programming operations: you can create a new number as long as it only needs 1-to-0 transitions, not the reverse. Yet if a "true" OTP is desired, where you get only one shot, there can be a lock bit in the same area, which is cleared on write completion and locks down further write attempts altogether. That's how fuses are often implemented, though Intel may or may not have such an implementation.
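The 1-to-0-only write behaviour described above can be sketched in a few lines of Python; `nor_write` here is an illustrative helper, not any real flash controller API:

```python
# Model of NOR-flash write semantics: a WRITE can only clear bits
# (1 -> 0); restoring a bit to 1 requires an ERASE that OTP hardware
# simply doesn't expose.

ERASED = 0xFF  # NOR cells read "all ones" after an erase

def nor_write(current: int, value: int) -> int:
    """Writing ANDs the new value in: 1->0 transitions take effect,
    while attempted 0->1 transitions silently do nothing."""
    return current & value

cell = ERASED
cell = nor_write(cell, 0xA5)  # first program: 0xFF & 0xA5 == 0xA5
cell = nor_write(cell, 0x21)  # second write only clears more bits
print(hex(cell))              # 0xa5 & 0x21 == 0x21
cell = nor_write(cell, 0xFF)  # trying to set bits back: no effect
print(hex(cell))              # still 0x21 - only ERASE restores 1s
```

This is why a lock bit works as a one-way switch: once it's been written to 0, no further WRITE can ever bring it back to 1.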

Actually, many microcontrollers have erasable fuses, though clearing security-sensitive ones (e.g. removing a cheap readout protection) usually causes the whole IC to mass-erase its NVM before unlocking the fuses. This ensures you can take over the system if desired, BUT you won't get any of the protected program code or data. In fact, if the system designer used a microcontroller with built-in memory and locked out reading, you can have quite a hard time trying to reverse engineer the uC firmware.

At the end you receive a "vanilla" IC, as if it had come from the factory, without any data left. In principle, similar tech can do secure booting as well: external software would be unable to access the internals, and the internal code can do verification and refuse to harm itself, at least until a request comes from the proper signing-key owner. Yet this assumes some rewritable on-chip memory and serious separation of internal on-chip resources from externally available ones.

And even then you can't fully trust IC vendors. Take Actel FPGAs: they advertised secure code protection, where the hardware encrypts the configuration bitstream stored in the flash array using AES. So even if you dissolve the IC package and use an electron beam to read the flash array, all you get is garbage. Sounds interesting, right? But then some smartass got the idea that it would be interesting to throw various bytes at JTAG, fuzzing the JTAG controller. It turned out Actel had (undocumented) commands that could read the firmware back, in already decrypted form. Yay! So anyone can dump the "protected" firmware at will. That's what you get if you trust vendors' marketing BS too much.

At the end of the day, there is only one good option for making sure an IC does what's advertised: it has to be open hardware...