The US Government is attempting to force Apple to build a signed image that can be flashed onto an iPhone used by one of the San Bernardino shooters. To their credit, Apple have pushed back against this - there's an explanation of why doing so would be dangerous here. But what's noteworthy is that Apple are arguing that they shouldn't do this, not that they can't do this - Apple (and many other phone manufacturers) have designed their phones such that they can replace the firmware with anything they want.
In order to prevent unauthorised firmware being installed on a device, Apple (and most other vendors) verify that any firmware updates are signed with a trusted key. The FBI don't have access to Apple's firmware signing keys, and as a result they're unable to simply replace the software themselves. That's why they're asking Apple to build a new firmware image, sign it with their private key and provide it to the FBI.
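As a rough illustration of what that check amounts to - this is a sketch in Python, not Apple's implementation, and every name in it is my own - the device holds only the vendor's public key and refuses any image whose signature doesn't verify against it:

```python
# Illustrative sketch only - not Apple's actual code. Uses an Ed25519 key via
# the third-party 'cryptography' package; all names here are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def update_is_authorised(vendor_pubkey: Ed25519PublicKey,
                         image: bytes, signature: bytes) -> bool:
    """What the device does: accept an image only if the vendor signed it."""
    try:
        vendor_pubkey.verify(signature, image)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

# Demo: the vendor holds the private key; the device only holds the public half.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_key = vendor_key.public_key()   # baked into the device

image = b"example firmware image"
print(update_is_authorised(device_trusted_key, image, vendor_key.sign(image)))  # True
print(update_is_authorised(device_trusted_key, image, b"\x00" * 64))            # False
```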
But what do we mean by "unauthorised firmware"? In this case, it's "not authorised by Apple" - Apple can sign whatever they want, and your iPhone will happily accept that update. As owner of the device, there's no way for you to reconfigure it such that it will accept your updates. And, perhaps worse, there's no way to reconfigure it such that it will reject Apple's.
I've previously written about how it's possible to reconfigure a subset of Android devices so that they trust your images and nobody else's. Any attempt to update the phone using the Google-provided image will fail - instead, images must be re-signed using the keys that were installed in the device. No matter what legal mechanisms were used against them, Google would be unable to produce a signed firmware image that could be installed on the device without your consent. The mechanism I proposed is complicated and annoying, but it could be integrated into the standard vendor update process such that you simply type a password to unlock a key for re-signing.
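A minimal sketch of that owner-controlled model, under the same caveats as before (illustrative Python, hypothetical names, not any vendor's real update API):

```python
# Sketch of the owner-controlled model described above - illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The owner generates a signing key and enrols the public half on the
#    device, replacing the vendor's verification key.
owner_key = Ed25519PrivateKey.generate()
enrolled_pubkey = owner_key.public_key()       # what the device now trusts

# 2. Any update - including the vendor's own image - must be re-signed by the
#    owner before the device will accept it.
vendor_image = b"vendor-built firmware image"  # placeholder payload
owner_signature = owner_key.sign(vendor_image)

# 3. The device's verification step now checks against enrolled_pubkey, so a
#    vendor signature alone no longer installs; only owner-signed images do.
enrolled_pubkey.verify(owner_signature, vendor_image)  # passes for owner-signed images
```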
Why's this important? Sure, in this case the government is attempting to obtain the contents of a phone that belonged to an actual terrorist. But not all cases governments bring will be as legitimate, and not all manufacturers are Apple. Governments will request that manufacturers build new firmware that allows them to monitor the behaviour of activists. They'll attempt to obtain signing keys and use them directly to build backdoors that let them obtain messages sent to journalists. They'll be able to reflash phones to plant evidence to discredit opposition politicians.
We can't rely on Apple to fight every case - if it becomes politically or financially expedient for them to do so, they may well change their policy. And we can't rely on the US government only seeking to obtain this kind of backdoor in clear-cut cases - there's a risk that these techniques will be used against innocent people. The only way for Apple (and all other phone manufacturers) to protect users is to allow users to remove Apple's validation keys and substitute their own. If Apple genuinely value user privacy over Apple's control of a device, it shouldn't be a difficult decision to make.
Re: Just a shortcut
Date: 2016-02-25 03:49 pm (UTC)

Re: Just a shortcut
Date: 2016-02-25 09:57 pm (UTC)

The actual encryption key is derived from two numbers: one externally supplied, in this case entered by the user, and the other stored internally in the secure hardware. The important thing to keep in mind is that the chip performs all the crypto computations itself, and no key, stored or derived, can ever be retrieved from it.
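Very roughly, and only as an illustration - Apple's actual derivation is more involved than this - it amounts to something like:

```python
# Rough illustration (not Apple's actual derivation): the usable key only
# exists when the user's PIN is combined with a per-device secret that never
# leaves the secure hardware.
import hashlib
import os

device_secret = os.urandom(32)   # stands in for the secret fused into the chip

def derive_key(pin: str) -> bytes:
    # A deliberately slow KDF mixing the PIN with the hardware secret; the
    # ~80 ms per attempt in Apple's hardware is what keeps brute force slow.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), device_secret, 200_000)

key = derive_key("1234")   # wrong PIN -> wrong key -> the data stays ciphertext
```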
The secure hardware is built in such a way that it will destroy the secret data upon detecting any attempt to tamper with the device, thus rendering itself unusable. As with any product of human design, there might be security holes that would allow access to this data, but apparently none have been found so far in Apple's chip.
This kind of hardware has been used for years in payment processing terminals, where the device is in the physical control of an untrusted party. Apple isn't really doing anything new - it's just another use case for an already well-known technique - so it may well be quite secure by now.
In this case it appears that the secure hardware stores a secret part of the key used to encrypt the data (the other part being the user-entered PIN). No one, not even Apple, can retrieve the stored part or the derived key, and the FBI is not demanding this.
The stock iPhone firmware (which may be replaced by Apple) is built to impose a delay after unsuccessful PIN entry attempts, and may be configured so that after 10 unsuccessful attempts it deletes the keys from the secure hardware. It also provides no way of entering the PIN other than by hand on the screen. But this is firmware, which can be updated, and the FBI is asking for a custom firmware build with these restrictions removed so they can brute-force the PIN. As the PIN is 4 digits (10,000 possible codes) and the key computation in the secure hardware takes about 80 ms per attempt, it would crack in no more than 800 seconds.
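For reference, the arithmetic behind that figure:

```python
# The arithmetic behind the 800-second figure above.
pin_space = 10 ** 4      # a 4-digit PIN has 10,000 possible values
per_attempt = 0.080      # ~80 ms per key computation in the secure hardware
print(pin_space * per_attempt)   # 800.0 seconds, i.e. a little over 13 minutes
```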
Note that the feds cannot replace the firmware themselves: the iPhone won't install an unsigned build, and only Apple can sign one. Neither can the feds physically extract the firmware storage from the phone and replace the software directly - the secure chip would destroy the keys. That's why they need Apple's cooperation.
Now, most Android devices do not have any secure hardware part, and so the stored part of the encryption key (if any - the key may also be derived from the password or PIN alone) must be stored in plain hardware and thus be retrievable. The update public key must also be replaceable.
Thus, if this device had no secure hardware (I'm not sure whether any Android device on the market today has one), the feds could simply take out the storage physically, retrieve the stored key part (if any), and then proceed with the brute-force attack.
I completely disagree that this is an "implementation detail". It's in fact what makes a device secure while in the physical control of a hostile party.
Now, the OP is obviously right that Apple could and should (but won't) allow the user to replace the update public key. Yet he's wrong to say that not even Google could help the feds if he replaced the key; in fact, absent the secure hardware, the feds could easily help themselves to it.
Re: Just a shortcut
Date: 2016-02-27 07:13 pm (UTC)

Subtitle: Decapping techniques are effective, but they're not practical in this case.
The iPhone in question does *not* have the "secure enclave" of newer phones.
The positive example they give for decapping is a "break once, break everywhere" strategy: vaporizing 50 chips before the common secret that would allow "counterfeiting" was extracted.
Re: Just a shortcut
Date: 2016-02-29 09:13 pm (UTC)

Second, it appears that there *is* some secure hardware part, enough to keep the feds from even knowing the phone IDs and keys. The flash memory can be physically removed and reattached (although this requires some specialized hardware), but these keys are kept somewhere else, even on the 5c.
Which appears not to be the case on Android devices.