[personal profile] mjg59
The US Government is attempting to force Apple to build a signed image that can be flashed onto an iPhone used by one of the San Bernardino shooters. To their credit, Apple have pushed back against this - there's an explanation of why doing so would be dangerous here. But what's noteworthy is that Apple are arguing that they shouldn't do this, not that they can't do this - Apple (and many other phone manufacturers) have designed their phones such that they can replace the firmware with anything they want.

In order to prevent unauthorised firmware being installed on a device, Apple (and most other vendors) verify that any firmware updates are signed with a trusted key. The FBI don't have access to Apple's firmware signing keys, and as a result they're unable to simply replace the software themselves. That's why they're asking Apple to build a new firmware image, sign it with their private key and provide it to the FBI.
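To make that concrete, here's a minimal sketch of such a check in Python (the RSA-PSS scheme, key format and file names are illustrative assumptions, not Apple's actual update format):

```python
# Minimal sketch of signed-firmware verification, assuming an RSA-PSS
# signature over the raw image. Key names, file layout and padding are
# illustrative, not any vendor's actual update format.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def firmware_is_authorised(image_path, sig_path, vendor_pubkey_path):
    with open(vendor_pubkey_path, "rb") as f:
        pubkey = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        pubkey.verify(
            signature,
            image,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True    # produced by the holder of the vendor's private key
    except InvalidSignature:
        return False   # refuse to flash the update
```

Only the holder of the matching private key can produce a signature this check will accept, which is why the FBI needs Apple rather than being able to build and flash the image themselves.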

But what do we mean by "unauthorised firmware"? In this case, it's "not authorised by Apple" - Apple can sign whatever they want, and your iPhone will happily accept that update. As owner of the device, there's no way for you to reconfigure it such that it will accept your updates. And, perhaps worse, there's no way to reconfigure it such that it will reject Apple's.

I've previously written about how it's possible to reconfigure a subset of Android devices so that they trust your images and nobody else's. Any attempt to update the phone using the Google-provided image will fail - instead, they must be re-signed using the keys that were installed in the device. No matter what legal mechanisms were used against them, Google would be unable to produce a signed firmware image that could be installed on the device without your consent. The mechanism I proposed is complicated and annoying, but this could be integrated into the standard vendor update process such that you simply type a password to unlock a key for re-signing.
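Re-signing is the mirror image of that check. Here's a sketch, again with hypothetical file names and formats, of what the "unlock a key and re-sign the vendor image" step could look like:

```python
# Sketch of re-signing a vendor-provided image with an owner-installed key,
# so that a device configured to trust only the owner's public key will
# accept it. File names and formats are hypothetical.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def resign_firmware(image_path, owner_key_path, passphrase, out_sig_path):
    with open(owner_key_path, "rb") as f:
        privkey = serialization.load_pem_private_key(
            f.read(), password=passphrase.encode())
    with open(image_path, "rb") as f:
        image = f.read()
    signature = privkey.sign(
        image,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    with open(out_sig_path, "wb") as f:
        f.write(signature)
```

The passphrase prompt is the "type a password to unlock a key" step: the owner's signing key stays encrypted at rest, and a vendor image only becomes installable once the owner has deliberately re-signed it.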

Why's this important? Sure, in this case the government is attempting to obtain the contents of a phone that belonged to an actual terrorist. But not all cases governments bring will be as legitimate, and not all manufacturers are Apple. Governments will request that manufacturers build new firmware that allows them to monitor the behaviour of activists. They'll attempt to obtain signing keys and use them directly to build backdoors that let them obtain messages sent to journalists. They'll be able to reflash phones to plant evidence to discredit opposition politicians.

We can't rely on Apple to fight every case - if it becomes politically or financially expedient for them to do so, they may well change their policy. And we can't rely on the US government only seeking to obtain this kind of backdoor in clear-cut cases - there's a risk that these techniques will be used against innocent people. The only way for Apple (and all other phone manufacturers) to protect users is to allow users to remove Apple's validation keys and substitute their own. If Apple genuinely value user privacy over Apple's control of a device, it shouldn't be a difficult decision to make.

Date: 2016-02-22 10:47 pm (UTC)
From: (Anonymous)
Honestly, I can't see Apple doing this. More than anything else, they're moved by profit, and for profit, they need _control_. Control that they'll never put in the hands of the users. I could see another manufacturer trying this, but not Apple in its current corporate mindset.

Bottom line is, Tim Cook can say all the flowery stuff about freedom, making the world a better place and so on, but that's all a bunch of PR crap if his company can't act where it counts. It's those crucial moments that teach us who really acts for freedom (not just freedom for their own company), and who is merely spouting feel-good bravado.

I've been reading your blog for quite a while, Matthew, and even gave the address to my students so they can get a good view of the industry from an insider. One thing I always point out in class is that the tech sector is one of the biggest today, but its big companies are still pushed around by governments like a bunch of toddlers. You wouldn't see a company from a more established sector being bossed around by the government like that. I don't know, I thought maybe we'd already matured enough as an industry for this kind of thing to no longer be commonplace. I recall when I had just started teaching in 2002 and there was that FBI "nudge" for anti-virus companies to allow backdoors in their products, all in the name of finding terrorists. And today, we're still being pushed around the same way.

Date: 2016-02-23 02:12 am (UTC)
gerald_duck: (frontal)
From: [personal profile] gerald_duck
Dumb question: can you not protect an Android device simply by turning off automatic updates?

Even if you're then not especially careful about which updates you accept, you'd still avoid any risk of an upgrade being applied while the phone was locked… no?

Date: 2016-02-23 07:25 am (UTC)
From: (Anonymous)
You seem to miss one important thing: on Apple devices no one can replace the keys, as these are stored in a secure chip that will self-destruct if it detects any attempt at tampering. On most (or maybe even all?) Android phones this key is not stored in tamper-proof hardware, so the feds wouldn't really need Google or the manufacturer to do anything - they'd just open the phone, flash the update public key directly and do what they want.

So no, Android isn't more secure here.

Date: 2016-02-23 11:21 am (UTC)
From: (Anonymous)
No matter what you call it, it makes a hell of a difference in this case: when there is secure hardware, no one but Apple can replace the update key; when there isn't, anyone in physical possession of the unit can.

Date: 2016-02-23 04:04 pm (UTC)
From: (Anonymous)
Well said.

A valid thought experiment

Date: 2016-02-23 05:37 pm (UTC)
From: (Anonymous)
I just can't help thinking again that you're reaching. There's a hole in the justification that I've seen others point out.

Looking at the explanation in your Nexus post, why would you want to trigger a signed upgrade from the bootloader (which preserves user data)?

If upgrades are only triggered from the OS, then the OS can require user authentication - which is the same UX you described in this post (the re-signing solution). IOW, why not require that the iPhone be unlocked before it will approve the installation of a new update?

So I think Apple could re-design their phone so that they couldn't comply with this court order. It would be galling and expensive to have to redo it, but I can't see that it would cause any other problem.

I agree with the point you're making in general - "restricted boot" of closed-source code is very concerning. I remain interested in the work you've done, showing that secure boot can still be compatible with Free software.

-- Alan Jenkins

Apple

Date: 2016-02-23 06:08 pm (UTC)
From: (Anonymous)
> We can't rely on Apple to fight every case

Indeed, we cannot - they would not even have fought this one if the FBI had been so kind as to ask them under seal. They don't care so much about their users' data as they care about their image.

See <http://www.nytimes.com/2016/02/19/technology/how-tim-cook-became-a-bulwark-for-digital-privacy.html?_r=0>:
> Apple had asked the F.B.I. to issue its application for the tool under seal. But the government made it public, prompting Mr. Cook to go into bunker mode to draft a response, according to people privy to the discussions, who spoke on condition of anonymity. The result was the letter that Mr. Cook signed on Tuesday, where he argued that it set a “dangerous precedent” for a company to be forced to build tools for the government that weaken security.

Re: Apple

Date: 2016-02-23 06:18 pm (UTC)
From: (Anonymous)
No matter the CEO's personal stance on the privacy debate, Apple is still a business, so it's understandable (and quite predictable) that they'll act one way in public, and another in private.

I don't really believe we'll ever see any kind of social change from those old players. We need new players in the industry to inject new ideas and practices. Trusting a corporation like Apple to change is a dead end. Their priority is their own subsistence, not change. As such, I wouldn't trust any CEO's promises in that regard, be it Tim Cook or anyone else. To me, that's a bunch of vain PR talk.

Firmware updates

Date: 2016-02-23 07:44 pm (UTC)
From: (Anonymous)
Apple screwed up here. An update of the firmware should wipe the device, i.e., wipe the old secret key.

For a legitimate use case, the backup-update-restore cycle is inconvenient, but I doubt it needs to happen often.

Re: Firmware updates

Date: 2016-02-24 02:32 am (UTC)
From: (Anonymous)
_Firmware_ updates, specifically those having to do with the secure enclave. OS updates and application updates don't need to wipe anything. Firmware updates really shouldn't happen that often.

Or, for less trouble, they could either (1) require both an Apple signature and the current key, or (2) require just the Apple signature and do the wipe.

Does Android suffer from the same problem?

Date: 2016-02-24 03:07 am (UTC)
From: (Anonymous)
I take it the issue here is that the PIN is necessarily weak, so it is only secure because the secure enclave prevents you from testing more than a few (10?) combinations. The root cause of the problem is not that Apple can upgrade the firmware in the phone; it is that Apple allows itself to upgrade the firmware in the secure enclave, and such an upgrade can turn off the secure enclave's protections.
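To make that concrete, the enclave's protection amounts to logic like the following sketch (the 10-attempt wipe matches the publicly reported behaviour; the escalating delays are placeholder values, and the structure and names are invented for illustration - the real logic lives in firmware, not Python):

```python
import time

# Sketch of attempt-limiting logic attributed to the secure enclave.
# MAX_ATTEMPTS matches publicly reported behaviour; the delay values are
# placeholders, and the callbacks stand in for hardware operations.
MAX_ATTEMPTS = 10
DELAYS = {5: 60, 6: 300, 7: 900, 8: 900, 9: 3600}  # seconds, escalating

class PinGuard:
    def __init__(self, check_pin, wipe_keys):
        self.check_pin = check_pin  # callback into the key-derivation hardware
        self.wipe_keys = wipe_keys  # callback that erases the stored key material
        self.failures = 0

    def try_pin(self, pin):
        time.sleep(DELAYS.get(self.failures, 0))  # enforced delay after failures
        if self.check_pin(pin):
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= MAX_ATTEMPTS:
            self.wipe_keys()        # the optional "erase data" setting
        return False
```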

You mentioned that Android also uses signed firmware, but that you can load your own keys. But it wasn't clear whether Android (or at least the current Nexus phones) allows you to upgrade the firmware in its implementation of the "secure enclave", be it TrustZone, a TPM or whatever else they use. If it does, it seems just as broken as Apple's implementation.

I'm not sure why they allow upgrading of the secure firmware at all. We've had TPMs for over a decade now. I'm guessing most of them aren't firmware-upgradeable, but that hasn't mattered because they are, in the end, very well-defined and comparatively simple machines. So simple that perhaps, with a bit of effort, they don't need to be upgradeable, because their makers can be fairly sure they have no bugs while still staying within the "no non-trivial software is free of bugs" law.

Also, I notice that Nuvoton advertises that their TPMs can support TPM 2.0 via a firmware upgrade. This smells like it could have the same problem, which would be pretty sad. I guess you could allow a firmware upgrade if doing it destroyed all the secrets it protected. I wonder if they do that?

Date: 2016-02-24 03:40 am (UTC)
From: (Anonymous)
Why not require a user password to update the firmware?

Date: 2016-02-24 04:40 am (UTC)
From: (Anonymous)
Good point.
Some software that can never be updated uses the TPM (https://en.wikipedia.org/wiki/Trusted_Platform_Module) to control what firmware can be installed and by whom.

Re: Apple

Date: 2016-02-24 05:16 am (UTC)
From: (Anonymous)
What sort of "new players"? Commercial? They'll have the exact same issues that Apple has. Non-commercial? They won't have nearly the resources that a commercial entity has (indeed nowhere near what Apple has, which dwarfs other companies).

Or put another way, if this hypothetical entity's priority is not their own subsistence, what are the odds of them subsisting? Do I really want to put my trust in such an organization?

Re: Firmware updates

Date: 2016-02-24 08:49 am (UTC)
From: (Anonymous)
You're just assuming that the Secure Enclave doesn't have security holes that need fast fixing (which means OTA, without a backup/wipe/restore cycle).

Date: 2016-02-24 01:39 pm (UTC)
From: (Anonymous)
I'll just mention my large incomplete essay on this subject: https://github.com/pjc50/pjc50.github.io/blob/master/pentagram-control.md

Just a shortcut

Date: 2016-02-24 11:26 pm (UTC)
From: (Anonymous)
We should not forget that the government isn't asking Apple for anything they couldn't do themselves. I believe there is nothing that prevents the government from taking out the storage hardware, extracting the encrypted data, and brute-forcing the key on the copy. Asking Apple for modified firmware is a convenience, because they can then use the firmware's password interface and don't have to reverse engineer how exactly the encryption is implemented, but it doesn't give the government any capabilities it doesn't already have.

Re: Just a shortcut

Date: 2016-02-25 03:26 pm (UTC)
From: (Anonymous)
Not really. The encryption key is generated by some PBKDF iterations over the PIN + a device-specific 256-bit key that's burned into secure storage. Unless they can get the latter from secure storage--which is probably fairly hard--this devolves into brute-forcing a random 256-bit AES key, which is not really feasible.
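Roughly, the derivation being described looks like this sketch (iteration count and construction are illustrative, not Apple's actual KDF):

```python
import hashlib, os

# Sketch of deriving a data-encryption key from a short PIN plus a
# device-unique 256-bit secret. The iteration count and construction are
# illustrative only.
def derive_key(pin: str, device_secret: bytes, iterations: int = 100_000) -> bytes:
    assert len(device_secret) == 32           # 256-bit device-unique key
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), device_secret, iterations)

device_secret = os.urandom(32)                # stands in for the fused hardware secret
key = derive_key("1234", device_secret)       # 32-byte key for data encryption
```

Without the device-specific secret an attacker faces an effectively random 256-bit key; with it, the search space collapses to the PIN, which is why the secret is kept inside tamper-resistant hardware.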

Re: Just a shortcut

Date: 2016-02-25 03:49 pm (UTC)
From: (Anonymous)
Why do you think getting something from "secure storage" is hard? In other words, how does "secure storage" differ from any other form of storage?

Re: Just a shortcut

Date: 2016-02-25 09:57 pm (UTC)
From: (Anonymous)
It's not just storage.

The actual encryption key is derived from two numbers: one externally supplied, in this case entered by a user, and the other internally stored in the secure hardware. The important thing to keep in mind is that the chip performs all the crypto computations itself and no key, stored or derived, can ever be retrieved from it.

The secure hardware is built in such a way that it will destroy the secure data upon detecting any attempt to tamper with the device, thus rendering itself unusable. As with any product of human design, there might be security holes that would allow access to this data, but apparently none have been found in Apple's chip so far.

This kind of hardware has been used for years in payment processing terminals, where a device is in the physical control of an untrusted party. Apple isn't really doing anything new - it's just another use case for an already well-known technique - so it may well be quite secure.

In this case it appears that the secure hardware stores one part of the key used to encrypt the data (the other being a user-entered PIN). No one, not even Apple, can retrieve the stored part or the derived key, and the FBI is not demanding this.

The stock iPhone firmware (which may be replaced by Apple) is built to impose a delay after unsuccessful PIN entry attempts, and may be configured so that after 10 unsuccessful attempts it will delete the keys from the secure hardware. It also doesn't provide any way of entering the PIN other than by hand on the screen. But this is firmware, which may be updated, and the FBI is asking for a custom firmware build with these restrictions removed so they can brute force the PIN. As the PIN is 4 digits and the key computation in the secure hardware takes about 80ms, it would crack in no more than 800 seconds.
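The arithmetic behind that 800-second figure, as a quick sketch:

```python
# Worst-case time to brute force a 4-digit PIN once the retry delays and
# the 10-attempt wipe are removed, at ~80ms per hardware key derivation.
pin_space = 10 ** 4                        # 0000..9999
seconds_per_attempt = 0.080
print(pin_space * seconds_per_attempt)     # 800.0 seconds, i.e. under 14 minutes
```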

Note that the feds cannot replace the firmware themselves: the iPhone won't install an unsigned build, and only Apple can sign one. Neither can the feds physically extract the firmware storage from the phone and replace the software directly - the secure chip would destroy the keys. That's why they need Apple's cooperation.

Now, most Android devices do not have any secure hardware part, and so the stored part of the encryption key (if any - the key may also be derived from the password or PIN alone) must be stored in plain hardware and thus be retrievable. The update public key must also be replaceable.

Thus, if this device had no secure hardware (I'm not sure whether any Android device on the market today has a secure hardware part), the feds could simply take out the storage physically, retrieve the stored key part (if any), and then proceed with the brute force attack.

I completely disagree that this is an "implementation detail". It's in fact what makes a device secure while in the physical control of a hostile party.

Now, the OP is obviously right that Apple could and should (but won't) allow the user to replace the update public key. Yet he's wrong to say that not even Google could help the feds if the user replaced the key; in fact, absent the secure hardware, the feds could easily help themselves to it.

Re: Firmware updates

Date: 2016-02-25 10:06 pm (UTC)
From: (Anonymous)
The Secure Enclave may not support OTA updates. Yes, that assumes there are no bugs that need to be fixed overnight, but that's a reasonable assumption. This kind of hardware has been widely used for years now, although not in phones, so it's in general well tested and understood. It may be a completely reasonable design decision to take the risk of having to support a backup/erase/restore cycle in case of bugs.

Re: Firmware updates

Date: 2016-02-25 10:43 pm (UTC)
From: (Anonymous)
The phone in question does not have a secure enclave. Newer phones do, but the firmware is, as you suggest (and as best I'm aware), updatable without wiping the keys.

However, wiping the keys entirely is neither necessary nor practical; it would be just as easy to make the firmware update process require the secure enclave to be unlocked with the key first. That would prevent an "authorised" backdoor reflash in this situation.

It's also possible this is already required; we don't know for sure, as Apple don't describe the secure enclave update process in as much detail as the rest of the processes in their iOS security guide.

However, as before, this is not relevant to this case, as the 5C does not contain a secure enclave.

Re: A valid thought experiment

Date: 2016-02-27 06:28 pm (UTC)
From: (Anonymous)
Right. The idea of re-signing the image is one such protocol.

You don't need the handshake/resigning to restore the phone to factory condition. You only need it if you want to restore a good OS image *and* preserve user data. I guess that counts as increased support cost.

-- Alan

I can imagine it's more difficult to design it that way, because what's being discussed is a surprisingly narrow threat model (a "special case").

http://www.nytimes.com/2016/02/25/technology/apple-is-said-to-be-working-on-an-iphone-even-it-cant-hack.html?_r=0

My interpretation is that Apple noticed everyone making this specific point - a point people make because it's a direct implication of the more detailed explanations you read in the media. So Apple wanted to have an answer (for the future, obviously not as an argument in the current legal case). This is the only answer I've seen from them: that they're looking at a technical patch for this specific threat model.

It seems like the point you wanted to make is about stronger threat models, closer to http://arstechnica.co.uk/security/2016/02/most-software-already-has-a-golden-key-backdoor-its-called-auto-update/ . [Not that I understand much apart from the title of that article; I just think it's a relevant point to discuss.]

In this case I still don't think it helps to re-sign Apple images. They'd have to be public images (a la TLS Certificate Transparency) that we know are safe, e.g. reproducible builds of source code which is subject to independent scrutiny. IOW, you need an open source OS.

Thus the evil nature of locking owners out of controlling their hardware: it prevents open source system software from being developed, because that doesn't match their real goals. Which is what you wrote, but I found it hard to work out as written, and whether we're actually disagreeing on anything.

Re: Just a shortcut

Date: 2016-02-27 07:13 pm (UTC)
From: (Anonymous)
http://arstechnica.co.uk/security/2016/02/how-the-fbi-could-use-acid-and-lasers-to-access-data-stored-on-seized-iphone/

Subtitle: Decapping techniques are effective, but they're not practical in this case.

The iPhone in question does *not* have the "secure enclave" of newer phones.

The positive example they give for decapping is a "break once, break everywhere" strategy: vaporizing 50 chips before the common secret that would allow "counterfeiting" was extracted.

Re: Just a shortcut

Date: 2016-02-29 09:13 pm (UTC)
From: (Anonymous)
First of all, "Secure Enclave" is marketing babble for something that is much more than a tamper-proof crypto-processor; it appears to cover a fingerprint sensor as well, among other things.

Second, it appears that there *is* some secure hardware, enough to keep the feds from even knowing the phone's IDs and keys. The flash memory can be physically removed and reattached, although this requires some specialized hardware; these keys are kept somewhere else, even on the 5C.

Which appears not to be the case on Android devices.
From: (Anonymous)
You stated:
> Any attempt to update the phone using the Google-provided image will fail - instead,
> they must be re-signed using the keys that were installed in the device.

And the article on the Nexus devices states:
> The OS is (nominally) secure and prevents anything from accessing the raw MTD
> devices. The bootloader will only allow the user to write to partitions if
> it's unlocked. The recovery image will only permit you to install images that
> are signed with a trusted key. In combination, these facts mean that it's
> impossible for an attacker to modify the OS image without unlocking the
> bootloader[1], and unlocking the bootloader wipes all your data. You'll
> probably notice that.

To summarize the iOS situation: the code is non-free, and the phone requires the OS to be signed, using a chain of trust.

Let's simplify things and assume we're dealing with tablets that don't have a modem. The underlying issue that interests me most is resolving the conflict between security and freedom in the context of code signing.

How the hardware works with (ARM) SoCs:
---------------------------------------
In that case, on iOS and Android devices, the boot chain usually works this way: the SoC (System on a Chip) has a boot ROM.

A boot ROM is just a fancy term for instructions burned into the silicon that get executed first at boot.

So as soon as the power is stabilized, these instructions are executed. To mandate code signing, the manufacturer usually burns the hash of some keys into the SoC's fuses[2].

This is irreversible, and if done correctly, there is no longer any way to make the boot ROM load and run your own code. Instead it will only run signed code. The process is well documented for Freescale's i.MX53[1].

With that, you can enforce signatures of what's loaded next.
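As a rough sketch of that first verification step (field and function names are invented for illustration; a real boot ROM does this in mask ROM against the fuse bank):

```python
import hashlib

# Sketch of the boot ROM check described above: the SoC fuses hold the hash
# of the vendor's public key, and the boot ROM refuses to run a second-stage
# image whose embedded key doesn't match or whose signature is invalid.
# The image layout and helper names are invented for illustration.
def boot_rom_check(fused_key_hash: bytes, image: dict) -> bool:
    # image = {"pubkey": bytes, "payload": bytes, "signature": bytes}
    if hashlib.sha256(image["pubkey"]).digest() != fused_key_hash:
        return False   # embedded key is not the one whose hash was fused
    return verify_signature(image["pubkey"], image["payload"], image["signature"])

def verify_signature(pubkey: bytes, payload: bytes, signature: bytes) -> bool:
    ...   # RSA/ECDSA verification, as in the earlier sketches in this thread
```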
Another feature commonly found on SoCs is the ability to boot from a variety of storage devices, such as NAND, SPI flash, SD/MMC, eMMC, and so on. They also often support loading code through various buses such as USB and serial.

Which buses and storage devices it tries, and the order in which it tries them, is often configurable through resistors.

On Apple's iOS devices, there isn't much variation: a chain of trust is implemented. On Android devices, even though most SoCs support what I described above, it varies among products and manufacturers.

On the Nexus devices, the signature is enforced. This is what permits the vendor to make sure the user partition is erased (and most probably some other stuff related to DRM too) when the user decides to unlock the bootloader. The vendor is also free to implement whatever it wishes in that bootloader.

However, this doesn't give the user the freedom to replace the bootloader with one she likes.

On some Android devices, that signature is not enforced. This is very rare on phones. I'm not aware of any Allwinner device that enforces the signatures. As for smartphones, the LG Optimus Black[3], the Goldelico GTA04 and the Openmokos don't enforce the signature.

I've left it open to interpretation who "the manufacturer" and "the vendor" are. In some cases it's not that clear; they may designate not the device's vendor/manufacturer but the SoC's.

I care more about the freedom, for users, to run the code they wish than about having a device that implements a supposedly[4] secure chain of trust.

Given the small number of phones that ship with such signature enforcement disabled, it would be great if we had a solution to propose to the hardware manufacturers.

From an end user's perspective, code signing can be used to assure the integrity of a device's code. This is to prevent evil-maid attacks.

Keeping secrets in hardware doesn't work against wealthy adversaries: they have very expensive labs and equipment that can be used to "patch" a chip. Cheaper attacks also succeed in many cases.

Power glitching could probably result in arbitrary code execution if the attacker has physical access to the hardware.

Assuming we only want to detect the lack of integrity, and react properly to it when we can, in order to prevent evil-maid attacks, how do we go from there?

Denis (GNUtoo).
From: (Anonymous)
Look into what the payment industry is doing. They have been, for decades now, putting devices that process payments into the physical control of untrusted users. And it works.

