The US Government is attempting to force Apple to build a signed image that can be flashed onto an iPhone used by one of the San Bernardino shooters. To their credit, Apple have pushed back against this - there's an explanation of why doing so would be dangerous here. But what's noteworthy is that Apple are arguing that they shouldn't do this, not that they can't do this - Apple (and many other phone manufacturers) have designed their phones such that they can replace the firmware with anything they want.

In order to prevent unauthorised firmware being installed on a device, Apple (and most other vendors) verify that any firmware updates are signed with a trusted key. The FBI don't have access to Apple's firmware signing keys, and as a result they're unable to simply replace the software themselves. That's why they're asking Apple to build a new firmware image, sign it with their private key and provide it to the FBI.
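
As a rough illustration of what that check amounts to (using Ed25519 via Python's "cryptography" package purely as an example; Apple's actual formats and algorithms differ), the vendor signs the image with a private key it never releases, and the device will only accept an image that verifies against the baked-in public key:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side: the signing key never leaves the vendor.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"...firmware image contents..."
signature = vendor_key.sign(firmware)

# Device side: only the public half is present on the phone.
device_trusted_key = vendor_key.public_key()

def firmware_is_authorised(image: bytes, sig: bytes) -> bool:
    """Accept an update only if it verifies against the vendor's public key."""
    try:
        device_trusted_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

print(firmware_is_authorised(firmware, signature))               # True
print(firmware_is_authorised(firmware + b"tampered", signature)) # False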

But what do we mean by "unauthorised firmware"? In this case, it's "not authorised by Apple" - Apple can sign whatever they want, and your iPhone will happily accept that update. As owner of the device, there's no way for you to reconfigure it such that it will accept your updates. And, perhaps worse, there's no way to reconfigure it such that it will reject Apple's.

I've previously written about how it's possible to reconfigure a subset of Android devices so that they trust your images and nobody else's. Any attempt to update the phone using the Google-provided image will fail - instead, the image must first be re-signed using the keys that were installed in the device. No matter what legal mechanisms were used against them, Google would be unable to produce a signed firmware image that could be installed on the device without your consent. The mechanism I proposed is complicated and annoying, but it could be integrated into the standard vendor update process such that you simply type a password to unlock a key for re-signing.
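
A minimal sketch of how that re-signing step could look, assuming the user's key is kept encrypted on disk and unlocked with a password at update time (names and storage format here are illustrative, not the mechanism from the linked post):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

# One-time setup: generate a user key, install its public half in the
# device's verification store, and keep the private half password-protected.
user_key = Ed25519PrivateKey.generate()
encrypted_key_pem = user_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.BestAvailableEncryption(b"correct horse battery staple"),
)

def resign_update(vendor_image: bytes, password: bytes) -> bytes:
    """Unlock the user's key with their password and sign the vendor-provided image."""
    key = serialization.load_pem_private_key(encrypted_key_pem, password)
    return key.sign(vendor_image)

# At update time, only someone who knows the password can produce a signature
# that the reconfigured device will accept.
user_signature = resign_update(b"vendor firmware image", b"correct horse battery staple")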

Why's this important? Sure, in this case the government is attempting to obtain the contents of a phone that belonged to an actual terrorist. But not all cases governments bring will be as legitimate, and not all manufacturers are Apple. Governments will request that manufacturers build new firmware that allows them to monitor the behaviour of activists. They'll attempt to obtain signing keys and use them directly to build backdoors that let them obtain messages sent to journalists. They'll be able to reflash phones to plant evidence to discredit opposition politicians.

We can't rely on Apple to fight every case - if it becomes politically or financially expedient for them to do so, they may well change their policy. And we can't rely on the US government only seeking to obtain this kind of backdoor in clear-cut cases - there's a risk that these techniques will be used against innocent people. The only way for Apple (and all other phone manufacturers) to protect users is to allow users to remove Apple's validation keys and substitute their own. If Apple genuinely value user privacy over Apple's control of a device, it shouldn't be a difficult decision to make.

Date: 2016-02-22 10:47 pm (UTC)
From: (Anonymous)
Honestly, I can't see Apple doing this. More than anything else, they're moved by profit, and for profit, they need _control_. Control that they'll never put in the hands of the users. I could see another manufacturer trying this, but not Apple in its current corporate mindset.

Bottom line is, Tim Cook can say all the flowery stuff about freedom, making the world a better place and so on, but that's all a bunch of PR crap if his company can't act where it counts. It's those crucial moments that teach us who really acts for freedom (= not just freedom for your company), and who is merely spouting feel-good bravado.

I've been reading your blog for quite a while Matthew, and even gave the address to my students so they can get a good view of the industry from an insider. One thing I always point out in class is that the tech sector is one of the biggest today, but its big companies are still pushed around by governments like a bunch of toddlers. You wouldn't see a company from a more established sector being bossed around by the government like that. I don't know, I thought maybe we had already matured enough as an industry that this kind of thing would no longer be commonplace. I recall when I had just started teaching in 2002, and there was that FBI "nudge" for anti-virus companies to allow backdoors in their products, all in the name of finding terrorists. And today, we're still being pushed around the same way.

Date: 2016-02-23 02:12 am (UTC)
gerald_duck: (frontal)
From: [personal profile] gerald_duck
Dumb question: can you not protect an Android device simply by turning off automatic updates?

Even if you're then not especially careful about which updates you accept, you'd still avoid any risk of an upgrade being applied while the phone was locked… no?

Date: 2016-02-23 07:25 am (UTC)
From: (Anonymous)
You seem to miss one important thing: on Apple devices no one can replace the keys, as these are stored in a secure chip that will self-destruct if it detects any attempt at tampering. On most (or maybe even all?) Android phones this key is not stored in tamper-proof hardware, and so the feds wouldn't really need Google or the manufacturer to do anything; they'd just open the phone, flash the update public key directly, and do whatever they want.

So no, Android isn't more secure here.

A valid thought experiment

Date: 2016-02-23 05:37 pm (UTC)
From: (Anonymous)
I just can't help thinking again that you're reaching. There's a hole in the justification that I've seen others point out.

Looking at the explanation in your Nexus post, why would you want to trigger a signed upgrade from the bootloader (which preserves user data)?

If upgrades can only be triggered from the OS, then the OS can require user authentication, which is the same UX you described in this post (the re-signing solution). IOW, why not require that the iPhone be unlocked before it will approve the installation of a new update?

So I think Apple could re-design their phone so that they couldn't comply with this court order. It would be galling and expensive to have to redo it, but I can't see that it would cause any other problem.

I agree with the point you're making in general - "restricted boot" of closed-source code is very concerning. I remain interested in the work you've done, showing that secure boot can still be compatible with Free software.

-- Alan Jenkins

Apple

Date: 2016-02-23 06:08 pm (UTC)
From: (Anonymous)
> We can't rely on Apple to fight every case

Indeed, we cannot - they would not even have fought this one if the FBI had been so kind as to ask them under seal. They don't care so much about their users' data as they care about their image.

See <http://www.nytimes.com/2016/02/19/technology/how-tim-cook-became-a-bulwark-for-digital-privacy.html?_r=0>:
> Apple had asked the F.B.I. to issue its application for the tool under seal. But the government made it public, prompting Mr. Cook to go into bunker mode to draft a response, according to people privy to the discussions, who spoke on condition of anonymity. The result was the letter that Mr. Cook signed on Tuesday, where he argued that it set a “dangerous precedent” for a company to be forced to build tools for the government that weaken security.

Firmware updates

Date: 2016-02-23 07:44 pm (UTC)
From: (Anonymous)
Apple screwed up here. An update of the firmware should wipe the
device, i.e., wipe the old secret key.

For a legitimate use case, the backup-update-restore cycle is inconvenient, but I doubt it needs to happen often.
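
A sketch of the behaviour being suggested, with entirely hypothetical names: if an incoming firmware image hasn't been approved by the device's current owner, the update path destroys the data-encryption secret before flashing, so new firmware can never be used to unlock existing data.

import os

def flash(image: bytes) -> None:
    pass  # stand-in for writing the image to the device

class SecureStore:
    def __init__(self) -> None:
        self.data_key = os.urandom(32)   # protects the user's data at rest

    def wipe(self) -> None:
        self.data_key = None             # existing data becomes unrecoverable

def install_firmware(store: SecureStore, image: bytes, owner_approved: bool) -> None:
    if not owner_approved:
        store.wipe()                     # the update proceeds, but the data is gone
    flash(image)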

Does Android suffer from the same problem?

Date: 2016-02-24 03:07 am (UTC)
From: (Anonymous)
I take it the issue here is that the PIN is necessarily weak, so it is only secure because the secure enclave prevents you from testing more than a few (10?) combinations. The root cause of the problem is not that Apple can upgrade the firmware in the phone, it is that it allows itself to upgrade the firmware in the secure enclave, and such an upgrade can turn off the secure enclave's protections.
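
To put rough numbers on that (the ~80 ms per attempt figure is one Apple has cited in its iOS security documentation; the rest is simple arithmetic and assumptions for illustration):

pin_space = 10 ** 4                      # 10,000 possible 4-digit PINs

# With the enclave enforcing escalating delays and an optional wipe after 10
# failed attempts, an attacker gets roughly a 0.1% chance of guessing right.
print(10 / pin_space)

# If a firmware change removed those protections, attempts would be bounded
# only by the key-derivation cost (~80 ms each), so the space falls quickly.
seconds_per_attempt = 0.08
print(pin_space * seconds_per_attempt / 60, "minutes to try every PIN")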

You mentioned that Android also uses signed firmware, but that you can load your own keys. But it wasn't clear whether Android (or at least the current Nexus phones) allows you to upgrade the firmware in its implementation of the "secure enclave", be it TrustZone, a TPM or whatever else they use. If they do, it seems just as broken as Apple's implementation.

I'm not sure why they allow upgrading of the secure firmware at all. We've had TPMs for over a decade now. I'm guessing most of them aren't firmware-upgradeable, but that hasn't mattered because they are, in the end, very well defined and comparatively simple machines - simple enough that, with a bit of effort, their makers can be fairly sure they have no bugs despite the "no non-trivial software is free of bugs" law, and so they don't need to be upgradeable.

Also, I notice that Nuvoton advertises that their TPMs can support TPM 2.0 via a firmware upgrade. This smells like it could have the same problem, which would be pretty sad. I guess you could allow a firmware upgrade if doing it destroyed all the secrets it protected. I wonder if they do that?

Date: 2016-02-24 03:40 am (UTC)
From: (Anonymous)
Why not require a user password to update the firmware?

Date: 2016-02-24 01:39 pm (UTC)
From: (Anonymous)
I'll just mention my large incomplete essay on this subject: https://github.com/pjc50/pjc50.github.io/blob/master/pentagram-control.md

Just a shortcut

Date: 2016-02-24 11:26 pm (UTC)
From: (Anonymous)
We should not forget that the government isn't asking Apple for anything that they couldn't do themselves. I believe there is nothing that prevents the government from taking out the storage hardware, extracting the encrypted data, and brute-forcing the key on the copy. Asking Apple for modified firmware is a convenience, because they can then use the firmware's password interface and don't have to reverse engineer how exactly the encryption is implemented, but it doesn't give the government any capabilities that it doesn't already have.
From: (Anonymous)
You stated:
> Any attempt to update the phone using the Google-provided image will fail - instead,
> they must be re-signed using the keys that were installed in the device.

And the article on the Nexus devices states:
> The OS is (nominally) secure and prevents anything from accessing the raw MTD
> devices. The bootloader will only allow the user to write to partitions if
> it's unlocked. The recovery image will only permit you to install images that
> are signed with a trusted key. In combination, these facts mean that it's
> impossible for an attacker to modify the OS image without unlocking the
> bootloader[1], and unlocking the bootloader wipes all your data. You'll
> probably notice that.

To summarize the iOS situation: the code is non-free, and the phone requires
the OS to be signed and uses a chain of trust.

Let's simplify things and assume we're dealing with tablets that don't have a
modem. The underlying issue that interests me most is fixing the conflict
between security and freedom in the context of code signing.

How the hardware works with (ARM) SOCs:
---------------------------------------
In that case, on iOS and Android devices, the boot chain usually works this way:
the SOC (System on a Chip) has a bootrom.

A bootrom is just a fancy term for instructions burned inside the silicon that
get executed first at boot.

So as soon as the power is stabilized, these instructions are executed.
To mandate code signing, the manufacturer usually burns the hash of some
keys into the SOC fuses[2].

This is irreversible, and if done correctly there is no longer any way to make
the bootrom load and run your own code; instead it will only run signed code.
The process is well documented for Freescale's i.MX53[1].

With that, you can enforce signatures of what's loaded next.
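
A sketch of that fuse-based scheme, with illustrative names and formats rather
than any particular SOC's actual layout: the bootrom holds only a hash of the
vendor's public key in fuses, and will only jump to a next-stage image whose
accompanying key matches that hash and whose signature verifies under it.

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# "Manufacture" step, simulated: generate the vendor key and burn a hash of
# its public half into the SOC fuses.
vendor_key = Ed25519PrivateKey.generate()
vendor_pub = vendor_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
FUSED_KEY_HASH = hashlib.sha256(vendor_pub).digest()

def bootrom_accepts(pubkey_bytes: bytes, image: bytes, sig: bytes) -> bool:
    """What the bootrom checks before running the next-stage bootloader."""
    # 1. The key shipped alongside the image must hash to the fused value.
    if hashlib.sha256(pubkey_bytes).digest() != FUSED_KEY_HASH:
        return False
    # 2. The image must carry a valid signature made with that key.
    try:
        Ed25519PublicKey.from_public_bytes(pubkey_bytes).verify(sig, image)
        return True
    except InvalidSignature:
        return False

bootloader = b"next-stage bootloader image"
print(bootrom_accepts(vendor_pub, bootloader, vendor_key.sign(bootloader)))        # True
print(bootrom_accepts(vendor_pub, bootloader + b"x", vendor_key.sign(bootloader))) # False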
Another feature that you can commonly find on SOCs is the ability to boot from
a variety of storage devices, such as NAND, SPI flash, SD/MMC, eMMC,
and so on.
It also often supports loading code through various buses such as USB and
serial.

Which buses and storage devices it tries, and the order in which it does so,
are often configurable through resistors.

On Apple's iOS devices there isn't much variation: a chain of trust is
implemented. On Android devices, even though most SOCs do support what I
described above, it varies among products and manufacturers.

On the Nexus devices, the signature is enforced. This is what permits the
vendor to make sure the user partition is erased (and most probably some other
stuff related to DRM too) when the user decides to unlock the bootloader.
The vendor is also free to implement whatever it wishes in that bootloader.

However, this doesn't give the user freedom to replace the bootloader with the one
she likes.

On some Android devices, that signature is not enforced. This is very rare on
phones. I'm not aware of any Allwinner device that enforces the signatures.
As for smartphones, the LG Optimus Black[3], the Goldelico GTA04, and the
Openmokos don't enforce the signature.

I've left who "the manufacturer" and "the vendor" are open to interpretation.
In some cases it's not that clear: they may refer not to the device
vendor/manufacturer but to the SOC's.

I care more about the freedom, for users, to run the code they wish than
about having a device that implements a supposedly[4] secure chain of trust.

Given the small number of phones that ship with such signature enforcement
disabled, it would be great if we had a solution to propose to hardware
manufacturers.

From an end user's perspective, code signing can be used to ensure the
integrity of a device's code, in order to prevent evil-maid attacks.

Keeping secrets in hardware doesn't work when your adversary model includes
wealthy attackers: they have very expensive labs and equipment that can be
used to "patch" a chip. Cheaper attacks also succeed in many cases.

Power glitching could probably result in arbitrary code execution if the
attacker has physical access to the hardware.

Assuming we only want to detect a lack of integrity, and react properly to it
when we can, in order to prevent evil-maid attacks, where do we go from there?

Denis (GNUtoo).

Profile

Matthew Garrett

About Matthew

Power management, mobile and firmware developer on Linux. Security developer at Aurora. Ex-biologist. mjg59 on Twitter. Content here should not be interpreted as the opinion of my employer. Also on Mastodon.
