The difference with Android, signing, and freedom.
Date: 2016-03-01 02:42 pm (UTC)
> Any attempt to update the phone using the Google-provided image will fail - instead,
> they must be re-signed using the keys that were installed in the device.
And the article on the Nexus devices states:
> The OS is (nominally) secure and prevents anything from accessing the raw MTD
> devices. The bootloader will only allow the user to write to partitions if
> it's unlocked. The recovery image will only permit you to install images that
> are signed with a trusted key. In combination, these facts mean that it's
> impossible for an attacker to modify the OS image without unlocking the
> bootloader[1], and unlocking the bootloader wipes all your data. You'll
> probably notice that.
To summarize the iOS situation: the code is non-free, and the phone requires the
OS to be signed, using a chain of trust.
Let's simplify things and assume we're dealing with tablets that don't have a modem.
The underlying issue that interests me most is fixing the conflict between security
and freedom in the context of code signing.
How the hardware works with (ARM) SOCs:
---------------------------------------
On iOS and Android devices, the boot chain usually works this way:
the SOC (System on a Chip) has a bootrom.
A bootrom is just a fancy term for instructions burned inside the silicon that get
executed first at boot.
As soon as the power is stable, these instructions are executed.
To mandate code signing, the manufacturer usually burns the hash of some
keys into the SOC's fuses[2].
This is irreversible, and if done correctly, there is no way anymore to make the
bootrom load and run your own code; it will only run signed code.
The process is well documented for Freescale's i.MX53[1].
With that, you can enforce signatures on whatever is loaded next.
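To make the fuse mechanism concrete, here is a minimal sketch in Python. The key material and the verify_signature callback are made up for illustration; a real bootrom implements this in mask ROM with RSA or similar, but the structure is the same: trust the key only if its hash matches the fuses, then let that key decide whether the next stage runs.

```python
import hashlib

# Hypothetical fuse value: the SHA-256 hash of the vendor's public key,
# burned at the factory. Irreversible once the fuses are blown.
FUSE_KEY_HASH = hashlib.sha256(b"vendor-public-key").digest()

def bootrom_accepts_key(public_key: bytes) -> bool:
    """The bootrom hashes the key shipped in the image header and
    compares it against the fuse value."""
    return hashlib.sha256(public_key).digest() == FUSE_KEY_HASH

def load_next_stage(image: bytes, public_key: bytes, verify_signature) -> bool:
    """Run the next boot stage only if the key is trusted and the
    image's signature (checked by an assumed callback) is valid."""
    if not bootrom_accepts_key(public_key):
        return False  # key hash doesn't match the fuses: refuse to boot
    return verify_signature(public_key, image)
```

Note that the fuses hold only a hash, not the key itself: the key travels with the image, which keeps the amount of one-time-programmable storage needed small.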
Another feature that you can commonly find on SOCs is the ability to boot from
a variety of storage devices, such as NAND, SPI flash, SD/MMC, eMMC,
and so on.
It also often supports loading code through various buses, such as USB and serial.
Which buses and storage devices it tries, and the order in which it does so, is
often configurable through resistors.
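As an illustration, that strap-resistor configuration amounts to a lookup table the bootrom consults at reset. The pin values and boot orders below are invented for the example, not taken from any real SOC:

```python
# The resistors pull a few SOC pins high or low; the bootrom samples them
# at reset and walks the corresponding list of sources in order.
BOOT_ORDER = {
    (0, 0): ["eMMC", "SD/MMC"],
    (0, 1): ["NAND", "SPI flash"],
    (1, 0): ["SD/MMC", "eMMC", "USB"],
    (1, 1): ["USB", "Serial"],  # e.g. a recovery mode: load code over a bus
}

def boot_sources(strap_pins: tuple) -> list:
    """Return the ordered list of sources the bootrom will try."""
    return BOOT_ORDER[strap_pins]
```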
On Apple's iOS, there isn't much variation: a chain of trust is implemented.
On Android devices, even if most SOCs do support what I described above,
it varies among products and manufacturers.
On the Nexus devices, the signature is enforced. This is what permits the vendor to
make sure the user partition is erased (and most probably some other stuff related
to DRM too) when the user decides to unlock the bootloader.
The vendor is also free to implement whatever it wishes in that bootloader.
However, this doesn't give the user freedom to replace the bootloader with the one
she likes.
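The wipe-on-unlock policy can be sketched as a toy state machine (this is not real fastboot code, and the names are illustrative): the point is that user data is erased before the unlocked flag is set, so a locked device never hands its data to unsigned code.

```python
class Bootloader:
    """Toy model of a Nexus-style bootloader's unlock policy."""

    def __init__(self):
        self.unlocked = False
        self.userdata = {"photos": b"...", "messages": b"..."}

    def oem_unlock(self):
        self.userdata.clear()  # wipe first...
        self.unlocked = True   # ...then allow unsigned images

    def can_flash_unsigned(self) -> bool:
        return self.unlocked
```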
On some Android devices, that signature is not enforced. This is very rare on phones.
I'm not aware of any Allwinner device that enforces the signatures.
As for smartphones, the LG Optimus Black[3], the Goldelico GTA04, and
the Openmokos don't enforce the signature.
I've left it open to interpretation who "the manufacturer" and "the vendor" are.
In some cases it's not that clear: they may designate not the device
vendor/manufacturer but the SOC's.
I care more about the freedom, for users, to run the code they wish than
about having a device that implements a supposedly[4] secure chain of trust.
Given the small number of phones that ship with such signature enforcement
disabled, it would be great if we had a solution to propose to hardware
manufacturers.
From an end user's perspective, code signing can be used to ensure the integrity
of a device's code. This helps prevent evil-maid attacks.
Keeping secrets in hardware doesn't work against well-funded adversaries:
they have very expensive labs and equipment that can be used to "patch" a chip.
Cheaper attacks also succeed in many cases.
Power glitching could probably result in arbitrary code execution if the
attacker has physical access to the hardware.
Assuming we only want to detect the lack of integrity, and react properly to it
when we can, to prevent evil-maid attacks, where do we go from there?
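One hedged sketch of the detection half, assuming the expected hash is stored somewhere the attacker can't silently rewrite (or is carried by the user): measure the boot code at startup and refuse to proceed on a mismatch. This only detects tampering; the reaction policy is a separate question, and the function names are illustrative.

```python
import hashlib
import hmac

def measure(boot_image: bytes) -> bytes:
    """Compute a measurement (SHA-256 hash) of the boot code."""
    return hashlib.sha256(boot_image).digest()

def integrity_ok(boot_image: bytes, expected: bytes) -> bool:
    """Compare the fresh measurement against the trusted reference.
    hmac.compare_digest keeps the comparison constant-time."""
    return hmac.compare_digest(measure(boot_image), expected)
```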
Denis (GNUtoo).