Ubuntu 16.04 was released today, with one of the highlights being the new Snap package format. Snaps are intended to make it easier to distribute applications for Ubuntu - they include their dependencies rather than relying on the archive, they can be updated on a schedule that's separate from the distribution itself and they're confined by a strong security policy that makes it impossible for an app to steal your data.

At least, that's what Canonical assert. It's true in a sense - if you're using Snap packages on Mir (ie, Ubuntu mobile) then there's a genuine improvement in security. But if you're using X11 (ie, Ubuntu desktop) it's horribly, awfully misleading. Any Snap package you install is completely capable of copying all your private data to wherever it wants with very little difficulty.

The problem here is the X11 windowing system. X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window. An application that has no access to any of your private data can wait until your session is idle, open an unconfined terminal and then use curl to send your data to a remote site. As long as Ubuntu desktop still uses X11, the Snap format provides you with very little meaningful security. Mir and Wayland both fix this, which is why Wayland is a prerequisite for the sandboxed xdg-app design.
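
To make this concrete, here's a minimal sketch of the injection half using python-xlib (the library choice is mine - any binding that exposes the XTEST extension will do):

    # Synthetic key injection via the XTEST extension. Any client with
    # access to the display can do this - no privileges required.
    from Xlib import X, XK, display
    from Xlib.ext import xtest

    d = display.Display()

    def press(keysym):
        keycode = d.keysym_to_keycode(keysym)
        xtest.fake_input(d, X.KeyPress, keycode)
        xtest.fake_input(d, X.KeyRelease, keycode)
        d.sync()

    # Types into whatever window currently has focus, even though this
    # process has no other access to that window's contents.
    for ch in "xterm":
        press(XK.string_to_keysym(ch))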

I've produced a quick proof of concept of this. Grab XEvilTeddy from git, install Snapcraft (it's in 16.04), run snapcraft snap, then sudo snap install xevilteddy*.snap and /snap/bin/xevilteddy.xteddy. An adorable teddy bear! How cute. Now open Firefox and start typing, then check back in the terminal you launched it from. Oh no! All my secrets. Open another terminal window and give it focus. Oh no! An injected command that could instead have been a curl session that uploaded your private SSH keys to somewhere that's not going to respect your privacy.

The Snap format provides a lot of underlying technology that is a great step towards being able to protect systems against untrustworthy third-party applications, and once Ubuntu shifts to using Mir by default it'll be much better than the status quo. But right now the protections it provides are easily circumvented, and it's disingenuous to claim that it currently gives desktop users any real security.
Around a year ago I wrote some patches in an attempt to improve power management on Haswell and Broadwell systems by configuring Serial ATA power management appropriately. I got a couple of reports of them triggering SATA errors for some users, couldn't reproduce them myself and so didn't have a lot of confidence in them. Time passed.

I've been working on power management stuff again this week, so it seemed like a good opportunity to revisit these. I've made a few changes and pushed a couple of trees - one against master and one against 4.5.

First, these probably only have relevance to users of mobile Intel parts in the U or S range (/proc/cpuinfo will tell you - you're looking for a four-digit number that starts with 4 (Haswell), 5 (Broadwell) or 6 (Skylake) and ends with U or S), and won't do anything unless you have SATA drives (including PCI-based SATA). To test them, first disable anything like TLP that might alter your SATA link power management policy. Then check powertop - you should only be getting to PC3 at best. Build a kernel with these patches and boot it. /sys/class/scsi_host/*/link_power_management_policy should read "firmware". Check powertop and see whether you're getting into deeper PC states. Now run your system for a while and check the kernel log for any SATA errors that you didn't see before.
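
If you'd rather script the check than eyeball it, something like this works - a quick sketch, with the SKU matching simply following the description above rather than being an exhaustive list of affected parts:

    # Report the CPU model and the current SATA link power management
    # policy for each host adapter.
    import glob
    import re

    with open("/proc/cpuinfo") as f:
        model = re.search(r"model name\s*:\s*(.*)", f.read()).group(1)

    if re.search(r"[456]\d{3}[US]\b", model):
        print("Potentially affected part:", model)
    else:
        print("Probably not an affected part:", model)

    # With the patches applied and active, these should read "firmware".
    for path in glob.glob("/sys/class/scsi_host/*/link_power_management_policy"):
        with open(path) as f:
            print(path, "->", f.read().strip())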

Let me know if you see SATA errors and are willing to help debug this, and leave a comment if you don't see any improvement in PC states.
The first time I was paid to do software development came as something of a surprise to me. I was working as a sysadmin in a computational physics research group when a friend asked me if I'd be willing to talk to her PhD supervisor. I had nothing better to do, so said yes. And that was how I started the evening having dinner with David MacKay, and ended the evening better fed, a little drunker and having agreed in principle to be paid to write free software.

I'd been hired to work on Dasher, an information-efficient text entry system. It had been developed by one of David's students as a practical demonstration of arithmetic encoding after David had realised that presenting a visualisation of an effective compression algorithm allowed you to compose text without having to enter as much information into the system. At first this was merely a neat toy, but it soon became clear that the benefits of Dasher had a great deal of overlap with good accessibility software. It required much less precision of input, it made it easy to correct mistakes (you merely had to reverse direction in order to start zooming back out of the text you had entered) and it worked with a variety of input technologies from mice to eye tracking to breathing. My job was to take this codebase and turn it into a project that would be interesting to external developers.

In the year I worked with David, we turned Dasher from a research project into a well-integrated component of Gnome, improved its support for Windows, accepted code from an external contributor who ported it to OS X (using an OpenGL canvas!) and wrote ports for a range of handheld devices. We added code that allowed Dasher to directly control the UI of other applications, making it possible for people to drive word processors without having to leave Dasher. We taught Dasher to speak. We strove to avoid the mistakes present in so many other pieces of accessibility software, such as configuration that could only be managed by an (expensive!) external consultant. And we visited Dasher users and learned how they used it and what more they needed, then went back home and did what we could to provide that.

Working on Dasher was an incredible opportunity. I was involved in the development of exciting code. I spoke on it at multiple conferences. I became part of the Gnome community. I visited the USA for the first time. I entered people's homes and taught them how to use Dasher and experienced their joy as they realised that they could now communicate up to an order of magnitude more quickly. I wrote software that had a meaningful impact on the lives of other people.

Working with David was certainly not easy. Our weekly design meetings were, charitably, intense. He had an astonishing number of ideas, and my job was to figure out how to implement them while (a) not making the application overly complicated and (b) convincing David that it still did everything he wanted. One memorable meeting involved me gradually arguing him down from wanting five new checkboxes to agreeing that there were only two combinations that actually made sense (and hence a single checkbox) - and then admitting that this was broadly equivalent to an existing UI element, so we could just change the behaviour of that slightly without adding anything. I took the opportunity to delete an additional menu item in the process.

I was already aware of the importance of free software in terms of developers, but working with David made it clear to me how important it was to users as well. A community formed around Dasher, helping us improve it and allowing us to develop support for new use cases that made the difference between someone being able to type at two words per minute and being able to manage twenty. David saw that this collaborative development would be vital to creating something bigger than his original ideas, and it succeeded in ways he couldn't have hoped for.

I spent a year in the group and then went back to biology. David went on to channel his strong feelings about social responsibility into issues such as sustainable energy, writing a freely available book on the topic. He served as chief adviser to the UK Department of Energy and Climate Change for five years. And earlier this year he was awarded a knighthood for his services to scientific outreach.

David died yesterday. It's unlikely that I'll ever come close to what he accomplished, but he provided me with much of the inspiration to try to do so anyway. The world is already a less fascinating place without him.
(Edit to add: this issue is restricted to the mobile SKUs. Desktop parts have very different power management behaviour)

Linux 4.5 seems to have got Intel's Skylake platform (ie, 6th-generation Core CPUs) to the point where graphics work pretty reliably, which is great progress (4.4 tended to lose all my windows every so often, especially over suspend/resume). I'm even running Wayland happily. Unfortunately one of the reasons I have a laptop is that I want to be able to do things like use it on battery, and power consumption's an important part of that. Skylake continues the trend from Haswell of moving to an SoC-type model where clock and power domains are shared between components that were previously entirely independent, and so you can't enter deep power saving states unless multiple components all have the correct power management configuration. On Haswell/Broadwell this manifested in the form of Serial ATA link power management being involved in preventing the package from going into deep power saving states - setting that up correctly resulted in a reduction in full-system power consumption of about 40%[1].

I've now got a Skylake platform with a nice shiny NVMe device, so Serial ATA policy isn't relevant (the platform doesn't even expose a SATA controller). The deepest power saving state I can get into is PC3, despite Skylake supporting PC8 - so I'm probably consuming about 40% more power than I should be. And nobody seems to know what needs to be done to fix this. I've found no public documentation on the power management dependencies on Skylake. Turning on everything in Powertop doesn't improve anything. My battery life is pretty poor and the system is pretty warm.
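
For reference, this is roughly how I check which package C-states are actually being entered - a sketch that samples the documented package residency MSRs (modprobe msr first, run as root; which counters are implemented varies by SKU):

    # Sample the package C-state residency counters twice and see which
    # states ever accumulated any residency in between.
    import struct
    import time

    MSRS = {"PC2": 0x60D, "PC3": 0x3F8, "PC6": 0x3F9, "PC7": 0x3FA,
            "PC8": 0x630, "PC9": 0x631, "PC10": 0x632}

    def read_msr(reg, cpu=0):
        try:
            with open("/dev/cpu/%d/msr" % cpu, "rb") as f:
                f.seek(reg)
                return struct.unpack("<Q", f.read(8))[0]
        except OSError:
            return None    # counter not implemented on this part

    before = {name: read_msr(reg) for name, reg in MSRS.items()}
    time.sleep(10)
    for name, reg in MSRS.items():
        if before[name] is None:
            continue
        delta = read_msr(reg) - before[name]
        print("%-4s %s" % (name, "entered" if delta else "never entered"))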

The best thing about this is the following statement from page 64 of the 6th Generation Intel® Processor Datasheet for U-Platforms:

Caution: Long term reliability cannot be assured unless all the Low-Power Idle States are enabled.

which is pretty concerning. Without support for states deeper than PC3, Linux is running in a configuration that Intel imply may trigger premature failure. That's obviously not good. Until this situation is improved, you probably shouldn't buy any Skylake systems if you're planning on running Linux.

[1] These patches never went upstream. Someone reported that they resulted in their SSD throwing errors and I couldn't find anybody with deeper levels of SATA experience who was interested in working on the problem. Intel's AHCI drivers for Windows do the right thing, but I couldn't find anybody at Intel who could get any information from their Windows driver team.
I've been working on TPMTOTP a little this weekend. I merged a pull request that adds command-line argument handling, which includes the ability to choose the set of PCRs you want to seal to without rebuilding the tools, and also lets you print the base32 encoding of the secret rather than the QR code so you can import it into a wider range of devices. More importantly, it also adds support for setting the expected PCR values on the command line rather than reading them out of the TPM, so you can now re-seal the secret against new values before rebooting.
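
For anyone wondering what a device does with that base32 secret: it's standard RFC 6238 TOTP. A minimal sketch, assuming the usual 30-second step and 6 digits, with a made-up secret:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(b32_secret, digits=6, step=30):
        key = base64.b32decode(b32_secret.upper())
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return "%0*d" % (digits, code % 10 ** digits)

    print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))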

I also wrote some new code myself. TPMTOTP is designed to be usable in the initramfs, allowing you to validate system state before typing in your passphrase. Unfortunately the initramfs itself is one of the things that's measured. So, you end up with something of a chicken and egg problem - TPMTOTP needs access to the secret, and the obvious thing to do is to put the secret in the initramfs. But the secret is sealed against the hash of the initramfs, and so you can't generate the secret until the initramfs has been built. Modify the initramfs to insert the secret and you change the hash, so the secret is no longer released. Boo.

On EFI systems you can handle this by sticking the secret in an EFI variable (there's some special-casing in the code to deal with the additional metadata on the front of things you read out of efivarfs). But that's not terribly useful if you're not on an EFI system. Thankfully, there's a way around this. TPMs have a small quantity of nvram built into them, so we can stick the secret there. If you pass the -n argument to sealdata, that'll happen. The unseal apps will attempt to pull the secret out of nvram before falling back to looking for a file, so things should just magically work.

I think it's pretty feature complete now, other than TPM2 support? That's on my list.
There's a piece of software called XScreenSaver. It attempts to fill two somewhat disparate roles:
  • Provide a functioning screen lock on systems using the X11 windowing system, a job made incredibly difficult due to a variety of design misfeatures in said windowing system[1]
  • Provide cute graphical output while the screen is locked
XScreenSaver does an excellent job of the second of these[2] and is pretty good at the first, which is to say that it only suffers from a disastrous security flaw once every few years and as such is certainly not appreciably worse than any other piece of software.

Debian ships an operating system that prides itself on stability. The Debian definition of stability is a very specific one - rather than referring to how often the software crashes or misbehaves, it refers to how often the software changes behaviour. Debian is very reluctant to upgrade software that is part of a stable release, to the extent that developers will attempt to backport individual security fixes to the version they shipped rather than upgrading to a release that contains all those security fixes but also adds a new feature. The argument here is that the new release may also introduce new bugs, and Debian's users desire stability (in the "things don't change" sense) more than new features. Backporting security fixes keeps them safe without compromising the reason they're running Debian in the first place.

This all makes plenty of sense at a theoretical level, but reality is sometimes less convenient. The first problem is that security bugs are typically also, well, bugs. They may make your software crash or misbehave in annoying but apparently harmless ways. And when you fix that bug you've also fixed a security bug, but the ability to determine whether a bug is a security bug or not is one that involves deep magic and a fanatical devotion to the cause. Given the choice between maybe asking for a CVE and dealing with embargoes and all that crap when perhaps you've actually only fixed a bug that makes the letter "E" appear in places it shouldn't, rather than one that allows the complete destruction of your intergalactic invasion fleet, people will tend to err on the side of "Eh, fuckit" and go drinking instead. So new versions of software will often fix security vulnerabilities without there being any indication that they do so[3], and running old versions probably means you have a bunch of security issues that nobody will ever do anything about.

But that's broadly a technical problem and one we can apply various metrics to, and if somebody wanted to spend enough time performing careful analysis of software we could have actual numbers to figure out whether the better security approach is to upgrade or to backport fixes. Conversations become boring once we introduce too many numbers, so let's ignore that problem and go on to the second, which is far more handwavy and social and so significantly more interesting.

The second problem is that upstream developers remain associated with the software shipped by Debian. Even though Debian includes a tool for reporting bugs against packages included in Debian, some users will ignore that and go straight to the upstream developers. Those upstream developers then have to spend at least 15 or so seconds telling the user that the bug they're seeing has been fixed for some time, and then figure out how to explain that no, sorry, they can't make Debian include a fixed version because that's not how things work. Worst case, the stable release of Debian ends up including a bug that makes software just basically not work at all, everybody who uses it assumes that the upstream author is brutally incompetent, and the author ends up quitting the software industry and, I don't know, running a nightclub or something.

From the Debian side of things, the straightforward solution is to make it more obvious that users should file bugs with Debian and not bother the upstream authors. This doesn't solve the problem of damaged reputation, and nor does it entirely solve the problem of users contacting upstream developers. If a bug is filed with Debian and doesn't get fixed in a timely manner, it's hardly surprising that users will end up going upstream. The Debian bugs list for XScreenSaver does not make terribly attractive reading.

So, coming back to the title for this entry. The most obvious failure of the commons is where a basically malicious actor consumes while giving nothing back, but if an actor with good intentions ends up consuming more than they contribute that may still be a problem. An upstream author releases a piece of software under a free license. Debian distributes this to users. Debian's policies result in the upstream author having to do more work. What does the upstream author get out of this exchange? In an ideal world, plenty. The author's software is made available to more people. A larger set of developers is willing to work on making improvements to the software. In a less ideal world, rather less. The author has to deal with bug mail about already fixed bugs. The author's reputation may be harmed by user exposure to said fixed bugs. The author may get less in the way of useful bug fixes or features because people are running old versions rather than new ones. If the balance tips towards the latter, the author's decision to release their software under a free license has made their life more difficult.

Most discussions about Debian's policies entirely ignore the latter scenario, focusing more on the fact that the author chose to release their software under a free license to begin with. If the author is unwilling to handle the consequences of that, goes the argument, why did they do it in the first place? The unfortunate logical conclusion to that argument is that the author realises that they made a huge mistake and never does so again, and woo uh oops.

The irony here is that one of Debian's foundational documents, the Debian Free Software Guidelines, makes allowances for this. Section 4 allows for distribution of software in Debian even if the author insists that modified versions[4] are renamed. This allows for an author to make a choice - allow themselves to be associated with the Debian version of their work and increase (a) their userbase and (b) their support load, or try to distinguish what Debian ship from their identity. But that document was ratified in 1997 and people haven't really spent much time since then thinking about why it says what it does, and so this tradeoff is rarely considered.

Free software doesn't benefit from distributions antagonising their upstreams, even if said upstream is a cranky nightclub owner. Debian's users are Debian's highest priority, but those users are going to suffer if developers decide that not using free licenses improves their quality of life. Kneejerk reactions around specific instances aren't helpful, but now is probably a good time to start thinking about what value Debian brings to its upstream authors and how that can be increased. Failing to do so doesn't serve users, Debian itself or the free software community as a whole.

[1] The X server has no fundamental concept of a screen lock. This is implemented by an application asking that the X server send all keyboard and mouse input to it rather than to any other application, and then that application creating a window that fills the screen. Due to some hilarious design decisions, opening a pop-up menu in an application prevents any other application from being able to grab input and so it is impossible for the screensaver to activate if you open a menu and then walk away from your computer. This is merely the most obvious problem - there are others that are more subtle and more infuriating. The only fix in this case is to nuke the site from orbit.

[2] There are screenshots here. My favourites are the one that emulates the electrical characteristics of an old CRT in order to present a more realistic depiction of the output of an Apple II and the one that includes a complete 6502 emulator.

[3] And obviously new versions of software will often also introduce new security vulnerabilities without there being any indication that they do so, because who would ever put that in their changelog. But the less ethically challenged members of the security community are more likely to be looking at new versions of software than ones released three years ago, so you're probably still tending towards winning overall.

[4] There's a perfectly reasonable argument that all packages distributed by Debian are modified in some way
Trusted Platform Modules are fairly unintelligent devices. They can do some crypto, but they don't have any ability to directly monitor the state of the system they're attached to. This is worked around by having each stage of the boot process "measure" state into registers (Platform Configuration Registers, or PCRs) in the TPM by taking the SHA1 of the next boot component and performing an extend operation. Extend works like this:

New PCR value = SHA1(current value||new hash)

ie, the TPM takes the current contents of the PCR (a 20-byte register), concatenates the new SHA1 to the end of that in order to obtain a 40-byte value, takes the SHA1 of this 40-byte value to obtain a 20-byte hash and sets the PCR value to this. This has a couple of interesting properties:
  • You can't directly modify the contents of the PCR. In order to obtain a specific value, you need to perform the same set of writes in the same order. If you replace the trusted bootloader with an untrusted one that runs arbitrary code, you can't rewrite the PCR to cover up that fact
  • The PCR value is predictable and can be reconstructed by replaying the same series of operations
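Simulating this is easy, which is rather the point - a few lines of Python reproduce exactly what the TPM does:

    # The extend operation: the only way to reach a given PCR value is
    # to perform the same extends, with the same data, in the same order.
    import hashlib

    def extend(pcr, new_hash):
        # New PCR value = SHA1(current value || new hash)
        return hashlib.sha1(pcr + new_hash).digest()

    pcr = b"\x00" * 20    # PCRs start out zeroed
    for component in (b"firmware", b"bootloader", b"kernel"):
        pcr = extend(pcr, hashlib.sha1(component).digest())
    print(pcr.hex())
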
But how do we know what those operations were? We control the bootloader and the kernel and we know what extend operations they performed, so that much is easy. But the firmware itself will have performed some number of operations (the firmware itself is measured, as is the firmware configuration, and certain aspects of the boot process that aren't in our control may also be measured) and we may not be able to reconstruct those from scratch.

Thankfully we have more than just the final PCR data. The firmware provides an interface to log each extend operation, and you can read the event log in /sys/kernel/security/tpm0/binary_bios_measurements. You can pull information out of that log and use it to reconstruct the writes the firmware made. Merge those with the writes you performed and you should be able to reconstruct the final TPM state. Hurrah!
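
Here's a sketch of doing that replay from the binary log. The format is the TPM 1.2 event layout: a 32-bit PCR index, a 32-bit event type, a 20-byte SHA1 digest and then a length-prefixed blob of event data:

    # Walk the TPM 1.2 binary event log and replay every digest into a
    # simulated set of PCRs. If the results match the real PCR values,
    # the log is valid and its event data can be trusted.
    import hashlib
    import struct

    LOG = "/sys/kernel/security/tpm0/binary_bios_measurements"

    def replay_log(path=LOG):
        pcrs = {}
        with open(path, "rb") as f:
            while True:
                header = f.read(32)
                if len(header) < 32:
                    break
                pcr, _etype = struct.unpack("<II", header[:8])
                digest = header[8:28]
                (size,) = struct.unpack("<I", header[28:32])
                f.read(size)    # event data - used by policy, not replay
                pcrs[pcr] = hashlib.sha1(
                    pcrs.get(pcr, b"\x00" * 20) + digest).digest()
        return pcrs

    for pcr, value in sorted(replay_log().items()):
        print("PCR %2d: %s" % (pcr, value.hex()))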

The problem is that a lot of what you want to measure into the TPM may vary between machines or change in response to configuration changes or system updates. If you measure every module that grub loads, and if grub changes the order that it loads modules in, you also need to update your calculations of the end result. Thankfully there's a way around this - rather than making policy decisions based on the final TPM value, just use the final TPM value to ensure that the log is valid. If you extract each hash value from the log and simulate an extend operation, you should end up with the same value as is present in the TPM. If so, you know that the log is valid. At that point you can examine individual log entries without having to care about the order that they occurred in, which makes writing your policy significantly easier.

But there's another source of fragility. Imagine that you're measuring every command executed by grub (as is the case in the CoreOS grub). You want to ensure that no inappropriate commands have been run (such as ones that would allow you to modify the loaded kernel after it's been measured), but you also want to permit certain variations - for instance, you might have a primary root filesystem and a fallback root filesystem, and you're ok with either being passed as a kernel argument. One approach would be to write two lines of policy, but there's an even more flexible approach. If the bootloader logs the entire command into the event log, when replaying the log we can verify that the event description hashes to the value that was passed to the TPM. If it does, rather than testing against an explicit hash value, we can examine the string itself. If the event description matches a regular expression provided by the policy then we're good.
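
Sketched out, that check looks something like this - the policy regex is purely illustrative:

    # Validate a logged command against its measured hash, then match
    # the string against policy rather than against a fixed hash.
    import hashlib
    import re

    POLICY = re.compile(r"\Alinux /vmlinuz root=/dev/sda[12] ro\Z")

    def event_ok(measured_digest, description):
        # The description is only trustworthy if it's what was actually
        # extended into the PCR...
        if hashlib.sha1(description).digest() != measured_digest:
            return False
        # ...and if it is, we can match on the string instead of
        # enumerating one acceptable hash per permitted variant.
        return POLICY.match(description.decode()) is not None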

This approach makes it possible to write TPM policies that are resistant to changes in ordering and permit fine-grained definition of acceptable values, and which can cleanly separate out local policy, generated policy values and values that are provided by the firmware. The split between machine-specific policy and OS policy allows for the static machine-specific policy to be merged with OS-provided policy, making remote attestation viable even over automated system upgrades.

We've integrated an implementation of this kind of policy into the TPM support code we'd like to integrate into Kubernetes, and CoreOS will soon be generating known-good hashes at image build time. The combination of these means that people using Distributed Trusted Computing under Tectonic will be able to validate the state of their systems with nothing more than a minimal machine-specific policy description.

The support code for all of this should also start making it into other distributions in the near future (the grub code is already in Fedora 24), so with luck we can define a cross-distribution policy format and make it straightforward to handle this in a consistent way even in heterogeneous operating system environments. Remote attestation is a powerful tool for ensuring that your systems are in a valid state, but the difficulty of policy management has been a significant factor in making it difficult for people to deploy in their data centres. Making it easier for people to shield themselves against low-level boot attacks is a big step forward in improving the security of distributed workloads and makes bare-metal hosting a much more viable proposition.
I'm in London for Kubecon right now, and the hotel I'm staying at has decided that light switches are unfashionable and replaced them with a series of Android tablets.
(Photo: one of the tablets, showing an Android dialog that reads "UK_bathroom isn't responding. Do you want to close it?")
One was embedded in the wall, but the two next to the bed had convenient looking ethernet cables plugged into the wall. So.

I managed to borrow a couple of USB ethernet adapters, set up a transparent bridge (brctl addbr br0; brctl addif br0 enp0s20f0u1; brctl addif br0 enp0s20f0u2; ifconfig br0 up) and then stuck my laptop between the tablet and the wall. tcpdump -i br0 showed traffic, and wireshark revealed that it was Modbus over TCP. Modbus is a pretty trivial protocol, and notably has no authentication whatsoever. tcpdump showed where the traffic was being sent, and pymodbus let me start controlling my lights, turning the TV on and off and even making my curtains open and close. What fun!
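
For the curious, the pymodbus side is about as complicated as you'd expect, which is to say not at all. The address and coil number below are placeholders - the real values came straight out of the captured traffic:

    # Poke the room controller over Modbus/TCP. No authentication is
    # involved at any point.
    from pymodbus.client.sync import ModbusTcpClient

    client = ModbusTcpClient("192.0.2.1", port=502)    # placeholder address
    client.connect()
    client.write_coil(1, True)     # coil number taken from the sniffed traffic
    client.write_coil(1, False)    # and back off again
    client.close()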

And then I noticed something. My room number is 714. The IP address I was communicating with corresponded to my room number. They wouldn't, would they?

I mean yes obviously they would.

It's basically as bad as it could be - once I'd figured out the gateway, I could access the control systems on every floor and query other rooms to figure out whether the lights were on or not, which strongly implies that I could control them as well. Jesus Molina talked about doing this kind of thing a couple of years ago, so it's not some kind of one-off - instead, hotels are happily deploying systems with no meaningful security, and the outcome of sending a constant stream of "Set room lights to full" and "Open curtain" commands at 3AM seems fairly predictable.

We're doomed.

(edited: this previously claimed I could only access systems on my own floor, but it turns out that each floor is a separate broadcast domain and I just needed to set a gateway to access the others)

(further edit: I'm deliberately not naming the hotel. They were receptive to my feedback and promised to do something about the issue.)
I maintain an application for bridging various non-Hue lighting systems to something that looks enough like a Hue that an Amazon Echo will still control them. One thing I hadn't really worked on was colour support, so I picked up some cheap bulbs and a bridge. The kit is badged as an iSuper iRainbow001, and it's terrible.

Things seemed promising enough at first, although the bulbs were alarmingly heavy (there's a significant chunk of heatsink built into them, which seems to get a lot warmer than I'd expect from something that claims a 7W power consumption). The app was a bit clunky, but eh - I wasn't planning on using it for long. I pressed the button on the bridge, launched the app and could control the bulbs. The first thing I noticed was that they had a separate "white" and "colour" mode. White mode was pretty bright, but colour mode massively less so - presumably the white LEDs are entirely independent of the RGB ones, and much higher intensity. Still, potentially useful as mood lighting.

Anyway. Next step was to start playing with the protocol, which meant finding the device on my network. I checked anything that had picked up a DHCP lease recently and nmapped them. The OS detection reported Linux, which wasn't hugely surprising - there was no GPL notice or source code included with the box, but I'm way past the point of shock at that. It also reported that there was a telnet daemon running. I connected and got a login prompt. And then I typed admin as the username and admin as the password and got a root prompt. So, there's that. The copy of Busybox included even came with tftp, so it was easy to get copies of tcpdump and strace on there to see what was up.

So. Next step. Protocol sniffing. I wanted to know how discovery worked, so I reset the device to factory settings and watched what happened. The app on my phone sent out a discovery packet on UDP port 18602 containing, among other things, CLIP, CLPT and mac fields.

The CLIP and CLPT fields refer to the cloud server that allows for management when you're not on the local network. The mac field contains an utterly fake address. If you send out a discovery packet and your mac hasn't been registered with the device, you get a denial back. If your mac has been (and by "mac" here I mean "utterly fake mac that's the same for all devices"), you get back a response including the device serial number and IP address. And if you just leave out the mac field entirely, you get back a response no matter whether your address is registered or not. So, that's a start. Once you've registered one copy of the app with the device, anything can communicate with it by just using the same fake mac in the discovery packets.

Onwards. The command format turns out to be simple. Commands start with ##, followed by two ascii digits encoding a command, four ascii digits containing a bulb id, two ascii digits containing the number of following bytes and then the command data (in ascii). An example is:

##05000102ff

which decodes as command 5 (set white intensity) on bulb 1 with two bytes of data following, each of which is an f. This translates as "Set bulb 1 to full white intensity". I worked out the rest pretty quickly - command 03 sets the RGB colour of the bulb, 0A asks the bridge to search for new bulbs, 0B tells you which bulbs are available and their capabilities and 0E gives you the MAC addresses of the bulbs(‽). 0C crashes the server process, and 06 spews a bunch of garbage back at you and potentially crashes the bulb in a hilarious way that involves it flickering at about 15Hz. It turns out that 06 is actually the "Rename bulb" command, and if you send it less data than it's expecting something goes hilariously wrong in string parsing and everything is awful.
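
The encoding and decoding fit in a few lines - this sketch only builds and parses the ascii strings, and leaves out how they're actually delivered to the bridge:

    def build_command(cmd, bulb, data):
        # ## + 2-digit command + 4-digit bulb id + 2-digit length + data
        return "##%02d%04d%02d%s" % (cmd, bulb, len(data), data)

    def parse_command(packet):
        assert packet.startswith("##")
        cmd, bulb, length = int(packet[2:4]), int(packet[4:8]), int(packet[8:10])
        return cmd, bulb, packet[10:10 + length]

    print(build_command(5, 1, "ff"))        # ##05000102ff - bulb 1, full white
    print(parse_command("##05000102ff"))    # (5, 1, 'ff')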

Ok. Easy enough, but not hugely exciting. What about the remote protocol? This turns out to involve sending a login packet and then a wrapped command packet. The login has some length data, a header of "MQIsdp" (the MQTT 3.1 protocol identifier, suggesting the cloud transport is just MQTT), a long bunch of ascii-encoded hex, a username and a password.

The username is w13733 and the password is gbigbi01. These are hardcoded in the app. The ascii-encoded hex can be replaced with 0s and the login succeeds anyway.

Ok. What about the wrapping on the command? The login never indicated which device we wanted to talk to, so presumably there's some sort of protection going on here oh wait. The command packet is a length, the serial number of the bridge and then a normal command packet. As long as you know the serial number of the device (which it tells you in response to a discovery packet, even if you're unauthenticated), you can use the cloud service to send arbitrary commands to the device (including the one that crashes the service). One of which involves the device then doing some kind of string parsing that doesn't appear to work that well. What could possibly go wrong?

Ok, so that seemed to be the entirety of the network protocol. Anything else to do? Some poking around on the bridge revealed (a) that it had an active wireless device and (b) a running DHCP server. They wouldn't, would they?

Yes. Yes, they would.

The devices all have a hardcoded SSID of "iRainbow", although they don't broadcast it. There's no security - anybody can associate. It'll then hand out an IP address. It's running telnetd on that interface as well. You can bounce through there to the owner's internal network.

So, in summary: it's a device that infringes my copyright, gives you root access in response to trivial credentials, has access control that depends entirely on nobody ever looking at the packets, is sufficiently poorly implemented that you can crash both it and the bulbs, has a cloud access protocol that has no security whatsoever and also acts as an easy mechanism for people to circumvent your network security. This may be the single worst device I've ever bought.

I called the manufacturer and they insisted that the device was designed in 2012, was no longer manufactured or supported, that they had no source code to give me and there would be no security fixes. The vendor wants me to pay for shipping back to them and reserves the right to deduct the cost of the original "free" shipping from the refund. Everything is awful, which is why I just ordered four more random bulbs to play with over the weekend.
The US Government is attempting to force Apple to build a signed image that can be flashed onto an iPhone used by one of the San Bernardino shooters. To their credit, Apple have pushed back against this - there's an explanation of why doing so would be dangerous here. But what's noteworthy is that Apple are arguing that they shouldn't do this, not that they can't do this - Apple (and many other phone manufacturers) have designed their phones such that they can replace the firmware with anything they want.

In order to prevent unauthorised firmware being installed on a device, Apple (and most other vendors) verify that any firmware updates are signed with a trusted key. The FBI don't have access to Apple's firmware signing keys, and as a result they're unable to simply replace the software themselves. That's why they're asking Apple to build a new firmware image, sign it with their private key and provide it to the FBI.

But what do we mean by "unauthorised firmware"? In this case, it's "not authorised by Apple" - Apple can sign whatever they want, and your iPhone will happily accept that update. As owner of the device, there's no way for you to reconfigure it such that it will accept your updates. And, perhaps worse, there's no way to reconfigure it such that it will reject Apple's.

I've previously written about how it's possible to reconfigure a subset of Android devices so that they trust your images and nobody else's. Any attempt to update the phone using the Google-provided image will fail - instead, they must be re-signed using the keys that were installed in the device. No matter what legal mechanisms were used against them, Google would be unable to produce a signed firmware image that could be installed on the device without your consent. The mechanism I proposed is complicated and annoying, but this could be integrated into the standard vendor update process such that you simply type a password to unlock a key for re-signing.

Why's this important? Sure, in this case the government is attempting to obtain the contents of a phone that belonged to an actual terrorist. But not all cases governments bring will be as legitimate, and not all manufacturers are Apple. Governments will request that manufacturers build new firmware that allows them to monitor the behaviour of activists. They'll attempt to obtain signing keys and use them directly to build backdoors that let them obtain messages sent to journalists. They'll be able to reflash phones to plant evidence to discredit opposition politicians.

We can't rely on Apple to fight every case - if it becomes politically or financially expedient for them to do so, they may well change their policy. And we can't rely on the US government only seeking to obtain this kind of backdoor in clear-cut cases - there's a risk that these techniques will be used against innocent people. The only way for Apple (and all other phone manufacturers) to protect users is to allow users to remove Apple's validation keys and substitute their own. If Apple genuinely value user privacy over Apple's control of a device, it shouldn't be a difficult decision to make.
I had no access to the internet for most of my childhood. Nobody in my area knew anything about programming. I learned a great deal from a number of CDs that included free software source code and archives of project mailing lists. When I got to university, I learned even more by being able to develop a Debian-based OS for use in our computer facilities. That gave me the experience and knowledge that I needed to become involved in Debian development, which in turn gave me the background required to be able to help Ubuntu become the first free software operating system to work out of the box on modern laptops. From there, I've been able to build my career around developing free software.

Ubuntu can be translated as "I am who I am because of who we all are". I am who I am because people made the choice to release their software under licenses that permitted examination, modification and redistribution. I am who I am because I was able to participate in communities that took advantages of those freedoms to produce new and better software. I am who I am because when my priorities differed from those of existing communities, it was still possible for me to benefit from their work and for them to benefit from mine.

Free software doesn't mean that the software is entirely free of restrictions. While a core aspect is the right to distribute modified versions of code, it has never been fundamental to free software that you be able to do so while still claiming that the code is the original version. Various approaches have been taken to make it possible for users to distinguish modified versions, ranging from simply including license terms that require modified versions be marked as such, to licenses that require that you change the name of the package if you modify it. However, what's probably the most effective approach has been to apply trademark law to the problem. Mozilla's trademark policy is an example of this - if you modify the code in ways that aren't approved by Mozilla, you aren't entitled to use the trademarks.

A requirement that you avoid use of trademarks in an infringing way is reasonable. Mozilla products include support for building with branding disabled, which makes it very straightforward for a user to build a modified version of Firefox that can be redistributed without any trademark issues. Red Hat have a similar policy for Fedora and RHEL[1] - you simply replace the packages that contain the branding and you're done.

Canonical's IP policy around Ubuntu is fundamentally different. While Mozilla make it clear that you simply no longer have a right to use the trademarks under trademark law, Canonical appear to require that you remove all trademarks entirely even if using them wouldn't be a violation of trademark law. While Mozilla restrict the redistribution of modified binaries that include their trademarks, Canonical insist that you rebuild everything even if the package doesn't contain any trademarks. And while Mozilla give you a single build option that creates binaries that conform with their trademark requirements, Canonical will refuse to tell you what you have to do.

When asked about this at SCALE earlier this year, Mark Shuttleworth claimed that Ubuntu's policy was consistent with that of other projects. This is inaccurate. Nobody else requires that you rebuild every package before you can redistribute it in a modified distribution - such a restriction is a violation of freedom 2 of the Free Software Definition, and as a result the binary distributions of Ubuntu are not free software. Nobody else refuses to discuss whether you're required to remove non-infringing trademarks in order to be able to redistribute. Nobody else responds to offers to make it easier for users to produce non-infringing derivatives with a flat refusal.

Mark claims that I'm only raising this issue because I work for a competitor and wish to harm Canonical. Nothing could be further from the truth. I began discussing this before working for my current employers - my previous employers had no meaningful market overlap with Canonical at all. The reason I care is because I care about free software. I care about people being able to derive new and interesting things from existing code. I care about a small team of people being able to take Ubuntu and make something better in the same way that Ubuntu did with Debian. I care about ensuring that users receive the freedom to do this without having to jump through a significant number of hoops in the process. Ubuntu has been a spectacularly successful vehicle for getting free software into the hands of users. Mark's generosity in funding this experiment has undoubtedly made the world a better place. Canonical employs a large number of talented developers writing high quality software, many of whom I'm fortunate enough to be able to call friends. And Canonical are squandering that by restricting the rights of their users and alienating the free software community.

I want others to be who they are because of my work and the work of all the others like me. Anything that makes that more difficult saddens me, and so I do what I can to fix it. I criticise Canonical's policies in the hope that we, as a community, can convince Canonical to agree that this kind of artificial barrier to modification hurts us more than it helps them. In many ways, Canonical remain one of our best hopes for broadening the reach of free software, and this is why it's unfortunate that they do so in a way that makes it more difficult for people to have the same experiences that I did.

[1] While it's easy to turn a trademark infringing version of RHEL into a non-infringing one, Red Hat don't provide publicly available binary packages for RHEL. If you get hold of them somehow you're entitled to redistribute them freely, but Red Hat's subscriber agreement indicates that if you do this as a Red Hat customer you will lose access to further binary updates - a provision that I find utterly repugnant. Its inclusion reduces my respect for Red Hat and my enthusiasm for working with them, and given the official Red Hat support for CentOS it appears to make no sense whatsoever. Red Hat should drop it.
The Linux Foundation is an industry organisation dedicated to promoting, protecting and standardising Linux and open source software[1]. The majority of its board is chosen by the member companies - 10 by platinum members (platinum membership costs $500,000 a year), 3 by gold members (gold membership costs $100,000 a year) and 1 by silver members (silver membership costs between $5,000 and $20,000 a year, depending on company size). Up until recently individual members ($99 a year) could also elect two board members, allowing for community perspectives to be represented at the board level.

As of last Friday, this is no longer true. The bylaws were amended to drop the clause that permitted individual members to elect any directors. Section 3.3(a) now says that no affiliate members may be involved in the election of directors, and section 5.3(d) still permits at-large directors but does not require them[2]. The old version of the bylaws is here - the only non-whitespace differences are in sections 3.3(a) and 5.3(d).

These changes all happened shortly after Karen Sandler announced that she planned to stand for the Linux Foundation board during a presentation last September. A short time later, the "Individual membership" program was quietly renamed to the "Individual supporter" program and the promised benefit of being allowed to stand for and participate in board elections was dropped (compare the old page to the new one). Karen is the executive director of the Software Freedom Conservancy, an organisation involved in the vitally important work of GPL enforcement. The Linux Foundation has historically been less than enthusiastic about GPL enforcement, and the SFC is funding a lawsuit against one of the Foundation's members for violating the terms of the GPL. The timing may be coincidental, but it certainly looks like the Linux Foundation was willing to throw out any semblance of community representation just to ensure that there was no risk of someone in favour of GPL enforcement ending up on their board.

Much of the code in Linux is written by employees paid to do this work, but significant parts of both Linux and the huge range of software that it depends on are written by community members who now have no representation in the Linux Foundation. Ignoring them makes it look like the Linux Foundation is interested only in promoting, protecting and standardising Linux and open source software if doing so benefits their corporate membership rather than the community as a whole. This isn't a positive step.

[1] Article II of the bylaws
[2] Other than in the case of the TAB representative, an individual chosen by a board elected via in-person voting at a conference
I gave a presentation at 32C3 this week. One of the things I said was "If any of you are doing seriously confidential work on Apple laptops, stop. For the love of god, please stop." I didn't really have time to go into the details of that at the time, but right now I'm sitting on a plane with a ridiculous sinus headache and the pseudoephedrine hasn't kicked in yet so here we go.

The basic premise of my presentation was that it's very difficult to determine whether your system is in a trustworthy state before you start typing your secrets (such as your disk decryption passphrase) into it. If it's easy for an attacker to modify your system such that it's not trustworthy at the point where you type in a password, it's easy for an attacker to obtain your password. So, if you actually care about your disk encryption being resistant to anybody who can get temporary physical possession of your laptop, you care about it being difficult for someone to compromise your early boot process without you noticing.

There are two approaches to this. The first is UEFI Secure Boot. If you cryptographically verify each component of the boot process, it's not possible for a user to compromise the boot process. The second is a measured boot. If you measure each component of the boot process into the TPM, and if you use these measurements to control access to a secret that allows the laptop to prove that it's trustworthy (such as Joanna Rutkowska's Anti Evil Maid or my variant on the theme), an attacker can compromise the boot process but you'll know that they've done so before you start typing.

So, how do current operating systems stack up here?

Windows: Supports UEFI Secure Boot in a meaningful way. Supports measured boot, but provides no mechanism for the system to attest that it hasn't been compromised. Good, but not perfect.

Linux: Supports UEFI Secure Boot[1], but doesn't verify signatures on the initrd[2]. This means that attacks such as Evil Abigail are still possible. Measured boot isn't in a good state, but it's possible to incorporate with a bunch of manual work. Vulnerable out of the box, but can be configured to be better than Windows.

Apple: Ha. Snare talked about attacking the Apple boot process in 2012 - basically everything he described then is still possible. Apple recently hired the people behind Legbacore, so there's hope - but right now all shipping Apple hardware has no firmware support for UEFI Secure Boot and no TPM. This makes it impossible to provide any kind of boot attestation, and there's no real way you can verify that your system hasn't been compromised.

Now, to be fair, there's attacks that even Windows and properly configured Linux will still be vulnerable to. Firmware defects that permit modification of System Management Mode code can still be used to circumvent these protections, and the Management Engine is in a position to just do whatever it wants and fuck all of you. But that's really not an excuse to just ignore everything else. Improving the current state of boot security makes it more difficult for adversaries to compromise a system, and if we ever do get to the point of systems which aren't running any hidden proprietary code we'll still need this functionality. It's worth doing, and it's worth doing now.

[1] Well, except Ubuntu's signed bootloader will happily boot unsigned kernels which kind of defeats the entire point of the exercise
[2] Initrds are built on the local machine, so we can't just ship signed images
The Software Freedom Conservancy is currently running a fundraising program in an attempt to raise enough money to continue funding GPL compliance work. If they don't gain enough supporters, the majority of their compliance work will cease. And, since SFC are one of the only groups currently actively involved in performing GPL compliance work, that basically means that there will be nobody working to ensure that users have the rights that copyright holders chose to give them.

Why does this matter? More people are using GPLed software than at any point in history. Hundreds of millions of Android devices were sold this year, all including GPLed code. An unknowably vast number of IoT devices run Linux. Cameras, Blu-ray players, TVs, light switches, coffee machines. Software running in places that we would never have previously imagined. And much of it abandoned immediately after shipping, gently rotting, exposing an increasingly large number of widely known security vulnerabilities to an increasingly hostile internet. Devices that become useless because of protocol updates. Toys that have a "Guaranteed to work until" date, and then suddenly Barbie goes dead and you're forced to have an unexpected conversation about API mortality with your 5-year-old child.

We can't fix all of these things. Many of these devices have important functionality locked inside proprietary components, released under licenses that grant no permission for people to examine or improve them. But there are many that we can. Millions of devices are running modern and secure versions of Android despite being abandoned by their manufacturers, purely because the vendor released appropriate source code and a community grew up to maintain it. But this can only happen when the vendor plays by the rules.

Vendors who don't release their code remove that freedom from their users, and the weapons users have to fight against that are limited. Most users hold no copyright over the software in the device and are unable to take direct action themselves. A vendor's failure to comply dooms users to choosing between buying a new device in 12 months and no longer receiving security updates. When yet more examples of vendor-supplied malware are discovered, it's more difficult to produce new builds without them. The utility of the devices that the user purchased is curtailed significantly.

The Software Freedom Conservancy is one of the only organisations actively fighting against this, and if they're forced to give up their enforcement work the pressure on vendors to comply with the GPL will be reduced even further. If we want users to control their devices, to be able to obtain security updates even after the vendor has given up, we need to keep that pressure up. Supporting the SFC's work has a real impact on the security of the internet and people's lives. Please consider giving them money.
Eric Raymond, author of The Cathedral and the Bazaar (an important work describing the effectiveness of open collaboration and development), recently wrote a piece calling for "Social Justice Warriors" to be ejected from the hacker community. The primary thrust of his argument is that by calling for a removal of the "cult of meritocracy", these SJWs are attacking the central aspect of hacker culture - that the quality of code is all that matters.

This argument is simply wrong.

Eric's been involved in software development for a long time. In that time he's seen a number of significant changes. We've gone from computers being the playthings of the privileged few to being nearly ubiquitous. We've moved from the internet being something you found in universities to something you carry around in your pocket. You can now own a computer whose CPU executes only free software from the moment you press the power button. And, as Eric wrote almost 20 years ago, we've identified that the "Bazaar" model of open collaborative development works better than the "Cathedral" model of closed centralised development.

These are huge shifts in how computers are used, how available they are, how important they are in people's lives, and, as a consequence, how we develop software. It's not a surprise that the rise of Linux and the victory of the bazaar model coincided with internet access becoming more widely available. As the potential pool of developers grew larger, development methods had to be altered. It was no longer possible to insist that somebody spend a significant period of time winning the trust of the core developers before being permitted to give feedback on code. Communities had to change in order to accept these offers of work, and the communities were better for that change.

The increasing ubiquity of computing has had another outcome. People are much more aware of the role of computing in their lives. They are more likely to understand how proprietary software can restrict them, how not having the freedom to share software can impair people's lives, how not being able to involve themselves in software development means software doesn't meet their needs. The largest triumph of free software has not been amongst people from a traditional software development background - it's been the fact that we've grown our communities to include people from a huge number of different walks of life. Free software has helped bring computing to under-served populations all over the world. It's aided circumvention of censorship. It's inspired people who would never have considered software development as something they could be involved in to develop entire careers in the field. We will not win because we are better developers. We will win because our software meets the needs of many more people, needs that the proprietary software industry either cannot or will not satisfy. We will win because our software is shaped not only by people who have a university degree and a six figure salary in San Francisco, but also by people whose native language is spoken by so few people that proprietary operating system vendors won't support it, by people who live in a heavily censored regime and rely on free software for free communication, and by people who rely on free software because they can't otherwise afford the tools they would need to participate in development.

In other words, we will win because free software is accessible to more of society than proprietary software. And for that to be true, it must be possible for our communities to be accessible to anybody who can contribute, regardless of their background.

Up until this point, I don't think I've made any controversial claims. In fact, I suspect that Eric would agree. He would argue that because hacker culture defines itself through the quality of contributions, the background of the contributor is irrelevant. On the internet, nobody knows that you're contributing from a basement in an active warzone, or from a refuge shelter after escaping an abusive relationship, or with the aid of assistive technology. If you can write the code, you can participate.

Of course, this kind of viewpoint is overly naive. Humans are wonderful at noticing indications of "otherness". Eric even wrote about his struggle to stop having a viscerally negative reaction to people of a particular race. This happened within the past few years, so we can assume that before then he was less aware of the issue. If Eric received a patch from someone whose name indicated membership of this group, would there have been part of his subconscious that reacted negatively? Would he have rationalised this into a more critical analysis of the patch, increasing the probability of rejection? We don't know, and it's unlikely that Eric does either.

Hacker culture has long been concerned with good design, and a core concept of good design is that code should fail safe - ie, if something unexpected happens or an assumption turns out to be untrue, the desirable outcome is the one that does least harm. A command that fails to receive a filename as an argument shouldn't assume that it should modify all files. A network transfer that fails a checksum shouldn't be permitted to overwrite the existing data. An authentication server that receives an unexpected error shouldn't default to granting access. And a development process that may be subject to unconscious bias should have processes in place that make it less likely that said bias will result in the rejection of useful contributions.

When people criticise meritocracy, they're not criticising the concept of treating contributions based on their merit. They're criticising the idea that humans are sufficiently self-aware that they will be able to identify and reject every subconscious prejudice that will affect their treatment of others. It's not a criticism of a desirable goal, it's a criticism of a flawed implementation. There's evidence that organisations that claim to embody meritocratic principles are more likely to reward men than women even when everything else is equal. The "cult of meritocracy" isn't the belief that meritocracy is a good thing, it's the belief that a project founded on meritocracy will automatically be free of bias.

Projects like the Contributor Covenant that Eric finds so objectionable exist to help create processes that (at least partially) compensate for our flaws. Review of our processes to determine whether we're making poor social decisions is just as important as review of our code to determine whether we're making poor technical decisions. Just as the bazaar overtook the cathedral by making it easier for developers to be involved, inclusive communities will overtake "pure meritocracies" because, in the long run, these communities will produce better output - not just in terms of the quality of the code, but also in terms of the ability of the project to meet the needs of a wider range of people.

The fight between the cathedral and the bazaar came from people who were outside the cathedral. Those fighting against the assumption that meritocracies work may be outside what Eric considers to be hacker culture, but they're already part of our communities, already making contributions to our projects, already bringing free software to more people than ever before. This time it's Eric building a cathedral and decrying the decadent hordes in their bazaar, Eric who's failed to notice the shift in the culture that surrounds him. And, like those who continued building their cathedrals in the 90s, it's Eric who's now irrelevant to hacker culture.

(Edited to add: for two quite different perspectives on why Eric's wrong, see Tim's and Coraline's posts)
I've previously written about Canonical's obnoxious IP policy and how Mark Shuttleworth admits it's deliberately vague. After spending some time discussing specific examples with Canonical, I've been explicitly told that while Canonical will gladly give me a cost-free trademark license permitting me to redistribute unmodified Ubuntu binaries, they will not tell me what "Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries" actually means.

Why does this matter? The free software definition requires that you be able to redistribute software to other people in either unmodified or modified form without needing to ask for permission first. This makes it clear that Ubuntu itself isn't free software - distributing the individual binary packages without permission is forbidden, even if they wouldn't contain any infringing trademarks[1]. This is obnoxious, but not inherently toxic. The source packages for Ubuntu could still be free software, making it fairly straightforward to build a free software equivalent.

Unfortunately, while true in theory, this isn't true in practice. The issue here is the apparently simple phrase "you must remove and replace the Trademarks and will need to recompile the source code". "Trademarks" is defined later as being the words "Ubuntu", "Kubuntu", "Juju", "Landscape", "Edubuntu" and "Xubuntu" in either textual or logo form. The naive interpretation of this is that you have to remove trademarks where they'd be infringing - for instance, shipping the Ubuntu bootsplash as part of a modified product would almost certainly be clear trademark infringement, so you shouldn't do that. But that's not what the policy actually says. It insists that all trademarks be removed, whether they would embody an infringement or not. If a README says "To build this software under Ubuntu, install the following packages", a literal reading of Canonical's policy would require you to remove or replace the word "Ubuntu" even though failing to do so wouldn't be a trademark infringement. If an @ubuntu.com email address is present in a changelog, you'd have to change it. You wouldn't be able to ship the juju-core package without renaming it and the application within. If this is what the policy means, rebuilding Ubuntu is so impractical that the result isn't free software in any meaningful way.

This seems like a pretty ludicrous interpretation, but it's one that Canonical refuse to explicitly rule out. Compare this to Red Hat's requirements around Fedora - if you replace the fedora-logos, fedora-release and fedora-release-notes packages with your own content, you're good. A policy like this satisfies the concerns that Dustin raised over people misrepresenting their products, but still makes it easy for users to distribute modified code to other users. There's nothing whatsoever stopping Canonical from adopting a similarly unambiguous policy.

Mark has repeatedly asserted that attempts to raise this issue are mere FUD, but he won't answer you if you ask him direct questions about this policy and will insist that it's necessary to protect Ubuntu's brand. The reality is that if Debian had had an identical policy in 2004, Ubuntu wouldn't exist. The effort required to strip all Debian trademarks from the source packages would have been immense[2], and this would have had to be repeated for every release. While this policy is in place, nobody's going to be able to take Ubuntu and build something better. It's grotesquely hypocritical, especially when the Ubuntu website still talks about their belief that people should be able to distribute modifications without licensing fees.

All that's required for Canonical to deal with this problem is to follow Fedora's lead and isolate their trademarks in a small set of packages, then tell users that those packages must be replaced if distributing a modified version of Ubuntu. If they're serious about this being a branding issue, they'll do it. And if I'm right that the policy is deliberately obfuscated so Canonical can encourage people to buy licenses, they won't. It's easy for them to prove me wrong, and I'll be delighted if they do. Let's see what happens.

[1] The policy is quite clear on this. If you want to distribute something other than an unmodified Ubuntu image, you have two choices:
  1. Gain approval or certification from Canonical
  2. Remove all trademarks and recompile the source code
Note that option 2 requires you to rebuild even if there are no trademarks to remove.

[2] Especially when every source package contains a directory called "debian"…
The Washington Post published an article today which describes the ongoing tension between the security community and Linux kernel developers. This has been roundly denounced as FUD, with Rob Graham going so far as to claim that nobody ever attacks the kernel.

Unfortunately he's entirely and demonstrably wrong. It's not FUD, and the state of security in the kernel is currently far short of where it should be.

An example. Recent versions of Android use SELinux to confine applications. Even if you have full control over an application running on Android, the SELinux rules make it very difficult to do anything especially user-hostile. Hacking Team, the GPL-violating Italian company that sells surveillance software to human rights abusers, found that this impeded their ability to drop their spyware onto targets' devices. So they took advantage of the fact that many Android devices shipped a kernel with a flawed copy_from_user() implementation that let them copy arbitrary userspace data over arbitrary kernel code, allowing them to disable SELinux.
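
To make the class of bug concrete, here's a deliberately crude model in Python. The real code is kernel C and the real flaw was missing address validation (the equivalent of skipping access_ok()), but the shape of the problem is the same - a copy routine that trusts a caller-supplied destination lets "userspace" overwrite "kernel" state:

    # Toy model, not real kernel code: KERNEL_MEM stands in for kernel
    # memory, with offset 0 acting as an SELinux enforcement flag.
    KERNEL_MEM = bytearray(b"\x01")
    USER_MEM = bytearray(16)

    def broken_copy(dest_mem, dest_off, src):
        # The bug: no check that (dest_mem, dest_off) is a region the
        # caller is allowed to write to - the moral equivalent of a
        # copy routine that skips its address validation.
        dest_mem[dest_off:dest_off + len(src)] = src

    # Legitimate use: copying into the caller's own buffer is fine.
    broken_copy(USER_MEM, 0, b"hello")

    # Hostile use: pass kernel memory as the destination instead.
    broken_copy(KERNEL_MEM, 0, b"\x00")
    print("selinux_enforcing =", KERNEL_MEM[0])  # now 0: enforcement off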

If we could trust userspace applications, we wouldn't need SELinux. But we assume that userspace code may be buggy, misconfigured or actively hostile, and we use technologies such as SELinux or AppArmor to restrict its behaviour. There's simply too much userspace code for us to guarantee that it's all correct, so we do our best to prevent it from doing harm anyway.

This is significantly less true in the kernel. The model up until now has largely been "Fix security bugs as we find them", an approach that fails on two levels:

1) Once we find them and fix them, there's still a window between the fixed version being available and it actually being deployed
2) The forces of good may not be the first ones to find them

This reactive approach is fine for a world where it's possible to push out software updates without having to perform extensive testing first, a world where the only people hunting for interesting kernel vulnerabilities are nice people. This isn't that world, and this approach isn't fine.

Just as features like SELinux allow us to reduce the harm that can occur if a new userspace vulnerability is found, we can add features to the kernel that make it more difficult (or impossible) for attackers to turn a kernel bug into an exploitable vulnerability. The number of people using Linux systems is increasing every day, and many of these users depend on the security of these systems in critical ways. It's vital that we do what we can to avoid their trust being misplaced.

Many useful mitigation features already exist in the Grsecurity patchset, but a combination of technical disagreements around certain features, personality conflicts and an apparent lack of enthusiasm on the side of upstream kernel developers has resulted in almost none of it landing in the kernels that most people use. Kees Cook has proposed a new project to start making a more concerted effort to migrate components of Grsecurity to upstream. If you rely on the kernel being a secure component, either because you ship a product based on it or because you use it yourself, you should probably be doing what you can to support this.

Microsoft received entirely justifiable criticism for the terrible state of security on their platform. They responded by introducing cutting-edge security features across the OS, including the kernel. Accusing anyone who says we need to do the same of spreading FUD is risking free software being sidelined in favour of proprietary software providing more real-world security. That doesn't seem like a good outcome.
Reaction to Sarah's post about leaving the kernel community was a mixture of terrible and touching, but it's still one of those things that almost certainly won't end up making any kind of significant difference. Linus has made it pretty clear that he's fine with the way he behaves, and nobody's going to depose him. That's unfortunate, because earlier today I was sitting in a presentation at Linuxcon and remembering how much I love the technical side of kernel development. "Remembering" is a deliberate choice of word - it's been increasingly difficult to remember that, because instead I remember having to deal with interminable arguments over the naming of an interface because Linus has an undying hatred of BSD securelevel, or having my name forever associated with the deepthroating of Microsoft because Linus couldn't be bothered asking questions about the reasoning behind a design before trashing it.

In the end it's a mixture of just being tired of dealing with the crap associated with Linux development and realising that by continuing to put up with it I'm tacitly encouraging its continuation, but I can't be bothered any more. And, thanks to the magic of free software, it turns out that I can avoid putting up with the bullshit in the kernel community and get to work on the things I'm interested in doing. So here's a kernel tree with patches that implement a BSD-style securelevel interface. Over time it'll pick up some of the power management code I'm still working on, and we'll see where it goes from there. But, until there's a significant shift in community norms on LKML, I'll only be there when I'm being paid to be there. And that's improved my mood immeasurably.

(Edited to add a context link for the "deepthroating of Microsoft" reference)
When I wrote about TPM attestation via 2FA, I mentioned that you needed a bootloader that actually performed measurement. I've now written some patches for Shim and Grub that do so.

The Shim code does a couple of things. The obvious one is to measure the second-stage bootloader into PCR 9. The perhaps less expected one is to measure the contents of the MokList and MokSBState UEFI variables into PCR 14. This means that if you're happy simply running a system with your own set of signing keys and just want to ensure that your secure boot configuration hasn't been compromised, you can simply seal to PCR 7 (which will contain the UEFI Secure Boot state as defined by the UEFI spec) and PCR 14 (which will contain the additional state used by Shim) and ignore all the others.
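
If you're going to seal to PCR 14 you'll need to be able to predict its value. Here's a rough sketch of what that might look like - assuming, and this is an assumption for illustration rather than a guarantee about the implementation, that the raw contents of each variable are SHA1-hashed and extended into an initially zeroed PCR in this order:

    import hashlib

    SHIM_LOCK_GUID = "605dab50-e046-4300-abb6-3dd810dd8b23"

    def extend(pcr, digest):
        # TPM 1.2-style extend: new PCR value = SHA1(old value || digest)
        return hashlib.sha1(pcr + digest).digest()

    pcr14 = b"\x00" * 20  # PCRs start out all zeroes
    for name in ("MokList", "MokSBState"):
        # Standard efivarfs location; assumes both variables exist,
        # which they may not on every system.
        path = "/sys/firmware/efi/efivars/%s-%s" % (name, SHIM_LOCK_GUID)
        with open(path, "rb") as f:
            # efivarfs prepends a 4-byte attribute word to the data
            data = f.read()[4:]
        pcr14 = extend(pcr14, hashlib.sha1(data).digest())

    print(pcr14.hex())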

The grub code is a little more complicated because there are more ways to get it to execute code. Right now I've gone for a fairly extreme implementation. On BIOS systems, the grub stage 1 and 2 will be measured into PCR 9[1]. That's the only BIOS-specific part of things. From then on, any grub modules that are loaded will also be measured into PCR 9. The full kernel image will be measured into PCR 10, and the full initramfs will be measured into PCR 11. The command line passed to the kernel is measured into PCR 12. Finally, each command executed by grub (including those in the config file) is measured into PCR 13.

That's quite a lot of measurement, and there are probably fairly reasonable circumstances under which you won't want to pay attention to all of those PCRs. But you've probably also noticed that several different things may be measured into the same PCR, and that makes it more difficult to figure out what's going on. Thankfully, the spec designers have a solution to this in the form of the TPM measurement log.

Rather than merely extending a PCR with a new hash, software can extend the measurement log at the same time. This is stored outside the TPM and so isn't directly cryptographically protected. In the simplest form, it contains a hash and some form of description of the event associated with that hash. If you replay those hashes you should end up with the same value that's in the TPM, so for attestation purposes you can perform that verification and then merely check that specific log values you care about are correct. This makes it possible to have a system perform an attestation to a remote server that contains a full list of the grub commands that it ran and for that server to make its attestation decision based on a subset of those.
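
To make that concrete, here's a minimal sketch of the replay step. I'm assuming a simplified log of (PCR index, SHA1 digest, description) tuples rather than the real binary TCG event log format, and the two PCR 13 entries are entirely made up:

    import hashlib

    def extend(pcr, digest):
        # TPM 1.2-style extend: new PCR value = SHA1(old value || digest)
        return hashlib.sha1(pcr + digest).digest()

    def replay(log):
        pcrs = {}
        for index, digest, description in log:
            pcrs[index] = extend(pcrs.get(index, b"\x00" * 20), digest)
        return pcrs

    log = [
        (13, hashlib.sha1(b"set default=0").digest(), "grub command"),
        (13, hashlib.sha1(b"linux /vmlinuz root=/dev/sda1 ro").digest(),
         "grub command"),
    ]

    # If this matches the quoted PCR 13 value, the log hasn't been
    # tampered with and the per-event descriptions can be trusted.
    print(replay(log)[13].hex())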

No promises as yet about PCR allocation being final or these patches ever going anywhere in their current form, but it seems reasonable to get them out there so people can play. Let me know if you end up using them!

[1] The code for this is derived from the old Trusted Grub patchset, by way of Sirrix AG's Trusted Grub 2 tree.
I have an Amazon Echo. I also have a LIFX Smart Bulb. The Echo can integrate with Philips Hue devices, letting you control your lights by voice. It has no integration with LIFX. Worse, the Echo developer program is fairly limited - while the device's built-in code supports communicating with devices on your local network, the third-party developer interface only allows you to make calls to remote sites[1]. It seemed like I was going to have to put up with either controlling my bedroom light by phone or actually getting out of bed to hit the switch.

Then I found this article describing the implementation of a bridge between the Echo and Belkin Wemo switches, cunningly called Fauxmo. The Echo already supports controlling Wemo switches, and the code in question simply implements enough of the Wemo API to convince the Echo that there's a bunch of Wemo switches on your network. When the Echo sends a command to them asking them to turn on or off, the code executes an arbitrary callback that integrates with whatever API you want.
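
To give a flavour of how little is needed, here's a minimal sketch of the discovery half of that trick in Python: answer the Echo's SSDP M-SEARCH with a response claiming a Wemo lives at some HTTP endpoint. The address, port and UUID here are made up, and the setup.xml and SOAP handling that Fauxmo also implements are left out:

    import socket
    import struct

    SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
    FAKE_LOCATION = "http://192.168.1.50:49153/setup.xml"  # hypothetical

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                         socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", SSDP_PORT))
    # Join the SSDP multicast group so we see the Echo's searches
    mreq = struct.pack("4sl", socket.inet_aton(SSDP_ADDR),
                       socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, addr = sock.recvfrom(1024)
        if b"M-SEARCH" in data and b"urn:Belkin:device:**" in data:
            response = (
                "HTTP/1.1 200 OK\r\n"
                "CACHE-CONTROL: max-age=86400\r\n"
                "LOCATION: {}\r\n"
                "ST: urn:Belkin:device:**\r\n"
                "USN: uuid:Socket-1_0-fake0001::urn:Belkin:device:**\r\n"
                "\r\n"
            ).format(FAKE_LOCATION)
            sock.sendto(response.encode(), addr)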

This seemed like a good starting point. There's a free implementation of the LIFX bulb API called Lazylights, and with a quick bit of hacking I could use the Echo to turn my bulb on or off. But the Echo's Hue support also allows dimming of lights, and that seemed like a nice feature to have. Tcpdump showed that asking the Echo to look for Hue devices resulted in UPnP discovery requests similar to the ones it sends when looking for Wemo devices, so extending the Fauxmo code seemed plausible. I signed up for the Philips developer program and then discovered that the terms and conditions explicitly forbade using any information on their site to implement any kind of Hue-compatible endpoint. So that was out. Thankfully enough people have written their own Hue code at various points that I could figure out enough of the protocol by searching Github instead, and now I have a branch of Fauxmo that supports searching for LIFX bulbs and presenting them as Hues[2].
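
Dimming itself is mostly a rescaling exercise. As a hypothetical sketch - set_lifx_state() here is a stand-in for whatever Lazylights call actually drives the bulb, not the real API - the bridge callback just maps the Hue-style 0-254 brightness the Echo sends onto the 16-bit brightness the LIFX protocol expects:

    def set_lifx_state(power, brightness):
        # Stub standing in for the real bulb-driving code
        print("bulb power={} brightness={}".format(power, brightness))

    def on_echo_command(on, hue_brightness=254):
        # Scale 0-254 (Hue) to 0-65535 (LIFX)
        lifx_brightness = int(hue_brightness * 65535 / 254)
        set_lifx_state(power=on, brightness=lifx_brightness)

    on_echo_command(True, 127)  # roughly 50% brightness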

Running this on a machine on my local network is enough to keep the Echo happy, and I can now dim my bedroom light in addition to turning it on or off. But it demonstrates a somewhat awkward situation. Right now vendors have no real incentive to offer any kind of compatibility with each other. Instead they're all trying to define their own ecosystems with incompatible protocols, aiming to force users to continue buying from them. Worse, they attempt to restrict developers from implementing any kind of compatibility layers. The inevitable outcome is going to be either stacks of discarded devices speaking abandoned protocols or a cottage industry of developers writing bridge code and trying to avoid DMCA takedowns.

The dystopian future we're heading towards isn't Gibsonian giant megacorporations engaging in physical warfare, it's one where buying a new toaster means replacing all your lightbulbs or discovering that the code making your home alarm system work is now considered a copyright infringement. Is there a market where I can invest in IP lawyers?

[1] It also requires an additional phrase at the beginning of a request to indicate which third-party app you want your query to go to, so making those requests is much clumsier than using a built-in app.
[2] I only have one bulb, so as yet I haven't added any support for groups.