Trusted Platform Modules are fairly unintelligent devices. They can do some crypto, but they don't have any ability to directly monitor the state of the system they're attached to. This is worked around by having each stage of the boot process "measure" state into registers (Platform Configuration Registers, or PCRs) in the TPM by taking the SHA1 of the next boot component and performing an extend operation. Extend works like this:

New PCR value = SHA1(current value||new hash)

ie, the TPM takes the current contents of the PCR (a 20-byte register), concatenates the new SHA1 to the end of that in order to obtain a 40-byte value, takes the SHA1 of this 40-byte value to obtain a 20-byte hash and sets the PCR value to this. This has a couple of interesting properties:
  • You can't directly modify the contents of the PCR. In order to obtain a specific value, you need to perform the same set of writes in the same order. If you replace the trusted bootloader with an untrusted one that runs arbitrary code, you can't rewrite the PCR to cover up that fact
  • The PCR value is predictable and can be reconstructed by replaying the same series of operations
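
To make those two properties concrete, here's a minimal Python simulation of a single SHA1 PCR. The component names are invented for illustration; a real TPM performs this internally:

import hashlib

def extend(pcr, measurement):
    # new PCR value = SHA1(current value || new hash)
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20  # PCRs start as 20 bytes of zeroes at reset
for component in (b"bootloader", b"kernel"):  # hypothetical boot components
    pcr = extend(pcr, hashlib.sha1(component).digest())
print(pcr.hex())
# Replaying the same measurements in the same order always reproduces this
# value; there's no way to set the register to an arbitrary value directly.
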
But how do we know what those operations were? We control the bootloader and the kernel and we know what extend operations they performed, so that much is easy. But the firmware itself will have performed some number of operations (the firmware itself is measured, as is the firmware configuration, and certain aspects of the boot process that aren't in our control may also be measured) and we may not be able to reconstruct those from scratch.

Thankfully we have more than just the final PCR data. The firmware provides an interface to log each extend operation, and you can read the event log in /sys/kernel/security/tpm0/binary_bios_measurements. You can pull information out of that log and use it to reconstruct the writes the firmware made. Merge those with the writes you performed and you should be able to reconstruct the final TPM state. Hurrah!

The problem is that a lot of what you want to measure into the TPM may vary between machines or change in response to configuration changes or system updates. If you measure every module that grub loads and grub changes the order in which it loads modules, you need to update your calculation of the end result. Thankfully there's a way around this - rather than making policy decisions based on the final TPM value, just use the final TPM value to ensure that the log is valid. If you extract each hash value from the log and simulate an extend operation, you should end up with the same value as is present in the TPM. If so, you know that the log is valid. At that point you can examine individual log entries without having to care about the order in which they occurred, which makes writing your policy significantly easier.

But there's another source of fragility. Imagine that you're measuring every command executed by grub (as is the case in the CoreOS grub). You want to ensure that no inappropriate commands have been run (such as ones that would allow you to modify the loaded kernel after it's been measured), but you also want to permit certain variations - for instance, you might have a primary root filesystem and a fallback root filesystem, and you're ok with either being passed as a kernel argument. One approach would be to write two lines of policy, but there's an even more flexible approach. If the bootloader logs the entire command into the event log, when replaying the log we can verify that the event description hashes to the value that was passed to the TPM. If it does, rather than testing against an explicit hash value, we can examine the string itself. If the event description matches a regular expression provided by the policy then we're good.
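
Putting the two ideas together, here's a rough sketch of what that validation might look like. The log contents, the policy patterns and the way the PCR value is obtained are all invented for the example:

import hashlib, re

def extend(pcr, digest):
    return hashlib.sha1(pcr + digest).digest()

# Hypothetical event log as recorded by the bootloader: each entry is the
# command that was run plus the hash that was extended into the TPM.
log = [(cmd, hashlib.sha1(cmd.encode()).digest())
       for cmd in ("linux /vmlinuz root=/dev/sda1 ro",
                   "initrd /initramfs.img")]

# Step 1: replay the log. In real use you'd compare the result against the
# PCR value read from the TPM; here we just compute it.
replayed = b"\x00" * 20
for cmd, digest in log:
    replayed = extend(replayed, digest)
print("expected PCR value:", replayed.hex())

# Step 2: verify each description hashes to the logged value, then apply the
# policy to the string itself rather than to a fixed hash.
policy = [re.compile(r"^linux /vmlinuz root=/dev/sd[ab]1 ro$"),
          re.compile(r"^initrd /initramfs\.img$")]
for cmd, digest in log:
    assert hashlib.sha1(cmd.encode()).digest() == digest, "log entry tampered"
    assert any(p.match(cmd) for p in policy), "command not permitted: " + cmd
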

This approach makes it possible to write TPM policies that are resistant to changes in ordering and permit fine-grained definition of acceptable values, and which can cleanly separate out local policy, generated policy values and values that are provided by the firmware. The split between machine-specific policy and OS policy allows for the static machine-specific policy to be merged with OS-provided policy, making remote attestation viable even over automated system upgrades.

We've integrated an implementation of this kind of policy into the TPM support code we'd like to integrate into Kubernetes, and CoreOS will soon be generating known-good hashes at image build time. The combination of these means that people using Distributed Trusted Computing under Tectonic will be able to validate the state of their systems with nothing more than a minimal machine-specific policy description.

The support code for all of this should also start making it into other distributions in the near future (the grub code is already in Fedora 24), so with luck we can define a cross-distribution policy format and make it straightforward to handle this in a consistent way even in heterogeneous operating system environments. Remote attestation is a powerful tool for ensuring that your systems are in a valid state, but the difficulty of policy management has been a significant factor in making it difficult for people to deploy in their data centres. Making it easier for people to shield themselves against low-level boot attacks is a big step forward in improving the security of distributed workloads and makes bare-metal hosting a much more viable proposition.
I'm in London for Kubecon right now, and the hotel I'm staying at has decided that light switches are unfashionable and replaced them with a series of Android tablets.
(Image: a tablet displaying the text "UK_bathroom isn't responding. Do you want to close it?")
One was embedded in the wall, but the two next to the bed had convenient-looking ethernet cables plugged into the wall. So.

I managed to borrow a couple of USB ethernet adapters, set up a transparent bridge (brctl addbr br0; brctl addif br0 enp0s20f0u1; brctl addif br0 enp0s20f0u2; ifconfig br0 up) and then stuck my laptop between the tablet and the wall. tcpdump -i br0 showed traffic, and wireshark revealed that it was Modbus over TCP. Modbus is a pretty trivial protocol, and notably has no authentication whatsoever. tcpdump showed that traffic was being sent to 172.16.207.14, and pymodbus let me start controlling my lights, turning the TV on and off and even making my curtains open and close. What fun!
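
For anyone curious, controlling things over Modbus/TCP really is about as simple as it sounds. Here's a minimal pymodbus sketch - the coil address is a made-up example, since I'm obviously not publishing the actual register map:

from pymodbus.client.sync import ModbusTcpClient  # pymodbus 1.x/2.x import path

# Modbus/TCP has no authentication: anyone who can reach the controller can
# write to it. The coil number below is purely illustrative.
client = ModbusTcpClient("172.16.207.14", port=502)
client.connect()
client.write_coil(1, True)    # e.g. turn a light on
client.write_coil(1, False)   # and off again
client.close()
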

And then I noticed something. My room number is 714. The IP address I was communicating with was 172.16.207.14. They wouldn't, would they?

I mean yes obviously they would.

It's basically as bad as it could be - once I'd figured out the gateway, I could access the control systems on every floor and query other rooms to figure out whether the lights were on or not, which strongly implies that I could control them as well. Jesus Molina talked about doing this kind of thing a couple of years ago, so it's not some kind of one-off - instead, hotels are happily deploying systems with no meaningful security, and the outcome of sending a constant stream of "Set room lights to full" and "Open curtain" commands at 3AM seems fairly predictable.

We're doomed.

(edited: this previously claimed I could only access systems on my own floor, but it turns out that each floor is a separate broadcast domain and I just needed to set a gateway to access the others)

(further edit: I'm deliberately not naming the hotel. They were receptive to my feedback and promised to do something about the issue.)
I maintain an application for bridging various non-Hue lighting systems to something that looks enough like a Hue that an Amazon Echo will still control them. One thing I hadn't really worked on was colour support, so I picked up some cheap bulbs and a bridge. The kit is badged as an iSuper iRainbow001, and it's terrible.

Things seemed promising enough at first, although the bulbs were alarmingly heavy (there's a significant chunk of heatsink built into them, which seems to get a lot warmer than I'd expect from something that claims a 7W power consumption). The app was a bit clunky, but eh - I wasn't planning on using it for long. I pressed the button on the bridge, launched the app and could control the bulbs. The first thing I noticed was that they had a separate "white" and "colour" mode. White mode was pretty bright, but colour mode massively less so - presumably the white LEDs are entirely independent of the RGB ones, and much higher intensity. Still, potentially useful as mood lighting.

Anyway. Next step was to start playing with the protocol, which meant finding the device on my network. I checked anything that had picked up a DHCP lease recently and nmapped them. The OS detection reported Linux, which wasn't hugely surprising - there was no GPL notice or source code included with the box, but I'm way past the point of shock at that. It also reported that there was a telnet daemon running. I connected and got a login prompt. And then I typed admin as the username and admin as the password and got a root prompt. So, there's that. The copy of Busybox included even came with tftp, so it was easy to get copies of tcpdump and strace on there to see what was up.

So. Next step. Protocol sniffing. I wanted to know how discovery worked, so I reset the device to factory settings and watched what happened. The app on my phone sent out a discovery packet on UDP port 18602 which looked like this:

INLAN:CLIP:23.21.212.48:CLPT:22345:MAC:02:00:00:00:00:00

The CLIP and CLPT fields refer to the cloud server that allows for management when you're not on the local network. The mac field contains an utterly fake address. If you send out a discovery packet and your mac hasn't been registered with the device, you get a denial back. If your mac has been (and by "mac" here I mean "utterly fake mac that's the same for all devices"), you get back a response including the device serial number and IP address. And if you just leave out the mac field entirely, you get back a response no matter whether your address is registered or not. So, that's a start. Once you've registered one copy of the app with the device, anything can communicate with it by just using the same fake mac in the discovery packets.
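
A rough sketch of that discovery exchange is below. I'm assuming a UDP broadcast here, and the response parsing is left out since the format isn't documented beyond "it contains the serial number and IP address":

import socket

DISCOVERY_PORT = 18602
# Omitting the MAC field entirely gets a response whether or not the sender
# has been registered with the bridge.
packet = b"INLAN:CLIP:23.21.212.48:CLPT:22345"

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.settimeout(2)
s.sendto(packet, ("255.255.255.255", DISCOVERY_PORT))
try:
    data, addr = s.recvfrom(1024)
    print("bridge at %s replied: %r" % (addr[0], data))
except socket.timeout:
    print("no response")
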

Onwards. The command format turns out to be simple. They start with ##, are followed by two ascii digits encoding a command, four ascii digits containing a bulb id, two ascii digits containing the number of following bytes and then the command data (in ascii). An example is:

##05010002ff

which decodes as command 5 (set white intensity) on bulb 1 with two bytes of data following, each of which is an f. This translates as "Set bulb 1 to full white intensity". I worked out the rest pretty quickly - command 03 sets the RGB colour of the bulb, 0A asks the bridge to search for new bulbs, 0B tells you which bulbs are available and their capabilities and 0E gives you the MAC addresses of the bulbs(‽). 0C crashes the server process, and 06 spews a bunch of garbage back at you and potentially crashes the bulb in a hilarious way that involves it flickering at about 15Hz. It turns out that 06 is actually the "Rename bulb" command, and if you send it less data than it's expecting something goes hilariously wrong in string parsing and everything is awful.
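
Here's a trivial sketch of building those command strings. The bulb id encoding just follows the example above (bulb 1 comes out as "0100"), and I'm not covering how the command actually gets delivered to the bridge:

def build_command(command, bulb_id, data):
    # "##" + 2 ascii digits of command + 4 ascii digits of bulb id
    # + 2 ascii digits giving the length of the data + the data itself
    return "##%02d%s%02d%s" % (command, bulb_id, len(data), data)

# Reproduces the example above: command 5 (set white intensity) on bulb 1,
# two bytes of data, both "f" - i.e. full white intensity.
assert build_command(5, "0100", "ff") == "##05010002ff"
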

Ok. Easy enough, but not hugely exciting. What about the remote protocol? This turns out to involve sending a login packet and then a wrapped command packet. The login has some length data, a header of "MQIsdp", a long bunch of ascii-encoded hex, a username and a password.

The username is w13733 and the password is gbigbi01. These are hardcoded in the app. The ascii-encoded hex can be replaced with 0s and the login succeeds anyway.

Ok. What about the wrapping on the command? The login never indicated which device we wanted to talk to, so presumably there's some sort of protection going on here oh wait. The command packet is a length, the serial number of the bridge and then a normal command packet. As long as you know the serial number of the device (which it tells you in response to a discovery packet, even if you're unauthenticated), you can use the cloud service to send arbitrary commands to the device (including the one that crashes the service). One of which involves the device then doing some kind of string parsing that doesn't appear to work that well. What could possibly go wrong?

Ok, so that seemed to be the entirety of the network protocol. Anything else to do? Some poking around on the bridge revealed (a) that it had an active wireless device and (b) a running DHCP server. They wouldn't, would they?

Yes. Yes, they would.

The devices all have a hardcoded SSID of "iRainbow", although they don't broadcast it. There's no security - anybody can associate. It'll then hand out an IP address. It's running telnetd on that interface as well. You can bounce through there to the owner's internal network.

So, in summary: it's a device that infringes my copyright, gives you root access in response to trivial credentials, has access control that depends entirely on nobody ever looking at the packets, is sufficiently poorly implemented that you can crash both it and the bulbs, has a cloud access protocol that has no security whatsoever and also acts as an easy mechanism for people to circumvent your network security. This may be the single worst device I've ever bought.

I called the manufacturer and they insisted that the device was designed in 2012, was no longer manufactured or supported, that they had no source code to give me and there would be no security fixes. The vendor wants me to pay for shipping back to them and reserves the right to deduct the cost of the original "free" shipping from the refund. Everything is awful, which is why I just ordered four more random bulbs to play with over the weekend.
The US Government is attempting to force Apple to build a signed image that can be flashed onto an iPhone used by one of the San Bernardino shooters. To their credit, Apple have pushed back against this - there's an explanation of why doing so would be dangerous here. But what's noteworthy is that Apple are arguing that they shouldn't do this, not that they can't do this - Apple (and many other phone manufacturers) have designed their phones such that they can replace the firmware with anything they want.

In order to prevent unauthorised firmware being installed on a device, Apple (and most other vendors) verify that any firmware updates are signed with a trusted key. The FBI don't have access to Apple's firmware signing keys, and as a result they're unable to simply replace the software themselves. That's why they're asking Apple to build a new firmware image, sign it with their private key and provide it to the FBI.

But what do we mean by "unauthorised firmware"? In this case, it's "not authorised by Apple" - Apple can sign whatever they want, and your iPhone will happily accept that update. As owner of the device, there's no way for you to reconfigure it such that it will accept your updates. And, perhaps worse, there's no way to reconfigure it such that it will reject Apple's.

I've previously written about how it's possible to reconfigure a subset of Android devices so that they trust your images and nobody else's. Any attempt to update the phone using the Google-provided image will fail - instead, they must be re-signed using the keys that were installed in the device. No matter what legal mechanisms were used against them, Google would be unable to produce a signed firmware image that could be installed on the device without your consent. The mechanism I proposed is complicated and annoying, but this could be integrated into the standard vendor update process such that you simply type a password to unlock a key for re-signing.

Why's this important? Sure, in this case the government is attempting to obtain the contents of a phone that belonged to an actual terrorist. But not all cases governments bring will be as legitimate, and not all manufacturers are Apple. Governments will request that manufacturers build new firmware that allows them to monitor the behaviour of activists. They'll attempt to obtain signing keys and use them directly to build backdoors that let them obtain messages sent to journalists. They'll be able to reflash phones to plant evidence to discredit opposition politicians.

We can't rely on Apple to fight every case - if it becomes politically or financially expedient for them to do so, they may well change their policy. And we can't rely on the US government only seeking to obtain this kind of backdoor in clear-cut cases - there's a risk that these techniques will be used against innocent people. The only way for Apple (and all other phone manufacturers) to protect users is to allow users to remove Apple's validation keys and substitute their own. If Apple genuinely value user privacy over Apple's control of a device, it shouldn't be a difficult decision to make.
I had no access to the internet for most of my childhood. Nobody in my area knew anything about programming. I learned a great deal from a number of CDs that included free software source code and archives of project mailing lists. When I got to university, I learned even more by being able to develop a Debian-based OS for use in our computer facilities. That gave me the experience and knowledge that I needed to become involved in Debian development, which in turn gave me the background required to be able to help Ubuntu become the first free software operating system to work out of the box on modern laptops. From there, I've been able to build my career around developing free software.

Ubuntu can be translated as "I am who I am because of who we all are". I am who I am because people made the choice to release their software under licenses that permitted examination, modification and redistribution. I am who I am because I was able to participate in communities that took advantage of those freedoms to produce new and better software. I am who I am because when my priorities differed from those of existing communities, it was still possible for me to benefit from their work and for them to benefit from mine.

Free software doesn't mean that the software is entirely free of restrictions. While a core aspect is the right to distribute modified versions of code, it has never been fundamental to free software that you be able to do so while still claiming that the code is the original version. Various approaches have been taken to make it possible for users to distinguish modified versions, ranging from simply including license terms that require modified versions be marked as such, to licenses that require that you change the name of the package if you modify it. However, what's probably the most effective approach has been to apply trademark law to the problem. Mozilla's trademark policy is an example of this - if you modify the code in ways that aren't approved by Mozilla, you aren't entitled to use the trademarks.

A requirement that you avoid use of trademarks in an infringing way is reasonable. Mozilla products include support for building with branding disabled, which makes it very straightforward for a user to build a modified version of Firefox that can be redistributed without any trademark issues. Red Hat have a similar policy for Fedora and RHEL[1] - you simply replace the packages that contain the branding and you're done.

Canonical's IP policy around Ubuntu is fundamentally different. While Mozilla make it clear that you simply no longer have a right to use the trademarks under trademark law, Canonical appear to require that you remove all trademarks entirely even if using them wouldn't be a violation of trademark law. While Mozilla restrict the redistribution of modified binaries that include their trademarks, Canonical insist that you rebuild everything even if the package doesn't contain any trademarks. And while Mozilla give you a single build option that creates binaries that conform with their trademark requirements, Canonical will refuse to tell you what you have to do.

When asked about this at SCALE earlier this year, Mark Shuttleworth claimed that Ubuntu's policy was consistent with that of other projects. This is inaccurate. Nobody else requires that you rebuild every package before you can redistribute it in a modified distribution - such a restriction is a violation of freedom 2 of the Free Software Definition, and as a result the binary distributions of Ubuntu are not free software. Nobody else refuses to discuss whether you're required to remove non-infringing trademarks in order to be able to redistribute. Nobody else responds to offers to make it easier for users to produce non-infringing derivatives with a flat refusal.

Mark claims that I'm only raising this issue because I work for a competitor and wish to harm Canonical. Nothing could be further from the truth. I began discussing this before working for my current employers - my previous employers had no meaningful market overlap with Canonical at all. The reason I care is because I care about free software. I care about people being able to derive new and interesting things from existing code. I care about a small team of people being able to take Ubuntu and make something better in the same way that Ubuntu did with Debian. I care about ensuring that users receive the freedom to do this without having to jump through a significant number of hoops in the process. Ubuntu has been a spectacularly successful vehicle for getting free software into the hands of users. Mark's generosity in funding this experiment has undoubtedly made the world a better place. Canonical employs a large number of talented developers writing high quality software, many of whom I'm fortunate enough to be able to call friends. And Canonical are squandering that by restricting the rights of their users and alienating the free software community.

I want others to be who they are because of my work and the work of all the others like me. Anything that makes that more difficult saddens me, and so I do what I can to fix it. I criticise Canonical's policies in the hope that we, as a community, can convince Canonical to agree that this kind of artificial barrier to modification hurts us more than it helps them. In many ways, Canonical remain one of our best hopes for broadening the reach of free software, and this is why it's unfortunate that they do so in a way that makes it more difficult for people to have the same experiences that I did.

[1] While it's easy to turn a trademark infringing version of RHEL into a non-infringing one, Red Hat don't provide publicly available binary packages for RHEL. If you get hold of them somehow you're entitled to redistribute them freely, but Red Hat's subscriber agreement indicates that if you do this as a Red Hat customer you will lose access to further binary updates - a provision that I find utterly repugnant. Its inclusion reduces my respect for Red Hat and my enthusiasm for working with them, and given the official Red Hat support for CentOS it appears to make no sense whatsoever. Red Hat should drop it.
The Linux Foundation is an industry organisation dedicated to promoting, protecting and standardising Linux and open source software[1]. The majority of its board is chosen by the member companies - 10 by platinum members (platinum membership costs $500,000 a year), 3 by gold members (gold membership costs $100,000 a year) and 1 by silver members (silver membership costs between $5,000 and $20,000 a year, depending on company size). Up until recently individual members ($99 a year) could also elect two board members, allowing for community perspectives to be represented at the board level.

As of last Friday, this is no longer true. The by-laws were amended to drop the clause that permitted individual members to elect any directors. Section 3.3(a) now says that no affiliate members may be involved in the election of directors, and section 5.3(d) still permits at-large directors but does not require them[2]. The old version of the bylaws is here - the only non-whitespace differences are in sections 3.3(a) and 5.3(d).

These changes all happened shortly after Karen Sandler announced that she planned to stand for the Linux Foundation board during a presentation last September. A short time later, the "Individual membership" program was quietly renamed to the "Individual supporter" program and the promised benefit of being allowed to stand for and participate in board elections was dropped (compare the old page to the new one). Karen is the executive director of the Software Freedom Conservancy, an organisation involved in the vitally important work of GPL enforcement. The Linux Foundation has historically been less than enthusiastic about GPL enforcement, and the SFC is funding a lawsuit against one of the Foundation's members for violating the terms of the GPL. The timing may be coincidental, but it certainly looks like the Linux Foundation was willing to throw out any semblance of community representation just to ensure that there was no risk of someone in favour of GPL enforcement ending up on their board.

Much of the code in Linux is written by employees paid to do this work, but significant parts of both Linux and the huge range of software that it depends on are written by community members who now have no representation in the Linux Foundation. Ignoring them makes it look like the Linux Foundation is interested only in promoting, protecting and standardising Linux and open source software if doing so benefits their corporate membership rather than the community as a whole. This isn't a positive step.

[1] Article II of the bylaws
[2] Other than in the case of the TAB representative, an individual chosen by a board elected via in-person voting at a conference
I gave a presentation at 32C3 this week. One of the things I said was "If any of you are doing seriously confidential work on Apple laptops, stop. For the love of god, please stop." I didn't really have time to go into the details of that at the time, but right now I'm sitting on a plane with a ridiculous sinus headache and the pseudoephedrine hasn't kicked in yet so here we go.

The basic premise of my presentation was that it's very difficult to determine whether your system is in a trustworthy state before you start typing your secrets (such as your disk decryption passphrase) into it. If it's easy for an attacker to modify your system such that it's not trustworthy at the point where you type in a password, it's easy for an attacker to obtain your password. So, if you actually care about your disk encryption being resistant to anybody who can get temporary physical possession of your laptop, you care about it being difficult for someone to compromise your early boot process without you noticing.

There are two approaches to this. The first is UEFI Secure Boot. If you cryptographically verify each component of the boot process, it's not possible for an attacker to compromise the boot process. The second is a measured boot. If you measure each component of the boot process into the TPM, and if you use these measurements to control access to a secret that allows the laptop to prove that it's trustworthy (such as Joanna Rutkowska's Anti Evil Maid or my variant on the theme), an attacker can compromise the boot process but you'll know that they've done so before you start typing.

So, how do current operating systems stack up here?

Windows: Supports UEFI Secure Boot in a meaningful way. Supports measured boot, but provides no mechanism for the system to attest that it hasn't been compromised. Good, but not perfect.

Linux: Supports UEFI Secure Boot[1], but doesn't verify signatures on the initrd[2]. This means that attacks such as Evil Abigail are still possible. Measured boot isn't in a good state, but it's possible to incorporate with a bunch of manual work. Vulnerable out of the box, but can be configured to be better than Windows.

Apple: Ha. Snare talked about attacking the Apple boot process in 2012 - basically everything he described then is still possible. Apple recently hired the people behind Legbacore, so there's hope - but right now all shipping Apple hardware has no firmware support for UEFI Secure Boot and no TPM. This makes it impossible to provide any kind of boot attestation, and there's no real way you can verify that your system hasn't been compromised.

Now, to be fair, there are attacks that even Windows and properly configured Linux will still be vulnerable to. Firmware defects that permit modification of System Management Mode code can still be used to circumvent these protections, and the Management Engine is in a position to just do whatever it wants and fuck all of you. But that's really not an excuse to just ignore everything else. Improving the current state of boot security makes it more difficult for adversaries to compromise a system, and if we ever do get to the point of systems which aren't running any hidden proprietary code we'll still need this functionality. It's worth doing, and it's worth doing now.

[1] Well, except Ubuntu's signed bootloader will happily boot unsigned kernels, which kind of defeats the entire point of the exercise
[2] Initrds are built on the local machine, so we can't just ship signed images
The Software Freedom Conservancy is currently running a fundraising program in an attempt to raise enough money to continue funding GPL compliance work. If they don't gain enough supporters, the majority of their compliance work will cease. And, since SFC are one of the only groups currently actively involved in performing GPL compliance work, that basically means that there will be nobody working to ensure that users have the rights that copyright holders chose to give them.

Why does this matter? More people are using GPLed software than at any point in history. Hundreds of millions of Android devices were sold this year, all including GPLed code. An unknowably vast number of IoT devices run Linux. Cameras, Blu-ray players, TVs, light switches, coffee machines. Software running in places that we would never have previously imagined. And much of it abandoned immediately after shipping, gently rotting, exposing an increasingly large number of widely known security vulnerabilities to an increasingly hostile internet. Devices that become useless because of protocol updates. Toys that have a "Guaranteed to work until" date, and then suddenly Barbie goes dead and you're forced to have an unexpected conversation about API mortality with your 5-year-old child.

We can't fix all of these things. Many of these devices have important functionality locked inside proprietary components, released under licenses that grant no permission for people to examine or improve them. But there are many that we can. Millions of devices are running modern and secure versions of Android despite being abandoned by their manufacturers, purely because the vendor released appropriate source code and a community grew up to maintain it. But this can only happen when the vendor plays by the rules.

Vendors who don't release their code remove that freedom from their users, and the weapons users have to fight against that are limited. Most users hold no copyright over the software in the device and are unable to take direct action themselves. A vendor's failure to comply dooms users to having to choose between buying a new device in 12 months and no longer receiving security updates. When yet more examples of vendor-supplied malware are discovered, it's more difficult to produce new builds without them. The utility of the devices that the user purchased is curtailed significantly.

The Software Freedom Conservancy is one of the only organisations actively fighting against this, and if they're forced to give up their enforcement work the pressure on vendors to comply with the GPL will be reduced even further. If we want users to control their devices, to be able to obtain security updates even after the vendor has given up, we need to keep that pressure up. Supporting the SFC's work has a real impact on the security of the internet and people's lives. Please consider giving them money.
Eric Raymond, author of The Cathedral and the Bazaar (an important work describing the effectiveness of open collaboration and development), recently wrote a piece calling for "Social Justice Warriors" to be ejected from the hacker community. The primary thrust of his argument is that by calling for a removal of the "cult of meritocracy", these SJWs are attacking the central aspect of hacker culture - that the quality of code is all that matters.

This argument is simply wrong.

Eric's been involved in software development for a long time. In that time he's seen a number of significant changes. We've gone from computers being the playthings of the privileged few to being nearly ubiquitous. We've moved from the internet being something you found in universities to something you carry around in your pocket. You can now own a computer whose CPU executes only free software from the moment you press the power button. And, as Eric wrote almost 20 years ago, we've identified that the "Bazaar" model of open collaborative development works better than the "Cathedral" model of closed centralised development.

These are huge shifts in how computers are used, how available they are, how important they are in people's lives, and, as a consequence, how we develop software. It's not a surprise that the rise of Linux and the victory of the bazaar model coincided with internet access becoming more widely available. As the potential pool of developers grew larger, development methods had to be altered. It was no longer possible to insist that somebody spend a significant period of time winning the trust of the core developers before being permitted to give feedback on code. Communities had to change in order to accept these offers of work, and the communities were better for that change.

The increasing ubiquity of computing has had another outcome. People are much more aware of the role of computing in their lives. They are more likely to understand how proprietary software can restrict them, how not having the freedom to share software can impair people's lives, how not being able to involve themselves in software development means software doesn't meet their needs. The largest triumph of free software has not been amongst people from a traditional software development background - it's been the fact that we've grown our communities to include people from a huge number of different walks of life. Free software has helped bring computing to under-served populations all over the world. It's aided circumvention of censorship. It's inspired people who would never have considered software development as something they could be involved in to develop entire careers in the field. We will not win because we are better developers. We will win because our software meets the needs of many more people, needs that the proprietary software industry either cannot or will not satisfy. We will win because our software is shaped not only by people who have a university degree and a six-figure salary in San Francisco, but also by people whose native language is spoken by so few people that proprietary operating system vendors won't support it, people who live in a heavily censored regime and rely on free software for free communication, and people who rely on free software because they can't otherwise afford the tools they would need to participate in development.

In other words, we will win because free software is accessible to more of society than proprietary software. And for that to be true, it must be possible for our communities to be accessible to anybody who can contribute, regardless of their background.

Up until this point, I don't think I've made any controversial claims. In fact, I suspect that Eric would agree. He would argue that because hacker culture defines itself through the quality of contributions, the background of the contributor is irrelevant. On the internet, nobody knows that you're contributing from a basement in an active warzone, or from a refuge shelter after escaping an abusive relationship, or with the aid of assistive technology. If you can write the code, you can participate.

Of course, this kind of viewpoint is overly naive. Humans are wonderful at noticing indications of "otherness". Eric even wrote about his struggle to stop having a viscerally negative reaction to people of a particular race. This happened within the past few years, so before then we can assume that he was less aware of the issue. If Eric received a patch from someone whose name indicated membership of this group, would there have been part of his subconscious that reacted negatively? Would he have rationalised this into a more critical analysis of the patch, increasing the probability of rejection? We don't know, and it's unlikely that Eric does either.

Hacker culture has long been concerned with good design, and a core concept of good design is that code should fail safe - ie, if something unexpected happens or an assumption turns out to be untrue, the desirable outcome is the one that does least harm. A command that fails to receive a filename as an argument shouldn't assume that it should modify all files. A network transfer that fails a checksum shouldn't be permitted to overwrite the existing data. An authentication server that receives an unexpected error shouldn't default to granting access. And a development process that may be subject to unconscious bias should have processes in place that make it less likely that said bias will result in the rejection of useful contributions.

When people criticise meritocracy, they're not criticising the concept of treating contributions based on their merit. They're criticising the idea that humans are sufficiently self-aware that they will be able to identify and reject every subconscious prejudice that will affect their treatment of others. It's not a criticism of a desirable goal, it's a criticism of a flawed implementation. There's evidence that organisations that claim to embody meritocratic principles are more likely to reward men than women even when everything else is equal. The "cult of meritocracy" isn't the belief that meritocracy is a good thing, it's the belief that a project founded on meritocracy will automatically be free of bias.

Projects like the Contributor Covenant that Eric finds so objectionable exist to help create processes that (at least partially) compensate for our flaws. Review of our processes to determine whether we're making poor social decisions is just as important as review of our code to determine whether we're making poor technical decisions. Just as the bazaar overtook the cathedral by making it easier for developers to be involved, inclusive communities will overtake "pure meritocracies" because, in the long run, these communities will produce better output - not just in terms of the quality of the code, but also in terms of the ability of the project to meet the needs of a wider range of people.

The fight between the cathedral and the bazaar came from people who were outside the cathedral. Those fighting against the assumption that meritocracies work may be outside what Eric considers to be hacker culture, but they're already part of our communities, already making contributions to our projects, already bringing free software to more people than ever before. This time it's Eric building a cathedral and decrying the decadent hordes in their bazaar, Eric who's failed to notice the shift in the culture that surrounds him. And, like those who continued building their cathedrals in the 90s, it's Eric who's now irrelevant to hacker culture.

(Edited to add: for two quite different perspectives on why Eric's wrong, see Tim's and Coraline's posts)
I've previously written about Canonical's obnoxious IP policy and how Mark Shuttleworth admits it's deliberately vague. After spending some time discussing specific examples with Canonical, I've been explicitly told that while Canonical will gladly give me a cost-free trademark license permitting me to redistribute unmodified Ubuntu binaries, they will not tell me what "Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries" actually means.

Why does this matter? The free software definition requires that you be able to redistribute software to other people in either unmodified or modified form without needing to ask for permission first. This makes it clear that Ubuntu itself isn't free software - distributing the individual binary packages without permission is forbidden, even if they wouldn't contain any infringing trademarks[1]. This is obnoxious, but not inherently toxic. The source packages for Ubuntu could still be free software, making it fairly straightforward to build a free software equivalent.

Unfortunately, while true in theory, this isn't true in practice. The issue here is the apparently simple phrase "you must remove and replace the Trademarks and will need to recompile the source code". "Trademarks" is defined later as being the words "Ubuntu", "Kubuntu", "Juju", "Landscape", "Edubuntu" and "Xubuntu" in either textual or logo form. The naive interpretation of this is that you have to remove trademarks where they'd be infringing - for instance, shipping the Ubuntu bootsplash as part of a modified product would almost certainly be clear trademark infringement, so you shouldn't do that. But that's not what the policy actually says. It insists that all trademarks be removed, whether they would embody an infringement or not. If a README says "To build this software under Ubuntu, install the following packages", a literal reading of Canonical's policy would require you to remove or replace the word "Ubuntu" even though failing to do so wouldn't be a trademark infringement. If an @ubuntu.com email address is present in a changelog, you'd have to change it. You wouldn't be able to ship the juju-core package without renaming it and the application within. If this is what the policy means, it's so impractical to rebuild Ubuntu that it's not free software in any meaningful way.

This seems like a pretty ludicrous interpretation, but it's one that Canonical refuse to explicitly rule out. Compare this to Red Hat's requirements around Fedora - if you replace the fedora-logos, fedora-release and fedora-release-notes packages with your own content, you're good. A policy like this satisfies the concerns that Dustin raised over people misrepresenting their products, but still makes it easy for users to distribute modified code to other users. There's nothing whatsoever stopping Canonical from adopting a similarly unambiguous policy.

Mark has repeatedly asserted that attempts to raise this issue are mere FUD, but he won't answer you if you ask him direct questions about this policy and will insist that it's necessary to protect Ubuntu's brand. The reality is that if Debian had had an identical policy in 2004, Ubuntu wouldn't exist. The effort required to strip all Debian trademarks from the source packages would have been immense[2], and this would have had to be repeated for every release. While this policy is in place, nobody's going to be able to take Ubuntu and build something better. It's grotesquely hypocritical, especially when the Ubuntu website still talks about their belief that people should be able to distribute modifications without licensing fees.

All that's required for Canonical to deal with this problem is to follow Fedora's lead and isolate their trademarks in a small set of packages, then tell users that those packages must be replaced if distributing a modified version of Ubuntu. If they're serious about this being a branding issue, they'll do it. And if I'm right that the policy is deliberately obfuscated so Canonical can encourage people to buy licenses, they won't. It's easy for them to prove me wrong, and I'll be delighted if they do. Let's see what happens.

[1] The policy is quite clear on this. If you want to distribute something other than an unmodified Ubuntu image, you have two choices:
  1. Gain approval or certification from Canonical
  2. Remove all trademarks and recompile the source code
Note that option 2 requires you to rebuild even if there are no trademarks to remove.

[2] Especially when every source package contains a directory called "debian"…
The Washington Post published an article today which describes the ongoing tension between the security community and Linux kernel developers. This has been roundly denounced as FUD, with Rob Graham going so far as to claim that nobody ever attacks the kernel.

Unfortunately he's entirely and demonstrably wrong. It's not FUD, and the state of security in the kernel is currently far short of where it should be.

An example. Recent versions of Android use SELinux to confine applications. Even if you have full control over an application running on Android, the SELinux rules make it very difficult to do anything especially user-hostile. Hacking Team, the GPL-violating Italian company who sells surveillance software to human rights abusers, found that this impeded their ability to drop their spyware onto targets' devices. So they took advantage of the fact that many Android devices shipped a kernel with a flawed copy_from_user() implementation that allowed them to copy arbitrary userspace data over arbitrary kernel code, thus allowing them to disable SELinux.

If we could trust userspace applications, we wouldn't need SELinux. But we assume that userspace code may be buggy, misconfigured or actively hostile, and we use technologies such as SELinux or AppArmor to restrict its behaviour. There's simply too much userspace code for us to guarantee that it's all correct, so we do our best to prevent it from doing harm anyway.

This is significantly less true in the kernel. The model up until now has largely been "Fix security bugs as we find them", an approach that fails on two levels:

1) Once we find them and fix them, there's still a window between the fixed version being available and it actually being deployed
2) The forces of good may not be the first ones to find them

This reactive approach is fine for a world where it's possible to push out software updates without having to perform extensive testing first, a world where the only people hunting for interesting kernel vulnerabilities are nice people. This isn't that world, and this approach isn't fine.

Just as features like SELinux allow us to reduce the harm that can occur if a new userspace vulnerability is found, we can add features to the kernel that make it more difficult (or impossible) for attackers to turn a kernel bug into an exploitable vulnerability. The number of people using Linux systems is increasing every day, and many of these users depend on the security of these systems in critical ways. It's vital that we do what we can to avoid their trust being misplaced.

Many useful mitigation features already exist in the Grsecurity patchset, but a combination of technical disagreements around certain features, personality conflicts and an apparent lack of enthusiasm on the side of upstream kernel developers has resulted in almost none of it landing in the kernels that most people use. Kees Cook has proposed a new project to start making a more concerted effort to migrate components of Grsecurity to upstream. If you rely on the kernel being a secure component, either because you ship a product based on it or because you use it yourself, you should probably be doing what you can to support this.

Microsoft received entirely justifiable criticism for the terrible state of security on their platform. They responded by introducing cutting-edge security features across the OS, including the kernel. Accusing anyone who says we need to do the same of spreading FUD is risking free software being sidelined in favour of proprietary software providing more real-world security. That doesn't seem like a good outcome.
Reaction to Sarah's post about leaving the kernel community was a mixture of terrible and touching, but it's still one of those things that almost certainly won't end up making any kind of significant difference. Linus has made it pretty clear that he's fine with the way he behaves, and nobody's going to depose him. That's unfortunate, because earlier today I was sitting in a presentation at Linuxcon and remembering how much I love the technical side of kernel development. "Remembering" is a deliberate choice of word - it's been increasingly difficult to remember that, because instead I remember having to deal with interminable arguments over the naming of an interface because Linus has an undying hatred of BSD securelevel, or having my name forever associated with the deepthroating of Microsoft because Linus couldn't be bothered asking questions about the reasoning behind a design before trashing it.

In the end it's a mixture of just being tired of dealing with the crap associated with Linux development and realising that by continuing to put up with it I'm tacitly encouraging its continuation, but I can't be bothered any more. And, thanks to the magic of free software, it turns out that I can avoid putting up with the bullshit in the kernel community and get to work on the things I'm interested in doing. So here's a kernel tree with patches that implement a BSD-style securelevel interface. Over time it'll pick up some of the power management code I'm still working on, and we'll see where it goes from there. But, until there's a significant shift in community norms on LKML, I'll only be there when I'm being paid to be there. And that's improved my mood immeasurably.

(Edited to add a context link for the "deepthroating of Microsoft" reference)
When I wrote about TPM attestation via 2FA, I mentioned that you needed a bootloader that actually performed measurement. I've now written some patches for Shim and Grub that do so.

The Shim code does a couple of things. The obvious one is to measure the second-stage bootloader into PCR 9. The perhaps less expected one is to measure the contents of the MokList and MokSBState UEFI variables into PCR 14. This means that if you're happy simply running a system with your own set of signing keys and just want to ensure that your secure boot configuration hasn't been compromised, you can simply seal to PCR 7 (which will contain the UEFI Secure Boot state as defined by the UEFI spec) and PCR 14 (which will contain the additional state used by Shim) and ignore all the others.

The grub code is a little more complicated because there are more ways to get it to execute code. Right now I've gone for a fairly extreme implementation. On BIOS systems, the grub stage 1 and 2 will be measured into PCR 9[1]. That's the only BIOS-specific part of things. From then on, any grub modules that are loaded will also be measured into PCR 9. The full kernel image will be measured into PCR 10, and the full initramfs will be measured into PCR 11. The command line passed to the kernel is in PCR 12. Finally, each command executed by grub (including those in the config file) is measured into PCR 13.

That's quite a lot of measurement, and there are probably fairly reasonable circumstances under which you won't want to pay attention to all of those PCRs. But you've probably also noticed that several different things may be measured into the same PCR, and that makes it more difficult to figure out what's going on. Thankfully, the spec designers have a solution to this in the form of the TPM measurement log.

Rather than merely extending a PCR with a new hash, software can extend the measurement log at the same time. This is stored outside the TPM and so isn't directly cryptographically protected. In the simplest form, it contains a hash and some form of description of the event associated with that hash. If you replay those hashes you should end up with the same value that's in the TPM, so for attestation purposes you can perform that verification and then merely check that specific log values you care about are correct. This makes it possible to have a system perform an attestation to a remote server that contains a full list of the grub commands that it ran and for that server to make its attestation decision based on a subset of those.

No promises as yet about PCR allocation being final or these patches ever going anywhere in their current form, but it seems reasonable to get them out there so people can play. Let me know if you end up using them!

[1] The code for this is derived from the old Trusted Grub patchset, by way of Sirrix AG's Trusted Grub 2 tree.
I have an Amazon Echo. I also have a LIFX Smart Bulb. The Echo can integrate with Philips Hue devices, letting you control your lights by voice. It has no integration with LIFX. Worse, the Echo developer program is fairly limited - while the device's built in code supports communicating with devices on your local network, the third party developer interface only allows you to make calls to remote sites[1]. It seemed like I was going to have to put up with either controlling my bedroom light by phone or actually getting out of bed to hit the switch.

Then I found this article describing the implementation of a bridge between the Echo and Belkin Wemo switches, cunningly called Fauxmo. The Echo already supports controlling Wemo switches, and the code in question simply implements enough of the Wemo API to convince the Echo that there's a bunch of Wemo switches on your network. When the Echo sends a command to them asking them to turn on or off, the code executes an arbitrary callback that integrates with whatever API you want.
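
The discovery half of that trick is straightforward SSDP. Here's a rough sketch of the idea - the port, UUID and header set are simplified assumptions rather than what Fauxmo actually sends, and the HTTP side that serves the Wemo-style setup.xml isn't shown:

import socket, struct

SSDP_GROUP, SSDP_PORT = "239.255.255.250", 1900
LOCATION = "http://192.168.1.10:12340/setup.xml"  # hypothetical local endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SSDP_PORT))
mreq = struct.pack("4sl", socket.inet_aton(SSDP_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(1024)
    # When the Echo searches for Belkin devices, claim to be one and point it
    # at our own HTTP server, which then answers enough of the Wemo API.
    if b"M-SEARCH" in data and b"Belkin" in data:
        reply = ("HTTP/1.1 200 OK\r\n"
                 "CACHE-CONTROL: max-age=86400\r\n"
                 "ST: urn:Belkin:device:**\r\n"
                 "USN: uuid:Socket-1_0-fakeswitch::urn:Belkin:device:**\r\n"
                 "LOCATION: %s\r\n\r\n" % LOCATION)
        sock.sendto(reply.encode(), addr)
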

This seemed like a good starting point. There's a free implementation of the LIFX bulb API called Lazylights, and with a quick bit of hacking I could use the Echo to turn my bulb on or off. But the Echo's Hue support also allows dimming of lights, and that seemed like a nice feature to have. Tcpdump showed that asking the Echo to look for Hue devices resulted in UPnP discovery requests similar to the ones it sends when looking for Wemo devices, so extending the Fauxmo code seemed plausible. I signed up for the Philips developer program and then discovered that the terms and conditions explicitly forbade using any information on their site to implement any kind of Hue-compatible endpoint. So that was out. Thankfully enough people have written their own Hue code at various points that I could figure out enough of the protocol by searching Github instead, and now I have a branch of Fauxmo that supports searching for LIFX bulbs and presenting them as Hues[2].

Running this on a machine on my local network is enough to keep the Echo happy, and I can now dim my bedroom light in addition to turning it on or off. But it demonstrates a somewhat awkward situation. Right now vendors have no real incentive to offer any kind of compatibility with each other. Instead they're all trying to define their own ecosystems with their own incompatible protocols with the aim of forcing users to continue buying from them. Worse, they attempt to restrict developers from implementing any kind of compatibility layers. The inevitable outcome is going to be either stacks of discarded devices speaking abandoned protocols or a cottage industry of developers writing bridge code and trying to avoid DMCA takedowns.

The dystopian future we're heading towards isn't Gibsonian giant megacorporations engaging in physical warfare, it's one where buying a new toaster means replacing all your lightbulbs or discovering that the code making your home alarm system work is now considered a copyright infringement. Is there a market where I can invest in IP lawyers?

[1] It also requires an additional phrase at the beginning of a request to indicate which third party app you want your query to go to, so it's much more clumsy to make those requests compared to using a built-in app.
[2] I only have one bulb, so as yet I haven't added any support for groups.
The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.
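To illustrate possession, here's a minimal ctypes sketch - it assumes libkeyutils is installed, and the key description and payload are made up for the example. A key added to the process keyring is possessed by the calling process (and its children), so an arbitrary process running as the same user can't simply request it by name:

import ctypes
import os

libkeyutils = ctypes.CDLL("libkeyutils.so.1", use_errno=True)
libkeyutils.add_key.argtypes = [ctypes.c_char_p, ctypes.c_char_p,
                                ctypes.c_char_p, ctypes.c_size_t, ctypes.c_int32]
libkeyutils.add_key.restype = ctypes.c_int32

# Special keyring IDs from <keyutils.h>
KEY_SPEC_THREAD_KEYRING = -1
KEY_SPEC_PROCESS_KEYRING = -2
KEY_SPEC_SESSION_KEYRING = -3

def add_user_key(description, payload, keyring):
    key_id = libkeyutils.add_key(b"user", description, payload, len(payload), keyring)
    if key_id < 0:
        errno = ctypes.get_errno()
        raise OSError(errno, os.strerror(errno))
    return key_id

# This key is only reachable through keyrings this process possesses, not by
# other processes owned by the same user.
key_id = add_user_key(b"demo:decrypted-material", b"not-really-a-secret",
                      KEY_SPEC_PROCESS_KEYRING)
print("added key", key_id)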

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle it. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user full access (including write) to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.
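To make those numbers less magic, here's how the masks decompose (a sketch based on the bit definitions in keyctl(2); the helper function is just for illustration). The 32-bit value packs one byte of permission bits per subject, possessor in the most significant byte, then user, group and other:

# Per-subject permission bits, as documented in keyctl(2):
VIEW, READ, WRITE, SEARCH, LINK, SETATTR = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20
ALL = VIEW | READ | WRITE | SEARCH | LINK | SETATTR      # 0x3f

def key_perm(possessor=0, user=0, group=0, other=0):
    # One byte per subject: possessor, user, group, other (most to least significant)
    return (possessor << 24) | (user << 16) | (group << 8) | other

assert key_perm(possessor=ALL, user=ALL) == 0x3f3f0000   # the keyring default above
assert key_perm(possessor=ALL, user=VIEW) == 0x3f010000  # the default for a new key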

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.
I bumped into Mark Shuttleworth today at Linuxcon and we had a brief conversation about Canonical's IP policy. The short summary:
  • Canonical assert that the act of compilation creates copyright over the binaries, and you may not redistribute those binaries unless (a) the license prevents Canonical from restricting redistribution (eg, the GPL), or (b) you follow the terms of their IP policy. This means that, no matter what Dustin's blogpost says, Canonical's position is that you must ask for permission before distributing any custom container images that contain Ubuntu binaries, even if you use no Ubuntu trademarks in the process. Doing so without their permission is an infringement of their copyright.
  • Canonical have no intention of clarifying their policy, because Canonical benefit from companies being legally uncertain as to whether they have permission to do something or not.
  • Mark justifies maintaining this uncertainty by drawing an analogy between it and the perceived uncertainties that exist around certain aspects of the GPL. I disagree with this analogy pretty strongly. One of the main reasons for the creation of GPLv3 was to deal with some more ambiguous aspects of GPLv2 (such as what actually happened after license termination and how patents interacted with the GPL). The FSF publish a large FAQ intended to provide further clarity. The major ambiguity is in what a derivative work actually is, which is something the FSF can't answer absolutely (that's going to be up to courts) but will give its opinion on when asked. The uncertainties in Canonical's IP policy aren't a result of a lack of legal clarity - they're a result of Canonical's refusal to answer questions.

The even shorter summary: Canonical won't clarify their IP policy because they believe they can make more money if they don't.

Why do I keep talking about this? Because Canonical are deliberately making it difficult to create derivative works, and that's one of the core tenets of the definition of free software. Their IP policy is fundamentally incompatible with our community norms, and that's something we should care about rather than ignoring.
After less than a week of complaints, the TODO group have decided to pause development of their code of conduct. This seems to have been triggered by the public response to the changes I talked about here, which TODO appear to have been completely unprepared for.

While disappointing in a bunch of ways, this is probably the correct decision. TODO stumbled into this space with a poor understanding of the problems that they were trying to solve. Nikki Murray pointed out that the initial draft lacked several of the key components that help ensure less privileged groups can feel that their concerns are taken seriously. This was mostly rectified last week, but nobody involved appeared to be willing to stand behind those changes in a convincing way. This wasn't helped by almost all of this appearing to land on Github's plate, with the rest of the TODO group largely missing in action[1]. Where were Google in this? Yahoo? Facebook? Left facing an angry mob with nobody willing to make explicit statements of support, it's unsurprising that Github would try to back away from the situation.

But that doesn't remove their blame for being in the situation in the first place. The statement claims
"We are consulting with stakeholders, community leaders, and legal professionals", which is great. It's also far too late. If an industry body wrote a new kernel from scratch and deployed it without any external review, then discovered that it didn't work and only then consulted any of the existing experts in the field, we'd never take them seriously again. But when an industry body turns up with a new social policy, fucks up spectacularly and then goes back to consult experts, it's expected that we give them a pass.

Why? Because we don't perceive social problems as difficult problems, and we assume that anybody can solve them by simply sitting down and talking for a few hours. When we find out that we've screwed up we throw our hands in the air and admit that this is all more difficult than we imagined, and we give up. We ignore the lessons that people have learned in the past. We ignore the existing work that's been done in the field. We ignore the people who work full time on helping solve these problems.

We wouldn't let an industry body with no experience of engineering build a bridge. We need to accept that social problems are outside our realm of expertise and defer to the people who are experts.

[1] The repository history shows the majority of substantive changes were from Github, with the initial work appearing to be mostly from Twitter.
The TODO group is an industry body that appears to be trying to define community best practices or something. I don't really know what their backstory is and whether they're trying to do meaningful work or just provide a fig leaf of respectability to organisations that dislike being criticised for doing nothing to improve the state of online communities but don't want to have to actually do anything, and their initial work on codes of conduct was, perhaps, suboptimal. But they do appear to be trying to improve things - this commit added a set of inappropriate behaviours, and also clarified that reverseisms were not actionable behaviour.

At which point Reddit lost its shit, because Reddit is garbage. And now the repository is a mess of white men attempting to explain how any policy that could allow them to be criticised is the real racism.

Fuck that shit.

Being a cis white man who's a native English speaker from a fairly well-off background, I'm pretty familiar with privilege. Spending my teenage years as an atheist of Irish Catholic upbringing in a Protestant school in a region of Northern Ireland that made parts of the bible belt look socially progressive, I'm also pretty familiar with the idea that said privilege doesn't shield me from everything bad in life. Having privilege isn't a guarantee that my life will be better, in the same way that avoiding smoking doesn't mean I won't die of lung cancer. But there's an association in both cases, one that's strong enough to alter the statistical likelihood in meaningful ways.

And that inherently affects discussions about race or gender or sexuality. The probability that I've been subject to systematic discrimination because of these traits is vanishingly small. In the communities this policy is intended to cover, I'm the default. It's very difficult for any minority to exercise power over me. "You're white, you wouldn't understand" isn't fundamentally about my colour, it's about the fact that my colour means I haven't been subject to society trying to make my life more difficult at every opportunity. A community that considers saying that to be racist is a community that will never change the default, a community that will never be able to empower people who didn't grow up with that privilege. A code of conduct that makes it clear that "reverse racism" isn't grounds for complaint makes it clear that certain conversations are legitimate and helps ensure we have the framework we need to gradually change that default, and as such is better than one that doesn't.

(comments disabled because I don't trust any of you)
Update: A Canonical employee responded here, but doesn't appear to actually contradict anything I say below.

I wrote about Canonical's Ubuntu IP policy here, primarily in terms of its broader impact, but I mentioned a few specific cases. People seem to have picked up on the case of container images (especially Docker ones), so here's an unambiguous statement:

If you generate a container image that is not a 100% unmodified version of Ubuntu (ie, you have not removed or added anything), Canonical insist that you must ask them for permission to distribute it. The only alternative is to rebuild every binary package you wish to ship[1], removing all trademarks in the process. As I mentioned in my original post, the IP policy does not merely require you to remove trademarks that would cause infringement, it requires you to remove all trademarks - a strict reading would require you to remove every instance of the word "ubuntu" from the packages.

If you want to contact Canonical to request permission, you can do so here. Or you could just derive from Debian instead.

[1] Other than ones whose license explicitly grants permission to redistribute binaries and which do not permit any additional restrictions to be imposed upon the license grants - so any GPLed material is fine
(In order to avoid any ambiguity here, this is a personal opinion. The Free Software Foundation's opinion on this matter is here)

Canonical have a legal policy surrounding reuse of Intellectual Property they own in Ubuntu, and you can find it here. It's recently been modified to handle concerns raised by various people including the Free Software Foundation[1], who have some further opinions on the matter here. The net outcome is that Canonical made it explicit that if the license a piece of software is under explicitly says you can do something, you can do that even if the Ubuntu IP policy would otherwise forbid it.

Unfortunately, "Canonical have made it explicit that they're not attempting to violate the GPL" is about the nicest thing you can say about this. The most troubling statement is Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries.. The apparent aim here is to avoid situations where people take Ubuntu, modify it and continue to pass it off as Ubuntu. But it reaches far further than that. Cases where this may apply include (but are not limited to):
  • Anyone producing a device that runs an operating system based on Ubuntu, even if it's entirely invisible to the user (eg, an embedded ARM device using Ubuntu as its base OS)
  • Anyone producing containers based on Ubuntu
  • Anyone producing cloud images (such as AMIs) based on Ubuntu

In each of these cases, a strict reading of the policy indicates that you are distributing a modified version of Ubuntu and therefore must either get it approved by Canonical or remove the trademarks and rebuild everything. The strange thing is that this doesn't limit itself to rebuilding packages that include Canonical's trademarks - there's a requirement that you rebuild all binaries.

Now obviously this is good engineering practice in a whole bunch of ways, but it's a huge pain in the ass. And to make things worse, Canonical won't clarify what they consider to be use of their trademarks. Many Ubuntu packages rebuilt from Debian include the word "ubuntu" in their version string. Many Ubuntu packages will contain the word "ubuntu" in maintainer email addresses. Many Ubuntu packages include references to Ubuntu (for instance, documentation might say "This configuration file is located under /etc/default in Debian and Ubuntu"). And many Ubuntu packages will include the compiler version string, which will include the word "ubuntu". Realistically, there's no risk of confusion by using the trademarks in this way, and as a consequence there would be no infringement under trademark law. But Canonical aren't using trademark law here. Canonical assert that they hold copyright over binaries that they have built from source, and require that for you to have permission to redistribute these binaries under copyright law you must remove the trademarks. This means that it doesn't matter whether your use of the trademarks would be infringing or not - you're required to remove them, because fuck you that's why.

This is a huge overreach. It's hostile to free software, in that it makes it significantly more difficult to produce derivative works of Ubuntu and doesn't benefit the community in the process. It's hostile to our understanding of IP law, in that it claims that the mechanical process of turning source code into binaries creates an independently copyrightable work. And in some cases it may make it impossible to create derivative works that interoperate with Ubuntu due to applications making assumptions about the presence of strings.

It'd be easy to write this off as an over-the-top misinterpretation of the policy if it hadn't been confirmed by the Ubuntu Community Manager that any binaries shipped by Ubuntu under licenses that don't grant an explicit right to redistribute the binaries can't be redistributed without permission or rebuilding. When I asked for clarification from Canonical over a year ago, I got no response[2]. Perhaps Canonical don't want to force you to remove every single use of the word Ubuntu from derivative works, but their policy is written such that the natural reading is that they do, and they've refused every single opportunity they've been given to clarify the point.

So, we're left with a policy that makes it hugely impractical to redistribute modified versions of Ubuntu unless Canonical approve of it. That's not freedom, and it's certainly not Ubuntu. If Canonical are serious about participating in the free software community then they need to demonstrate their willingness to continue improving this policy to bring it closer to our goals. Failure to do so will give a strong indication of their priorities.

[1] While I'm a member of the FSF's board of directors, I'm not involved in the majority of the FSF's day to day activities and was not part of this process
[2] Nebula's OS was a mixture of binary packages we pulled straight from Ubuntu and packages we rebuilt, so we were obviously pretty interested in what the answer was