From: (Anonymous)
Short answer: there is no real security risk here. This blog post is about criticizing a broken community process, so that similar (future) situations are handled better.

Long answer, somewhat facetious: if you look at a 'typical' user, they're running Windows on a PC, or they're running iOS on a tablet-slash-smartphone. They are at no risk from this security flaw.

If you look at a typical Linux user, or even a typical Ubuntu user, they are running a stock distro, and again are at no risk due to this flaw.

The actual *target* user of this patch is the technically savvy person who lives on the bleeding edge and is willing and ready to debug horrible problems. Therefore, the typical *scenario* for that broad spectrum of end users (albeit drawn from a very small niche population) is that they will be downloading the master branch of Mir from git, to test it out.

There are some cases where they can be safe:

1. they create a special install for testing Mir, either on a USB key or in a virtual machine, and therefore create a test uid and test password, such as username=u and password=p. That info might get released onto the interwebs, but nobody would care.

2. they test Mir with their network cable unplugged (be it a physical cable or a disabled virtual NIC), and use some other machine for non-Mir tasks. In that case, even if they accidentally type in a secret password, which is then accidentally sent to the X session, it cannot go out over the network, so they are still fine.

3. they test Mir in a reasonably cursory fashion, running a few benchmarks and opening a few apps to kick the tires, but either don't ever open an IM session, or don't ever open a VT session (or at least don't have those open simultaneously and catastrophically).

(Probably more ways to be safe exist.)

There is really only one way in which this flaw can be a problem:

4. they test Mir by installing it onto their primary devbox, and use this box-with-Mir as their primary machine for everyday tasks. In other words, they install Mir for *beta* testing purposes, and try to use it as they would normally use Linux. Under that sort of scenario, sooner or later they are probably going to have IRC or Pidgin or somesuch open in their GUI session, and will decide to use ctrl+alt+F4 or similar to switch over to a VT console for something, at which point the password they type at that console lands back in the GUI session and goes out in the clear over the network.

You can avoid the trouble, if you are testing Mir, by not (yet) using it for *beta* testing purposes. Even better, you can test it from within a virtual machine environment (easily wiped when a new upgrade comes out or when you want to try an alternative config/installation/whatnot), specify harmless username=u and password=p credentials for logging into that VM, and ideally disable the virtual NIC while you're at it.
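For instance, here is a minimal sketch assuming VirtualBox (the VM name, memory size and OS type are purely illustrative) of a throwaway, network-less VM for exactly this:

    # create a disposable VM with no network interface, so nothing typed in it can leak out
    VBoxManage createvm --name mir-test --ostype Ubuntu_64 --register
    VBoxManage modifyvm mir-test --memory 2048 --nic1 none
    # install Ubuntu inside with username=u / password=p, build and test Mir there,
    # then throw the whole thing away when you're done (or before trying a new config)
    VBoxManage unregistervm mir-test --delete

The same idea works with qemu/virt-manager or any other hypervisor; the point is simply that the test credentials are worthless and there is no NIC for anything to leak through.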

So at the end of the day, even most people testing Mir are never going to get caught out by this security hole. It is not, in and of itself, a real-world concern. What MJG is complaining about is that there was a public request for testing, with only a tiny footnote about the security risk. That's not acceptable: either fix the security risk *before* you invite people to test, or put the risk in HUGE CAPITALS at the top of your request, so nobody gets surprised. (Being well aware of the risk, they'll use the VM approach rather than the beta-tester one, for instance.)