[personal profile] mjg59
The Washington Post published an article today which describes the ongoing tension between the security community and Linux kernel developers. This has been roundly denounced as FUD, with Rob Graham going so far as to claim that nobody ever attacks the kernel.

Unfortunately, he's entirely and demonstrably wrong. It's not FUD, and the state of security in the kernel currently falls far short of where it should be.

An example. Recent versions of Android use SELinux to confine applications. Even if you have full control over an application running on Android, the SELinux rules make it very difficult to do anything especially user-hostile. Hacking Team, the GPL-violating Italian company that sells surveillance software to human rights abusers, found that this impeded their ability to drop their spyware onto targets' devices. So they took advantage of the fact that many Android devices shipped a kernel with a flawed copy_from_user() implementation that allowed them to copy arbitrary userspace data over arbitrary kernel code, thus allowing them to disable SELinux.
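As a rough illustration of that bug class, here is a toy userspace model (the buffer sizes, names, and checks are all invented for this sketch; this is not the real kernel or vendor code): copy_from_user() is supposed to bound where and how much userspace can write, and a missing bounds check turns it into an arbitrary kernel-memory write.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model: a 32-byte "physical memory" whose upper half stands in
 * for kernel text. The layout and names are illustrative only. */
static unsigned char kernel_mem[32];
#define KERNEL_TEXT_START 16   /* offset where "kernel code" begins */

/* Flawed copy: trusts the caller-supplied destination and length, so
 * userspace data can be written over the "kernel code" region. */
static int broken_copy_from_user(unsigned char *dst,
                                 const unsigned char *src, size_t len)
{
    memcpy(dst, src, len);     /* no bounds check on dst + len */
    return 0;
}

/* Checked copy: refuses any write that would spill into the text region. */
static int checked_copy_from_user(unsigned char *dst,
                                  const unsigned char *src, size_t len)
{
    size_t off = (size_t)(dst - kernel_mem);
    if (off >= KERNEL_TEXT_START || len > KERNEL_TEXT_START - off)
        return -1;             /* would clobber kernel code: refuse */
    memcpy(dst, src, len);
    return 0;
}
```

With the broken variant, an attacker who controls `src` and `len` overwrites "kernel code" with their own bytes, which is exactly the primitive needed to flip off an in-kernel security mechanism.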

If we could trust userspace applications, we wouldn't need SELinux. But we assume that userspace code may be buggy, misconfigured or actively hostile, and we use technologies such as SELinux or AppArmor to restrict its behaviour. There's simply too much userspace code for us to guarantee that it's all correct, so we do our best to prevent it from doing harm anyway.

This is significantly less true in the kernel. The model up until now has largely been "Fix security bugs as we find them", an approach that fails on two levels:

1) Once we find them and fix them, there's still a window between the fixed version being available and it actually being deployed
2) The forces of good may not be the first ones to find them

This reactive approach is fine for a world where it's possible to push out software updates without having to perform extensive testing first, a world where the only people hunting for interesting kernel vulnerabilities are nice people. This isn't that world, and this approach isn't fine.

Just as features like SELinux allow us to reduce the harm that can occur if a new userspace vulnerability is found, we can add features to the kernel that make it more difficult (or impossible) for attackers to turn a kernel bug into an exploitable vulnerability. The number of people using Linux systems is increasing every day, and many of these users depend on the security of these systems in critical ways. It's vital that we do what we can to avoid their trust being misplaced.

Many useful mitigation features already exist in the Grsecurity patchset, but a combination of technical disagreements around certain features, personality conflicts and an apparent lack of enthusiasm on the side of upstream kernel developers has resulted in almost none of it landing in the kernels that most people use. Kees Cook has proposed a new project to start making a more concerted effort to migrate components of Grsecurity to upstream. If you rely on the kernel being a secure component, either because you ship a product based on it or because you use it yourself, you should probably be doing what you can to support this.

Microsoft received entirely justifiable criticism for the terrible state of security on their platform. They responded by introducing cutting-edge security features across the OS, including the kernel. Accusing anyone who says we need to do the same of spreading FUD is risking free software being sidelined in favour of proprietary software providing more real-world security. That doesn't seem like a good outcome.

ARM kernel hardening in v4.3

Date: 2015-11-06 11:10 am (UTC)
From: (Anonymous)
So Russell made a patch like this, which was merged in kernel v4.3, and those Android devices could all take advantage of it to harden their kernels. Now the problem is a different one: Android vendors think it is OK to fork the Linux kernel and stay indefinitely at kernel v3.4, and most or all devices never have their kernel updated, just userspace. The ball is with the vendors, especially companies like Samsung, LG, HTC, Sony etc. that all others tend to follow. Qualcomm etc. also have a huge responsibility here for being so far off mainline that getting these kinds of fixes in becomes impossible. And now I hear that Google has "selected" kernel v4.2 (without this patch) for their next release. Sigh.

Re: ARM kernel hardening in v4.3

Date: 2015-11-06 01:06 pm (UTC)
From: (Anonymous)
Can't they just backport it? Does their selecting 4.2 prevent further security fixes from being backported (if I'm using that term correctly)?

Re: ARM kernel hardening in v4.3

Date: 2015-11-06 02:57 pm (UTC)
From: (Anonymous)
They can be backported, yes, but that doesn't mean they will be, nor that upstream SoC vendors will include those backports in the kernels they build their drivers into. That, I feel, is the bigger issue: until we can get hardware vendors to work upstream, be more consistent with updates, and support products for more than a few months, there is little hope of things truly improving for Android, not to mention things like routers and the "Internet of Shit."

Until vendors get serious about actually patching and updating devices, trying to focus on the whole security bug vs. non-security bug issue is pointless, as most devices aren't seeing any patches at all.

Re: ARM kernel hardening in v4.3

Date: 2015-11-06 07:03 pm (UTC)
From: (Anonymous)
You solve the problems that are possible to solve. The Linux developers can't decide what vendors like Samsung, Huawei, Sony and others do. They can however make the security of the next versions of Linux better.
From: (Anonymous)
One factor might be that the argument assumes you're actually using/thinking about kernel security features, like SELinux.

That is, if not many people love SELinux and friends (https://utcc.utoronto.ca/~cks/space/blog/linux/SELinuxUsability), then popular pressure to fix the kernel bypasses would naturally be low.

It seems to me containers have so much potential, and are so affected by weak kernels, that they could become a major motivation for change. (Technically they may use SELinux, but most of the time we think of them in terms of namespaces).

-- Alan Jenkins
From: (Anonymous)
With my limited understanding of SELinux as a caveat, the use of SELinux with Docker containers at the moment is limited to putting them all into a single context, which is good from the POV of container-to-host, but perhaps not good enough for container-to-container, going forward.
From: [identity profile] richard.maw.name
I had cause to investigate it. It uses a form of parametrised label, so while every container is part of the same class of label, they aren't all using the same label, so containers are isolated from each other too.
From: [identity profile] https://me.yahoo.com/a/YODrfpN.jpaHUdSury1GKW6nTRo-#57a4e
I think what's being referred to here are the kernel self-protection features of grsec which are now a part of modern MacOS/Windows, but missing on mainline Linux... not just the extra bits (paxctl/tpe/etc.) which are awkwardly compared to SELinux and friends.
From: (Anonymous)
IIRC, Windows does not use any grsecurity features which are not already in upstream Linux. Windows kernel security focuses primarily on extensive static analysis. I'd love to be proven wrong though.

Date: 2015-11-06 06:04 pm (UTC)
From: [identity profile] grok-mctanys.livejournal.com
Out of curiosity, are you planning on including any of the grsecurity patches in your tree (http://mjg59.dreamwidth.org/38136.html)?


Date: 2015-11-06 06:29 pm (UTC)
From: (Anonymous)
Ultimately the only fix will come via reputation. This could be via simple stamps of approval (think "organic", "cage free", "dolphin friendly" in the food world), or some independent third party producing open rankings that are actively maintained. People also have to part with their cash based on those stamps of approval or rankings, which then puts a price tag for the manufacturers to do the work (or forgo it). That creates a feedback loop to improve things. Without the feedback loop we'll continue to have the current mess.

Date: 2015-11-08 07:02 am (UTC)
From: (Anonymous)
> Accusing anyone who says we need to do the same of spreading FUD is risking...

I'm afraid I can't parse this, even in context. Is there a typo here, or is my parser broken?

Date: 2015-11-08 07:58 am (UTC)
From: (Anonymous)
I get it now! The correct interpretation is even one of those things that you can't unsee. Hah!

Date: 2015-11-09 03:24 pm (UTC)
From: (Anonymous)
Accusing (anyone who says we need to do the same) of (spreading FUD) is risking...
From: (Anonymous)

The problem that I have with the desire to add more code (grsecurity keeps coming back) is that it is basically obfuscation.

People don't generally understand file permissions, yet we add SELinux, and practically no one understands that, so we add more stuff like containers and VMs over the top.

If we can't build a secure system using the simple UNIX chmod features without making mistakes, what is the probability that the more complicated systems will be better?
marahmarie: (M In M Forever) (Default)
From: [personal profile] marahmarie
That's an extremely good question, and it points to the fact that all code has its flaws. Increase the code and you multiply the number of inherent, even if often obscure, flaws.

How do you solve that? (It's one of the worst downsides of coding anything - by the very act of imperfect humans writing code, they're forever playing whack-a-mole with the end result. I'd go further and say that until humans get better at it, even computers built to future-proof the code written upon them won't be able to, either. But getting computers to perform security tests using every possibility that we can't fathom or compute might be the only answer. A better one from the beginning would have been for coders to pass their handiwork straight on to hackers hired to find its vulnerabilities before it was ever put out into the world. But software didn't start out with security in mind - and the entire world is paying the price for that shortsightedness now.)
From: (Anonymous)
If you think that the grsecurity patch is an obfuscation method, you are probably not fit to discuss kernel security; you should probably be excused from talking about any security topic at all.

Kernel-hardening, security-minded people know the benefits of grsecurity overall; these aren't obfuscation methods. grsecurity adds complete mitigation mechanisms to the kernel that make almost any past security vulnerability a non-issue. It doesn't fix all problems, but it kills the most popular memory-corruption security vulnerabilities (e.g. the google roku jailbreak).

IMHO grsecurity features are awesome, but they need (like every OSS project) good documentation (no more manpages please!), tutorials about implementing containers and deployments for sysadmins, and upstream kernel adoption of certain mitigation techniques (or at least the ability to distribute on kernel.org a version with grsec features included).
From: (Anonymous)
And this is why we can't have nice things.

You're aware that you can explain something to someone without resorting to personal attacks like, "you are probably not fit to discuss kernel security," right? It wouldn't even require any extra effort; you could've just left out that first paragraph.

- dilinger
From: (Anonymous)
I don't think a claim like "you are probably not fit to discuss kernel security" is out of place here. While it's certainly not polite, sometimes you have to forcefully push a point across. When someone is so unfamiliar with a given technology, but speaks with such an air of familiarity, merely saying they are wrong and trying to explain why tends not to work. They will interpret it as an invitation to an argument where their point actually holds a basis. In a case like this, that's not true, and they must be told in some way that they are *completely* wrong forcefully enough that they are taken aback and actually think for a second why they are making the claims they are making.

While obviously personal experience is just an anecdote in this instance, I will say that people being quite forceful has helped me greatly. I used to be somewhat of a script kiddie, very inexperienced and very arrogant. If it weren't for enough security folk telling me I was completely full of shit, I likely would still be like that today, convinced I understood what I was talking about. But because they shook me out of that delusion, I stepped back and realized I might really be ignorant, and I took steps to learn and improve my own understanding.

I won't argue more about this, as it's a fairly common debate in the FOSS community. There are plenty of resources explaining arguments both for and against such forceful language, mostly as a result of people questioning or defending Linus' methods for telling people they are wrong and to stop wasting his time.

I do think however that a better/more complete explanation should have been given. I'll try to help explain for the OP of this thread:

>The problem that I have with the want to add more code (GRSecurity keeps coming back) is that it is basically obfuscation.
grsecurity is not about obfuscation. Security through obscurity (which is implied by obfuscation) is defined as security which is based on the implementation being secret. The fact that grsecurity is open source does not give a significant advantage to attackers, since it relies on randomization (which is no more obfuscation or obscurity than a strong symmetric cipher like AES is). Here are a few examples:
- UDEREF: Many exploits rely on putting executable code in userland, and tricking the kernel into following a pointer there and executing it in kernelspace (the highest privilege level in the traditional 3 ring system), as if it were part of the kernel itself. You can think of this like a mere civilian writing fake presidential orders, and tricking a member of the federal government into reading and believing those orders. UDEREF completely mitigates this problem, on i386 by separating ring 3 and ring 0 code into their own discrete areas of memory. This ensures that no matter what, you cannot trick the kernel into dereferencing a pointer to userland while in kernelmode.
- RANDSTRUCT: Puts the members of certain structure types into a randomized order. This forces an attacker to guess the locations, and even a single wrong guess results in the attack being detected and crashing the kernel. This is not obfuscation because the randomization is based on "real" entropy collected by the system. If you do not know the seed, you will not have any way to know the order of these structures such that you can exploit them.
- PaX ASLR: Basically the same as normal ASLR, but resistant to entropy exhaustion, and it randomizes more structures in a process (such as the kernel stack). This, like ASLR, will completely mitigate many exploit classes like ROP. Note that in the case of ASLR, infoleaks are not uncommon, so ROP is only killed so long as the ASLR holds strong. Plugins that may be released in the future such as RAP can completely mitigate ROP without depending on ASLR.
- STACKLEAK and SANITIZE: Cleans many structures in the kernel stack as it transitions back to userspace. This ensures that those in-kernel structures cannot be leaked to the process, even if such a leak could normally be triggered. It simply removes these structures. If they are not there, an attacking process cannot read them. That is not obfuscation, it is removal of sensitive data.
- CONSTIFY: Sets variables which are not going to be changed to "const", which prevents them from being modified. This is not obfuscation because an attacker can easily know which variables are const, but knowing that does not let them modify them. For example, structs related to whether or not various LSMs are enabled can be made read-only. That way an attacker cannot simply toggle selinux_enabled to 0, because it simply cannot be written to. The knowledge that it cannot be written to does not give them any advantage. Note that I may be mixing this example up with another feature of grsecurity, but it still applies.
- Security backports: Not many people realize that grsecurity isn't just proactive defense mechanisms, but also includes numerous security backports. Unfortunately, the kernel team seems to fix exploitable bugs without a clear "this is a security issue" commit message for distribution kernel maintainers to read. As a result, many exploitable bugs remain in popular distribution kernels, and the amount of effort required to find and patch the bugs is unnecessarily duplicated. People are told "just use the most recent kernels", but that is obviously not practical (and often comes with security issues on its own). grsecurity contains these backported fixes, so you can be sure that exploitable bugs are far less likely to be present in a grsecurity kernel.
- Miscellaneous changes: Many kernel behaviors are harmful to security, due to increasing attack surface area or leaking sensitive information. grsecurity often fixes these by disabling them, or requiring higher privileges to access. For example, the kcmp() system call is completely disabled in grsecurity, because it allows processes to gain information about the current state of the kernel. Ironically, kcmp() actually does attempt obfuscation to reduce the impact of infoleaks, but it is woefully ineffective. This is not documented as far as I can tell, except for in the source code. The perf_event_open() syscall is limited to CAP_SYS_ADMIN only, because it has a history of severe vulnerabilities, and has multiple unpatched issues which enable side-channel attacks (such as gaining password information via context switch counts). Another example is the bpf() syscall, which is additionally disabled under grsecurity, but in the future may be made accessible to processes with CAP_SYS_ADMIN.

There are many, many more which I have not described, and likely further which I simply do not yet know about. If I made any mistakes, I hope someone will correct me. I'm sure reading through some of the documentation can explain far better than I can. Here's a neat PDF on some kernel-self protections in grsecurity and PaX which are totally unrelated to obfuscation: https://pax.grsecurity.net/docs/PaXTeam-H2HC12-PaX-kernel-self-protection.pdf
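The UDEREF point above can be sketched in a toy userspace model (the addresses, names, and values here are all invented; the real PaX implementation works with segmentation and paging tricks, not an array): in kernel mode, a pointer into the userland address range should be treated as an attack, not followed.

```c
#include <assert.h>
#include <stddef.h>

/* Toy address space: a flat array where everything below KERNEL_BASE
 * is "userspace" and everything above is "kernel space". */
enum { SPACE = 256, KERNEL_BASE = 128 };
static int memory[SPACE];

/* Kernel-mode read with no check: happily follows a userland address,
 * which is the behavior exploits abuse. */
static int unchecked_kread(size_t addr)
{
    return memory[addr];
}

/* Kernel-mode read with a UDEREF-style check: a userland address seen
 * in kernel mode is rejected, modeling a fault instead of obedience. */
static int checked_kread(size_t addr, int *out)
{
    if (addr < KERNEL_BASE)
        return -1;             /* userland pointer in kernel mode: refuse */
    *out = memory[addr];
    return 0;
}
```

The unchecked version will return attacker-controlled data (or, in the real analogue, run attacker-controlled code); the checked version turns the same bug into a detectable crash.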

>People don't generally understand file permissions, yet we add SELinux and practically no one understands that, so we add more stuff like containers, vms over the top.
SELinux is infamous for being hard to understand, yet there are many premade policies. However, other MAC (Mandatory Access Control) frameworks are much easier to understand, such as AppArmor, TOMOYO, and SMACK. As for containers, those are designed for very low resource pseudo-virtualization. As for VMs, those are not meant for security either. They are not a replacement for a MAC framework, they are used to allow you to utilize multiple untrusted systems on a single piece of hardware, or to snoop on the internals of a running operating system without slow and difficult to use technologies such as JTAG (a type of connection to a CPU which allows for controlling it like a puppet, for very low-level debugging).

>If we can't build a secure system using the simple UNIX chmod features without making mistakes, what is the probability that the more complicated systems will be better?
Simple UNIX permissions (called DAC, for Discretionary Access Control) are actually quite solid; they have not had a major bug in a very, very long time. The only issue is that they are not complicated enough to provide sufficiently fine-grained protections. Furthermore, grsecurity is not something that requires advanced configuration, with the exception of the optional grsecurity RBAC framework it provides. Even this framework has many sanity checks to ensure you *do* use it right and cannot make big mistakes in configuration. The rest of grsecurity just works out of the box. Also, grsecurity *does* extend the ability of DAC to protect a system, with a feature called TPE (Trusted Path Execution), which prevents a user from accidentally or intentionally executing code that another user was able to place in their path.
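The TPE idea can be sketched as a policy check (this is a simplification I'm inventing for illustration; grsecurity's actual TPE logic is more nuanced and configurable): only execute a binary if the directory it lives in is root-owned and not writable by group or other, so untrusted users cannot plant executables in a trusted path.

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Toy TPE-style decision: is it safe to execute a binary out of a
 * directory with this owner and mode? Root-owned, not group/world
 * writable => nobody untrusted could have dropped a binary there. */
static bool tpe_allows_exec(uid_t dir_owner, mode_t dir_mode)
{
    if (dir_owner != 0)
        return false;          /* directory not owned by root: untrusted */
    if (dir_mode & (S_IWGRP | S_IWOTH))
        return false;          /* others could plant binaries here */
    return true;
}
```

So /usr/bin (root-owned, mode 0755) passes, while a world-writable directory like /tmp fails, regardless of the permissions on the binary itself.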

Please educate yourself on how systems can be made more secure. It is not just about code correctness, it is about defense in depth, such that even in the presence of exploitable bugs, no compromise can occur. I suggest you read up on the basics of ASLR (Address Space Layout Randomization), NX (No-Execute, also called DEP on Windows), SSP (Stack Smashing Protection), RELRO (Read-only Relocations), Bind-Now, etc. These are all popular protections that do not require grsecurity, but can be thought of as inferior implementations of many grsecurity features. These features are easy to understand. From this, you can delve into more complicated stuff, such as the mitigations grsecurity provides. Until then, please realize that you will likely be called out for making major mistakes regarding existing security technologies.

tl;dr grsecurity is not obfuscation and OP doesn't know what he's talking about
From: (Anonymous)
Actually it is obfuscation but that is not a bad thing.

Obfuscation has a bad rap but people don't actually understand it, and you see that here.

The difference between "regular" obfuscation and this randomization system is that with regular obfuscation /someone/ is going to know the answer, whereas with randomization, no one really knows.

It is like letting someone chase you forever. Also, if every guess comes at high risk, the probability of someone ever attempting it goes down considerably. Personally I would use this technique if I had no other recourse, but not in a normal system or situation where I would be confident enough to trust other measures.
From: (Anonymous)
This is only partly obfuscation. Making variables read-only and restricting some rarely used, highly exploitable functions definitely isn't.

Also, this randomization, though useless in theory, makes practical exploitation far harder. It's like saying your lock screen is useless because your attacker has physical access. It's true, but it works in 99.999% of cases anyway.


Matthew Garrett

About Matthew

Power management, mobile and firmware developer on Linux. Security developer at Google. Ex-biologist. @mjg59 on Twitter. Content here should not be interpreted as the opinion of my employer.
