Why exploits prefer memory corruption
Why do most in-the-wild exploits that target end-user platforms use memory corruption?
At least through 2021 (the last year for which I can find these statistics), memory corruption represented about 60-70% of the bugs observed being exploited in the wild (1)(2)(3)(4). Of course, there’s sampling bias in the exploits that get caught. And most software is still written in memory-unsafe languages today, so it’s natural that most bugs are due to memory unsafety.
Even so, I believe that memory corruption techniques will dominate real-world exploitation of end-user platforms and products even after the shift to memory-safe languages makes memory unsafety bugs rarer than logic bugs. And I do not expect MTE (Arm’s Memory Tagging Extension) to change this: it will just make good memory corruption bugs even rarer and harder to exploit.
The reason I believe memory corruption will remain popular is that memory corruption is almost always the simplest way for the attacker to make the target system do what they want, especially when exploiting an end-user device. Typical modern exploit chains consist of multiple stages that traverse isolated execution contexts, and the attacker must often achieve something comparably expressive to arbitrary code execution in each stage in order to be in a position to exploit the vulnerability for the next stage. Memory corruption is usually the most straightforward way to achieve something that expressive.
Some extremely good logic bugs will, for example, exec() attacker-supplied code or run an attacker-supplied script under a sufficiently powerful interpreter to exploit the next bug in the chain. Other logic bugs are coupled with an external form of unsafety, such as JIT, and thus produce memory corruption effects. However, it’s much more common that exploiting a logic bug in a vulnerable program leaves the attacker bound by the constraints of the program’s source code and programming language. That is a fundamentally different world than what memory corruption offers, where a much higher fraction of bugs can make the vulnerable program shed the constraints of its source code and language and remain bound only by the constraints of the underlying CPU.
Memory corruption vs. memory unsafety
I want to make a distinction between memory corruption and memory unsafety vulnerabilities. Memory unsafety vulnerabilities lead to memory corruption. But memory corruption is an effect, not a root cause. Some logic bugs also lead to memory corruption when combined with unsafety in other ways, such as JIT, page tables, or hardware.
An example of this distinction is Manfred Paul’s WebAssembly type confusion bug used in Pwn2Own 2024. At its core, the issue is a logic bug: the code tries to limit the number of types in a WebAssembly module because types at high indexes refer to special, built-in types. Failing to cap the number of user-defined types allows them to alias the built-in ones. The type confusion is the result of eliding a type check in the generated code due to the violated invariant. It could have happened in Rust too.
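To make the shape of that bug concrete, here is a toy sketch in Python (all names and the tiny limit are my own invention, not the actual engine code): a validator forgets to cap the number of user-defined types, and a resolver that trusts the “indexes above the limit are built-in” invariant then hands out an attacker-defined type without a check.

```python
MAX_USER_TYPES = 4  # tiny made-up limit so the bug is easy to trigger
BUILTINS = {4: "builtin_func", 5: "builtin_array"}

def validate(module_types: list[str]) -> list[str]:
    # BUG (pure logic, no memory unsafety): the intended check
    #   if len(module_types) > MAX_USER_TYPES: raise ValueError(...)
    # is missing, so user-defined types spill into the index range
    # reserved for built-ins.
    return module_types

def resolve(types: list[str], index: int) -> str:
    # Mirrors the elided type check: indexes >= MAX_USER_TYPES are
    # assumed to denote trusted built-ins, so no check is performed.
    if index >= MAX_USER_TYPES:
        return BUILTINS.get(index, types[index])  # "trusted" path
    return types[index]

types = validate(["t0", "t1", "t2", "t3", "t4", "t5", "evil"])
print(resolve(types, 6))  # a "trusted" index resolves to "evil"
```

Nothing in this sketch is memory-unsafe; the confusion comes entirely from the violated invariant, which is why the same shape of bug could land in Rust.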
I am specifically arguing that memory corruption, not memory unsafety bugs, will remain a staple of exploitation. As the world moves to safe languages, the distribution of vulnerabilities will change, so attackers may shift to other avenues to unlock memory corruption. For example, in the kernel, we already see a small shift from use-after-frees towards logic bugs in page table management.
Weird machines and the strongly connected component
In Limiting weird machines, Thomas Dullien (Halvar Flake) builds a very useful intuition around why memory corruption has such gravity. We can model a program as an intended finite state machine (IFSM) that has been compiled to simulate that state machine on a more complex underlying CPU with n bits of memory. The state space can be seen as 2^n nodes, each representing one possible bit-vector for the system’s memory. Each CPU instruction introduces an edge from each node to the new state of the system after executing that instruction. The execution of a program then traces out a path through this 2^n-node state space.
While the whole deck is phenomenal, Halvar’s key insight that we’ll draw on in the next section is the following: when a vulnerability in the program is triggered, the CPU instruction transitions the system to a “weird state” that has no analogue in the IFSM, and since the program’s code continues to execute and try to simulate the IFSM on the CPU, we effectively start walking “random” edges in the full state space. If there are sufficiently many random edges in this graph, then a single giant strongly connected component emerges. And because most of those 2^n states are included in this strongly connected component through those random edges, there is a path from most initial weird states to any other weird state, including some state that represents the attacker achieving their goal. Put simply, the attacker can use the bug to achieve any goal they want.
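The giant-component claim is easy to poke at numerically. The following is my own toy simulation (not from Halvar’s slides): build a graph over all 2^n bit-states where each state gets a few uniformly random out-edges, then measure the largest strongly connected component with Kosaraju’s algorithm.

```python
import random

def largest_scc_fraction(n_bits: int, out_degree: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    n = 1 << n_bits
    # Every state gets out_degree uniformly random successors.
    fwd = [[rng.randrange(n) for _ in range(out_degree)] for _ in range(n)]
    rev = [[] for _ in range(n)]
    for u, vs in enumerate(fwd):
        for v in vs:
            rev[v].append(u)

    # Pass 1: iterative DFS on the forward graph, recording finish order.
    seen, order = [False] * n, []
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        stack = [(start, 0)]
        while stack:
            u, i = stack.pop()
            if i < len(fwd[u]):
                stack.append((u, i + 1))
                v = fwd[u][i]
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, 0))
            else:
                order.append(u)

    # Pass 2: DFS on the reversed graph in reverse finish order;
    # each tree found this way is one strongly connected component.
    seen = [False] * n
    biggest = 0
    for start in reversed(order):
        if seen[start]:
            continue
        seen[start] = True
        size, stack = 0, [start]
        while stack:
            u = stack.pop()
            size += 1
            for v in rev[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        biggest = max(biggest, size)
    return biggest / n

# Each of the 2^14 states gets 3 random out-edges.
print(largest_scc_fraction(n_bits=14, out_degree=3))
```

With just three random out-edges per state, the largest SCC typically swallows roughly 90% of the whole space, matching the intuition that once transitions look random, almost every state can reach almost every other state.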
For example, imagine a zero-click attack against a mobile device where the attacker wants to send malicious text messages that result in the target device uploading all the user’s photos to the attacker’s server. The attacker can send a text message that triggers an out-of-bounds heap write, thereby putting the messaging application into a weird state. As the application’s code continues to run, all subsequent transitions occur in the weird state space, effectively walking random edges to modify the application’s memory. Halvar’s observation is that the weird state space is likely to be strongly connected, so there likely is a path from that memory corruption state to the state representing the phone uploading all the user’s photos. That final state does exist on the CPU, after all: for some carefully chosen n-bit state of memory, the CPU will execute a ROP (return-oriented programming) chain to do just that.
This is why memory corruption is so powerful: when you reach the strongly connected component, almost any n-bit memory state becomes accessible. And for a sufficiently complicated IFSM, the CPU is programmable by the data in memory, which means that the vulnerable program can be made to do basically anything the CPU will allow.
Memory-safe logic bugs are different
This analysis breaks down for memory-safe logic bugs. There’s no math here: I’m arguing solely by intuition, so take this all with a heap of salt. But Halvar’s argument above rests on an implicit assumption that the weird state is due to (or leads to) memory corruption and that the CPU is fairly unconstrained in its ability to manipulate the state space (i.e. memory), like, for example, a physical CPU running arm64 assembly.
To see that the argument breaks down for memory-safe logic bugs, consider trying to reach the state where all of memory is filled with 0x41414141. This state is certainly within the strongly connected component for a memory corruption bug: just make the program jump to memset() with appropriate arguments. But now imagine exploiting a logic bug in a memory-safe program. Having all of memory set to 0x41 does not correspond to any valid, memory-safe state for such a program, buggy or not. This means the state with all of memory set to 0x41 is not within the strongly connected component for a memory-safe logic bug.
I suspect the reason the strongly connected component argument fails under memory safety involves the randomness assumption for edges in the weird state space. That assumption probably holds well enough for sufficiently good memory corruption bugs, but does not hold under memory safety constraints. That is, the transitions taken by simulating the IFSM on a physical CPU once we’ve entered a weird state, subject to the constraint that the program remains memory-safe, are not “random” enough in the full memory state space for the strongly connected component to emerge.
This backs up the intuitive experience of exploit writers that seeking out memory corruption is the easiest way to make a program do things that are way outside its normal operation, while logic bugs tend to offer bespoke and limited capabilities.
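As a companion to the earlier simulation (again my own toy model, not a proof), we can crudely model a safety invariant by forcing every edge to preserve part of the state, here the top half of the bits, standing in for “the heap stays well-typed”:

```python
import random

def reachable_fraction(n_bits: int, out_degree: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    n = 1 << n_bits
    half = n_bits // 2
    mask = ((1 << half) - 1) << (n_bits - half)  # the invariant bits

    def successors(u: int) -> list[int]:
        # Transitions may scramble the low bits, but must preserve the
        # top bits: only invariant-respecting edges exist in this graph.
        return [(u & mask) | rng.randrange(1 << (n_bits - half))
                for _ in range(out_degree)]

    # Plain reachability from a random start state.
    start = rng.randrange(n)
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in successors(u):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) / n

# At most 2^(n/2) of the 2^n states are ever reachable: ~0.0039 here.
print(reachable_fraction(n_bits=16, out_degree=3))
```

However many “random” edges we add, a walk can never leave its slice of the state space: the giant component over the full space is gone, and an all-0x41 state is simply absent from the graph.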
How would you need to change the weird machine argument above to work for memory-safe logic bugs? I’d guess that in general you need to choose a CPU and state space that respect the invariants that are preserved by the set of bugs being considered. For a memory-safe Python program, that might mean choosing the state space to be an assignment of Python object graphs to program variables and the “CPU” to be an abstract machine for the safe subset of the Python language. Put another way, moving the memory safety requirement away from the edges and into the instruction set might help the edges look “random enough” again to talk about some notion of a strongly connected component for logic bugs.
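Here is a minimal sketch of what that re-framing could look like (my guess at the shape of such a model, not a worked-out formalism): states are bindings of names to object graphs, and every “instruction” either preserves well-formedness or traps, so memory safety lives in the machine’s instruction set rather than in which edges happen to exist.

```python
# States are bindings of names to Python object graphs.
State = dict[str, object]

def assign(state: State, name: str, value: object) -> State:
    # Rebinding a name always yields another well-formed state.
    new = dict(state)
    new[name] = value
    return new

def index(state: State, dst: str, src: str, i: int) -> State:
    # Indexing traps on misuse instead of corrupting anything.
    seq = state[src]
    if not isinstance(seq, list) or not (0 <= i < len(seq)):
        raise ValueError("trap")
    return assign(state, dst, seq[i])

# Every edge this machine can take maps a well-formed binding to a
# well-formed binding; "all memory is 0x41" isn't even a state here.
s = assign({}, "xs", [1, 2, 3])
s = index(s, "y", "xs", 1)
print(s)  # {'xs': [1, 2, 3], 'y': 2}
```

Trapping instead of corrupting is the crucial design choice: a buggy program in this machine can reach wrong states, but only well-formed ones.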
Exploit writers are software developers too
What does all of this mean practically?
Basically, memory corruption is special, and my hope is that I’ve been able to convey some intuition for why that is to folks without binary exploitation experience. Attackers tend to prefer memory corruption when they need to make a vulnerable program do something arbitrarily complex, like throw the next stage of an exploit chain. Logic bugs are not a plug-and-play replacement here.
From a practical perspective, exploit writers are software developers and they aim for many of the same design principles in exploits that software developers want in normal programs. When you exploit an application to create a weird machine and then program that weird machine, you start to build abstractions. You start to build APIs like arbitrary read/write that let you abstract away the details and constraints of the bug being exploited. This abstraction in turn allows for modularity, so that you can substitute one exploit for another. Suddenly, beyond the read/write abstraction, all memory corruption bugs start to look more or less the same. Most logic bugs don’t work like this.
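A hypothetical sketch of that layering (every name below is made up, not any particular exploit framework): a bug-specific backend implements raw read and write, and everything above it is bug-agnostic.

```python
from abc import ABC, abstractmethod
import struct

class ReadWrite(ABC):
    """Arbitrary read/write -- the exploit's internal API boundary."""

    @abstractmethod
    def read(self, addr: int, size: int) -> bytes: ...

    @abstractmethod
    def write(self, addr: int, data: bytes) -> None: ...

    # Bug-agnostic helpers, built only on read/write.
    def read_u64(self, addr: int) -> int:
        return struct.unpack("<Q", self.read(addr, 8))[0]

    def write_u64(self, addr: int, value: int) -> None:
        self.write(addr, struct.pack("<Q", value))

class SimulatedRw(ReadWrite):
    """Stand-in backend: a bytearray plays the target's memory.
    A real backend would wrap one specific bug (an out-of-bounds
    index, a use-after-free reclaimed with a fake object, ...)."""

    def __init__(self, memory: bytearray):
        self.memory = memory

    def read(self, addr: int, size: int) -> bytes:
        return bytes(self.memory[addr:addr + size])

    def write(self, addr: int, data: bytes) -> None:
        self.memory[addr:addr + len(data)] = data

def stage_two(rw: ReadWrite, target_addr: int) -> int:
    # Payload logic never mentions the bug: swap in a different
    # ReadWrite backend and nothing here changes.
    rw.write_u64(target_addr, 0x4141414141414141)
    return rw.read_u64(target_addr)

print(hex(stage_two(SimulatedRw(bytearray(64)), 8)))
```

The interface is the point: beyond it, an out-of-bounds write and a use-after-free really do look more or less the same.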
Thus, I suspect that attackers will still focus on memory corruption bugs even as they get rarer and harder to exploit, because they offer something that’s very hard to find elsewhere.
Thanks to Thomas Dullien, chompie1337, and others.