One thing I’ve often been frustrated by while working on a security team at a large company is a seeming lack of understanding of the difference between code auditing and vulnerability research. These two activities appear superficially similar: they both involve looking at code to find vulnerabilities and improve security. But they have fundamentally different goals, different framings during the research process, and different outputs. This causes problems when a project’s security goals call for one of these activities but the company effectively asks its researchers to perform the other, leading to wasted work or findings that aren’t useful.

At its core, vulnerability research is about understanding the practical security landscape of a system or area of code. Typically this means doing deep dives to try to find vulnerabilities that actually matter, skipping over attack surfaces that aren’t reachable and passing on bugs that are unlikely to be exploitable in practice. This may result in fewer bugs being found, but the true value of vulnerability research lies in the less tangible, knowledge-based outputs: an understanding of which attack surfaces really matter, which areas of the system are particularly attractive to attackers, what constraints real attackers will face in practice, what techniques they are likely to employ as part of a larger exploit flow, etc. In some cases, the output may also include proof-of-concept exploits that take a lot of time and effort to produce. In short, vulnerability research is about the research, and it builds knowledge.

By contrast, code auditing has the fundamental goal of improving the security of an area of code as much as possible in a fixed timeframe. Usually this means finding the greatest number of bugs you can within the allotted time, without as much emphasis on things like reachability or attacker constraints. Doing the work to understand reachability in detail can take a long time, so in an audit setting it’s usually more valuable to make a quick judgement call about which surfaces matter most and maximize the time spent finding bugs. A good code auditor will have enough intuition and experience to make the right call most of the time, skipping a lot of the tedium of vulnerability research in order to deliver a high security impact in a constrained amount of time.

Each of these activities has its place, but organizations can fail to achieve their security goals when they ask researchers for the wrong one.

For example, when a team has written a new codebase that’s going to ship in 8 weeks and someone realizes it hasn’t been subject to any sort of security review, a code audit is the right tool. The need is to improve the security of this code as much as possible before it’s running out in the real world and exposing the company or its customers to its security risk. If the company instead asks a team to perform vulnerability research on this code, it may get back a few really serious bugs, a sophisticated POC that took a long time to write, and a solid understanding of which mitigations should be added. But there are probably a lot of bugs still present in the system, since so much of the time was spent on deep knowledge-building work. And any suggestions for larger structural improvements or mitigations will likely take time to implement, leaving the system vulnerable in the meantime.

More often, though, the problem I see is that the company really needs vulnerability research but has asked (or incentivized) its researchers to do a code audit. This ends up wasting resources by building a misleading picture of the security landscape and focusing attention on bugs that don’t matter in practice. For example, fuzzing a particularly buggy library used by a system may yield hundreds of shallow bugs and could be used to argue for a big rearchitecture that swaps out that library for a memory-safe alternative. But if the buggy library is not part of the system’s reachable attack surface because it’s only used to process static resources, then there’s basically no real-world security benefit from any of that work. Setting strategic security objectives based on the output of a code audit is a mistake, because a code audit’s job is not to paint a realistic, comprehensive picture of the system’s security landscape.

In my view, two factors most often lead to a code audit being performed when vulnerability research is what’s actually needed. The first and most obvious is that the organization doesn’t understand the difference between the two and doesn’t know how to ask for what it wants; bug-finding work can all look the same if you aren’t familiar with it. The second is that even when an organization understands the difference and tries to ask for research (for example, because the output will feed into security strategy), it may implicitly signal to researchers that bug volume matters more than knowledge-based outputs. I think this happens because there’s no objective measure of how “good” the knowledge-based outputs of vulnerability research are. How can you tell whether a researcher’s report stating that the code is solid and they didn’t find any bugs is accurate, or just cover for spending the last 3 months goofing off? Bug counts are more measurable, and so organizations tend to lean on them over time even though they’re the wrong tool for the job.

I hope that as the software security field matures, companies and leaders will learn the difference between vulnerability research and code auditing, learn to recognize which of the two any given task calls for, learn how to ask their researchers for the right one, and learn how to recognize when they are getting the wrong one.