Fine-Grained Coverage-Based Fuzzing

Fuzzing is a popular software testing method that discovers bugs by feeding target applications with massive numbers of automatically generated inputs. Many state-of-the-art fuzzers use branch coverage as a feedback metric to guide the fuzzing process: the fuzzer retains an input for further mutation only if it increases branch coverage. However, branch coverage provides only a shallow sampling of program behaviours and may thus lead the fuzzer to discard inputs that would be interesting to mutate.

This work takes advantage of the large body of research on defining finer-grained code coverage metrics (such as control-flow, data-flow or mutation coverage) and evaluates how fuzzing performance is affected when these metrics are used to select interesting inputs for mutation. We propose to make branch-coverage-based fuzzers support most fine-grained coverage metrics out of the box (i.e., without changing the fuzzer internals). We achieve this by making the test objectives defined by these metrics (such as conditions to activate or mutants to kill) explicit as new branches in the target program. Fuzzing such a modified target is then equivalent to fuzzing the original target, except that the fuzzer will also retain for mutation the inputs covering the additional metric objectives. Moreover, all the fuzzer mechanisms designed to penetrate hard-to-cover branches will also help cover these additional objectives.

We use this approach to evaluate the impact of supporting two fine-grained coverage metrics (multiple condition coverage and weak mutation) on the performance of two state-of-the-art fuzzers (AFL++ and QSYM) with the standard LAVA-M and MAGMA benchmarks. This evaluation suggests that our mechanism for runtime fuzzer guidance, where the fuzzed code is instrumented with additional branches, is effective and could be leveraged to encode guidance from human users or static analysers. Our results also show that the impact of fine-grained metrics on fuzzing performance is hard to predict before fuzzing and is, most of the time, either neutral or negative. We therefore do not recommend using them to guide fuzzers, except perhaps in specific favourable circumstances that remain to be investigated, such as targeting limited parts of the code or complementing classical fuzzing campaigns.
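
To make the core idea concrete, here is a minimal C sketch of what such an instrumentation could look like; the function and variable names are illustrative, and this is a simplified example rather than the actual output of our toolchain. The original function exposes a single branch to the fuzzer, whereas the instrumented variant reifies each fine-grained test objective (here, the four truth-value combinations required by multiple condition coverage and one weak-mutation objective) as an extra, side-effect-free branch that any branch-coverage fuzzer will track.

```c
#include <stdio.h>

/* Code under test (original): the fuzzer sees a single branch. */
int classify(int x, int y) {
    if (x > 0 && y > 0)
        return 1;
    return 0;
}

/* Illustrative instrumented variant: each fine-grained test objective
 * becomes an explicit, empty new branch, so a plain branch-coverage
 * fuzzer retains any input that satisfies it. */
static volatile int objective_sink; /* keeps the objective branches from being optimised away */

int classify_instrumented(int x, int y) {
    /* Multiple condition coverage: one new branch per truth-value
     * combination of the atomic conditions x > 0 and y > 0. */
    if ( (x > 0) &&  (y > 0)) { objective_sink = 1; }
    if ( (x > 0) && !(y > 0)) { objective_sink = 2; }
    if (!(x > 0) &&  (y > 0)) { objective_sink = 3; }
    if (!(x > 0) && !(y > 0)) { objective_sink = 4; }

    /* Weak mutation: the mutant replacing && with || is (weakly) killed
     * by any input on which the two expressions evaluate differently. */
    if ((x > 0 && y > 0) != (x > 0 || y > 0)) { objective_sink = 5; }

    /* Original logic, unchanged. */
    if (x > 0 && y > 0)
        return 1;
    return 0;
}

int main(void) {
    printf("%d %d\n", classify(1, -1), classify_instrumented(1, -1));
    return 0;
}
```

Compiled with an AFL++ instrumenting compiler (e.g., afl-clang-fast) and fuzzed as usual, inputs satisfying one of these objectives now increase branch coverage and are therefore kept for further mutation, without any change to the fuzzer itself.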

Michaël Marcozzi [marcozzi.net] is a permanent researcher at CEA List, Université Paris-Saclay (France). Together with the researchers in his group, he designs and studies code analyses to detect software vulnerabilities automatically, focusing on (1) guiding automated testing tools towards finding vulnerabilities and (2) understanding advanced types of vulnerabilities. He has published at and reviewed for top-tier venues (such as TOSEM, OOPSLA, ICSE and PLDI), and in 2022 he was awarded a Young Researcher Grant by the French National Agency for Research. He is a visiting lecturer at ENSTA, a top-ranked engineering school of Institut Polytechnique de Paris. Between 2018 and 2020, he was a postdoc at Imperial College London with Cristian Cadar and Ally Donaldson.