A symbolic execution-based technique for patch testing: the old and the new versions are run simultaneously in order to discover the behavioural divergences introduced by the patch.


While developers are aware of the importance of comprehensively testing patches, the large effort involved in coming up with relevant test cases means that such testing rarely happens in practice. Furthermore, even when test cases are written to cover the patch, they often exercise the same behaviour in the old and the new version of the code.

In this project, we present a symbolic execution technique designed to generate test inputs that cover the new program behaviours introduced by a patch. The technique works by executing both the old and the new version in the same symbolic execution instance, with the old version shadowing the new one. During this combined shadow execution, whenever a branch point is reached at which the two versions diverge, we generate a test case exercising the divergence and then comprehensively test the new behaviours of the new version.

We evaluate our technique on the Coreutils patches from the CoREBench suite of regression bugs, and show that it is able to generate test inputs that exercise newly added behaviours and expose some of the regression bugs.

Annotating patches

In our study, we annotated 18 Coreutils patches studied by the CoREBench project. Information about our proposed annotations for these patches is available below.

Shadow VM

Shadow is available in binary form (along with the necessary infrastructure) as a downloadable virtual machine. Please contact Tomasz Kuchta and Cristian Cadar for more details.

Research Support

This research project was generously sponsored by the UK EPSRC through the grant EP/J00636X/1 and by Microsoft Research through its PhD Scholarship Programme.