Data Races vs. Data Race Bugs: Telling the Difference with Portend
Even though most data races are harmless, the harmful ones are at the heart of some of the worst concurrency bugs. Alas, spotting just the harmful data races in programs is like finding a needle in a haystack: 76%-90% of the true data races reported by state-of-the-art race detectors turn out to be harmless.
We present Portend, a tool that not only detects races but also automatically classifies them based on their potential consequences: Could they lead to crashes or hangs? Alter system state? Could their effects be visible outside the program? Are they harmless? Our proposed technique achieves high accuracy by efficiently analyzing multiple paths and multiple thread schedules in combination, and by performing symbolic comparison between program outputs.
We ran Portend on 7 real-world applications: it detected 93 true data races and correctly classified 92 of them, with no human effort. Six of these are serious, harmful races. Portend’s classification accuracy is up to 88% higher than that of existing tools, and it produces easy-to-understand evidence of the consequences of harmful races, thus both proving their harmfulness and making debugging easier. We envision using Portend for testing and debugging, as well as for automatically triaging bug reports.
Cristian Zamfir is a 4th-year PhD student in the School of Computer and Communication Sciences at EPFL, Switzerland, where he is part of the Dependable Systems Lab led by George Candea. He received his B.S. in Computer Engineering from the University Politehnica of Bucharest and his M.S. from the University of Glasgow. His current research focuses on techniques for automated debugging of concurrent software.
Hosted jointly with the LSDS group.