Prof. Sarfraz Khurshid, University of Texas at Austin
Abstract: The impact of robust and effective bug-finding systems, such as KLEE, has received increased recognition, establishing symbolic execution as the premier analysis for systematic checking in many domains. This talk focuses on the concept of test ranges, which were initially defined using KLEE for distributing symbolic execution to reduce its cost, and later developed for certifying, memoizing, and integrating symbolic execution to increase its applicability and usefulness. The talk also presents how the use of test ranges generalizes beyond symbolic execution to enhance a few other analyses that share the spirit of systematic exploration.
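To make the idea of a test range concrete, here is a minimal Python sketch under simplifying assumptions: paths are abstracted as branch-decision vectors in a fixed depth-first order that all workers agree on, and the explore/worker names are hypothetical illustrations, not KLEE's API. A range is delimited by two tests, so the path space can be split across workers.

    # Illustrative sketch (not KLEE code): a "range" of the path space is
    # encoded by two tests, each abstracted as a branch-decision vector.
    from itertools import product

    def explore(paths, begin, end):
        """Explore only paths whose decision vector lies in [begin, end]."""
        for decisions in paths:
            if begin <= decisions <= end:
                yield decisions  # a real tool would solve path constraints here

    # All decision vectors for a toy program with three sequential branches,
    # enumerated in the depth-first order shared by all workers.
    all_paths = sorted(product((0, 1), repeat=3))

    # Two workers split the same path space along range boundaries; each
    # boundary is compactly encoded by a single concrete test.
    worker1 = list(explore(all_paths, (0, 0, 0), (0, 1, 1)))
    worker2 = list(explore(all_paths, (1, 0, 0), (1, 1, 1)))
    assert set(worker1) | set(worker2) == set(all_paths)  # no path lost

Because each boundary is just an ordinary test, a range can also be stored and resumed later, which is what makes the memoizing and certifying uses mentioned above possible in this framing.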
Bio: Sarfraz Khurshid is a Professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin, where he leads the Software Verification and Testing Research Group. He obtained his PhD in Computer Science from MIT in 2004. He received his BSc in Mathematics and Computer Science from Imperial College London and read Part III of the Mathematical Tripos at Trinity College, Cambridge. His work on automated test generation won awards at top conferences in software testing (ISSTA 2002), software engineering (ICSE 2010), and security and privacy (IEEE S&P 2014), and was recognized for its impact with the ACM SIGSOFT Impact Paper Award (2012) and the ASE Most Influential Paper Award (2015). He is a recipient of the NSF CAREER Award (2009).
Prof. Alessandro Orso, Georgia Institute of Technology
Abstract: Software testing is the most widely used approach for
assessing and improving software quality, but it is inherently
incomplete and may not be representative of how the software is used
in the field. As a result, software products are typically released
with faults or missing functionality. Modern software, with its
increased dynamism, configurability, and portability, is making the
situation even worse. Because of these characteristics, and of the
limitations of testing, deployed applications may behave (or be used)
very differently in house and in the field, which can result in field
failures – failures of the software that occur after deployment, while
the software is running on user machines. In this talk, I will discuss
two techniques, one reactive and one proactive, that we developed to
cope with field failures: BugRedux and Mimicry. BugRedux aims to
synthesize, using runtime data collected in the field, test cases that
reproduce observed field failures. These synthesized tests can give
developers the power to investigate and perform fully-fledged
debugging of field failures in house. Mimicry extends BugRedux and
aims to prevent field failures by imitating and anticipating user
behavior. To do so, Mimicry detects software behaviors that are
observed in the field but were not exercised during testing and tries
to generate test cases that mimic these and similar behaviors. In this
way, Mimicry can increase the representativeness of in-house testing
and the likelihood of detecting failures before they occur in the
field. In the presentation, I will also discuss empirical results that
show the potential usefulness of the presented techniques and discuss
ongoing and future work.
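As an illustration of the gap-detection step that Mimicry's description implies, the following is a minimal Python sketch under simplifying assumptions: field and in-house behaviors are abstracted as sets of operation sequences, and the set difference yields the untested behaviors a test generator would then try to mimic. The data and names are invented for illustration, not taken from Mimicry itself.

    # Hedged sketch of detecting field behaviors never exercised in house.
    in_house = {
        ("login", "browse", "logout"),
        ("login", "purchase", "logout"),
    }
    field = {
        ("login", "browse", "logout"),
        ("login", "purchase", "refund", "logout"),  # seen only in the field
    }

    # Each untested field behavior becomes a target for automated
    # test generation, increasing the representativeness of testing.
    for behavior in sorted(field - in_house):
        print("generate a mimicking test for:", " -> ".join(behavior))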
Bio: Alessandro Orso is a Professor and Associate School Chair in the College of Computing at the Georgia Institute of Technology. He received his M.S. degree in Electrical Engineering (1995) and his Ph.D. in Computer Science (1999) from Politecnico di Milano, Italy. He has been at Georgia Tech since March 2000. His area of research is software engineering, with emphasis on software testing and program analysis. His interests include the development of techniques and tools for improving software reliability, security, and trustworthiness, and the validation of such techniques on real-world systems. Dr. Orso has received funding for his research from both government agencies, such as DARPA, DHS, NSF, and ONR, and industry, such as Fujitsu Labs, Google, IBM, and Microsoft. He served on the editorial board of ACM TOSEM and on the Advisory Board of Reflective Corp, served as program chair for ACM SIGSOFT ISSTA 2010, and served as program co-chair for IEEE ICST 2013, ACM SIGSOFT FSE 2014, and ACM SIGSOFT/IEEE ICSE 2017. He has also served as a technical consultant to DARPA. Dr. Orso is a senior member of the ACM and of the IEEE Computer Society.
Prof. Abhik Roychoudhury, National University of Singapore
Abstract: Symbolic analysis of programs was first studied for the purpose of program verification. In recent decades, symbolic execution technology has witnessed renewed interest due to the maturity of Satisfiability Modulo Theories (SMT) solvers. Powerful dynamic analysis tools have leveraged these back-end solvers to systematically navigate large search spaces, leading to the application of symbolic analysis in test generation. In this talk, we will first study the power of state-of-the-art symbolic execution based approaches, mostly built on KLEE, for various security vulnerability analysis tasks such as finding zero-day vulnerabilities and crash reproduction. We compare the symbolic analysis approaches to grey-box fuzz testing approaches, which are routinely employed in industrial settings. Through such comparison, we can try to understand the state of practice of symbolic execution in terms of its ability to detect software vulnerabilities.
In the later part of the talk, we will conceptualize the use of symbolic execution approaches and tools for purposes beyond guiding search. We will sketch the usage of symbolic execution in inferring specifications of intended program behavior; this is done by analyzing a buggy program against selected tests. Such specification inference capability can be combined with program synthesis techniques to automatically repair programs. Automated program repair via symbolic execution goes beyond search-based approaches, which lift patches from elsewhere in the program. Such an approach can construct “imaginative” patches, serve as a test-bed for the grand challenge of automated programming, and contribute to the vision of self-healing software.
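As a concrete, heavily simplified illustration of combining specification inference with synthesis, the sketch below replaces symbolic execution with plain test evaluation and enumerates candidate expressions over a tiny template space. Everything in it (the buggy predicate, the tests, the template space) is an assumption for illustration, not the talk's actual toolchain.

    import operator

    # Toy failing suite: the buggy predicate is `x > 10`, the intended
    # behavior is `x >= 10`. All names here are illustrative, not taken
    # from an actual repair tool.
    tests = [(9, False), (10, True), (11, True)]

    # Specification inference: each test constrains the repaired
    # expression to produce the expected output on its input.
    def satisfies_all(candidate):
        return all(candidate(x) == expected for x, expected in tests)

    # Synthesis: search a small template space `x OP c` for expressions
    # meeting every inferred constraint.
    patches = [
        f"x {name} {c}"
        for name, op in ((">", operator.gt), (">=", operator.ge))
        for c in range(20)
        if satisfies_all(lambda x, op=op, c=c: op(x, c))
    ]
    print(patches[0])  # "x > 9", behaviorally equivalent to x >= 10

A symbolic-execution-based repair tool would instead derive the constraints on the replaced expression by executing the tests symbolically and hand them to a solver-backed synthesizer, but the infer-then-synthesize structure is the same.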
Bio: Abhik Roychoudhury is a Professor of Computer Science at the National University of Singapore. His research focuses on software testing and analysis, software security, and trustworthy software construction. His research group has built scalable techniques for testing, debugging, and repair of programs using systematic semantic analysis. He has been an ACM Distinguished Speaker (2013-19). He is currently leading a large five-year targeted research effort funded by the National Research Foundation in the domain of trustworthy software. He is the Lead Principal Investigator of the Singapore Cyber-security Consortium, which is a consortium of over 30 companies in the cyber-security space engaging with academia for research and collaboration. He served as Program Chair of the ACM International Symposium on Software Testing and Analysis (ISSTA) in 2016 and has been an Editorial Board member of IEEE Transactions on Software Engineering (TSE) since 2014. Abhik received his Ph.D. in Computer Science from the State University of New York at Stony Brook in 2000.