Can changes in the computational stack affect the correctness of deep learning models?
It is well understood that deep learning models can be sensitive to small perturbations in input data and model architecture, and significant effort has gone into making models more robust against these perturbations. However, the effect of changes in the computational stack (deep learning frameworks, compilers, optimisations, and hardware devices) during model deployment is not well understood. This talk will report on our testing and fault-localisation research into studying and fixing the effects of changes within the computational stack. We focus in particular on deep learning frameworks, as we found that changing the framework during deployment can affect up to 72% of the output labels.
Dr. Ajitha Rajan is a Reader in the School of Informatics at the University of Edinburgh, where she started in 2013, and a Royal Society Industry Fellow. Her research interests lie in software testing, verification, and the robustness and interpretability of artificial intelligence applied to avionics, automotive and embedded systems, blockchains, and medical diagnostics. Her Royal Society Industry Fellowship focuses on testing the correctness of AI algorithms in self-driving cars. Dr. Rajan has been awarded grants from EPSRC, the Royal Society, H2020, Facebook, GCHQ, Huawei, and SICSA.