Daniela Seabra Oliveira, University of Florida
Despite the best efforts of the security community, vulnerabilities in software are still prevalent, with new ones reported daily and older ones recurring. One potential source of these vulnerabilities is API misuse. Developers, as human beings, tend to use shortcuts in their decision-making. They also generally trust APIs, but can misuse them, introducing vulnerabilities. We call the causes of such misuses blindspots. For example, some developers still experience blindspots regarding the implications of using strcpy(), which can lead to buffer overflows. We investigated API blindspots from the developer’s perspective to: (1) determine the extent to which developers can detect API blindspots in code and (2) examine how developer characteristics (i.e., perception of code correctness, familiarity with code, confidence, professional experience, cognitive functioning levels, and personality) affect this capability. We conducted a study with 109 developers from four countries solving programming tasks involving Java APIs known to cause blindspots in developers. We found that: (1) the presence of blindspots correlated negatively with developers’ ability to identify vulnerabilities in code, and this effect was more pronounced for I/O-related APIs and for code with higher cyclomatic complexity; (2) higher cognitive functioning and more programming experience did not predict better ability to detect software vulnerabilities in code; and (3) developers exhibiting greater openness as a personality trait were more likely to detect software vulnerabilities. The insights from this study and this talk have the potential to advance API security and software development processes. The design of new API functions should leverage developer studies to test for misconceptions in API usage. The documentation of legacy functions should address common blindspots developers experience when using the function.
Software security training should highlight that (1) even expert, experienced, and highly intelligent developers will experience blindspots while using APIs, (2) perceptions and "gut feelings" might be misleading, and (3) developers should rely more on diagnostic tools. This talk will also highlight that a rationale common among software development companies, namely that developers should and can address functionality and security simultaneously and that hiring experts will substantially increase software security, might be misleading. Both tasks (functionality and security) are highly cognitively demanding, and attempting to address both might be a zero-sum game, even for experts. Our insights have the potential to create awareness, especially among small- and medium-sized software development companies, that maintaining separate teams to address functionality and security might be a much more cost-effective paradigm for increasing software security than sole reliance on experts who are expected to "do it all".
Daniela Seabra Oliveira is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Florida. She received her B.S. and M.S. degrees in Computer Science from the Federal University of Minas Gerais in Brazil. She then earned her Ph.D. in Computer Science from the University of California at Davis. Her main research interest is interdisciplinary computer security, where she employs successful ideas from other fields to make computer systems more secure. Her current research interests include understanding and addressing developers’ blindspots and social engineering from a neuro-psychological perspective. She received a National Science Foundation CAREER Award in 2012 for her innovative research into operating systems' defense against attacks using virtual machines, the 2014 Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, and the 2017 Google Security, Privacy and Anti-Abuse Award. She is a National Academy of Sciences Kavli Fellow and a National Academy of Engineering Frontiers of Engineering Symposium alumna. Her research has been sponsored by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the National Institutes of Health (NIH), the MIT Lincoln Laboratory, and Google.