Enigma 2019 Conference Program

All sessions will be held in Grand Peninsula Ballroom ABCD unless otherwise noted.

Monday, January 28, 2019

7:30 am–8:45 am

Continental Breakfast

Grand Peninsula Foyer

8:45 am–9:00 am

Opening Remarks, Day 1

Program Co-Chairs: Ben Adida, VotingWorks, and Franziska Roesner, University of Washington

9:00 am–10:30 am

Security for Specific User Groups

Session Chair: Wendy Nather, Duo Security

The Kids Aren't Alright—Security and Privacy in the K–12 Classroom

Monday, 9:00 am–9:30 am

Alex Smolen, Clever

Many of the security and privacy mechanisms we build (permission prompts, security warnings, privacy policies) make one critical assumption: the end user is an adult with the agency to make their own decisions. Children, and especially children in schools, operate in a different security and privacy context than the one assumed by the general-purpose online tools they use. Young students can't evaluate security risks or consent to data sharing, yet we give them the same security warnings and privacy controls that confuse adults.

Authentication mechanisms aren't designed for children and don't adapt to their age. Password "best practices" don't account for children who are still learning to type. Many two-factor and password-reset systems don't work for kids who aren't allowed to have phones. Mobile apps with sessions that never expire don't make sense for schools that can't afford a device for every student.

The classroom setting is different from the corporate or consumer internet environment. The dynamic power structure of teachers, school administrators, students, and parents needs to be understood and baked into authentication and authorization tools for schools. Teachers play the role of system administrators, fielding support questions, fixing keyboards, and resetting passwords. School and district administrators have important and complicated relationships with the classroom, and technology is deployed both top-down and bottom-up, making inflexible systems brittle.

While many recognize the promise of technology in the classroom, attempts to design kid-friendly systems are often met with suspicion. Early academic data is sensitive. The concept of a "permanent record" is an educational privacy trope, and in the era of big data it is even more concerning. When students create content in edtech apps, it may be the first time they associate their online identity with data.

While edtech promises a revolution in learning outcomes, it first needs to be both safe and useful. This talk introduces security and privacy challenges kids face using technology in the classroom. It's imperative that we apply security and privacy design principles with an understanding of the real-world classroom context to realize the benefits of education technology for society.

Alex Smolen, Clever

Alex is a security-focused software engineer and architect interested in usable security and privacy by design. He is the Engineering Manager for the Infrastructure and Security teams at Clever. Before joining Clever, Alex was the technical lead for the Account Security team at Twitter. He received his B.S. in Computer Science and his master's degree from the School of Information, both at UC Berkeley.

Rethinking the Detection of Child Sexual Abuse Imagery on the Internet

Monday, 9:30 am–10:00 am

Elie Bursztein, Google

A critical part of the criminal world of child sexual abuse is the creation and distribution of child sexual abuse imagery (CSAI) on the Internet. To combat this crime efficiently and illuminate current defensive shortcomings, it is vital to understand how CSAI content is disseminated on the Internet. Despite the importance of the topic, very little work has been done on the subject so far.

To fill this gap and provide a comprehensive overview of the current situation, we conducted the first longitudinal measurement study of CSAI distribution across the Internet. In collaboration with the National Center for Missing and Exploited Children (NCMEC)—a United States clearinghouse for all CSAI content detected by the public and US Internet services—we examined the metadata associated with 23.4M CSAI incidents from the 1998–2017 period.

This talk starts by summarizing the key insights we garnered during this study about how CSAI content distribution evolved. In particular we will cover how Internet technologies have exponentially accelerated the pace of CSAI content creation and distribution to a breaking point in the manual review capabilities of NCMEC and law enforcement.

Then we will delve into the most pressing challenges that need to be addressed to be able to keep up with the steady increase of CSAI content and outline promising directions to help meet those challenges.

Elie Bursztein, Google

Elie Bursztein leads Google's anti-abuse research, which helps protect users against Internet threats. Elie has contributed to applied cryptography, machine learning for security, malware understanding, and web security, authoring over fifty research papers in the field, for which he was awarded five best paper awards and multiple industry distinctions. Most recently, he was involved in finding the first SHA-1 collision. Elie is a beret aficionado, blogs at https://elie.net, tweets @elie, and performs magic tricks in his spare time. Born in Paris, he received a Ph.D. from ENS Cachan in 2008 before working at Stanford University and ultimately joining Google in 2011. He now lives with his wife in Mountain View, California.

Callisto: A Cryptographic Approach to #MeToo

Monday, 10:00 am–10:30 am

Anjana Rajan, Callisto

Three years ago, Callisto launched its sexual assault reporting platform on college campuses. Callisto recently launched a new product that expands our reach to support any survivor of sexual assault and professional sexual coercion in the United States.

In this new product, users are invited to an online "matching escrow" that will detect repeat perpetrators and create pathways to support for victims. Users of this product can enter the identity of their perpetrator into the escrow. This data can only be decrypted by the Callisto Options Counselor (a lawyer) when another user enters the identity of the same perpetrator. If the perpetrator identities match, both users will be put in touch independently with the Options Counselor, who will connect them to each other (if appropriate) and help them determine their best path towards justice. The client relationships with the Options Counselors are structured so that any client-counselor communications would be privileged. A combination of client-side encryption, encrypted communication channels, oblivious pseudorandom functions, key federation, and Shamir secret sharing keeps data encrypted so that only the Callisto Options Counselor has access to user-submitted data when a match is identified. This presentation will discuss Callisto's cryptographic approach and infosec strategy to solve an urgent social justice problem.
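
To give a flavor of one of the primitives named above, the sketch below shows threshold secret sharing in miniature: a key is split so that no single share reveals anything, and only a quorum (here, two matching reports) can reconstruct it. This is an illustrative toy in Python with invented parameters, not Callisto's implementation.

```python
import random

# Toy k-of-n Shamir secret sharing over a prime field. In a Callisto-style
# escrow, no single report's share reveals the key; only when a second
# matching report arrives can the counselor's side reconstruct it.
PRIME = 2**127 - 1  # a Mersenne prime large enough for this demo

def split_secret(secret: int, k: int = 2, n: int = 2) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)      # e.g., a per-record data-encryption key
shares = split_secret(key)         # one share held per escrowed report
assert reconstruct(shares) == key  # both shares together recover the key
```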

Anjana Rajan, Callisto

Anjana Rajan is the Chief Technology Officer at Callisto, a non-profit that builds technology to combat sexual assault. In this role, Anjana leads the engineering, security and design teams, with a focus on building products that protect the privacy and civil liberties of sexual assault survivors.

Previously, Anjana lived in London and worked at Palantir Technologies, where she built and deployed products in the Middle East. Before that, she worked as a technologist at Johnson & Johnson focusing on building new software products across global healthcare markets. Additionally, Anjana was a triathlete on Team USA and raced at two Triathlon World Championships; she earned her status as a professional athlete for USA Triathlon in 2015.

Anjana was a Knight Scholar at Cornell University and received her bachelor's and master's degrees in Operations Research and Information Engineering, and is an alumna of Y Combinator.

10:30 am–11:00 am

Break with Refreshments

Grand Peninsula Foyer

11:00 am–12:30 pm

Security in Hardware and Networking

Session Chair: Diogo Monica, Anchorage

Hardware Security Modules: The Ultimate Black Boxes

Monday, 11:00 am–11:30 am

Ryan Lackey, Tezos

Hardware Security Modules occupy a unique position in computer security: they are used to manage the most important secrets, but they are closed designs where opacity and tamper-response are inherent design requirements. These devices have had varying levels of adoption, from being the only way to do cryptography fast, to being used only when security was required (often by regulation), to now being used to protect high-value secrets at a distance. Unfortunately, many of the designs on the market are very old, essentially designed for a different use case and threat model than exists today. To a degree, even existing certification procedures act as an impediment to successful use of the technology.

We will describe the issues with on-premises and cloud-based HSMs, some ways to work around these limitations, and how to build a new kind of product for current needs.

Ryan Lackey, Tezos

Ryan Lackey has been a cypherpunk since the early 1990s. As one of the founders of the world's first offshore datahaven (HavenCo on Sealand), he built physical infrastructure to help others engage in jurisdictional arbitrage. In addition to some early anonymous electronic cash projects, he's been a war zone entrepreneur in Iraq and Afghanistan, founded a YC-backed hardware server security startup (sold to Cloudflare), and now works on hardware and systems security for high-risk applications (travel, cryptocurrency, and high-profile targets) and heads security for the Tezos Foundation.

Hardware Is the New Software: Finding Exploitable Bugs in Hardware Designs

Monday, 11:30 am–12:00 pm

Cynthia Sturton, University of North Carolina at Chapel Hill

Bugs in hardware designs can create vulnerabilities that open the machine to malicious exploit. Despite mature functional validation tools and new research in designing secure hardware, the question of how to find and recognize those bugs remains open. My students and I have developed two tools in response to this question. The first is a security specification miner; it semi-automatically identifies security-critical properties of a design specified at the register transfer level. The second tool, Coppelia, is a symbolic execution engine that explores a hardware design and generates complete exploits for the security bugs it finds. We use Coppelia and our set of generated security properties to find new bugs in the open-source RISC-V and OR1k CPU architectures.
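
As a rough illustration of what an exploit-generating engine like Coppelia does, the toy below uses the Z3 solver from Python: treat the design inputs as symbolic, assert the negation of a security property, and let the solver synthesize a violating input. The "design" and its planted bug are invented for the example; this is not OR1k or RISC-V RTL.

```python
from z3 import Solver, BitVec, And, Or, sat

# Symbolic design inputs: a write address and a privilege bit.
addr = BitVec("addr", 32)
priv = BitVec("priv", 1)

# Toy transition rule: supervisor mode is entered on a privileged write to the
# control register at 0x80000000 -- but also, due to a planted bug, on ANY
# write to the aliased address 0x80000004.
enters_supervisor = Or(
    And(priv == 1, addr == 0x80000000),
    addr == 0x80000004,  # the bug
)

# Security property: unprivileged execution must never enter supervisor mode.
# Assert its negation and ask for a counterexample, i.e., an exploit input.
s = Solver()
s.add(priv == 0, enters_supervisor)

if s.check() == sat:
    print("Exploit input found: addr = %#x" % s.model()[addr].as_long())
```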

Cynthia Sturton, University of North Carolina at Chapel Hill

Cynthia Sturton is an Assistant Professor and Peter Thacher Grauer Fellow at the University of North Carolina at Chapel Hill. She leads the Hardware Security @ UNC research group to investigate the use of static and dynamic analysis techniques to protect against vulnerable hardware designs. Her research is funded by several National Science Foundation awards, a Junior Faculty Development Award from the University of North Carolina, and a Google Faculty Research Award. She was recently awarded the Computer Science Departmental Teaching Award at the University of North Carolina. Sturton received her M.S. and Ph.D. degrees from the University of California, Berkeley.

Using Architecture and Abstractions to Design a Security Layer for TLS

Monday, 12:00 pm–12:30 pm

Daniel Zappala, Brigham Young University

TLS is the primary protocol used to provide security and privacy for Internet traffic. Sadly, there is abundant evidence that developers do not use TLS correctly, due to a morass of poorly designed APIs, lack of security expertise, and poor adherence to best practices. In this talk, we argue this is a problem of architecture and abstraction. We first demonstrate how a security layer fits into the Internet architecture, between applications and TCP, and how the POSIX socket API is both a convenient and simple abstraction for a TLS interface. We then discuss ramifications for developers, administrators, and OS vendors, focusing on two major benefits: (1) developers have a centralized, well-tested service with which they can easily create a secure application in minutes, and (2) system administrators and OS vendors can set policy to ensure all applications on a device use best practices. We finish by illustrating how this new abstraction and architecture can simplify two of the most complex parts of TLS—certificate validation and client authentication. We are releasing code for the security layer, including both operating system services and application examples, to stimulate developer and industry interest in this approach.
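
As a rough analogue of the abstraction argued for here, the sketch below uses Python's standard library to show what "TLS behind the socket interface" feels like to a developer: the application keeps the familiar connect/send/recv pattern while certificate validation and protocol details live in a centralized, well-tested layer. This is illustrative only, not the authors' OS-level security layer, which hooks in at the POSIX socket API itself.

```python
import socket
import ssl

# Centralized, well-tested defaults: certificate validation, hostname
# checking, protocol versions, and cipher choices all live here rather
# than in per-application code.
ctx = ssl.create_default_context()

with socket.create_connection(("example.org", 443)) as raw:
    # Same read/write interface as a plain TCP socket; the layer below
    # handles the TLS handshake and validation transparently.
    with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))
```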

Daniel Zappala, Brigham Young University

Daniel Zappala is the director of the Internet Research Lab at BYU. He is primarily interested in network security and usable security, particularly anywhere that people have to interact with cryptography. Daniel’s recent research includes developing a security layer for TLS, designing better usability for secure messaging apps, and studying mental models of encryption. His students recently won second place in the Facebook Internet Defense Prize and Honorable Mention for Distinguished Paper at SOUPS. Daniel has taught classes on Internet Programming, Networking, Security, Usability, Web Programming, and Western Civilization. He is currently serving on the organizing committees of ACSAC and SOUPS, and on the program committees of USENIX Security and PeTS. Daniel earned his Ph.D. in Computer Science at the University of Southern California and his B.S. in Electrical Engineering at Stanford University. If you visit him, Daniel will make you some great pizza.

12:30 pm–2:00 pm

Lunch

Atrium

2:00 pm–3:30 pm

Enterprise Security

Session Chair: Melody Hildebrandt, 21st Century Fox

Privacy Engineering: Not Just for Privacy Engineers

Monday, 2:00 pm–2:30 pm

Jennifer Anderson, Uber

Most privacy talks are given by privacy experts. I’m not a privacy expert. In fact, my job is to help teams across Uber access and analyze appropriate data to make our services smarter and more reliable. As such, my team is often on the receiving end of technical and policy requirements from our privacy teams. This talk will discuss how privacy and data engineers at Uber joined forces to build a privacy-protecting approach to data retrieval and what privacy teams need to know about working with data teams to accomplish their goals. I'll share specific examples from Uber engineering on how we work with our privacy colleagues to enforce least privilege, data protection, and compliance with regulatory requirements.

Jennifer Anderson, Uber

Jennifer Anderson is a senior director of engineering at Uber, where she leads the product platform team, responsible for the platforms and data warehouses supporting growth and core service teams. Previously, she led data analytics and infrastructure for Uber's engineering organization. Before Uber, Jennifer held various R&D management and software engineering roles at VMware and Bebop. She earned her PhD in computer science from Stanford University, and holds multiple patents in performance optimization and virtual computing.

Building Identity for an Open Perimeter

Monday, 2:30 pm–3:00 pm

Tejas Dharamshi, Netflix, Inc.

Netflix is a 100% cloud-first company. The traditional corporate network security perimeter no longer meets our needs. In this talk, I will cover the core building blocks we have invested in to build zero-trust networks at Netflix and make identity the new security perimeter: identity, single sign-on using standards like SAML, OIDC, and OAuth, multi-factor authentication, adaptive authentication, device health, and authorization.

Tejas Dharamshi, Netflix, Inc.

Tejas Dharamshi is a Senior Security Software Engineer at Netflix. Tejas specializes in security and is focused on corporate Identity and Access, Multi-Factor Authentication, Adaptive Authentication, User-Focused & Endpoint Security at scale.

Provable Security at AWS

Monday, 3:00 pm–3:30 pm

Neha Rungta, Amazon Web Services

Using automated reasoning technology, the application of mathematical logic to help answer critical questions about your infrastructure, AWS is able to detect entire classes of misconfigurations that could potentially expose vulnerable data. We call this provable security: absolute assurance in the security of the cloud and in the cloud. This talk highlights how this next-generation cloud security technology protects customers in an evolving threat landscape and how customers are using provable security features in their AWS cloud environments.
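
To make "automated reasoning about configurations" concrete, the sketch below uses the Z3 SMT solver from Python to encode a toy firewall policy as logic and ask whether any packet from outside the internal range can ever be admitted, over all inputs rather than sampled ones. The policy and its encoding are invented for the example; this is not AWS's internal tooling.

```python
from z3 import Solver, Int, And, Or, sat

port = Int("port")
src = Int("src_ip")  # source IPv4 address encoded as an integer

# Toy policy: rule 1 allows any source on port 22; rule 2 allows the
# internal 10.0.0.0/8 range on any port.
allowed = Or(port == 22,
             And(src >= 0x0A000000, src <= 0x0AFFFFFF))

# Question: can a source OUTSIDE 10.0.0.0/8 ever be allowed in?
s = Solver()
s.add(allowed,
      Or(src < 0x0A000000, src > 0x0AFFFFFF),  # external source
      src >= 0, src <= 0xFFFFFFFF,
      port >= 0, port <= 65535)

if s.check() == sat:
    m = s.model()
    print("Public exposure exists: port", m[port], "from source", m[src])
else:
    print("Proved: no external source is ever admitted.")
```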

Neha Rungta, Principal Engineer, Amazon Web Services

Dr. Neha Rungta is a Principal Engineer in the Automated Reasoning Group at Amazon Web Services (AWS), working on formal verification techniques for cloud security. Prior to joining AWS, she worked at NASA Ames, where she was known for her work on symbolic execution, automated program analysis, and airspace modeling. She graduated with a PhD in Computer Science from BYU in 2009.

3:30 pm–4:00 pm

Break with Refreshments

Grand Peninsula Foyer
Sponsored by Facebook

4:00 pm–5:30 pm

Across Boundaries

Session Chair: Tim Maurer, Carnegie Endowment for International Peace

Abusability Testing: Considering the Ways Your Technology Might Be Used for Harm

Monday, 4:00 pm–4:30 pm

Ashkan Soltani, Independent Researcher and Consultant

Ashkan Soltani, Independent Researcher and Consultant

Ashkan Soltani is an independent researcher and technologist specializing in privacy, security, and behavioral economics. His work draws attention to privacy problems online, demystifies technology for the non-technically inclined, and provides data-driven insights to help inform policy.

He previously served a brief stint as a Senior Advisor to the U.S. Chief Technology Officer in the White House Office of Science and Technology Policy and as the Chief Technologist for the Federal Trade Commission, advising the commission on its technology-related policy as well as helping to create its new Office of Technology Research and Investigation. He also served at the FTC in 2010 as one of the first staff technologists in the Division of Privacy and Identity Protection, helping to lead investigations into major technology companies such as Google, Facebook, Twitter, HTC, and PulsePoint.

Ashkan was recognized as part of the 2014 Pulitzer Prize-winning team for his contributions to the Washington Post's coverage of national security issues. He was also the primary technical consultant on the Wall Street Journal's investigative series "What They Know," which was a finalist for the 2012 Pulitzer Prize for Explanatory Reporting.

Grey Science

Monday, 4:30 pm–5:00 pm

Anita Nikolich, Computer Science, Illinois Institute of Technology

Traditional scientific disciplines have a long history of discoveries made by amateur researchers or those with no formal scientific training. The cybersecurity community has many parallels. Papers at serious academic conferences and talks at "hacker" conferences contain surprising overlaps in topics and methods. But academics publish in formal, peer-reviewed journals that are often behind a paywall, while non-academics produce artifacts in the realm of ephemeral "grey literature." The incentives for each group differ enough that no serious effort has been put forth to draw them together. How can we create feedback loops between the academic community, cybersecurity operators, and underground security researchers, who may not even think of themselves as "researchers," so that they can work together on important security and privacy topics?

Anita Nikolich, Illinois Institute of Technology

Anita is a Visiting Fellow in Computer Science at Illinois Institute of Technology. She served as a Cybersecurity Program Director at the National Science Foundation and has held a variety of research, security, and infrastructure roles in academia, industry, and government. While at NSF, she expanded the Transition to Practice program to draw upon innovative basic computer science and security research to solve complex, real-world problems. She currently works on cryptocurrency security as well as the impact of data brokers on personal privacy.

It's Not "Our" Data: Do We Want to Create a World of No Surprises?

Monday, 5:00 pm–5:30 pm

Denelle Dixon, Mozilla

Denelle Dixon, Mozilla

As Chief Operating Officer, Denelle is responsible for the overall operating business, leading the strategic and operational teams to scale Mozilla’s mission impact as a robust open source organization.

Denelle also spearheads Mozilla’s business, policy and legal activities in defense of an open, transparent and commercially sustainable Web and leads the team responsible for negotiating partnerships and revenue relationships that ensure Mozilla’s sustainability. Denelle has been a vocal advocate for encryption, the disclosure of vulnerabilities by governments and greater user choice and control.

Denelle was an outside advisor to Mozilla for several years before joining as Associate General Counsel in 2012. Denelle’s experience prior to joining Mozilla includes working at UK-based private equity firm Terra Firma on brand issues and risk assessment, as well as managing Yahoo!’s legal team advising on products, human rights, antitrust and litigation issues.

Denelle is a qualified lawyer with a B.A. from the University of California, Davis and a J.D. from University of California, Hastings College of Law. She also clerked at the United States District Court for the District of Hawaii.

5:30 pm–7:00 pm

Conference Reception

Atrium
Sponsored by Google

Tuesday, January 29, 2019

8:00 am–8:55 am

Continental Breakfast

Grand Peninsula Foyer

8:55 am–9:00 am

Opening Remarks, Day 2

Program Co-Chairs: Ben Adida, VotingWorks, and Franziska Roesner, University of Washington

9:00 am–10:30 am

Cybercrime and Cyberwar

Session Chair: David Freeman, Facebook

The Offline Dimension of Online Crime

Tuesday, 9:00 am–9:30 am

Jonathan Lusthaus, University of Oxford

The conventional wisdom is that cybercrime is a largely anonymous activity that exists essentially in cyberspace. The supposed anonymity of attackers feeds into a narrative that cybercrime is strange, new, ubiquitous, and ultimately very difficult to counteract. The central purpose of this presentation is to dispute this view. When one looks for it, there is actually a strong offline and local element within cybercrime, alongside the online dimension. In a number of cases, offenders are physically known to each other and work together. Understanding this phenomenon is important for informing policy approaches that seek to address this challenge. The arguments made in this presentation are supported by fieldwork carried out over a seven-year period in some 20 countries, including cybercrime "hotspots" like Russia, Ukraine, Romania, Nigeria, Brazil, China, and the USA. This included interviews with almost 250 participants, including law enforcement, private-sector practitioners, and former cybercriminals.

Jonathan Lusthaus, University of Oxford

Jonathan Lusthaus is Director of The Human Cybercriminal Project in the Department of Sociology and a Research Fellow at Nuffield College, University of Oxford. His research focusses on the "human" side of profit-driven cybercrime: who cybercriminals are and how they are organised. In particular, he is interested in issues of trust, cooperation, and governance. Jonathan has recently completed a seven-year global study on the organisation of cybercrime, published in late 2018 by Harvard University Press under the title Industry of Anonymity: Inside the Business of Cybercrime. As part of this research, he has travelled to cybercrime hotspots around the globe and conducted almost 250 interviews with law enforcement, the private sector, and former cybercriminals. Jonathan holds a doctorate in sociology from the University of Oxford, where he was a Clarendon Scholar.

Learning from the Dark Web Dimension of Data

Tuesday, 9:30 am–10:00 am

Munish Walther-Puri, Presearch Strategy

If data should be treated like money, how do we figure out how much it is worth? What is the value of sensitive personal data to individuals and businesses? Often, it is only when that data is lost or compromised that we understand its true value.

Currently, the value of compromised or lost data is based on the consequences of a breach or major exposure: cost of remediation, damage to corporate reputation, drop in share price, or enforcement actions, legal settlements, and payouts. We acknowledge that the fallout from a lack of security is expensive; however, we need a better way to measure and evaluate compromised digital assets.

On the underground economy of the dark web, cybercriminals have created a market for data, including pricing based on monetization. This market prices the goods (data) and can help us estimate the cost to the economy. Cybercrime pays, and data is the gateway good, an item of value in and of itself. The valuation of this data and market activity can quantify the effective harm caused by cybercrime, fraud, and identity theft. Using concepts from economics, this talk aims to provide an alternative framework for valuing stolen and leaked personal and financial data to help us fight cybercrime more effectively and empower businesses to operate more securely.

Munish Walther-Puri, Presearch Strategy

Munish Walther-Puri is the founder of Presearch Strategy, a firm dedicated to applying technology and analytics to geopolitical risk, strategic intelligence, and cybersecurity. Previously, he was the Chief Research Officer and Head of Intelligence Analytics at Terbium Labs, where he led the strategy and operations of the company's analysis, intelligence, and reporting. He also spent time at Citigroup, working at the intersection of fraud, cyber investigations, and terrorism. Throughout his career, Walther-Puri has worked as an analyst, consultant, and trainer in strategic intelligence and analysis at a bank, a big data analytics startup, political risk consultancies, and think tanks, including the Brookings Institution. He is an active member of the analyst and intelligence community and a board member of Women 2.0.

Countering Adversarial Cyber Campaigns

Tuesday, 10:00 am–10:30 am

Max Smeets, Stanford University

Over the course of the last three decades, and increasingly over the past eight years, state and semi-state actor behavior in cyberspace has veered in a direction that much of the cybersecurity research has not followed. While much of the academic and policy community focuses on the "high-and-right" cyber action equivalent to an armed attack (the concept of cyber war), the actual behavior of actors has been of a far more nuanced nature. What we have been observing are campaigns composed of linked cyber operations, with the specific objective of achieving strategic outcomes without the need for armed attack. These campaigns are not simply transitory clever tactics. Rather, they reflect the structural imperatives of cyberspace itself as a domain, and as such they will be the central mechanism of state and semi-state competition in this realm as long as the core structure of cyberspace endures. The fundamental nature of cyberspace rests on a structure of interconnectedness and a condition of constant contact. Once recognized, that nature requires us to study cyber means not as enablers of war, although they can be, but more critically as the alternative to it.

This presentation argues that cyberspace is a new field of competition in power politics and that cyber campaigns are now a salient means, alternative to war, of achieving strategic outcomes. We propose and evaluate a new set of measures, going beyond the conventional approaches of norm-setting, deterrence, and resilience, to address today's cyber policy challenges.

Max Smeets, Stanford University

Dr. Max Smeets is a cybersecurity postdoctoral fellow at Stanford University's Center for International Security and Cooperation (CISAC). He is also a non-resident cybersecurity policy fellow at New America and a Research Associate at the Centre for Technology & Global Affairs, University of Oxford.

Max's current book project focuses on the causes underlying cyber proliferation and restraint. He was awarded the annual 2018 Amos Perlmutter Prize of the Journal of Strategic Studies for the most outstanding manuscript submitted for publication by a junior faculty member.

Max was previously a College Lecturer at Keble College, University of Oxford. He has held research positions at Oxford Cyber Studies Programme, Columbia University SIPA, Sciences Po CERI, and NATO CCD COE.

He holds an undergraduate degree from University College Roosevelt, Utrecht University, and an M.Phil (Brasenose College) and DPhil (St. John's College) in International Relations from the University of Oxford.

10:30 am–11:00 am

Break with Refreshments

Grand Peninsula Foyer
Sponsored by Shopify

11:00 am–12:30 pm

Payments and Cryptocurrency

Session Chair: Lea Kissner, Google

Usage of Behavioral Biometric Technologies to Defend Against Bots and Account Takeover Attacks

Tuesday, 11:00 am–11:30 am

Ajit Gaddam, Visa Inc.

Frictionless strong authentication is a critical driver for enabling ecommerce and many other modern technology systems to thrive. In this talk, I showcase the challenges of tackling modern, sophisticated machine-based attacks and other malicious human activity attempting account takeover using stolen or compromised credentials. I then take a quick dive into the engineered solution, which performs behavioral analytics using biometric data, and show how it tackles machine learning problems at the scale of hundreds of millions of authentication attempts. Insight is provided into implementation challenges, machine learning model generation, and finally integration into a very complex ecosystem. This talk will also showcase wins and how this eventually enabled a zero-trust environment.
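
As a tiny illustration of the kind of signal behavioral biometrics can draw on, the sketch below computes inter-keystroke timing features for a login attempt; scripted or replayed logins often show unnaturally uniform timing. The feature set and threshold are invented for the example and are far simpler than a production model.

```python
from statistics import mean, stdev

def keystroke_features(timestamps_ms: list[float]) -> dict[str, float]:
    """Summarize the gaps between successive key events in one login attempt."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return {"mean_gap": mean(gaps), "stdev_gap": stdev(gaps), "min_gap": min(gaps)}

human = keystroke_features([0, 180, 345, 560, 770, 995])
bot = keystroke_features([0, 50, 100, 150, 200, 250])  # suspiciously uniform

# Near-zero variance in key timing is one hint of automation; real systems
# combine many such signals in a large ML model rather than a fixed threshold.
for label, f in (("human", human), ("bot", bot)):
    print(label, "flagged" if f["stdev_gap"] < 5 else "ok", f)
```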

Ajit Gaddam, Visa Inc.

Ajit Gaddam is the Head of Security Engineering at Visa, where he is responsible for building large-scale machine-learning-driven defenses, leading engineering programs, and providing expert guidance on cybersecurity matters. He has presented at conferences worldwide, including Black Hat, Strata Hadoop, COSO Dubin, and GCS Ukraine. Ajit has been quoted by major media organizations, and his work has been showcased in academic journals, security publications, and two published books. He is an active participant in various open source and information security standards bodies, a prolific inventor of disruptive technologies (over 80 global patents), and moonlights as an instructor (SANS, community colleges).

Cryptocurrency: Burn It with Fire

Tuesday, 11:30 am–12:00 pm

Nicholas Weaver, International Computer Science Institute (ICSI) and University of California, Berkeley

The entire cryptocurrency and blockchain space is effectively one big fraud. Cryptocurrencies are not fit for purpose unless you need censorship resistance, are fundamentally incompatible with modern finance, and are unfixable. They are, however, destroyable as they have technical, legal, and social weaknesses that can be exploited.

Nicholas Weaver, International Computer Science Institute (ICSI) and University of California, Berkeley

Nicholas received a B.A. in Astrophysics and Computer Science in 1995 and his Ph.D. in Computer Science in 2003, both from the University of California at Berkeley. Although his dissertation was on novel FPGA architectures, he was also highly interested in computer security, including postulating the possibility of very fast computer worms in 2001.

In 2003, he joined ICSI, first as a postdoc and then as a staff researcher. His primary research focus is on network security, notably worms, botnets, and other internet-scale attacks, and network measurement. Other areas have included both hardware acceleration and software parallelization of network intrusion detection, defenses for DNS resolvers, and tools for detecting ISP-introduced manipulations of a user's network connection.

Building a Secure Data Market on Blockchain

Tuesday, 12:00 pm–12:30 pm

Noah Johnson, Oasis Labs

Data analytics and machine learning can provide enormous societal value and foster advancements in many industries. However, most of the valuable data needed to power these innovations remains restricted and siloed due to privacy concerns. This talk will discuss how blockchain technology, combined with privacy-preserving techniques, can enable a secure data market allowing users to share their data for analytics and machine learning while maintaining privacy, transparency, and control—without relying on trust of any central organization.

Noah Johnson, Oasis Labs

Noah Johnson is co-founder and Chief Product Officer at Oasis Labs with expertise in program analysis, security policy enforcement, and privacy-preserving techniques. Noah obtained his PhD in Electrical Engineering and Computer Science from UC Berkeley where he was advised by Professor Dawn Song. Previously Noah co-led a team of students in the development of DroidBlaze, a platform for automated security analysis and policy enforcement for mobile applications. Noah and Professor Song commercialized the DroidBlaze platform by founding Ensighta Security, which was acquired by FireEye in 2012.

12:30 pm–1:30 pm

Lunch

Atrium

1:30 pm–3:30 pm

Web Security

Session Chair: Julien Vehent, Mozilla

Next Steps For Browser Privacy: Pursuing Privacy Protections Beyond Extensions

Tuesday, 1:30 pm–2:00 pm

Peter Snyder, Brave Software

Practically focused privacy research has disproportionately concentrated on the browser extension layer. This predilection for extensions is a double-edged sword. On the positive side, extensions are both simpler to develop and easier to distribute than deeper-reaching modifications, allowing researchers to iterate quickly and share their work with a large audience. On the negative side, an extension focus reduces the privacy improvements that can be achieved, as extensions can only modify a limited set of browser behaviors. Researchers exploring modifications beyond the extension layer also lack easy ways of sharing their findings with a broad audience.

As a result, many possible web privacy improvements go under-explored. In this talk, I'll discuss three privacy improvements being developed at Brave that would not be possible at the extension layer. I hope to encourage other researchers and privacy activists to move beyond an extension-focused deployment strategy, and to consider privacy-oriented browser vendors as deployment strategies for getting their improvements in the hands of web users.

Peter Snyder, Brave Software

Peter Snyder is the Privacy Researcher at Brave Software, where he works on improving the privacy guarantees of the Brave Browser. He received his Ph.D. in Computer Science from the University of Illinois at Chicago in 2018. His research focuses on web security and privacy, browser hardening and measuring how the growth of the Web API has impacted user privacy and security.

User Agent 2.0: What Can the Browser Do for the User?

Tuesday, 2:00 pm–2:30 pm

Ben Livshits, Brave Software

Browsers are the window that the user has onto the ever-expanding web, with the good, the bad, and the ugly that it contains. Security mechanism design on the web has traditionally relied on the user to make rational, carefully considered choices. Too often this becomes a barrage of prompts and dialogues, which end users ultimately tend to ignore.

In this talk, we highlight the fact that this assumption is based on flimsy science at best and, at worst, is completely debunked. We therefore argue that the browser should do more to help the user with these decisions, thereby truly stepping into the shoes of a user agent. While there may be decisions the user has to make, they must be less frequent and presented in a way that gives the user a reasonable basis for making a well-informed decision. For example, a prompt to switch the browser into private browsing mode, or to block all third-party cookies on a given site because of the nature of the content being browsed, might be accompanied by a side-by-side before-and-after picture.

Ben Livshits, Brave Software

Ben Livshits is the Chief Scientist for Brave Software, a company that makes a novel privacy-friendly web browser. Dr. Livshits is also an Associate Professor at Imperial College London and an affiliate professor at the University of Washington. Previously, he was a research scientist at Microsoft Research. He received a bachelor's degree in Computer Science and Math from Cornell University and his M.S. and Ph.D. in Computer Science from Stanford University.

Dr. Livshits' research interests include the application of sophisticated static and dynamic analysis techniques to finding errors in programs. Ben has published papers at PLDI, POPL, Oakland Security, Usenix Security, CCS, SOSP, ICSE, FSE, and many other venues, and has served on the PCs of the same conferences. He is known for his work in software reliability and especially tools to improve software security, with a primary focus on approaches to finding buffer overruns in C programs and a variety of security vulnerabilities (cross-site scripting, SQL injections, etc.) in Web-based applications. He is the author of over 100 academic papers, has received dozens of patents, and has won multiple tech transfer awards for bringing research into practice.

Where Is the Web Closed?

Tuesday, 2:30 pm–3:00 pm

Sadia Afroz, International Computer Science Institute (ICSI)

One of the Internet's greatest strengths is the degree to which it facilitates access to any of its resources for users anywhere in the world. The Internet has already become a crucial part of our lives. People around the world use the internet to communicate, connect, and do business. Yet various commercial, technical, and national interests constrain universal access to information on the internet.

I will discuss three reasons for the closed web that are not caused by government censorship: blocking visitors from the EU to avoid GDPR compliance, blocking based on the visitor's country, and blocking due to security concerns. These decisions can have an adverse effect on the people of the blocked regions, especially developing regions. With many key services, such as education, commerce, and news, offered by a small number of web-based Western companies who might not view the developing world as worth the risk, such indiscriminate blanket blocking could slow the growth of blocked developing regions.

As we build the future web, we need to discuss the implications of such blocking practices and build technologies that ensure an open web for users around the world.

Sadia Afroz, International Computer Science Institute (ICSI)

Sadia Afroz is a research scientist at the International Computer Science Institute (ICSI). Her work focuses on anti-censorship, anonymity and adversarial learning. Her work on adversarial authorship attribution received the 2013 Privacy Enhancing Technology (PET) award, the best student paper award at the 2012 Privacy Enhancing Technology Symposium (PETS) and the 2014 ACM SIGSAC dissertation award (runner-up).

The URLephant in the Room

Tuesday, 3:00 pm–3:30 pm

Emily Stark, Google

In a security professional’s ideal world, every web user would carefully inspect their browser’s URL bar on every page they visit, verifying that they are accessing the site they intend to be accessing. In reality, many users rarely notice the URL bar and don’t know how to interpret the URL to verify a website’s identity. An evil URL may even be carefully designed to be indistinguishable from a legitimate one, such that even an expert couldn’t tell the difference! In this talk, I’ll discuss the URLephant in the room: the fact that the web security model rests on users noticing and understanding URLs as indicators of website identities, but they don’t actually work very well for that purpose. I’ll discuss how the Chrome usable security team measures whether an indicator of website identity is working, and when the security community should consider breaking some rules of usable security in search of better solutions. Finally, I’ll share some thoughts on the big question: is it time to give up entirely on URLs as a user-facing security mechanism?

Emily Stark, Google Inc.

Emily Stark leads the Google Chrome usable security team, which is responsible for helping users and developers make safe decisions on the web. Her work includes promoting HTTPS adoption, making HTTPS more usable and secure, and improving many of Chrome's user-facing security and privacy features, from warnings to DevTools to URL display. She holds degrees in computer science from Stanford University and MIT.

3:30 pm–4:00 pm

Break with Refreshments

Grand Peninsula Foyer
Sponsored by Uber

4:00 pm–5:00 pm

Mobile Security

Session Chair: Aanchal Gupta, Facebook

Mobile App Privacy Analysis at Scale

Tuesday, 4:00 pm–4:30 pm

Serge Egelman, University of California, Berkeley, and International Computer Science Institute (ICSI)

Mobile platforms have enabled third-party app ecosystems that provide users with an endless supply of rich content. At the same time, mobile devices present very serious privacy risks: their ability to capture real-time data about our behaviors and preferences has created a marketplace for user data that most consumers are simply unaware of. In this talk, I will present research that my group has conducted to automatically examine the privacy behaviors of mobile apps. Using analysis tools that we developed, we have tested over 80,000 of the most popular Android apps to examine what data they access and with whom they share it. I will present data on how mobile apps are tracking and profiling users, how these practices are often against users' expectations and public disclosures, and how app developers may be violating various privacy regulations.

The main takeaway from this talk is that there are many stakeholders who can be doing more to improve privacy on mobile platforms: (1) mobile app developers need to better understand the privacy behaviors of the third-party SDKs that they use, as well as better communicate their privacy practices to their users; (2) the providers of third-party services (e.g., SDKs) and platforms need to do a better job of enforcing their own terms of service; and (3) regulators need tools that allow them to proactively audit compliance.

Serge Egelman, University of California, Berkeley, and International Computer Science Institute (ICSI)

Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), which is an independent research institute affiliated with the University of California, Berkeley. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, which is the top venue for human-computer interaction research; his research on privacy on mobile platforms has been cited in numerous lawsuits and regulatory actions. He received his PhD from Carnegie Mellon University.

Insider Attack Resistance in the Android Ecosystem

Tuesday, 4:30 pm–5:00 pm

René Mayrhofer, Google

The threat model for a mobile device ecosystem is complex. In addition to the obvious physical attacks on lost or stolen devices and malicious code threats, typical mobile devices integrate a significant amount of code from different organizations into their system images, which are in turn executed on an increasingly complex hardware infrastructure. Both benign mistakes and malicious attacks could happen on any of these layers, by any of these organizations. Therefore, users as well as app developers and service providers currently have to trust every single one of these organizations. Note that OEMs (original equipment manufacturers), in their role as integrators, typically verify their supply chain and the components they integrate. However, other parties in the full chain can tamper with devices after they leave an OEM and before they are in the hands of users. In summary, many people could, by honest mistake or malicious intent, tamper with components of a modern smartphone to compromise user security. We call such attacks insider attacks, independently of the motivation or association of these insiders. The basic threat is that insiders have privileged access to some components during the manufacturing or update chain that would allow them to make modifications that third parties could not.

This talk will introduce the complexity of the insider attack problem (which is not unique to Android) and some defenses that have already been put in place. In Android, we counter such insider attacks on multiple levels and aim to remove or limit the capability of insiders to harm users, which limits the required trust in many of the involved parties. At the secure hardware level, Android 9 Pie introduced insider attack resistance (IAR) for updates to tamper-resistant hardware, such as the secure elements used to validate the user knowledge factor in authentication and to derive, store, and use cryptographic key material. Even Google and the respective OEM are technically incapable of distributing modified firmware to such tamper-resistant hardware to exfiltrate user keys without the user's cooperation. At the system software level, some devices make the hash of their currently running firmware available for (anonymous) local and remote verification. The combination of these features already provides transparency at the system software level and severely limits the possibility of targeted attacks on the firmware and system software levels.

We continue to work on this problem, and this talk is partially a call to action for the security community to devise additional novel methods to mitigate insider attacks on components in the mobile device landscape.
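
As a minimal sketch of the firmware-verification idea mentioned above, a device exposes the hash of the firmware it is actually running, and anyone can compare that hash against the vendor's published releases. The names and release list below are invented for the example; this is not Android's actual mechanism.

```python
import hashlib

def firmware_digest(image: bytes) -> str:
    """Hash of a firmware image; a device would report this for its running image."""
    return hashlib.sha256(image).hexdigest()

# Hashes of officially released images, e.g., published in a transparency log.
OFFICIAL_RELEASES = {
    firmware_digest(b"factory-image-v1"),
    firmware_digest(b"factory-image-v2"),
}

def verify_device(reported_hash: str) -> bool:
    """Anything outside the official set suggests insider modification."""
    return reported_hash in OFFICIAL_RELEASES

print(verify_device(firmware_digest(b"factory-image-v2")))  # True
print(verify_device(firmware_digest(b"tampered-image")))    # False
```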

René Mayrhofer, Google

René Mayrhofer currently heads the Android Platform Security team and tries to make recent advances in usable mobile security research available to the billions of Android users. He is on leave from the Institute of Networks and Security at Johannes Kepler University Linz (JKU), Austria, where he continues to supervise PhD and Master's students. Previously, he held a full professorship for Mobile Computing at Upper Austria University of Applied Sciences, Campus Hagenberg, a guest professorship for Mobile Computing at the University of Vienna, and a Marie Curie Fellowship at Lancaster University, UK. His research interests include computer security, mobile devices, network communication, and machine learning, which he currently brings together in his work on securing mobile devices. Within the scope of u'smile, the Josef Ressel Center for User-friendly Secure Mobile Environments, his research group looked into full-stack security of mobile devices, from hardware through firmware up to user interaction aspects. One particular outcome was a prototype for a privacy-conscious Austrian mobile Driving License (AmDL) on Android smartphones, supported by tamper-resistant hardware.

5:30 pm–7:00 pm

Conference Reception

Atrium
Sponsored by Netflix

Wednesday, January 30, 2019

8:00 am–8:55 am

Continental Breakfast

Grand Peninsula Foyer

8:55 am–9:00 am

Opening Remarks, Day 3

Program Co-Chairs: Ben Adida, VotingWorks, and Franziska Roesner, University of Washington

9:00 am–10:30 am

Computer Security and Government

Session Chair: Joe Calandrino, Federal Trade Commission

Digital Authoritarianism, Data Protection, and the Battle over Information Control

Wednesday, 9:00 am–9:30 am

Andrea Little Limbago, Virtru

Authoritarian regimes increasingly integrate automated bots, digital trolls, and cyber warriors to achieve a broad range of objectives, including data theft, destruction, and manipulation. This strategy for information control and dominance is no longer limited to major-power nation-states. It is increasingly diffusing to smaller states as well as a range of non-state actors, and has impacted international events ranging from multi-state economic boycotts to election interference across the globe. As it proliferates, this modern authoritarian playbook is also restructuring global regimes and defining global norms pertaining to security and privacy, in the absence of a strong and resilient democratic model.

To counter the proliferation of this authoritarian model, a major strategic overhaul of information security within democracies is required. It is time for a strategic renaissance in information security. This requires removing the stovepipes that divide information operations and cybersecurity, and avoiding conceptual stretching in favor of greater specificity in the terminology and strategy used to modernize the democratic playbook. Importantly, this reimagination must be in sync with technological and social changes, and provide a democratic alternative to the authoritarian model that is increasingly taking hold globally.

I will first provide an overview of the major innovations across bots, trolls, and warriors, including specific use cases of their integration as a holistic strategy. Next, I will address how this authoritarian model is restructuring the international system, shaping global norms and internet standards, and redefining acceptable behavior in war and peace. Finally, I will offer recommendations for the path ahead given this shifting international landscape, and what the private and public sectors within democracies should do as the digital defenders of security, privacy, and individual freedoms.

Andrea Little Limbago, Virtru

Dr. Andrea Little Limbago is a computational social scientist specializing in the intersection of technology, national security, and society. She currently is the Chief Social Scientist at Virtru, an encryption and data privacy software company, where she researches and writes on the geopolitics of cybersecurity, global data protection trends, and usable security. Her writing has been featured in numerous outlets, including Politico, the Hill, Business Insider, War on the Rocks, and Forbes. Andrea frequently presents on a range of cybersecurity topics such as norms, attacker trends, computational propaganda, data protection, and workforce development. Andrea is also a Senior Fellow and Program Director for the Emerging Technologies Law and Policy Program at the National Security Institute at George Mason, and contributes to numerous security conference program review committees. She previously was the Chief Social Scientist at Endgame. Prior to that, Andrea taught in academia and was a technical lead at the Department of Defense, where she earned a top award for technical excellence. Andrea earned a PhD in Political Science from the University of Colorado at Boulder.

Mr. Lord Goes to Washington, or Applying Security outside the Tech World

Wednesday, 9:30 am–10:00 am

Bob Lord, Democratic National Committee

Over the past year, I have had the honor of applying some of my experiences securing large enterprises to a new domain: a major political party. Along the way, I dealt with phishing attacks (including one you have already read about), helped roll out best practices to a decentralized party ecosystem, and encountered disinformation campaigns. In this talk, I’ll present my findings, many of which apply to any small or medium-sized business, as well as a number of suggestions for people building tech products.

Bob Lord, Democratic National Committee

Bob Lord is the Chief Security Officer at the Democratic National Committee, bringing more than twenty years of experience in the information security space to the Committee, state parties, and campaigns. Previously he was Yahoo's CISO, covering areas such as risk management, product security, security software development, e-crimes, and APT programs. Before that, he was the CISO in Residence at Rapid7, and before that he headed up Twitter's information security program as its first security hire. You can see some of his hobbies at https://www.ilord.com.

Convincing the Loser: Securing Elections against Modern Threats

Wednesday, 10:00 am–10:30 am

Ben Adida, VotingWorks

What does it take to build a secure voting system that people actually use? Because a democracy is only as vibrant as its citizens are involved, this talk will arm the audience with the latest answers we have to that question, including what's settled, what's not serious, and what's still being debated.

Ben Adida, VotingWorks

Ben Adida is CEO of VotingWorks, a non-profit building secure and affordable voting machines. His passion is using technology to empower individuals and level the playing field. Ben has extensive experience leading product & engineering teams designing voting, identity, payment, and health IT systems. Ben holds a PhD, Masters, and Bachelors in Computer Science from MIT. He sits on the Board of Creative Commons.

10:30 am–11:00 am

Break with Refreshments

Grand Peninsula Foyer
Sponsored by Dropbox

11:00 am–12:30 pm

(Mis)Configurations and Experimental Design

Session Chair: Carrie Gates, Bank of America

Moving Fast and Breaking Things: Security Misconfigurations

Wednesday, 11:00 am–11:30 am

Kevin Borgolte, Princeton University

Nowadays, security incidents have become a familiar "nuisance," and they regularly lead to the exposure of private and sensitive data. In practice, the root causes of such incidents are rarely complex attacks. Instead, they are enabled by simple misconfigurations, such as authentication not being required or security updates not being installed. For example, the leak of over 140 million Americans' private data from Equifax's systems is among the most severe misconfigurations in recent history: the underlying vulnerability was long known, and a security patch had been available for months, but it was never applied. Ultimately, Equifax blamed an employee for forgetting to update the affected system, highlighting his personal responsibility.

In this talk, we investigate the operators' perspective on security misconfigurations to approach the human component of these security issues. We focus on system operators because they are, ultimately, the ones held responsible for the misconfigurations. Yet they might not actually be a security issue's root cause; other organizational factors might have led to it. We provide an analysis of system operators' perspective on security misconfigurations, and we determine the factors that operators perceive as the root causes. Finally, based on our findings, we provide practical recommendations on how to reduce the frequency and impact of security misconfigurations.

Kevin Borgolte, Princeton University

Kevin Borgolte is a postdoctoral research scientist at Princeton University in the Department of Computer Science and the Center for Information Technology Policy. His research interests span network and system security, currently focused on large-scale Internet abuse, IPv6 security, and security misconfigurations. He is a member of the Shellphish Capture the Flag team, and he won third place in the DARPA Cyber Grand Challenge (CGC). Kevin holds a PhD in Computer Science from the University of California, Santa Barbara, which he earned in September 2018.

Stethoscope: Securely Configuring Devices without Systems Management

Wednesday, 11:30 am–12:00 pm

Andrew M. White

Available Media

Insecurely configured endpoints are a major risk for both organizations and individuals, one which is particularly hard to address in an increasingly bring-your-own-device world. Netflix works with hundreds of individual contractors, companies, vendors, and other third parties who need access to corporate data and services. These third parties often have their own devices, which Netflix does not own and cannot control, yet must secure.

To address these issues, we developed the Stethoscope native app, a tool that recommends configuration changes to improve the security of a user's device and optionally allows organizations to verify device configuration at authentication time. The app, designed to avoid the operational burdens and risks of traditional systems management tooling, does not require administrator access, is read-only, and is open source. It guides users through securely configuring their device while providing the context they need to understand why these changes are important. Incorporating Stethoscope into an endpoint strategy helps provide security without the need to fully control or own devices.
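
As a rough illustration of this approach, the following Python sketch shows the kind of read-only, no-administrator posture check such a tool might run. It is not Stethoscope's actual code: fdesetup is the real macOS FileVault status utility, but the check structure and advice text are illustrative assumptions.

    import subprocess

    def check_disk_encryption():
        """Read-only posture check: query FileVault status on macOS.
        Illustrative sketch only, not Stethoscope's implementation."""
        try:
            result = subprocess.run(
                ["fdesetup", "status"],  # status query; no administrator rights needed
                capture_output=True, text=True, timeout=5,
            )
        except (OSError, subprocess.TimeoutExpired):
            return {"check": "disk-encryption", "status": "unknown", "advice": None}
        passed = "FileVault is On" in result.stdout
        return {
            "check": "disk-encryption",
            "status": "pass" if passed else "fail",
            # Pair every failing check with context so users understand why it matters.
            "advice": None if passed else
                "Turn on FileVault so a lost or stolen laptop does not expose your data.",
        }

    if __name__ == "__main__":
        print(check_disk_encryption())

Because a check of this kind only reads state and never changes it, it avoids the remote-control risk that traditional systems management agents introduce.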

Andrew M. White

Andrew worked on user-focused security and behavioral analytics for anomaly detection at Netflix. He holds a PhD in Computer Science from the University of North Carolina at Chapel Hill; his dissertation dealt primarily with mitigating and exploiting side channels in encrypted network traffic.

When the Magic Wears Off: Flaws in ML for Security Evaluations (and What to Do about It)

Wednesday, 12:00 pm–12:30 pm

Lorenzo Cavallaro, King's College London

Available Media

Academic research on machine learning-based malware classification appears to leave very little room for improvement, boasting F1 performance figures of up to 0.99. Is the problem solved? In this talk, we argue that there is an endemic issue of inflated results due to two pervasive sources of experimental bias: spatial bias, caused by distributions of training and testing data not representative of a real-world deployment, and temporal bias, caused by incorrect splits of training and testing sets (e.g., in cross-validation) leading to impossible configurations. To overcome this issue, we propose a set of space and time constraints for experiment design. Furthermore, we introduce a new metric that summarizes the performance of a classifier over time, i.e., its expected robustness in a real-world setting. Finally, we present an algorithm to tune the performance of a given classifier. We have implemented our solutions in TESSERACT, an open source evaluation framework that allows a fair comparison of malware classifiers in a realistic setting. We used TESSERACT to evaluate two well-known malware classifiers from the literature on a dataset of 129K applications, demonstrating the distortion of results due to experimental bias and showcasing significant improvements from tuning.
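
As a rough illustration of the temporal constraint (a Python sketch, not TESSERACT's actual implementation; the sample fields and cutoff parameter are assumptions):

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Tuple

    @dataclass
    class Sample:
        features: List[float]
        label: int            # 1 = malware, 0 = goodware
        first_seen: datetime  # when the app first appeared in the wild

    def temporal_split(samples: List[Sample],
                       cutoff: datetime) -> Tuple[List[Sample], List[Sample]]:
        """Enforce the temporal constraint: all training data must predate all
        test data, so the classifier never learns from 'future' samples (the
        impossible configurations that random cross-validation can create)."""
        train = [s for s in samples if s.first_seen < cutoff]
        test = [s for s in samples if s.first_seen >= cutoff]
        return train, test

    def malware_ratio(samples: List[Sample]) -> float:
        """Spatial-bias check: the test set's malware/goodware ratio should
        mirror the ratio expected in a real-world deployment."""
        return sum(s.label for s in samples) / len(samples)

Evaluating the classifier on successive post-cutoff time windows, rather than on one shuffled split, is what makes it possible to summarize its performance over time.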

Lorenzo Cavallaro, King's College London

Lorenzo Cavallaro is a Full Professor of Computer Science, Chair in Cybersecurity (Systems Security) in the Department of Informatics at King's College London, where he leads the Systems Security Research Lab. He received a combined BSc-MSc (summa cum laude) in Computer Science from the University of Milan in 2004 and a PhD in Computer Science from the same university in 2008. Prior to joining King's College London, Lorenzo worked in the Information Security Group at Royal Holloway, University of London (Assistant Professor, 2012; Associate Professor, 2016; Full Professor, 2018), and held Post-Doctoral and Visiting Scholar positions at Vrije Universiteit Amsterdam (2010–2011), UC Santa Barbara (2008–2009), and Stony Brook University (2006–2008). His research builds on program analysis and machine learning to address threats against the security of computing systems. Lorenzo is Principal Investigator on a number of research projects primarily funded by the UK EPSRC, the EU, Royal Holloway, and McAfee. He received the USENIX WOOT Best Paper Award 2017, and he publishes at and sits on the technical program committees of well-known international conferences, including USENIX Security, ACM CCS, NDSS, WWW, ACSAC, and RAID.

12:30 pm–1:30 pm

Lunch

Atrium

1:30 pm–3:00 pm

Vulnerabilities

Session Chair: Michael Coates, Altitude Networks

If Red Teaming Is Easy: You're Doing It Wrong

Wednesday, 1:30 pm–2:00 pm

Aaron Grattafiori, Facebook

Available Media

Red Teaming is popular with both internal security teams and external contractors as a way to emulate real-world attacks and improve defenses. Going beyond the pentest model, Red Teaming delivers inarguable results that critically inform detection, prevention, and response for an organization's security. However, it is often thought of as the "easy" side of InfoSec, and many Red Teams operate on a "win and go home" model. It can be quite easy, but if it is, you're not achieving the true goal: improved security at an organization or company via an adversarial perspective.

In this talk, Aaron will explore how proper Red Teaming can be extremely challenging: it often requires understanding how an organization functions, knowing how to attack different technology stacks, and even exploring business risks, insider threats, and abuse. To have an impact or achieve a compromise, a team may sometimes need to understand the target areas better than the people who create or maintain them. However popular Red Teaming is now, and whatever is being targeted, we're only scratching the surface of what is possible.

Aaron Grattafiori, Facebook

Aaron Grattafiori leads the Red Team at Facebook, where he focuses on offensive security, vulnerability research, adversary simulation, and performing bold full scope operations. Previously, Aaron was a principal consultant and research lead at iSEC Partners/NCC Group for many years. Aaron has spoken at national security conferences such as Black Hat and DEFCON as well as regional conferences such as Toorcon and SOURCE. When not breaking things, he enjoys covert channels and long walks on the blockchain.

Why Even Experienced and Highly Intelligent Developers Write Vulnerable Code and What We Should Do about It

Wednesday, 2:00 pm–2:30 pm

Daniela Seabra Oliveira, University of Florida

Available Media

Despite the best efforts of the security community, vulnerabilities in software are still prevalent, with new ones reported daily and older ones recurring. One potential source of these vulnerabilities is API misuse. Developers (as human beings) tend to use shortcuts in their decision-making. They also generally trust APIs, but can misuse them, introducing vulnerabilities. We call the causes of such misuses blindspots. For example, some developers still experience blindspots on the implications of using strcpy(), which can lead to buffer overflows.

We investigated API blindspots from a developers' perspective to (1) determine the extent to which developers can detect API blindspots in code and (2) examine how developer characteristics (i.e., perception of code correctness, familiarity with code, confidence, professional experience, cognitive functioning levels, and personality) affect this capability. We conducted a study with 109 developers from four countries solving programming tasks involving Java APIs known to cause blindspots in developers. We found that (1) the presence of blindspots correlated negatively with developers' ability to identify vulnerabilities in code, and this effect was more pronounced for I/O-related APIs and for code with higher cyclomatic complexity; (2) higher cognitive functioning and more programming experience did not predict a better ability to detect software vulnerabilities in code; and (3) developers exhibiting greater openness as a personality trait were more likely to detect software vulnerabilities.

The insights from this study and this talk have the potential to advance API security and software development processes. The design of new API functions should leverage developer studies to test for misconceptions in API usage. The documentation of legacy functions should address common blindspots developers experience when using the function. Software security training should highlight that (1) even expert, experienced, and highly intelligent developers will experience blindspots while using APIs, (2) perceptions and "gut feelings" might be misleading, and (3) developers should rely more on diagnostic tools.

This talk will also highlight that the rationale of many software development companies (that developers should and can address functionality and security simultaneously, and that hiring experts will substantially increase software security) might be misleading. Both of these tasks (functionality and security) are highly cognitively demanding, and attempting to address both might be a zero-sum game, even for experts. Our insights have the potential to create awareness, especially for small and medium-sized software development companies, that having separate teams address functionality and security might be a much more cost-effective paradigm for increasing software security than sole reliance on experts who are expected to "do it all."
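
The study's tasks used Java APIs, but a hypothetical Python example illustrates the kind of blindspot in question: an API whose "obvious" usage works in every test yet quietly invites injection (the function names and scenario are illustrative assumptions).

    import subprocess

    def count_lines_unsafe(filename: str) -> str:
        # Blindspot: passes every test with benign inputs, but shell=True means
        # a filename like "notes.txt; rm -rf ~" is executed as shell code.
        return subprocess.run(f"wc -l {filename}", shell=True,
                              capture_output=True, text=True).stdout

    def count_lines_safe(filename: str) -> str:
        # Passing an argument list avoids shell interpretation entirely.
        return subprocess.run(["wc", "-l", filename],
                              capture_output=True, text=True).stdout

Nothing in the unsafe version looks wrong to a developer who trusts the API, which is precisely what makes it a blindspot rather than a careless mistake.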

Daniela Seabra Oliveira, University of Florida

Daniela Seabra Oliveira is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Florida. She received her B.S. and M.S. degrees in Computer Science from the Federal University of Minas Gerais in Brazil. She then earned her Ph.D. in Computer Science from the University of California at Davis. Her main research interest is interdisciplinary computer security, where she employs successful ideas from other fields to make computer systems more secure. Her current research interests include understanding and addressing developers' blindspots and social engineering from a neuro-psychological perspective. She received a National Science Foundation CAREER Award in 2012 for her innovative research into operating systems' defense against attacks using virtual machines, the 2014 Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, and the 2017 Google Security, Privacy and Anti-Abuse Award. She is a National Academy of Sciences Kavli Fellow and a National Academy of Engineering Frontiers of Engineering Symposium alumna. Her research has been sponsored by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the National Institutes of Health (NIH), the MIT Lincoln Laboratory, and Google.

How to Predict Which Vulnerabilities Will Be Exploited

Wednesday, 2:30 pm–3:00 pm

Tudor Dumitras, University of Maryland, College Park

Available Media

The rate at which software vulnerabilities are discovered is growing: the National Vulnerability Database includes over 100,000 vulnerabilities, and 10% of these entries were added in the last year. Very few of these vulnerabilities are exploited in real-world attacks, yet the exploits can compromise millions of hosts around the world and can disrupt businesses and critical services.

This talk will discuss what we have learned about vulnerability exploitation by analyzing data from 10 million hosts. These hosts, used by real people around the world and targeted by real attackers, give us an opportunity to quantify the impact of software vulnerabilities on a global scale. Our measurements also allow us to infer statistically which vulnerabilities are likely to be exploited in the wild—before finding the corresponding exploits.

We show that the growing rate of vulnerability discovery does not mean that software is becoming more insecure; in fact, the fraction of vulnerabilities that are exploited follows a decreasing trend. At the same time, popular vulnerability metrics, such as the CVSS score, have a low correlation with the vulnerabilities that are ultimately exploited in the real world. It is difficult to guess why hackers exploit some vulnerabilities and not others, because this decision is influenced by a variety of socio-technical factors. However, we can combine features derived from the technical characteristics of a vulnerability, such as its CVSS score, with features extracted from social media, which reflect how information about the vulnerability spreads among hackers, security researchers and system administrators. Additionally, we can take into account variations in the rates at which vulnerable hosts are patched, after the patch becomes available. By combining these factors into predictive models, we can determine which vulnerabilities present a higher risk of exploitation, and, for some vulnerabilities, we can infer the existence of zero-day exploits on the day of disclosure.
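
A minimal sketch of the modeling idea follows (in Python with scikit-learn; this is not the authors' actual model, and the feature set and toy data are assumptions for illustration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-vulnerability features mirroring the signal families
    # described above: technical severity, social media spread, and patch uptake.
    # Columns: [cvss_score, posts_in_first_week, researcher_posts, hosts_patched_30d]
    X = np.array([
        [9.8, 340.0, 12.0, 0.20],
        [4.3,   2.0,  0.0, 0.75],
        [7.5,  85.0,  4.0, 0.40],
        [5.0,   1.0,  1.0, 0.90],
        [8.1, 120.0,  7.0, 0.35],
        [3.1,   0.0,  0.0, 0.80],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = exploit later observed in the wild

    model = LogisticRegression().fit(X, y)

    # Rank a newly disclosed vulnerability by predicted exploitation risk.
    new_cve = np.array([[9.0, 210.0, 9.0, 0.25]])
    print(model.predict_proba(new_cve)[:, 1])  # estimated probability of exploitation

In practice, a model of this shape would be trained on years of labeled disclosures and validated against exploits observed later, which is what lets its risk scores be read as real-world odds rather than expert opinion.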

Our predictive models are the result of five years of academic research, and they represent a step toward answering the question "What are the odds that you will get hacked tomorrow?" Along with recent advances in predicting other types of security incidents, these techniques help us assess objectively the impact of various defensive technologies on security in the real world. Such predictive models allow companies to determine their biggest risks and the best mitigations by using data, rather than expert opinions. They also provide evidence for cyber policymaking, and they can be applied to risk modeling in cyber insurance.

Tudor Dumitras, University of Maryland, College Park

Tudor Dumitraș is an Assistant Professor in the Electrical & Computer Engineering Department at the University of Maryland, College Park. His research focuses on data-driven security: he studies real-world adversaries empirically, he builds machine learning systems for detecting attacks and predicting security incidents, and he investigates the security of machine learning in adversarial environments. In his previous role at Symantec Research Labs he built the Worldwide Intelligence Network Environment (WINE) - a data analytics platform for security research. His work on the effectiveness of certificate revocations in the Web PKI was featured in the Research Highlights of the Communications of the ACM in 2018, and his measurement of the duration and prevalence of zero-day attacks received an Honorable Mention in the NSA competition for the Best Scientific Cybersecurity Paper of 2012. He also received the 2011 A. G. Jordan Award from the ECE Department at Carnegie Mellon University, the 2009 John Vlissides Award from ACM SIGPLAN, and the Best Paper Award at ASP-DAC'03. Tudor holds a Ph.D. degree from Carnegie Mellon University.

3:00 pm–3:30 pm

Break with Refreshments

Grand Peninsula Foyer
Sponsored by Tanium

3:30 pm–4:30 pm

Threat Modeling

Session Chair: Amie Stepanovich, Access Now

Physical OPSEC as a Metaphor for Infosec

Wednesday, 3:30 pm–4:00 pm

Mark Loveless

Available Media

Being an Infosec professional kind of forces you to be a jack of all trades. It helps to develop a mindset where analyzing risk becomes second nature. Daily security and risk assessment decisions are an excellent exercise that will help build security muscle memory, benefiting you both professionally and personally. The premise is simple - I will outline what I do from a physical OPSEC standpoint when I travel or am just out and about, and you will reflect (with nudging) on my metaphors. We all do it to a certain extent, so why not consciously put it to the test? What better way to start the process than while traveling to a security conference?

Mark Loveless

Mark Loveless—aka Simple Nomad—is a security researcher, hacker, and explorer. He has worked in startups, large corporations, hardware and software vendors, and even a government think tank. He has spoken at numerous security and hacker conferences worldwide on security and privacy topics, including Black Hat, DEF CON, ShmooCon, RSA, AusCERT, and many others. He has been quoted in television, online, and print media outlets as a security expert including CNN, Washington Post, New York Times, and many others. An avid paranoid, he knows that They are out to get him, and he pledges They never will.

Something You Have and Someone You Know—Designing for Interpersonal Security

Wednesday, 4:00 pm–4:30 pm

Periwinkle Doerfler, New York University

Available Media

While a variety of strategies for threat modeling exist, they largely share two assumptions: that the attacker is remote, and that the attacker is sophisticated. Given the military origins of the security community, it is not surprising that, by default, we tend to focus on the types of threats that face an organization instead of the types of threats that face individuals. As a result of my own work with survivors of domestic violence, as well as others' findings about individuals' security and privacy concerns, I suggest a new threat framework, the Interpersonal Threat Model, that provides a completely different set of assumptions about an attacker's capabilities and motivations than a more traditional Organizational Threat Model. This is a call for the security and privacy communities to consider interpersonal threats, those that stem from people with whom we cohabitate or share devices, when designing consumer-facing technology. In doing so, perhaps we can begin to better address the concerns of everyday people and offer solutions for at-risk populations.

Periwinkle Doerfler, New York University

Periwinkle Doerfler is a PhD Candidate at NYU Tandon School of Engineering within the Center for Cyber Security, advised by Prof. Damon McCoy. Her research focuses on the intersection of intimate partner violence and technology. She looks at this issue with regard to abusers, and how they come to use technology to perpetuate violence, as well as with regard to survivors, and how technology can help or hinder escape from abusive situations. Her past work has also examined cryptocurrency as it relates to human trafficking, doxing communities, and authentication schemes.

4:30 pm

Closing Remarks

Program Co-Chairs: Ben Adida, VotingWorks, and Franziska Roesner, University of Washington