Conference Program

All sessions will take place at the Oakland Marriott City Center.

Download Attendee List (Available to Enigma Conference Attendees)
Note: Only includes attendees who opted into appearing on the list. Log in to your USENIX account to access this file.

Monday, January 30, 2017

Continental Breakfast in the Exhibit Foyer
8:45 am–9:00 am
9:00 am–10:00 am

Passwords and Human Secrets

Session Chair: Mark Risher, Google

Human Computation with an Application to Passwords

9:00 am–9:30 am

Manuel Blum, Bruce Nelson University Professor of Computer Science, Carnegie Mellon University

Never memorize passwords. Compute them in your head instead, using secure humanly computable functions from challenges (website names) to responses (passwords).
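
To make the challenge-response idea concrete, here is a toy schema in Python. This is not the Blum-Vempala schema: the secret letter-to-digit table and the chaining rule are invented for this sketch, and a usable schema must also resist an adversary who observes several challenge/response pairs.

# A toy humanly computable challenge->response schema. The secret map and
# chaining rule are invented for illustration; they are NOT the schemas
# from the talk and are not claimed to be secure.
SECRET_MAP = {c: (3 * i + 7) % 10 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}

def response(challenge: str) -> str:
    """Derive a password from a website name with mental arithmetic only:
    map each letter to its secret digit and keep a running sum mod 10."""
    digits, running = [], 0
    for ch in challenge.lower():
        if ch in SECRET_MAP:
            running = (running + SECRET_MAP[ch]) % 10
            digits.append(str(running))
    return "".join(digits)

print(response("amazon"))  # the same site always yields the same password
print(response("github"))  # a different site yields a different password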

Additional Materials: Publishable Humanly Usable Secure Password Creation Schemas, Manuel Blum and Santosh Vempala

Available Media

Moving Account Recovery beyond Email and the "Secret" Question

9:30 am–10:00 am

Brad Hill, Security Engineer, Facebook

Managing a lost password or other credential is a problem every application must contend with, but which remains the most neglected part of account lifecycle management. Best common practice has failed to advance beyond the choices of “security” questions, emailed password reset links, or SMS-delivered codes. Federated Identity systems solve some problems but are economically unacceptable in many situations to both users and platforms. This talk introduces a lightweight, purpose-built, and self-assembling protocol with a prototype implementation by Facebook. It allows users to recover account access at any service using whatever other service(s) are best able to re-authenticate them. The design is focused on user choice and privacy and avoids asking people to bargain with their personal information to obtain this basic necessity of online life. It also presents the opportunity to build recovery capabilities for end-to-end encryption keys that meet the needs and abilities of ordinary humans.
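
As a rough sketch of the shape of such a protocol, consider an opaque recovery token that the account provider mints, the user deposits with a recovery provider of their choice, and the issuer later verifies. This is not Facebook's actual design (the published protocol uses countersigned binary tokens and public-key signatures); the payload fields and the toy HMAC below are invented for illustration.

# Sketch of a delegated-recovery-style token, with invented fields and a
# toy HMAC in place of the real protocol's public-key countersignatures.
import base64, hashlib, hmac, json, os, time

ISSUER_KEY = os.urandom(32)  # held only by the service where the account lives

def issue_recovery_token(account_id: str) -> str:
    """Mint an opaque token the user deposits with a recovery provider.
    The real protocol also encrypts the payload; this sketch only
    authenticates it."""
    payload = {"account": account_id,
               "issued": int(time.time()),
               "nonce": base64.b64encode(os.urandom(16)).decode()}
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, body, hashlib.sha256).digest()
    return base64.b64encode(body + tag).decode()

def redeem(token: str) -> str:
    """At recovery time the token comes back via the recovery provider
    (countersigned, in the real protocol); the issuer checks its own MAC."""
    raw = base64.b64decode(token)
    body, tag = raw[:-32], raw[-32:]
    if not hmac.compare_digest(tag, hmac.new(ISSUER_KEY, body, hashlib.sha256).digest()):
        raise ValueError("forged or corrupted recovery token")
    return json.loads(body)["account"]

token = issue_recovery_token("user-1234")
assert redeem(token) == "user-1234"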

Available Media
Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
10:30 am–11:30 am

Crypto and Security

Session Chair: Bryan Payne, Netflix

Secrets at Scale: Automated Bootstrapping of Secrets & Identity in the Cloud

10:30 am–11:00 am

Ian Haken, Senior Security Software Engineer, Netflix

Almost any web service needs secrets to operate. Whether it’s encryption keys for storing credit card data and personally identifiable information, authentication tokens for talking to third-party services, or just a password for connecting to the local database, if your application lives online it probably has a secret. But how do you actually keep those secrets secret? In an ideal world access would be tightly restricted; neither developers, nor continuous integration, nor deployment tools would ever see them. But for applications deployed in the cloud, which need to automatically instantiate new instances to match demand and replace unhealthy nodes, this creates an even greater challenge: how can an application be automatically deployed with its secrets if even the deployment tools can’t be allowed to see them?

In this talk I will describe how we have approached this problem at Netflix: an environment supporting thousands of independent microservice applications, all of which need the capability to automatically scale and self-heal. Along the way, I’ll describe how this problem becomes inextricably intertwined with the question of secure, provable, and ephemeral identity, and how we ultimately architected a solution to both problems.
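
The pattern at the heart of such designs can be sketched in a few lines: the instance proves who it is with a credential minted by the platform at boot, and the secrets service trusts that attestation rather than the deployment pipeline. Everything below is an invented, minimal illustration (a real system would use platform-signed identity documents or asymmetric, short-lived credentials, not a shared toy HMAC key).

# Minimal sketch of identity-based secret bootstrapping. All names and the
# shared-key signature are invented stand-ins for a platform's signed
# instance-identity document.
import hashlib, hmac, json

PLATFORM_KEY = b"platform-signing-key"  # trusted by the secrets service
SECRETS = {"billing-service": {"db_password": "s3cr3t"}}

def boot_identity(app: str) -> dict:
    """What the platform (not the deploy tooling) attests at instance boot:
    which application this instance runs as."""
    doc = json.dumps({"app": app}).encode()
    sig = hmac.new(PLATFORM_KEY, doc, hashlib.sha256).hexdigest()
    return {"doc": doc, "sig": sig}

def fetch_secrets(identity: dict) -> dict:
    """Secrets service: trusts the platform's attestation, never the CI/CD
    pipeline, so deployment tools never see secret material."""
    expected = hmac.new(PLATFORM_KEY, identity["doc"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, identity["sig"]):
        raise PermissionError("identity document not signed by the platform")
    app = json.loads(identity["doc"])["app"]
    return SECRETS.get(app, {})

print(fetch_secrets(boot_identity("billing-service")))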

Available Media

Inside "MOAR TLS:" How We Think about Encouraging External HTTPS Adoption on the Web

11:00 am–11:30 am

Emily Schechter, Google

HTTPS has been around for over 20 years, but adoption has only recently started to increase significantly. At Google, a project called "MOAR TLS" is guiding the web towards HTTPS everywhere by methodically hunting down and addressing major hurdles to TLS adoption. This talk will give an inside look into the unexpected speedbumps we've faced along the way (why has it taken years to mark HTTP as "not secure" in Chrome?) and the research we've used to guide our strategy. We'll share some talking points that lead to successful HTTPS conversations, in case you're also interested in promoting HTTPS. Finally, we’ll discuss the remaining hurdles on the road to HTTPS everywhere.

Available Media
11:30 am–12:00 pm
Lunch in the East Hall
Visit the Enigma Sponsor Showcase!
1:00 pm–3:00 pm

Internet of Things

Session Chair: Tudor Dumitras, University of Maryland, College Park

Ghost in the Machine: Challenges in Embedded Binary Security

1:00 pm–1:30 pm

Jos Wetzels, Distributed and Embedded System Security (DIES) Group, University of Twente, Netherlands

Embedded systems are everywhere, from consumer electronics to critical infrastructure, vehicles, airplanes and military equipment. Yet public attention to security of embedded systems is relatively recent compared to that of the general purpose world. Combined with its polyculture of hardware architectures and operating systems and the proprietary and restricted nature of many systems and protocols, this has led to an opaque security landscape for both researchers and developers.

As a result embedded binary security generally lags behind what is commonly expected of modern general purpose systems. Hardening embedded systems via adoption of modern exploitation mitigations isn't, however, a trivial endeavor due to the many challenges and intrinsic constraints imposed by embedded environments. If we take into consideration the dominance of unsafe languages and the fact that patch deployment is far more involved on these systems, this is all the more reason for concern.

In this talk we will delve into the embedded ecosystem, present an overview of the state of embedded binary security and outline some of the challenges faced in the adoption of modern exploit mitigations, drawing upon our experiences during security research conducted into popular embedded OSes and our involvement in developing such mitigations for Industrial Control Systems.

Available Media

LLC Cache Attacks: Applicability and Countermeasures

1:30 pm–2:00 pm

Gorka Irazoqui, Intel and Worcester Polytechnic Institute

Cache attacks have been demonstrated to be a serious concern for designers of security-sensitive code, as they can recover a wide range of information, from cryptographic keys to private user data. These attacks take advantage of the fact that two processes are utilizing the same hardware resource, leveraging unexpected leakage that a malicious user can exploit. More specifically, Last Level Cache (LLC) attacks make use of the fact that the LLC is shared across cores, and can therefore steal information from users located on different cores. This presentation describes the approaches that Flush and Reload and Prime and Probe take, together with the requirements they need to be launched successfully. In addition, we evaluate examples of everyday software that these attacks can target to violate our privacy. Further, this presentation expands on the scenarios in which both attacks can succeed, including, but not limited to, co-located VMs/processes in IaaS and PaaS clouds, JavaScript attacks in web browsers, attacks on trusted execution environments, and cross-application attacks on smartphones. Finally, we evaluate the effectiveness of possible preventions at different levels of the system, including network-level, software-level, OS/hypervisor-level, and hardware-level countermeasures.
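
To fix ideas, here is a toy simulation of the decision step in Flush and Reload: the attacker flushes a shared cache line, waits, reloads it, and infers from the reload latency whether the victim touched the line in the interval. The latencies and threshold below are assumed, illustrative values; no real cache state is involved.

# Toy simulation of the Flush+Reload classification step. The latency
# constants are invented for illustration; a real attack times an actual
# reload (e.g., with rdtsc) after flushing a line shared with the victim.
import random

HIT_NS, MISS_NS, THRESHOLD_NS = 40, 200, 120  # assumed latencies

def timed_reload(victim_touched_line: bool) -> float:
    """A fast reload means the victim pulled the flushed line back into
    the cache, leaking one bit about the victim's memory accesses."""
    base = HIT_NS if victim_touched_line else MISS_NS
    return random.gauss(base, 10)

victim_activity = [random.random() < 0.5 for _ in range(1000)]
guesses = [timed_reload(v) < THRESHOLD_NS for v in victim_activity]
accuracy = sum(g == v for g, v in zip(guesses, victim_activity)) / len(guesses)
print(f"attacker recovers {accuracy:.0%} of the victim's accesses")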

Available Media

IoT, a Cybercriminal's Paradise

2:00 pm–2:30 pm

Attila Marosi, Sophos

Over 90% of the unprotected devices we studied were found to be infected with at least one malware threat or exhibited signs of an attack. In this talk, we'll reveal the results of the research, exposing the number of vulnerable devices and the gigabytes of storage that are now freely available to attackers. We’ll also share the technical results of the malware analysis. In summary, this talk will provide insight into how very old Internet protocols are being exploited on modern internet-connected "things," explain the risks this creates for home and corporate users, and offer recommendations on how businesses and users can better protect themselves against these unsophisticated but dangerous and highly successful attacks.

Available Media

Hacking Sensors

2:30 pm–3:00 pm

Yongdae Kim, Professor, Korea Advanced Institute of Science and Technology (KAIST)

Sensors are designed to measure sensor inputs (e.g., physical quantities) and transfer sensor outputs (e.g., voltage signals) to embedded devices. Sensor-equipped embedded systems (called sensing-and-actuation systems) decide their actuations according to these sensor outputs, and the systems do not question whether those outputs are legitimate. Sensors are essential components of safety-critical systems such as self-driving cars, drones, and medical devices, where breaking safety may cause loss of life or disaster. For these safety reasons, sensors are often designed to be robust against failures or faults. But can they maintain safety under adversarial conditions? In this talk, I detail how sensors can be spoofed or prevented from operating correctly through regular channels and side channels. Attacks on various devices such as medical devices, drones, and smart wearables will be shown. I'll close with directions and guidelines for preventing these attacks, along with a few open problems.

Available Media
Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
3:30 pm–5:30 pm

Software Security

Session Chair: Michael Bailey, University of Illinois at Urbana–Champaign

Test Driven Security in Continuous Integration

3:30 pm–4:00 pm

Julien Vehent, Firefox Services Security Lead at Mozilla

Mozilla runs services for millions of Firefox users that must be operated at reasonable cost while sustaining a fast innovation pace. Development and operations teams have long adopted DevOps' Continuous Integration (CI) and Continuous Delivery (CD) principles, allowing applications to go from a patch submission to production deployment in minutes. These fast cycles have left security controls designed for slow deployment cycles lagging behind. In this talk, we describe how the Mozilla CloudSec team has redesigned security into the DevOps pipelines to accelerate the discovery and mitigation of security issues using a technique called "Test Driven Security" (TDS).

Similar to Test Driven Development, TDS puts the security tests that represent the desired behavior first, then runs these tests continuously against the code. Compared to a traditional approach where controls are implemented outside of CI/CD, TDS runs in the DevOps pipeline automatically and continuously asserts the security of a web application.

In this presentation, we show how Mozilla uses Open Source tools to implement TDS and reduce the number of security vulnerabilities and regressions that reach production environments.
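
As a flavor of what a TDS check can look like (the target URL and the required header set below are illustrative examples, not Mozilla's actual baseline), here is a test that encodes a desired security behavior and fails the pipeline whenever a deployment regresses it.

# Illustrative Test Driven Security check: the expected security behavior
# is written as a test and run on every build. URL and header set are
# example values, not Mozilla's baseline.
import requests

REQUIRED_HEADERS = {
    "strict-transport-security",  # force HTTPS on returning visits
    "content-security-policy",    # restrict where scripts may load from
    "x-content-type-options",     # disable MIME sniffing
}

def test_security_headers():
    """Fail CI if the deployed service drops a required security header."""
    resp = requests.get("https://example.org/", timeout=10)
    present = {name.lower() for name in resp.headers}
    missing = REQUIRED_HEADERS - present
    assert not missing, f"missing security headers: {sorted(missing)}"

if __name__ == "__main__":
    test_security_headers()
    print("security baseline satisfied")

Run on every build, a failing check blocks the deploy; the control is asserted continuously instead of audited after the fact.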

Available Media

As We May Code

4:00 pm–4:30 pm

David Molnar, Senior Researcher, Microsoft

Advances in automatic reasoning, combined with the shift to cloud services, have changed the software security lifecycle. As we develop software, deploy it into our production systems, look for bugs in that software, and seek to understand these bugs, new techniques help us augment our human abilities with reasoning that can scale up to make decisions in seconds and scale across a wide range of problems. Cloud services that bundle together the data, compute, and specialized talent required for these augmentations make it possible to deploy these techniques across an organization. I will illustrate with examples from different phases of the software security lifecycle, including the "Project Springfield" bug finding service at Microsoft. Finally I will show how the trend toward cloud infrastructure will enable deeper impact on how we manage security for code we build, code we buy, or code we transform.
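
To fix ideas about what automated bug finding does, here is a deliberately tiny fuzzer against a toy parser with a planted bug. Note the swap: Project Springfield is built on whitebox fuzzing, which derives new inputs by constraint solving over execution traces; the random mutation loop below is a far simpler cousin, shown only for shape.

# A tiny mutation fuzzer against a toy target with a planted bug. This is
# plain random mutation, not Project Springfield's whitebox approach.
import random

def parse_record(data: bytes) -> int:
    """Toy target: crashes on a specific byte pattern."""
    if data[:2] == b"MZ" and len(data) > 4 and data[4] == 0xFF:
        raise RuntimeError("planted parser bug reached")
    return len(data)

def mutate(seed: bytes) -> bytes:
    out = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

seed = b"MZ\x00\x00\x00\x00\x00\x00"
for i in range(100_000):
    candidate = mutate(seed)
    try:
        parse_record(candidate)
    except RuntimeError:
        print(f"crash found after {i} executions: {candidate!r}")
        break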

Available Media

Leveraging the Power of Automated Reasoning in Security Analysis of Web Applications and Beyond

4:30 pm–5:00 pm

Nestan Tsiskaridze, University of California, Santa Barbara

The modern world critically depends on the security and safety of software. We seek to ensure customer confidence and protect privacy, intellectual property, and national security. As threats to software security have become more sophisticated, so too have the techniques developed to ensure security.

This talk focuses on novel opportunities to automate bug detection and security exploit generation provided by advances in symbolic execution and automated constraint solving. It discusses how symbolic execution can benefit from novel techniques in Satisfiability Modulo Theories (SMT), a subfield of automated theorem proving that in the past 10 years has revolutionized the discipline. The talk presents a recent highly successful application of SMT solvers in support of the security analysis of Web applications and how these new capabilities open opportunities for automating such analysis beyond the Web.

This is joint work with Clark Barrett (NYU/Stanford University), Morgan Deters (NYU), Tianyi Liang (The University of Iowa), Andrew Reynolds (The University of Iowa/EPFL), and Cesare Tinelli (The University of Iowa).
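
As an illustration of the kind of query such analyses pose, the sketch below asks an SMT string solver for an input that slips past a naive filter yet still injects an event handler into the rendered page. It uses Z3's Python bindings purely for convenience (the work described here builds on the CVC4 solver), and the two-constraint "application model" is invented.

# Ask an SMT string solver for an attack input. Z3's Python API is used
# for illustration; the filter and page model are toy assumptions.
from z3 import Concat, Contains, Length, Not, Solver, String, StringVal, sat

user_input = String("user_input")
page = Concat(StringVal("<div>"), user_input, StringVal("</div>"))

s = Solver()
s.add(Not(Contains(user_input, StringVal("<script>"))))  # gets past the filter
s.add(Contains(page, StringVal("onerror=")))             # still injects a handler
s.add(Length(user_input) <= 40)

if s.check() == sat:
    print("attack input:", s.model()[user_input].as_string())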

Available Media

Startups + Industry: How Everyone Can Win

5:00 pm–5:30 pm

Hudson Thrift, Uber

Companies need better, not more, security products. The usual choices when looking for a new solution are buy, build, or use open source, but what if there are other options? In this spirit, the engineering security team at Uber has been exploring how to engage with early stage startups to bring new product features, and cause slight roadmap pivots, to address Uber’s security needs. The results have been cost effective, time efficient, and have created a better product for everyone.

This talk will discuss how to identify appropriate problems that can be solved with this model, how to find the right kind of startup, and how to work with them under your specific requirements. We’ll explore the concept from an abstract point of view as well as in practice using a case study from the Uber team.

Available Media
6:00 pm–7:30 pm
Conference Reception
East Hall, Sponsored by Google
7:30 pm–10:30 pm
Birds-of-a-Feather Sessions
Birds-of-a-Feather sessions (BoFs) are informal gatherings of peers interested in discussing a particular topic and are a great way to maximize the value of your time at a conference. You can always sign up to lead a BoF on site if you're inspired by something you heard in one of the sessions or in the hallway track and want to talk further. See the BoFs page for more information.

Tuesday, January 31, 2017

Continental Breakfast in the Exhibit Foyer
Visit the Enigma Sponsor Showcase!
8:55 am–9:00 am
9:00 am–10:00 am

Behaviors and Detection

Session Chair: Carrie Gates, Dell

Behaviors and Patterns of Bulletproof and Anonymous Hosting Providers

9:00 am–9:30 am

Dhia Mahjoub, Principal Engineer, OpenDNS Research Labs

Bulletproof and anonymous hosting providers are key enabling factors of ransomware, phishing, and other cybercrime operations. Bulletproof hosters shield criminal content from abuse complaints and takedowns, whereas anonymous offshore hosters preserve privacy and free speech for their customers. Despite being conceptually different, the distinction between both classes tends to blur in practice. These hosters leverage multiple factors in their operations: the anonymity of the internet when establishing their businesses, heterogeneous laws and norms that exist in cross-border online spaces, and jurisdictions with little or no legislation to enforce laws against cyber criminals. Focusing threat intelligence efforts on these services and the actors that provide them is an important step to identifying and removing illegal and malicious content on the Internet. As an example, we choose The Netherlands, one of the world's top transit and hosting spaces, and through our research we bring together findings from the network and the field to shed light on criminal hosting in the Dutch IP space. This talk will be useful to threat analysts, security researchers, and law enforcement.

This is joint work with Sarah Brown (Security Links/NATO).

Available Media

StreamAlert: A Serverless, Real-time Intrusion Detection Engine

9:30 am–10:00 am

Jack Naglieri, Security @ Airbnb

StreamAlert is a serverless, real-time intrusion detection engine. By building upon technologies such as AWS (platform-as-a-service), Terraform (infrastructure-as-code), and Chef (configuration-as-code), any security organization can quickly deploy StreamAlert and have it scale to terabytes per hour. StreamAlert enables you to write both simple and complex rules against any of your application, system, or network logs. StreamAlert will remove all of the headaches typically associated with availability, scalability, segmentation, and maintenance (patching, upgrades, etc.). This lets your team focus on their core competency—catching attackers.
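
The core abstraction is small: a rule is a Python predicate over a parsed log record. The sketch below mimics that pattern standalone; the record fields and the rule itself are invented examples, and the real framework registers such functions with a decorator and routes matches to alert outputs.

# A StreamAlert-style rule as a plain predicate over a parsed log record.
# Field names are invented; the framework's decorator registration and
# output routing are omitted.
def ssh_root_login(record: dict) -> bool:
    """Alert on successful root SSH logins from outside the internal network."""
    return (
        record.get("event") == "ssh_login"
        and record.get("user") == "root"
        and record.get("result") == "success"
        and not record.get("source_ip", "").startswith("10.")
    )

sample = {"event": "ssh_login", "user": "root",
          "result": "success", "source_ip": "203.0.113.7"}
assert ssh_root_login(sample)  # this record would fire an alert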

Available Media
Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
10:30 am–12:00 pm

Neuroscience and Security

Session Chair: Daniela Oliveira, University of Florida

Neural and Behavioral Insights on Trust

10:30 am–11:00 am

Uma R. Karmarkar, Assistant Professor, Harvard Business School; Affiliate of the Harvard Center for Brain Science; currently a Visiting Scholar at the Haas School of Business, University of California, Berkeley

Trust is a psychological factor that can gate channels of communication, persuasion, and collaboration. Here, I offer an overview of some of the neural and psychological mechanisms involved in coding for trust and coding for distrust. Trust can be conceptualized as two types of functions. The first is a factor in a relationship with another agent that is often socially developed through one or more interactions. The second is as an individual perception that contributes to certainty or confidence in the face of uncertainty. My research relates to the latter, and how people use incomplete information to handle uncertain or ambiguous decisions. I will show how the relationship between information and feelings of certainty is important for persuasion, and can lead to both optimistic and pessimistic biases in individual decision-making.

Available Media

What Does the Brain Tell Us about Usable Security?

11:00 am–11:30 am

Anthony Vance, Brigham Young University

A popular saying in information security is, “Given a choice between dancing pigs and security, users will pick dancing pigs every time.” However, this statement is unfair—insights from neuroscience show that users’ seemingly lazy or careless security behaviors are not necessarily careless at all, but are a natural consequence of how the brain works.

In this talk, we’ll show how neuroscience can be used to better understand users’ security behavior and design more usable security UI. To illustrate, we’ll share insights from a series of experiments that combine functional magnetic resonance imaging (fMRI) and traditional usability testing. The results reveal the substantial effect of neurobiology on users’ security behavior and suggest practical ways to improve security UI.

This is joint work with Bonnie Anderson, Jeff Jenkins, and Brock Kirwan of the Neurosecurity Lab (http://neurosecurity.byu.edu) at Brigham Young University.

Available Media

Brains Can Be Hacked. Why Should You Care?

11:30 am–12:00 pm

Tamara Bonaci, University of Washington

What happens if a surgical robot, used to perform a life-saving medical procedure, gets compromised and is used to harm a patient on an operating table, a surgeon performing a procedure, or both of them? What happens if a brain-computer interface, used either by severely disabled people or by early adopters, gets compromised and starts allowing anyone interested to listen in on its user's preferences, prejudices, or secrets? A lot—and sci-fi literature is full of interesting but rather unsettling examples.

So a better question to ask is: what can be done to prevent these attacks from happening? The answer is: a lot, and in this talk I will show that many of the mitigation strategies we can apply rely on users' uniqueness in the way they interact with the system.

Available Media
Lunch in the East Hall
Visit the Enigma Sponsor Showcase!
1:00 pm–3:00 pm

Security Helping Society

Session Chair: Chenxi Wang, Twistlock Inc.

Won't Somebody Please Think of the Journalists?

1:00 pm–1:30 pm

Tom Lowenthal, Staff Technologist, Committee to Protect Journalists

When researching security/privacy and developing tools, it is tempting to focus on the abstract technical merits of a problem. In practice, attacks are not graded for difficulty, only success. Why spend the time and expense of a zero-day exploit which bypasses ASLR to achieve remote code execution when spearphishing is so effective? The biggest barriers to widespread computer security are not technical. Wide deployment of privacy-preserving tools and trustworthy computers isn't limited by cutting-edge challenges in cryptography or formal methods. The obstacles are getting everyday tools to implement secure development best-practices, incorporate end-to-end crypto, and offer multi-factor authentication. The problem is fighting an endless public relations war about whether we should have to invent the impossible to create back-doors or design tools which protect their users except when the user is trying to do something bad.

Here's the trick: think and talk about journalists. Talking about journalism as a first-class use case changes the mental calculus. It allows for focus on the real technical challenges of developing safe systems, and bypasses poorly thought-out objections. Even better, thinking about the needs of journalists as first-class users helps make design choices which better protect all users.

Available Media

Security in the Wild for Low-Profile Activists

1:30 pm–2:00 pm

Zeynep Tufekci, University of North Carolina and The New York Times

Why are many activists more worried about what their mom knows than being tortured? And what does this mean for their digital communication habits and methods? Will they add an extra five minutes to their log-on routine—even if their lives are on the line? Why not? Why are they likely to trust SMS second-factor authentication rather than a more secure "security key"? How could we change their mind? Why are content policies part of security? Why do so many activists ask me more about how to be public, and how to prove they took the pictures they did, than about hiding their tracks? Why do activists really need to hide their tracks when it comes to personal information, but not necessarily political communication? Can governments defeat end-to-end security with just a rumor? I draw on more than a decade of studying the digital habits and needs of activists in a variety of countries, ranging from developed to fairly repressive ones, to share what I know about security and safety in the networked public sphere, especially for non-high-profile activists and ordinary people.

Available Media

Why Philanthropy Is Critical for Cybersecurity

2:00 pm–2:30 pm

Eli Sugarman, Program Officer, William and Flora Hewlett Foundation

Philanthropy has a critical role to play in improving cybersecurity worldwide. As new technologies affect every aspect of our lives, the applicable laws, norms and policies—as well as the decision-makers that shape them—are struggling to keep up. High-profile breaches—at Sony Pictures, the Office of Personnel Management, and the Democratic National Committee, among many others—underscore the magnitude of the risks we face and the need for informed cybersecurity policies.

Yet despite its critical importance, funding to develop long-term cybersecurity policy for the benefit of the public is practically non-existent. The funding gap is, moreover, structural. Government and industry are directing significant resources to cybersecurity, but their efforts are and will remain focused on countering immediate threats and triaging new breaches. Unlike government or industry, philanthropy can be a neutral player not motivated by profit, politics, or self-interest.

There is critical work to be done for the safety of the public—work that government cannot and the private sector will not fund. What’s needed is flexible support from institutions that have the latitude to take a long-term, strategic approach—the kind of funding, in other words, that philanthropy is uniquely positioned to provide.

Available Media

Security for Vulnerable Populations—On the Interplay of Weapons of Influence and Life Domains in Predicting Older Adults' Susceptibility to Spear-Phishing Emails

2:30 pm–3:00 pm

Daniela Oliveira, Associate Professor, University of Florida

Older adults constitute a vulnerable population for online social engineering attacks because cognition, including sensitivity to deception, declines with age. They are the fastest-growing segment of the U.S. population, control over half of its financial wealth, and occupy many positions of power.

Spear-phishing emails are used as a central component in many cyber-attacks. Successful emails employ psychological weapons of influence (e.g., scarcity) and relevant life domains (e.g., health) to deceive victims. This talk discusses older adults’ susceptibility to spear-phishing emails as a function of weapon of influence and life domain. A 21-day study was conducted with 83 participants (young and older adults) with data collection at the participants’ homes. Our results show that older users, especially older women, were more susceptible to phishing than younger users. While younger users were susceptible to a few weapons of influence and life domains, older users were susceptible to a variety of them. Further, there was a discrepancy, particularly among older users, between self-reported susceptibility awareness and actual susceptibility. Our results could lead to a well-grounded next generation of phishing defense solutions and training, which would take into account user age, weapons of influence, and life domains, to be effective.

Available Media
Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
3:30 pm–5:30 pm

Trustworthy Computing

Session Chairs: Alex Stamos, Facebook, and Kurt Opsahl, Electronic Frontier Foundation

Beyond Warm & Fuzzy: Ethics as a Value Prop

3:30 pm–4:00 pm

Erin Kenneally, Program Manager, Trust & Innovation, U.S. Department of Homeland Security, Cyber Security Division

Industry is grappling, arguably unsuccessfully, with core sociotechnical tensions between individual and collective rights and interests in privacy, security, innovation, and autonomy. This is manifest in issues associated with smart-X (homes, health wearables, vehicles), predictive services, and precision measurements, for example. Because these issues force new applications and interpretations of our traditional social mooring—law, economics and security—we need to rely on ethics as a common ordering force to address the challenges facing industry in achieving customer privacy and autonomy, business innovation and profit, and public and private cyber security. While the concept of corporate social responsibility is not new, the notion of collective industry ethics is shallow at best. This talk aims to advance the dialogue and collective action by discussing some of the core elements of ethics on which industry doesn't compete but without which it can individually fail.

Available Media

Legislative Engineering: Design Privacy Laws, Don't Just Draft Them

4:00 pm–4:30 pm

Chris Conley, Policy Attorney, ACLU of Northern California

Law is code, but too much of it is bad code. And just like bad software, bad legal code creates a host of problems, from hidden (by accident or design) biases to vague language that leads to unexpected and even manipulable results. Nowhere is this problem more acute than in the realm of laws addressing modern technology, many of which are drafted by legislators or advocates who fail to understand the present capacities and limitations of the technology, let alone its future potential.

This talk will discuss two software engineering techniques, component-based architecture and modeling, as potential ways to improve future technology legislation. Carefully-designed and vetted existing legislation, such as the California Electronic Communications Privacy Act, can provide future legislators with "libraries"—from individual definitions to full model statutes—to incorporate into future work without adding complexity and inconsistency by reinventing the wheel. And developing robust models of the interplay between existing law, such as the various statutes that relate to search warrants, can help lawmakers at least conduct thought experiments on the effects of proposed changes. Can using techniques like these become the norm rather than the exception?

Available Media

The Paper Ballot Is Not Enough

4:30 pm–5:00 pm

Ben Adida, VP Engineering, Clever

Over the last 15 years, election integrity advocates have pushed the paper ballot as the unifying solution to various election audit problems. As long as we have the paper ballot, we thought, we can always recount. The US election of 2016 has given us ample evidence that a paper ballot recount is a lot less likely to occur than we had imagined. We cannot continue to rely solely on paper recounts to ensure the integrity of our elections.

Instead, we should build election systems that inherently provide evidence of their integrity, not via a post-hoc audit process, but by virtue of running the election itself. We should be able to challenge election results not because an election is close, but because there is hard objective evidence of questionable integrity. We'll focus on end-to-end cryptographic techniques and how they can work in practice, and we'll also touch on how important voter registration is, and what we can do to increase our confidence in the maintenance of voter rolls.
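
A worked miniature of "evidence by virtue of running the election": an additively homomorphic tally, in which anyone can recompute the encrypted sum from the posted ballots. This toy uses exponential ElGamal over a far-too-small group; the parameters are invented, and a real system also needs zero-knowledge proofs that each ballot encrypts 0 or 1.

# Toy additively homomorphic tally (exponential ElGamal). Parameters are
# invented and far too small for real use; requires Python 3.8+ for
# modular inverses via pow with a negative exponent.
import random

p = 2_147_483_647                   # toy group modulus (2**31 - 1, prime)
g = 7
sk = random.randrange(2, p - 1)     # election secret key
pk = pow(g, sk, p)

def encrypt(vote: int):
    """Encrypt a 0/1 vote in the exponent so that ciphertexts add."""
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (pow(pk, r, p) * pow(g, vote, p)) % p

def combine(ciphertexts):
    """The public, checkable step: multiply ballots to encrypt the sum."""
    a = b = 1
    for c1, c2 in ciphertexts:
        a, b = (a * c1) % p, (b * c2) % p
    return a, b

def decrypt_tally(ct, max_votes: int) -> int:
    a, b = ct
    g_sum = (b * pow(a, -sk, p)) % p        # recover g**(total yes votes)
    for m in range(max_votes + 1):          # tiny discrete log by search
        if pow(g, m, p) == g_sum:
            return m

ballots = [1, 0, 1, 1, 0, 1]
tally = decrypt_tally(combine(encrypt(v) for v in ballots), len(ballots))
print("yes votes:", tally)                  # prints 4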

Available Media

What Cybersecurity Can Learn from the Secret Service

5:00 pm–5:30 pm

Nathaniel Gleicher, Head of Cybersecurity Strategy, Illumio

It’s easy to break into networks, and it always will be, because a bad actor only has to be right once to get inside. It should be hard to stay hidden once they're in, because each move could expose them. But today it isn't. Intruders spent more than a year inside the DNC and six months inside OPM. The 2013 Yahoo hack wasn't discovered until stolen data appeared for sale in 2016. We've spent the last decade trying to shift our defenses to the interior, but all the defense in depth in the world won't help us if we don't solve this puzzle. The Secret Service offers a compelling model. There's a reason that jumping the wrought iron fence at the White House is easy, but avoiding capture once inside is almost impossible. The Secret Service accepts porous boundaries and has used control of their environment to defend the President for more than a century. This talk will examine how the Secret Service protects the President, how their approach can improve cybersecurity, and what we need to do to get there.

Available Media
5:30 pm–7:00 pm
Conference Reception
East Hall, Sponsored by Netflix
7:00 pm–10:00 pm
Birds-of-a-Feather Sessions
Birds-of-a-Feather sessions (BoFs) are informal gatherings of peers interested in discussing a particular topic and are a great way to maximize the value of your time at a conference. You can always sign up to lead a BoF on site if you're inspired by something you heard in one of the sessions or in the hallway track and want to talk further. See the BoFs page for more information.

Wednesday, February 1, 2017

Continental Breakfast in the Exhibit Foyer
8:55 am–9:00 am
9:00 am–10:00 am

Spam, Abuse, and Criminal Profit

Session Chair: Damon McCoy, New York University

How WhatsApp Reduced Spam while Launching End-to-End Encryption

9:00 am–9:30 am

Matt Jones, Software Engineer, WhatsApp

End-to-end encryption, which protects message content so that only the sender and recipient can access it, is gaining popularity in messaging applications. At the same time, there is some concern about potential deleterious effects on spam detection systems. At WhatsApp we have successfully launched such "e2e" encryption for over 1 billion people—while also reducing the amount of spam they receive. This talk will discuss techniques we've found successful for preventing spam without access to message content, and some of the challenges we faced along the way. It should help dispel concerns that e2e encryption necessarily means reduced effectiveness of spam detection.
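
One way to picture content-free detection: score senders on behavioral metadata alone. The features, weights, and threshold below are invented for illustration and are not WhatsApp's system; the point is that nothing in them reads message content.

# Illustrative metadata-only spam scoring: invented features and weights,
# not WhatsApp's actual system. Nothing here touches message content.
def spam_score(sender: dict) -> float:
    """Score a sender from behavioral signals available under e2e encryption."""
    score = 0.0
    score += 2.0 * (sender["msgs_per_minute"] > 30)         # bursty sending
    score += 1.5 * (sender["unique_recipients"] > 100)      # huge fan-out
    score += 1.0 * (sender["pct_replies_received"] < 0.01)  # nobody replies
    score += 1.0 * (sender["account_age_days"] < 1)         # throwaway account
    return score

sender = {"msgs_per_minute": 45, "unique_recipients": 500,
          "pct_replies_received": 0.0, "account_age_days": 0}
print("likely spammer" if spam_score(sender) >= 3.0 else "looks fine")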

Available Media

Drawing the Foul: Operation of a DDoS Honeypot

9:30 am–10:00 am

Damian Menscher, Security Reliability Engineer, Google

Botnet growth allows DDoS attacks to outpace Moore's Law, and novel attack styles create additional challenges. Meanwhile, most defenders share limited technical information about attacks, often limiting reports to exaggerated bandwidth estimates. We propose information gathering via the operation of a DDoS honeypot—a victim site likely to attract interesting attacks—and present our findings from the operation of KrebsOnSecurity.com.

Available Media
Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
10:30 am–12:00 pm

Usability and Privacy

Session Chair: Eric Rozier, Iowa State University

Putting Privacy Notices to the Test

10:30 am–11:00 am

Lorrie Faith Cranor, Professor of Computer Science and of Engineering and Public Policy, Carnegie Mellon University

We all know that hardly anybody ever reads privacy notices or security warnings, and when people try to read them, they tend to be long and extremely difficult to understand. In this talk I will start by discussing why privacy notices are important, explain why they are largely failing to inform people, and discuss some of the approaches companies and researchers are taking in an attempt to make privacy notices more useful. Then I’ll present a theory about the cognitive processes that take place when someone encounters a privacy notice or security warning. Finally, I will share several examples in which my students conducted user studies to test the effectiveness of privacy notices or security warnings. I will show some examples of notices that don’t seem to be very effective, as well as some examples of how notices can be improved through an iterative design and testing process.

Available Media

Understanding and Designing for End Users' Security Expectations

11:00 am–11:30 am

Franziska Roesner, University of Washington

As technology designers, we must do two things to build systems that are both usable and meaningfully secure. First, we must work to understand the actual security and privacy needs and expectations of our target user groups. Second, we must think not only about how to communicate better with users superficially, but about how to (re)design our systems more fundamentally to better match their needs and expectations. In this talk, I will describe a set of studies illustrating this approach, bringing together usable security and system design. Specifically, I will touch on lessons from our work studying and designing for sensitive communications among journalists/sources and lawyers/clients.

Available Media

Privacy and Security Practices of Individuals Coping with Intimate Partner Abuse

11:30 am–12:00 pm

Sunny Consolvo, Google

This talk will present results of a qualitative study of the digital privacy and security practices and challenges of survivors of intimate partner abuse (IPA). We propose a framework for organizing survivors' technology practices and challenges into three phases: physical control, escape, and life apart. This framework is intended to help technology creators consider how survivors of IPA can leverage new and existing technologies. Overall, our results suggest that the usability of, and control over, privacy and security functions should be, or continue to be, high priorities for technology creators seeking ways to better support survivors of IPA.

Lunch in the East Hall
1:00 pm–3:00 pm

Rise of AI in Security and Privacy

Session Chair: Yan Shoshitaishvili, University of California, Santa Barbara

Deviation from the Norms: The Necessity and Challenges of a Norms-based Approach to Digital Offense

1:00 pm–1:30 pm

Andrea Little Limbago, Chief Social Scientist, Endgame

One of the few areas in which the tech and policy communities generally agree is the need for norms to guide acceptable behavior in the digital domain. Given the increasingly vocal demand for—and noted cases of—greater retaliatory capabilities within the private sector, this is a necessary discussion and directly impacts security and privacy. With everything from cyber militias to full-fledged hacking back capabilities entering the discourse, there is a significant need for policy innovation and creativity that can set the foundation for the broader establishment of global norms. However, there are significant hurdles to norm implementation and a clear lack of comprehension of the factors that impact norm diffusion, including the technical difficulty of verification, outdated policies, collective action problems, and expected-utility challenges. Despite these hurdles, the US must continue to attempt to shape global digital norms. Otherwise, other states will fill that void. And to get it right, tech community collaboration is required to avoid another Wassenaar situation. This presentation will discuss the opportunities and challenges of norm diffusion, while embedding the discussion in the recent discourse on offense-based behavior, which has significant implications for both security and privacy.

Available Media

FeatureSmith: Learning to Detect Malware by Mining the Security Literature

1:30 pm–2:00 pm

Tudor Dumitras, University of Maryland, College Park

Companies facing rampant attacks and data breaches have started turning to artificial intelligence techniques, such as machine learning, for security tasks. A machine learning classifier automatically learns models of malicious activity from a set of known-benign and known-malicious observations, without the need for a precise description of the activity prepared in advance. However, the effectiveness of these techniques primarily depends on the feature engineering process, which is usually a manual task based on human knowledge and intuition. Can we automate this process? Can we build an intelligent system that not only learns from examples, but can also help us build other intelligent systems?

We developed a system, called FeatureSmith, that engineers features for malware detectors by synthesizing the knowledge described in thousands of research papers. As a demonstration, we trained a machine learning classifier with automatically engineered features for detecting Android malware and we achieved a performance comparable to that of a state-of-the-art detector for Android malware, which uses manually engineered features. In addition, FeatureSmith can suggest informative features that are absent from the manually engineered set and can link the features generated to human-understandable concepts that describe malware behaviors.
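
To make the mining step concrete, here is a minimal Python sketch under heavy simplifying assumptions: FeatureSmith's real pipeline applies natural-language processing to thousands of security papers, whereas the corpus, candidate phrases, and malware keywords below are invented for illustration. It scores candidate behavior phrases by how often they co-occur with malware mentions, which is the intuition behind the automatically engineered features.

    from collections import Counter

    # Toy "paper corpus": FeatureSmith actually mines thousands of security
    # papers with NLP; these sentences are invented for illustration.
    corpus = [
        "The malware sends SMS messages to premium numbers.",
        "Samples of this family read contacts and leak the device identifier.",
        "The trojan requests the SEND_SMS permission at install time.",
        "Benign applications rarely read contacts.",
    ]

    # Candidate behavior phrases (hand-picked here; mined automatically in the
    # real system) and keywords signalling that a sentence describes malware.
    candidates = ["sends sms", "read contacts", "leak the device identifier",
                  "send_sms permission"]
    malware_words = ("malware", "trojan", "family", "samples")

    # Score each candidate by how often it co-occurs with a malware mention;
    # high-scoring phrases become features for the malware classifier.
    scores = Counter()
    for sentence in corpus:
        text = sentence.lower()
        if any(w in text for w in malware_words):
            scores.update(c for c in candidates if c in text)

    for feature, count in scores.most_common():
        print(feature, count)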

Available Media

Classifiers under Attack

2:00 pm–2:30 pm

David Evans, University of Virginia

Machine learning classifiers are widely used in security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can often be thwarted by motivated adversaries who can construct evasive variants that are misclassified as benign. The main reason for this is that classifiers are trained on samples collected from previous attacks, which often differ from benign samples in superficial and easily modified ways. Further, many machine learning techniques, including deep neural networks, are inherently fragile. In this talk, I’ll highlight the reasons most classifiers can be evaded by motivated adversaries and demonstrate some successful evasion techniques, including ones that can be fully automated. Then, I’ll talk about methods that could be used to make classifiers less vulnerable to evasion and to evaluate the robustness of a deployed classifier in the presence of adversaries.
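
As a toy illustration of the superficial-features problem (the data, the feature semantics, and the attacker model below are all invented for this sketch), a classifier trained on binary features can be evaded by greedily flipping only the features an attacker can cheaply change, leaving the payload intact:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic binary features: 0-4 are "superficial" traits an attacker can
    # cheaply change; 5-9 stand in for the actual payload behavior.
    X = rng.integers(0, 2, size=(1000, 10))
    y = (X.sum(axis=1) > 5).astype(int)  # label correlates with all features

    clf = LogisticRegression().fit(X, y)

    # Start from a sample the classifier calls malicious, then greedily flip
    # only the superficial features until it is misclassified as benign.
    x = np.ones(10, dtype=int)
    for i in sorted(range(5), key=lambda j: clf.coef_[0][j], reverse=True):
        if clf.predict([x])[0] == 0:
            break
        x[i] = 1 - x[i]  # flip the most incriminating mutable feature first

    print("evaded:", clf.predict([x])[0] == 0, "payload intact:", x[5:].tolist())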

Available Media

Adversarial Examples in Machine Learning

2:30 pm–3:00 pm

Nicolas Papernot, Google PhD Fellow at The Pennsylvania State University

Machine learning models, including deep neural networks, have been shown to be vulnerable to adversarial examples: subtly modified malicious inputs, often indistinguishable from legitimate inputs to a human, crafted to compromise the integrity of the model's outputs. Adversarial examples thus enable adversaries to manipulate system behaviors. Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software.

In fact, the feasibility of misclassification attacks based on adversarial examples has been shown for image, text, and malware classifiers. Furthermore, adversarial examples that affect one model often affect another model, even if the two models are very different. This effectively enables attackers to target remotely hosted victim classifiers with very little adversarial knowledge.

This talk covers adversarial example crafting algorithms operating under varying threat models and application domains, as well as defenses proposed to mitigate such attacks. A practical tutorial will be given throughout the talk, allowing participants to familiarize themselves with adversarial example crafting.
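
One widely used crafting algorithm, likely among those covered in the tutorial, is the fast gradient sign method (FGSM) of Goodfellow et al.: perturb the input by a small step eps in the direction that increases the model's loss. Below is a minimal numpy sketch against a toy logistic-regression model; the weights and input are invented for illustration, and real attacks target neural networks the same way.

    import numpy as np

    # A toy logistic-regression "model" with fixed, invented weights.
    w = np.array([1.5, -2.0, 0.5, 1.0])
    b = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def loss_grad_wrt_x(x, y):
        # Gradient of the cross-entropy loss with respect to the input:
        # for logistic regression this is (p - y) * w.
        p = sigmoid(w @ x + b)
        return (p - y) * w

    # FGSM: one step of size eps along the sign of the input gradient.
    x = np.array([0.2, 0.4, 0.1, 0.3])  # classified benign (score < 0.5)
    y = 0
    eps = 0.25
    x_adv = x + eps * np.sign(loss_grad_wrt_x(x, y))

    print("clean score:", sigmoid(w @ x + b))        # ~0.49, benign
    print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.77, now malicious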

Available Media
Break with Refreshments in the East Hall
3:30 pm–5:30 pm

Security in .gov

Session Chair: Eric Mill, 18F, GSA

Cyber Security: A DARPA Perspective

3:30 pm–4:00 pm

John Launchbury, Director, Information Innovation Office (I2O)

Cyber threats against our information systems have grown in sophistication and number, yet progress in the cyber security of best-of-breed systems has been significant over the last few years, giving us hope that we are no longer facing an impossible task. This talk will present an overview of the state of current cyber systems and the anatomy of a cyber attack, and will discuss the cyber R&D portfolio in DARPA I2O.

Available Media

National Scale Cyber Security

4:00 pm–4:30 pm

Dr Ian Levy, National Cyber Security Centre, UK

In October 2016, the UK National Cyber Security Centre was formally launched, charged with better defending the UK at large from all forms of cyberattack. The National Cyber Security Strategy was launched shortly afterwards and—strangely for a government strategy—talked about some of the national scale technical things that the Centre would lead. The NCSC is determined to get objective evidence and data to better inform cybersecurity decisions and investments, as well as making people’s lives better by reducing both the number of attacks they see and the impact of those that succeed. Ian will talk about some of these interventions and what the NCSC has learned on the way—all supported by real data and analysis.

Available Media

Green Sprouts: Encouraging Signs of Life from the Department of Defense’s 2016 Security Strategy

4:30 pm–5:00 pm

Lisa Wiswell, Bureaucracy Hacker, Defense Digital Service 

Not that long ago, the Department of Defense started to evolve from security through obscurity to more open practices that welcome contributions from the outside world. A new “Hack the Pentagon” bug bounty pilot proved that outside hackers could help secure DoD systems. The DoD then launched a vulnerability disclosure policy so that researchers could point out general security issues. And a recent “Hack the Army” contest has further proven that bug bounties are an effective tool in the DoD’s security toolkit.

This talk will discuss 2016’s rocky road to get to where we are today. Though there is still work to be done, there are signs of life to report. Progress at the DoD will help legitimize practices for other sectors and finally provide more clarity on the gray areas of the Computer Fraud & Abuse Act. Security researchers and prosecutors alike will have a better understanding of what is and isn’t legal in the hacking realm.

Available Media

Understanding and Securing DoD Weapon Systems

5:00 pm–5:30 pm

Lt. Col. Tim Booher, DIUx and CYBERCOM

DARPA Cyber Grand Challenge (CGC) participants demonstrated recent advances in cyber autonomy, including automatic patching and vulnerability discovery. Relying on teams of human hackers doesn't scale, yields no general conclusions, and incurs high costs without any guarantee of increased weapon system survivability. DARPA, DIUx, CYBERCOM, the Air Force, and the Navy are investing in automated tools developed for CGC for application in embedded, real-time, and heterogeneous mission systems. In addition, they plan to demonstrate a suite of advanced technologies including high-fidelity multi-resolution modeling, high-integrity processing, high-fidelity situational awareness, systems hardening, and data provenance.
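
For a flavor of what automated vulnerability discovery means in practice, here is a deliberately minimal random fuzzer in Python; CGC systems are far more sophisticated, combining fuzzing with symbolic execution, and the buggy toy parser below is invented purely for illustration.

    import random

    def toy_parser(data: bytes) -> int:
        # Invented target with a planted bug: a zero length byte in the
        # header causes a division by zero.
        if len(data) >= 4 and data[:2] == b"MZ":
            length = data[2]
            return sum(data[3:3 + length]) // length
        return 0

    def mutate(seed: bytes) -> bytes:
        # Flip one random byte of the seed input.
        data = bytearray(seed)
        data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    # Fuzz loop: mutate a seed, run the target, and log crashing inputs.
    random.seed(1)
    seed = b"MZ\x08AAAAAAAA"
    crashes = []
    for _ in range(10_000):
        candidate = mutate(seed)
        try:
            toy_parser(candidate)
        except Exception as exc:
            crashes.append((candidate, exc))

    print(f"{len(crashes)} crashing inputs found")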

Available Media