Conference Program

All sessions will take place at the Oakland Marriott City Center.

Monday, January 30, 2017

Continental Breakfast in the Exhibit Foyer
8:45 am–9:00 am
9:00 am–10:00 am

Passwords and Human Secrets

Session Chair: Mark Risher, Google

Human Computation with an Application to Passwords

9:00 am–9:30 am

Manuel Blum, Bruce Nelson University Professor of Computer Science, Carnegie Mellon University

Manuel Blum received his BSEE from MIT in 1959. He started his career working in the Neurophysiology Laboratory of Dr. Warren S. McCulloch. He received his PhD in Mathematics at MIT in 1964 under the supervision of Professor Marvin Minsky. After a brief stint as Assistant Professor of Math at MIT, he joined the EECS Department at UC Berkeley, 1968–2000. From 2000 to the present, he has been a Professor of CS at Carnegie Mellon University. Manuel is a Turing Award winner and a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences. He is the proud academic father of 34 unique and exceptional PhD students. For recreation, Manuel replaced his motorcycle–which gave him no exercise–with a bicycle.

Never memorize passwords. Compute them in your head instead, using secure humanly computable functions from challenges (website names) to responses (passwords).

Additional Materials: Publishable Humanly Usable Secure Password Creation Schemas, Manuel Blum and Santosh Vempala
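For a concrete sense of what such a scheme looks like, here is a minimal Python sketch of a challenge-to-response function. It is not one of the published Blum–Vempala schemas (those add structure to resist statistical attacks); the letter-to-digit secret and the running-sum rule are illustrative assumptions only.

    import random

    # Toy scheme: the memorized secret is a random letter-to-digit map; the
    # password is computed from the website name with a running sum mod 10.
    # A hypothetical simplification, NOT the published schema.
    def make_secret(seed=None):
        rng = random.Random(seed)
        return {chr(ord('a') + i): rng.randrange(10) for i in range(26)}

    def respond(secret, challenge):
        acc, out = 0, []
        for ch in challenge.lower():
            if ch.isalpha():
                acc = (acc + secret[ch]) % 10   # one mental step per letter
                out.append(str(acc))
        return "".join(out)

    secret = make_secret(seed=1)
    print(respond(secret, "usenix.org"))  # the same site always yields the same password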

Moving Account Recovery beyond Email and the "Secret" Question

9:30 am–10:00 am

Brad Hill, Security Engineer, Facebook

Brad is a Security Engineer at Facebook and a long-term contributor to internet-scale security efforts in organizations including the W3C, the FIDO Alliance, the IETF, and the CA/Browser Forum.

Managing a lost password or other credential is a problem every application must contend with, but which remains the most neglected part of account lifecycle management. Best common practice has failed to advance beyond the choices of “security” questions, emailed password reset links, or SMS-delivered codes. Federated Identity systems solve some problems but are economically unacceptable in many situations to both users and platforms. This talk introduces a lightweight, purpose-built, and self-assembling protocol with a prototype implementation by Facebook. It allows users to recover account access at any service using whatever other service(s) are best able to re-authenticate them. The design is focused on user choice and privacy and avoids asking people to bargain with their personal information to obtain this basic necessity of online life. It also presents the opportunity to build recovery capabilities for end-to-end encryption keys that meet the needs and abilities of ordinary humans.
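As a rough illustration of the delegation idea, here is a Python sketch of a token-based recovery flow. The real protocol uses public-key signatures and encrypted tokens exchanged over HTTPS; the shared-key MACs, function names, and flow below are simplifying assumptions.

    import hashlib, hmac, os

    ISSUER_KEY = os.urandom(32)     # held by the account provider
    RECOVERER_KEY = os.urandom(32)  # held by the re-authenticating service

    def issue_token(user_id):
        # Account provider mints an opaque token; the user deposits it at
        # the recovery provider ahead of time. Payload plus a 32-byte MAC.
        payload = user_id.encode()
        return payload + hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()

    def countersign(token):
        # Recovery provider re-authenticates the user, then vouches for them.
        return token + hmac.new(RECOVERER_KEY, token, hashlib.sha256).digest()

    def recover(blob):
        # Account provider checks both MACs before restoring access.
        token, rtag = blob[:-32], blob[-32:]
        assert hmac.compare_digest(
            rtag, hmac.new(RECOVERER_KEY, token, hashlib.sha256).digest())
        payload, itag = token[:-32], token[-32:]
        assert hmac.compare_digest(
            itag, hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest())
        return payload.decode()

    print(recover(countersign(issue_token("alice"))))  # access restored for "alice"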

Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
10:30 am–11:30 am

Crypto and Security

Session Chair: Bryan Payne, Netflix

Secrets at Scale: Automated Bootstrapping of Secrets & Identity in the Cloud

10:30 am–11:00 am

Ian Haken, Senior Security Software Engineer, Netflix

Ian Haken is a senior security software engineer at Netflix, where he works on the Platform Security team to develop tools and services that defend the Netflix platform. Before working at Netflix, he spent two years as a security researcher at Coverity, where he worked on static analysis tools for the automatic discovery and remediation of security vulnerabilities. He received his PhD in mathematics from the University of California, Berkeley in 2014 with a focus on computability theory and algorithmic information theory. Ian has given talks at several other information security conferences, including Black Hat EU, AppSec EU, and BSides Seattle.

Almost any web service needs secrets to operate. Whether it’s encryption keys for storing credit card data and personally identifiable information, authentication tokens for talking to third party services, or just a password for connecting to the local database, if your application lives online it probably has a secret. But how do you actually keep those secrets secret? In an ideal world access would be tightly restricted; neither developers, nor continuous integration, nor deployment tools would ever see them. But for applications deployed in the cloud which need to automatically instantiate new instances to match demand and replace unhealthy nodes, this creates an even greater challenge; how can an application be automatically deployed with its secrets if even the deployment tools can’t be allowed to see them?

In this talk I will describe how we have approached this problem at Netflix: an environment supporting thousands of independent microservice applications, all of which need the capability to automatically scale and self-heal. Along the way, I’ll describe how this problem becomes inexorably intertwined with the question of secure, provable, and ephemeral identity, and how we ultimately architected a solution to both problems.
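A minimal sketch of the bootstrapping pattern, loosely in the spirit of the approach described here; all names and the shared-key proof are hypothetical simplifications (real deployments use cloud-signed instance identity documents rather than a shared key). The deployment system never sees secrets: each instance proves who it is, and a secrets service releases only the short-lived credentials scoped to that identity.

    import hashlib, hmac, json, os, time

    CLOUD_KEY = os.urandom(32)   # stands in for the cloud provider's signing key
    SECRETS = {"billing-service": {"db_password": "s3cret"}}

    def identity_document(app):
        # In a real cloud this would be the provider-signed identity document.
        doc = json.dumps({"app": app, "ts": time.time()}).encode()
        return doc + hmac.new(CLOUD_KEY, doc, hashlib.sha256).digest()

    def fetch_secrets(blob):
        # Secrets service: verify the identity proof, then release only the
        # secrets scoped to that application, and only for a short window.
        doc, tag = blob[:-32], blob[-32:]
        if not hmac.compare_digest(tag, hmac.new(CLOUD_KEY, doc, hashlib.sha256).digest()):
            raise PermissionError("identity proof invalid")
        claims = json.loads(doc)
        if time.time() - claims["ts"] > 300:   # proofs are ephemeral
            raise PermissionError("identity proof expired")
        return SECRETS.get(claims["app"], {})

    print(fetch_secrets(identity_document("billing-service")))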

Inside "MOAR TLS:" How We Think about Encouraging External HTTPS Adoption on the Web

11:00 am–11:30 am

Emily Schechter, Google

Emily Schechter is Product Manager for Chrome Security at Google, where she works on Chrome Security UX and HTTPS adoption on the web. She has previously worked on the Google Safe Browsing and Anti-Malvertising teams to keep Google and web users safe from online threats. Emily has degrees in Computer Engineering and Economics from Dartmouth College.

HTTPS has been around for over 20 years, but adoption has only slowly increased recently. At Google, a project called "MOAR TLS" is guiding the web towards HTTPS everywhere by methodically hunting and addressing major hurdles for TLS adoption. This talk will give an inside look into the unexpected speedbumps we've faced along the way (why has it taken years to indicate HTTP as "not secure" in Chrome?) and research we've used to guide our strategy. We'll share some talking points that lead to successful HTTPS conversations, in case you're also interested in promoting HTTPS. Finally, we’ll discuss remaining hurdles on the road to HTTPS everywhere.
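For readers who want to check their own sites for the basics discussed here, a small measurement sketch (assumes the third-party requests package; example.com stands in for your own domain):

    import requests

    def https_posture(host):
        # Follow redirects from plain HTTP and see where we land.
        r = requests.get(f"http://{host}/", timeout=10)
        return {
            "redirects_to_https": r.url.startswith("https://"),
            "hsts": "Strict-Transport-Security" in r.headers,
            "final_url": r.url,
        }

    print(https_posture("example.com"))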

11:30 am–12:00 pm
Lunch in the East Hall
Visit the Enigma Sponsor Showcase!
1:00 pm–3:00 pm

Internet of Things

Session Chair: Tudor Dumitras, University of Maryland, College Park

Ghost in the Machine: Challenges in Embedded Binary Security

1:00 pm–1:30 pm

Jos Wetzels, Distributed and Embedded System Security (DIES) Group, University of Twente, Netherlands

Jos Wetzels is a Research Assistant with the Distributed and Embedded System Security (DIES) Group at the University of Twente. He currently works on projects aimed at hardening embedded systems used in critical infrastructure, where he focuses on binary security in general and exploit development and mitigation in particular, and has been involved in research regarding on-the-fly detection and containment of unknown malware and Advanced Persistent Threats. He has assisted teaching hands-on offensive security classes for graduate students at the Dutch Kerckhoffs Institute for several years.

Embedded systems are everywhere, from consumer electronics to critical infrastructure, vehicles, airplanes and military equipment. Yet public attention to security of embedded systems is relatively recent compared to that of the general purpose world. Combined with its polyculture of hardware architectures and operating systems and the proprietary and restricted nature of many systems and protocols, this has led to an opaque security landscape for both researchers and developers.

As a result embedded binary security generally lags behind what is commonly expected of modern general purpose systems. Hardening embedded systems via adoption of modern exploitation mitigations isn't, however, a trivial endeavor due to the many challenges and intrinsic constraints imposed by embedded environments. If we take into consideration the dominance of unsafe languages and the fact that patch deployment is far more involved on these systems, this is all the more reason for concern.

In this talk we will delve into the embedded ecosystem, present an overview of the state of embedded binary security and outline some of the challenges faced in the adoption of modern exploit mitigations, drawing upon our experiences during security research conducted into popular embedded OSes and our involvement in developing such mitigations for Industrial Control Systems.
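One way to make the mitigation gap tangible is to inspect a binary for common hardening flags, in the spirit of tools like checksec. A Python sketch, assuming the third-party pyelftools package; the checks shown (NX via PT_GNU_STACK, PIE via ET_DYN, stack canaries via the __stack_chk_fail symbol) are simplified heuristics, not a complete audit:

    from elftools.elf.elffile import ELFFile

    def mitigations(path):
        with open(path, "rb") as f:
            elf = ELFFile(f)
            flags = {"nx": False, "canary": False, "pie": False}
            for seg in elf.iter_segments():
                if seg["p_type"] == "PT_GNU_STACK":
                    flags["nx"] = not (seg["p_flags"] & 0x1)  # no PF_X => non-exec stack
            flags["pie"] = elf.header["e_type"] == "ET_DYN"
            dynsym = elf.get_section_by_name(".dynsym")
            if dynsym:
                names = {sym.name for sym in dynsym.iter_symbols()}
                flags["canary"] = "__stack_chk_fail" in names
            return flags

    print(mitigations("/bin/ls"))  # example target; point it at your own firmware blob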

LLC Cache Attacks: Applicability and Countermeasures

1:30 pm–2:00 pm

Gorka Irazoqui, Intel and Worcester Polytechnic Institute

Gorka Irazoqui received his BSc (2011) and MSc (2013) in telecommunications from Tecnun, Universidad de Navarra, in Spain. He is entering his fourth year as a PhD student at Worcester Polytechnic Institute in Massachusetts under the supervision of Thomas Eisenbarth. His research topics are microarchitectural side-channel attacks and countermeasures in the cloud. He spent the summer of 2016 as an intern at Intel.

This is joint work with Xiaofei Guo (Intel Corporation).

Cache attacks have been demonstrated to be a serious concern for security code designers, as they can recover a wide range of information, from cryptographic keys to user privacy-related data. These attacks take advantage of the fact that two processes utilize the same hardware resource, leveraging unexpected leakages that a malicious user can exploit. More specifically, Last Level Cache (LLC) attacks exploit the fact that the LLC is shared across cores, making it possible to steal information from users located on different cores. This presentation describes the approaches that Flush and Reload and Prime and Probe take, together with the requirements they need to be launched successfully. In addition, we evaluate examples of everyday software that these kinds of attacks can target to violate our privacy. Further, this presentation expands on the scenarios in which both attacks can succeed, including, but not limited to, IaaS and PaaS co-located VMs/processes, web-browser JavaScript attacks, trusted execution environment attacks, and cross-smartphone-application attacks. Finally, we evaluate the effectiveness of possible preventions at different levels of the system, including network-level, software-level, OS/hypervisor-level, and hardware-level countermeasures.
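To illustrate the core decision step these attacks share, here is a toy Python simulation of the Flush+Reload classifier. A real attack flushes a shared cache line (clflush) and times the reload (rdtsc) in native code; the latencies and threshold below are made-up but representative numbers.

    import random

    HIT_NS, MISS_NS, THRESHOLD = 80, 300, 150   # illustrative latencies (ns)

    def timed_reload(victim_touched):
        # Simulated measurement: the reload is fast iff the victim touched the line.
        return random.gauss(HIT_NS if victim_touched else MISS_NS, 15)

    # The victim accesses the shared line exactly when its secret bit is 1;
    # the attacker recovers each bit by flushing, waiting, and timing a reload.
    secret_bits = [1, 0, 1, 1, 0, 0, 1]
    recovered = [int(timed_reload(bool(b)) < THRESHOLD) for b in secret_bits]
    print(secret_bits == recovered, recovered)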

IoT, a Cybercriminal's Paradise

2:00 pm–2:30 pm

Attila Marosi, Senior Threat Researcher, Sophos

Attila Marosi has worked in the information security field ever since he started in IT. As an active-duty lieutenant, he worked for almost a decade on special information security tasks within the Special Service for National Security. Later he was transferred to the newly established GovCERT-Hungary, an additional national level in the internationally known system of CERT offices. He now works at SophosLabs as a Senior Threat Researcher on the Emerging Threats Team, providing novel solutions for the newest threats.

In our research, over 90% of the unprotected devices were found to be infected with at least one malware threat or to be exhibiting signs of an attack. In this talk, we'll reveal the results of that research, exposing the number of vulnerable devices and the gigabytes of storage now freely available to attackers. We'll also share the technical results of the malware analysis. In summary, this talk will provide insight into how very old Internet protocols are being exploited on modern internet-connected "things," explain the risks this creates for home and corporate users, and offer recommendations on how businesses and users can better protect themselves against these unsophisticated but dangerous and highly successful attack scenarios.

Hacking Sensors

2:30 pm–3:00 pm

Yongdae Kim, Professor, Korea Advanced Institute of Science and Technology (KAIST)

Yongdae Kim is a Professor in the Department of Electrical Engineering and an Affiliate Professor in the GSIS at KAIST. He received his PhD from the computer science department at the University of Southern California under the guidance of Gene Tsudik. Between 2002 and 2012, he was an Assistant/Associate Professor in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities. Before joining the University of Minnesota, he worked for two years as a research staff member in the Sconce Group at UC Irvine. Before coming to the US, he worked for six years at ETRI, securing Korean cyberinfrastructure. Between 2013 and 2016, he served as a KAIST Chair Professor. He received an NSF CAREER award for storage security and a McKnight Land-Grant Professorship Award from the University of Minnesota in 2005. He currently serves as a steering committee member of NDSS and as an Associate Editor of ACM TISSEC. His current research interests include security issues in systems such as cyber-physical systems, social networks, cellular networks, P2P systems, medical devices, storage systems, mobile/ad hoc/sensor networks, and anonymous communication systems.

Sensors are designed to measure sensor inputs (e.g., physical quantities) and transfer sensor outputs (e.g., voltage signals) to embedded devices. Sensor-equipped embedded systems (called sensing-and-actuation systems) decide their actuations according to these sensor outputs, without questioning whether those outputs are legitimate. Sensors are essential components of safety-critical systems such as self-driving cars, drones, and medical devices, where breaking safety may cause loss of life or disaster. For these safety reasons, sensors are often designed to be robust against failures or faults. But can they maintain safety under adversarial conditions? In this talk, I detail how sensors can be spoofed or prevented from operating correctly through regular and side channels. Attacks on various devices such as medical devices, drones, and smart wearables will be shown. I'll close the talk with directions and guidance for preventing these attacks, along with a few open problems.
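A tiny numpy demonstration of one such spoofing channel, out-of-band signal injection via aliasing (cf. acoustic attacks on MEMS gyroscopes); all parameters are invented for illustration:

    import numpy as np

    # An attacker tone just above the sensor's sample rate aliases down into
    # the passband, so the ADC reports a slow signal that was never
    # physically there.
    fs = 1000.0                         # sensor sample rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    f_inject = 1030.0                   # injected tone (Hz), above Nyquist
    samples = np.sin(2 * np.pi * f_inject * t)

    # The sampled attack tone is indistinguishable from a 30 Hz signal:
    ref = np.sin(2 * np.pi * 30.0 * t)
    print("max difference:", np.max(np.abs(samples - ref)))  # ~0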

Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
3:30 pm–5:30 pm

Software Security

Session Chair: Michael Bailey, University of Illinois at Urbana–Champaign

Test Driven Security in Continuous Integration

3:30 pm–4:00 pm

Julien Vehent, Firefox Services Security Lead at Mozilla

Julien Vehent leads security architecture for Mozilla Firefox Services. He is responsible for defining, implementing, and operating the security of web services that millions of Firefox users interact with daily. Julien's background is in web application security, services architecture, cryptography, and risk management. He is the author of Securing DevOps (Manning).

Mozilla runs services for millions of Firefox users that must be operated at reasonable cost while sustaining a fast innovation pace. Development and operation teams have long adopted DevOps' Continuous Integration (CI) and Continuous Delivery (CD) principles, allowing applications to go from a patch submission to production deployment in minutes. These fast cycles have left security controls designed for slow deployment cycles lagging behind. In this talk, we describe how the Mozilla CloudSec team has redesigned security into the DevOps pipelines to accelerate the discovery and mitigation of security issues using a technique called "Test Driven Security" (TDS).

Similar to Test Driven Development, TDS puts the security tests that represent the desired behavior first, then runs these tests continuously against the code. Compared to a traditional approach where controls implementation is done outside of CI/CD, TDS can run in the DevOps pipeline automatically and continuously assert security of a web application.

In this presentation, we show how Mozilla uses Open Source tools to implement TDS and reduce the number of security vulnerabilities and regressions that reach production environments.
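A minimal sketch of what a TDS check can look like in practice, written as pytest-style tests run against a staging deployment on every CI build (the endpoint is hypothetical; requires the requests package). The security requirement is expressed as a test first, then the pipeline enforces it continuously:

    import requests

    BASE = "https://staging.service.example.net"   # hypothetical endpoint

    def test_security_headers():
        r = requests.get(BASE, timeout=10)
        h = r.headers
        assert "Strict-Transport-Security" in h          # HSTS enforced
        assert "Content-Security-Policy" in h            # CSP deployed
        assert h.get("X-Content-Type-Options") == "nosniff"

    def test_no_plaintext_http():
        r = requests.get(BASE.replace("https", "http", 1),
                         timeout=10, allow_redirects=False)
        assert r.status_code in (301, 308)               # hard redirect to HTTPS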

As We May Code

4:00 pm–4:30 pm

David Molnar, Senior Researcher, Microsoft

David Molnar is a senior researcher at Microsoft AI and Research, where he leads the incubation of "Project Springfield." Project Springfield combines pioneering "whitebox fuzzing" technology with cloud scale to help people find serious security issues in software across their organization. Prior to Microsoft, David earned a Ph.D. in computer science at University of California, Berkeley, working with David Wagner. David’s work on privacy for radio frequency identification earned the "Test of Time" award from the ACM Computer and Communications Security conference in 2014, and his current work focuses on software security.

Advances in automatic reasoning, combined with the shift to cloud services, have changed the software security lifecycle. As we develop software, deploy it into our production systems, look for bugs in that software, and seek to understand these bugs, new techniques help us augment our human abilities with reasoning that can scale up to make decisions in seconds and scale across a wide range of problems. Cloud services that bundle together the data, compute, and specialized talent required for these augmentations make it possible to deploy these techniques across an organization. I will illustrate with examples from different phases of the software security lifecycle, including the "Project Springfield" bug-finding service at Microsoft. Finally, I will show how the trend toward cloud infrastructure will enable deeper impact on how we manage security for code we build, code we buy, or code we transform.
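To see the feedback idea in miniature, here is a toy coverage-guided fuzzing loop in Python. Project Springfield itself uses whitebox fuzzing (SAGE-style symbolic execution), which is stronger than this greybox sketch; the instrumented target and mutation rules are invented for illustration:

    import random

    def target(data):
        # Instrumented program under test: returns the set of "edges" hit.
        cov = {0}
        if len(data) > 0 and data[0] == ord('F'):
            cov.add(1)
            if len(data) > 1 and data[1] == ord('U'):
                cov.add(2)
                if len(data) > 2 and data[2] == ord('Z'):
                    raise RuntimeError("crash!")   # the bug we want to find
        return cov

    def mutate(data):
        b = bytearray(data or b"A")
        b[random.randrange(len(b))] = random.randrange(256)
        return bytes(b + b"A" * random.randrange(2))

    corpus, seen = [b"AAAA"], set()
    for _ in range(200000):
        candidate = mutate(random.choice(corpus))
        try:
            cov = target(candidate)
        except RuntimeError:
            print("crash found with input:", candidate)
            break
        if not cov <= seen:            # new coverage -> keep this input
            seen |= cov
            corpus.append(candidate)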

Leveraging the Power of Automated Reasoning in Security Analysis of Web Applications and Beyond

4:30 pm–5:00 pm

Nestan Tsiskaridze, University of California, Santa Barbara

Nestan is a Postdoctoral Researcher in the Department of Computer Science at the University of California, Santa Barbara. Her research interests include automated reasoning, Satisfiability Modulo Theories, formal verification, and security. Her current work focuses on developing symbolic analysis techniques and tools for the automatic identification of vulnerabilities in Java bytecode. Previously she worked on developing techniques to aid the checking of software security properties, on automating the solving of string constraints, and on automated reverse engineering of integrated circuits for security analysis.

She was a Postdoctoral Researcher and a Visiting Assistant Professor at the University of Iowa and a Postdoctoral Researcher at Princeton University.

She received a PhD in Computer Science from the University of Manchester, United Kingdom. Her dissertation presented a novel approach to Linear Programming—the Conflict Resolution method.

The modern world critically depends on the security and safety of software. We seek to ensure customer confidence and protect privacy, intellectual property, and national security. As threats to software security have become more sophisticated, so too have the techniques developed to ensure security.

This talk focuses on novel opportunities to automate bug detection and security exploit generation provided by advances in symbolic execution and automated constraint solving. It discusses how symbolic execution can benefit from novel techniques in Satisfiability Modulo Theories (SMT), a subfield of automated theorem proving that in the past 10 years has revolutionized the discipline. The talk presents a recent highly successful application of SMT solvers in support of the security analysis of Web applications and how these new capabilities open opportunities for automating such analysis beyond the Web.

This is joint work with Clark Barrett (NYU/Stanford University), Morgan Deters (NYU), Tianyi Liang (The University of Iowa), Andrew Reynolds (The University of Iowa/EPFL), and Cesare Tinelli (The University of Iowa).
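As a small taste of the string-solving capability involved, the Z3 SMT solver (pip install z3-solver) can find a sanitizer bypass directly from constraints, which is exactly the kind of query a symbolic executor poses. The sanitizer modeled here is a hypothetical one that strips only the first occurrence of a tag:

    from z3 import Contains, Replace, Solver, String, StringVal, sat

    user = String("user_input")
    # Model a naive sanitizer that removes the first "<script>" it finds
    # (z3's Replace has exactly this replace-first semantics).
    sanitized = Replace(user, StringVal("<script>"), StringVal(""))

    s = Solver()
    s.add(Contains(sanitized, StringVal("<script>")))  # payload survives it
    if s.check() == sat:
        print("bypass:", s.model()[user])   # e.g. "<script><script>"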

Startups + Industry: How Everyone Can Win

5:00 pm–5:30 pm

Hudson Thrift, Uber

Hudson Thrift leads security operations at Uber, where he is responsible for strategy, efficiency, and execution across the engineering security organization. Hudson is a co-founder and former COO of Kaprica Security, a mobile security company serving large enterprise and government customers, and acquired by Samsung Electronics in 2016. He holds a bachelor of science degree in computer science from Carnegie Mellon University and is a member of the school’s renowned Plaid Parliament of Pwning (PPP) hacking team.

Companies need better, not more, security products. The usual choices when looking for a new solution are buy, build, or use open source, but what if there are other options? In this spirit, the engineering security team at Uber has been exploring how to engage with early stage startups to bring new product features, and cause slight roadmap pivots, to address Uber’s security needs. The results have been cost effective, time efficient, and have created a better product for everyone.

This talk will discuss how to identify appropriate problems that can be solved with this model, how to find the right kind of startup, and how to work with them under your specific requirements. We’ll explore the concept from an abstract point of view as well as in practice using a case study from the Uber team.

6:00 pm–7:30 pm
Conference Reception
East Hall, Sponsored by Google
7:30 pm–10:30 pm
Birds-of-a-Feather Sessions
Birds-of-a-Feather sessions (BoFs) are informal gatherings of peers interested in discussing a particular topic and are a great way to maximize the value of your time at a conference. You can always sign up to lead a BoF on site if you're inspired by something you heard in one of the sessions or in the hallway track and want to talk further. See the BoFs page for more information.

Tuesday, January 31, 2017

Continental Breakfast in the Exhibit Foyer
Visit the Enigma Sponsor Showcase!
8:55 am–9:00 am
9:00 am–10:00 am

Behaviors and Detection

Session Chair: Carrie Gates, Dell

Behaviors and Patterns of Bulletproof and Anonymous Hosting Providers

9:00 am–9:30 am

Dhia Mahjoub, Principal Engineer, OpenDNS Research Labs

Dhia Mahjoub is the Principal Engineer of OpenDNS Research Labs (now part of Cisco) with more than 10 years of technology research experience in network protocols, graph theory, sensor networks, and security. He builds large scale threat detection systems, leads research projects, and provides expert advice on strategic directions. Dhia holds a PhD in Computer Science from Southern Methodist University with a specialty in graph theory applied on Wireless Sensor Networks. He has presented at conferences worldwide including APWG eCrime, Botconf, Black Hat, Defcon, Virus Bulletin, ShmooCon, Kaspersky SAS, Infosecurity Europe, BruCon, Hack.lu, FloCon, and RSA.

Bulletproof and anonymous hosting providers are key enabling factors of ransomware, phishing, and other cybercrime operations. Bulletproof hosters shield criminal content from abuse complaints and takedowns, whereas anonymous offshore hosters preserve privacy and free speech for their customers. Despite being conceptually different, the distinction between both classes tends to blur in practice. These hosters leverage multiple factors in their operations: the anonymity of the internet when establishing their businesses, heterogeneous laws and norms that exist in cross-border online spaces, and jurisdictions with little or no legislation to enforce laws against cyber criminals. Focusing threat intelligence efforts on these services and the actors that provide them is an important step to identifying and removing illegal and malicious content on the Internet. As an example, we choose The Netherlands, one of the world's top transit and hosting spaces, and through our research we bring together findings from the network and the field to shed light on criminal hosting in the Dutch IP space. This talk will be useful to threat analysts, security researchers, and law enforcement.

This is joint work with Sarah Brown (Security Links/NATO).

StreamAlert: A Serverless, Real-time Intrusion Detection Engine

9:30 am–10:00 am

Jack Naglieri, Security @ Airbnb

Jack Naglieri is a security engineer at Airbnb with a passion for DevOps, security, and infrastructure. His exposure to information security began as an incident responder for Verisign. After graduating from George Mason University, he moved to the San Francisco Bay Area and spent two years at Yahoo as an incident responder. He then transitioned into a security engineering role, with the challenge of deploying security monitoring tools at scale. A year and a half later, he joined Airbnb, where he is now open-sourcing a framework that enables real-time data analysis at scale—StreamAlert.

StreamAlert is a serverless, real-time intrusion detection engine. By building upon technologies such as AWS (platform-as-a-service), Terraform (infrastructure-as-code) and Chef (configuration-as-code), any Security organization can quickly deploy StreamAlert and have it scale to terabytes per hour. StreamAlert enables you to write both simple and complex rules against any of your application, system, or network logs. StreamAlert will remove all of the headaches typically associated with availability, scalability, segmentation, and maintenance (patching, upgrades, etc...). This lets your team focus on their core competency—catching attackers.
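The rule model is the heart of the framework: analysts write small Python predicates over parsed log records, and the engine fans each record out to every matching rule. Below is a self-contained toy engine illustrating the pattern; the decorator name follows StreamAlert's style, but the module layout, log names, and outputs here are invented, not the project's real API:

    RULES = []

    def rule(logs, outputs):
        # Registering decorator: remembers which logs a rule applies to and
        # where its alerts should be delivered.
        def register(fn):
            RULES.append((fn, logs, outputs))
            return fn
        return register

    @rule(logs=["ssh"], outputs=["slack"])
    def root_login(record):
        return record.get("user") == "root" and record.get("event") == "login"

    @rule(logs=["cloudtrail"], outputs=["pagerduty"])
    def mfa_disabled(record):
        return record.get("eventName") == "DeactivateMFADevice"

    def process(log_type, record):
        # The engine: run every applicable rule against the incoming record.
        for fn, logs, outputs in RULES:
            if log_type in logs and fn(record):
                print(f"ALERT {fn.__name__} -> {outputs}: {record}")

    process("ssh", {"user": "root", "event": "login"})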

Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
10:30 am–12:00 pm

Neuroscience and Security

Session Chair: Daniela Oliveira, University of Florida

Neural and Behavioral Insights on Trust

10:30 am–11:00 am

Uma R. Karmarkar, Assistant Professor, Harvard Business School; Affiliate of the Harvard Center for Brain Science; currently a Visiting Scholar at the Haas School of Business, University of California, Berkeley

Uma R. Karmarkar is an Assistant Professor in the Marketing Unit at the Harvard Business School. She holds dual PhDs in Neuroscience (University of California, Los Angeles) and in Consumer Behavior (Stanford Graduate School of Business). Prior to entering the field of consumer behavior, she held an NIH-supported postdoctoral fellowship in Neuroscience at the University of California, Berkeley.

Dr. Karmarkar’s research falls into two major streams. The first investigates how individuals use the limited information available in uncertain contexts to make choices. The second examines how the timing and context-dependent framing of information influence perceptions of value and decisions. In pursuing this work, she combines methods from consumer psychology and behavioral economics with neuroimaging, allowing her to study both conscious and unconscious processes.

Trust is a psychological factor that can gate channels of communication, persuasion, and collaboration. Here, I offer an overview of some of the neural and psychological mechanisms involved in coding for trust and coding for distrust. Trust can be conceptualized as two types of functions. The first is a factor in a relationship with another agent that is often socially developed through one or more interactions. The second is as an individual perception that contributes to certainty or confidence in the face of uncertainty. My research relates to the latter, and how people use incomplete information to handle uncertain or ambiguous decisions. I will show how the relationship between information and feelings of certainty is important for persuasion, and can lead to both optimistic and pessimistic biases in individual decision-making.

What Does the Brain Tell Us about Usable Security?

11:00 am–11:30 am

Anthony Vance, Brigham Young University

Anthony Vance is an Associate Professor of Information Systems in the Marriott School of Management at Brigham Young University. He has earned PhD degrees in Information Systems from Georgia State University, USA; the University of Paris—Dauphine, France; and the University of Oulu, Finland. He is currently an associate editor at MIS Quarterly. His research focuses on behavioral and neuroscience applications to information security and has been published in MIS Quarterly, Information Systems Research, the Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), and other outlets.

A popular saying in information security is, “Given a choice between dancing pigs and security, users will pick dancing pigs every time.” However, this statement is unfair—insights from neuroscience show that users’ seemingly lazy or careless security behaviors are not careless at all, but a natural consequence of how the brain works.

In this talk, we’ll show how neuroscience can be used to better understand users’ security behavior and to design more usable security UI. To illustrate, we’ll share insights from a series of experiments that combine functional magnetic resonance imaging (fMRI) and traditional usability testing. The results reveal the substantial effect of neurobiology on users’ security behavior and suggest practical ways to improve security UI.

This is joint work with Bonnie Anderson, Jeff Jenkins, and Brock Kirwan of the Neurosecurity Lab (https://neurosecurity.net/) at Brigham Young University.

Brains Can Be Hacked. Why Should You Care?

11:30 am–12:00 pm

Tamara Bonaci, University of Washington

Tamara Bonaci received her PhD from the University of Washington in 2015, focusing on security and privacy issues of emerging biomedical technologies. She is a faculty member at the same university, where she teaches a sequence of security courses. Tamara is also part of a pre-public UW spinoff focusing on biometric methods. Her current research interests are the security and privacy of biomedical systems, human-computer interaction, and electrophysiological signals.

What happens if a surgical robot, used to perform a life-saving medical procedure, gets compromised and is used to harm a patient on an operating table, a surgeon performing a procedure, or both of them? What happens if a brain-computer interface, used either by severely disabled people or by early adopters, gets compromised and starts allowing anyone interested to listen in on its user's preferences, prejudices, or secrets? A lot—and sci-fi literature is full of interesting but rather unsettling examples.

So a better question to ask is: what can we do to prevent these attacks from happening? The answer is: a lot. In this talk I will show that many of the applicable mitigation strategies rely on users' uniqueness in the way they interact with the system.

Lunch in the East Hall
Visit the Enigma Sponsor Showcase!
1:00 pm–3:00 pm

Security Helping Society

Session Chair: Chenxi Wang, Twistlock Inc.

Won't Somebody Please Think of the Journalists?

1:00 pm–1:30 pm

Tom Lowenthal, Staff Technologist, Committee to Protect Journalists

Tom Lowenthal is a technologist and activist committed to combating our contemporary cyberpunk mass-surveillance dystopia. By day, they're the staff technologist for the tech program at the Committee to Protect Journalists. By night, they practice robust self-care, because mental health is important and burnout can be a killer. Tom is also a fellow at Stanford's Center for Internet and Society; they've previously worked at the Tor Project and Mozilla. They're a big believer in individual privacy, self-determination, and practical, usable tools.

When researching security/privacy and developing tools, it is tempting to focus on the abstract technical merits of a problem. In practice, attacks are not graded for difficulty, only success. Why spend the time and expense of a zero-day exploit which bypasses ASLR to achieve remote code execution when spearphishing is so effective? The biggest barriers to widespread computer security are not technical. Wide deployment of privacy-preserving tools and trustworthy computers isn't limited by cutting-edge challenges in cryptography or formal methods. The obstacles are getting everyday tools to implement secure development best-practices, incorporate end-to-end crypto, and offer multi-factor authentication. The problem is fighting an endless public relations war about whether we should have to invent the impossible to create back-doors or design tools which protect their users except when the user is trying to do something bad.

Here's the trick: think and talk about journalists. Talking about journalism as a first-class use case changes the mental calculus. It allows for focus on the real technical challenges of developing safe systems, and bypasses poorly-thought-out objections. Even better, thinking about the needs of journalists as first class users helps make design choices which better protect all users.

Security in the Wild for Low-Profile Activists

1:30 pm–2:00 pm

Zeynep Tufekci, University of North Carolina and The New York Times

Zeynep Tufekci is an associate professor at the University of North Carolina and a contributing opinion writer at the New York Times. Her book, "Twitter and Tear Gas: The Power and Fragility of Networked Protests," will be published by Yale University Press.

Why are many activists more worried about what their mom knows than about being tortured? And what does this mean for their digital communication habits and methods? Will they add an extra five minutes to their log-on routine, even if their lives are on the line? Why not? Why are they likely to trust SMS second-factor authentication rather than a more secure "security key"? How could we change their minds? Why are content policies part of security? Why do so many activists ask me more about how to be public, and how to prove they took the pictures they did, than about hiding their tracks? Why do activists really need to hide their tracks when it comes to personal information, but not necessarily political communication? Can governments defeat end-to-end security with just a rumor? I draw on more than a decade of studying the digital habits and needs of activists in a variety of countries, ranging from developed to fairly repressive ones, to share what I know about security and safety in the networked public sphere, especially for non-high-profile activists and ordinary people.

Why Philanthropy Is Critical for Cybersecurity

2:00 pm–2:30 pm

Eli Sugarman, Program Officer, William and Flora Hewlett Foundation

Eli Sugarman is a Program Officer at the William and Flora Hewlett Foundation and manages the Cyber Initiative. The goal of the Cyber Initiative, a five-year $65 million grant-making effort, is to build a more robust cyber field and improve cybersecurity policy-making worldwide.

Previously, he was a consultant and strategist to private sector and nonprofit leaders internationally. He has served as a Foreign Affairs Officer at the U.S. Department of State, where he focused on international security issues. A graduate of Middlebury College, he also holds a JD from Stanford University Law School.

Philanthropy has a critical role to play in improving cybersecurity worldwide. As new technologies affect every aspect of our lives, the applicable laws, norms and policies—as well as the decision-makers that shape them—are struggling to keep up. High-profile breaches—at Sony Pictures, the Office of Personnel Management, and the Democratic National Committee, among many others—underscore the magnitude of the risks we face and the need for informed cybersecurity policies.

Yet despite its critical importance, funding to develop long-term cybersecurity policy for the benefit of the public is practically non-existent. The funding gap is, moreover, structural. Government and industry are directing significant resources to cybersecurity, but their efforts are and will remain focused on countering immediate threats and triaging new breaches. Unlike government or industry, philanthropy can be a neutral player not motivated by profit, politics, or self-interest.

There is critical work to be done for the safety of the public—work that government cannot and the private sector will not fund. What’s needed is flexible support from institutions that have the latitude to take a long-term, strategic approach—the kind of funding, in other words, that philanthropy is uniquely positioned to provide.

Security for Vulnerable Populations—On the Interplay of Weapons of Influence and Life Domains in Predicting Older Adults' Susceptibility to Spear-Phishing Emails

2:30 pm–3:00 pm

Daniela Oliveira, Associate Professor, University of Florida

Daniela Oliveira is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Florida. She received her BS and MS degrees in Computer Science from the Federal University of Minas Gerais in Brazil. She then earned her PhD in Computer Science from the University of California, Davis. Her main research interest is interdisciplinary computer security, where she employs successful ideas from other fields to make computer systems more secure. Her current research interests include adding uncertainty to OS behavior to increase attackers’ work factor, understanding and addressing spear-phishing susceptibility, cross-layer and personalized security, and understanding developers’ security blind spots. She received a National Science Foundation CAREER Award in 2012 for her innovative research into operating systems’ defense against attacks using virtual machines, and the 2014 NSF Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama. She is a National Academy of Sciences Kavli Frontiers of Science Fellow and a National Academy of Engineering Frontiers of Engineering Symposium alumna. Her research has been sponsored by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), and MIT Lincoln Laboratory.

Older adults constitute a vulnerable population for online social engineering attacks because of age-related cognitive decline, including declining sensitivity to deception. They are the fastest-growing segment of the U.S. population, control over half of its financial wealth, and occupy many positions of power.

Spear-phishing emails are used as a central component of many cyber attacks. Successful emails employ psychological weapons of influence (e.g., scarcity) and relevant life domains (e.g., health) to deceive victims. This talk discusses older adults’ susceptibility to spear-phishing emails as a function of weapon of influence and life domain. We conducted a 21-day study with 83 participants (young and older adults), with data collection at the participants’ homes. Our results show that older users, especially older women, were more susceptible to phishing than younger users. While younger users were susceptible to a few weapons of influence and life domains, older users were susceptible to a variety of them. Further, there was a discrepancy, particularly among older users, between self-reported susceptibility awareness and actual susceptibility. Our results could inform a well-grounded next generation of phishing defenses and training that take user age, weapons of influence, and life domains into account.

Available Media

Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
3:30 pm–5:30 pm

Trustworthy Computing

Session Chairs: Alex Stamos, Facebook, and Kurt Opsahl, Electronic Frontier Foundation

Beyond Warm & Fuzzy: Ethics as a Value Prop

3:30 pm–4:00 pm

Erin Kenneally, Program Manager, Trust & Innovation, U.S. Department of Homeland Security, Cyber Security Division

Erin Kenneally is a Program Manager in the Cyber Security Division for the Homeland Security Advanced Research Projects Agency (HSARPA) at the DHS Science & Technology Directorate. Her portfolio comprises trusted data sharing, privacy, and Information Communication Technology (ICT) ethics, including managing the IMPACT (Information Marketplace for Policy and Analysis of Cyber-risk and Trust) Program. Prior to joining CSD, Kenneally was Founder and CEO of Elchemy, Inc., and served as Technology-Law Specialist at the International Computer Science Institute (ICSI) and the Center for Applied Internet Data Analysis (CAIDA) and Center for Evidence-based Security Research (CESR) at the University of California, San Diego.

Erin is a licensed attorney specializing in information technology law, including privacy technology, data protection, trusted information sharing, technology policy, cybercrime, data ethics, and emergent IT legal risks. She holds Juris Doctor and Master of Forensic Sciences degrees, and is a graduate of Syracuse University and The George Washington University.

Industry is grappling, arguably unsuccessfully, with core sociotechnical tensions between individual and collective rights and interests in privacy, security, innovation, and autonomy. This is manifest in issues associated with smart-X (homes, health wearables, vehicles), predictive services, and precision measurements, for example. Because these issues force new applications and interpretations of our traditional social moorings—law, economics, and security—we need to rely on ethics as a common ordering force to address the challenges facing industry in achieving customer privacy and autonomy, business innovation and profit, and public and private cyber security. While the concept of corporate social responsibility is not new, the notion of collective industry ethics is shallow at best. This talk aims to advance the dialogue and collective action by discussing some of the core elements of ethics on which industry doesn't compete but without which it can individually fail.

Available Media

Legislative Engineering: Design Privacy Laws, Don't Just Draft Them

4:00 pm–4:30 pm

Chris Conley, Policy Attorney, ACLU of Northern California

Chris is a policy attorney at the ACLU of Northern California, focused on the intersection of civil liberties (including but not limited to privacy and free speech) and emerging technologies. He has a background in computer science and tries to apply an engineering approach to complex technological issues through legislation, litigation, public education, and other forms of advocacy. His current focus is on electronic searches and surveillance by state and local entities.

Law is code, but too much of it is bad code. And just like bad software, bad legal code creates a host of problems, from hidden (by accident or design) biases to vague language that leads to unexpected and even manipulable results. Nowhere is this problem more acute than in the realm of laws addressing modern technology, many of which are drafted by legislators or advocates who fail to understand the present capacities and limitations of the technology, let alone its future potential.

This talk will discuss two software engineering techniques, component-based architecture and modeling, as potential ways to improve future technology legislation. Carefully designed and vetted existing legislation, such as the California Electronic Communications Privacy Act, can provide future legislators with "libraries"—from individual definitions to full model statutes—to incorporate into future work without adding complexity and inconsistency by reinventing the wheel. And developing robust models of the interplay among existing laws, such as the various statutes that govern search warrants, can help lawmakers at least conduct thought experiments on the effects of proposed changes. Can using techniques like these become the norm rather than the exception?
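
To make the "libraries" idea concrete, here is a toy sketch (all terms, definition texts, and the bill name are invented for illustration) of how vetted statutory definitions might be imported into new bills the way software components are imported into programs:

```python
# Toy sketch of "legislative libraries": reusable, vetted definition
# components are imported into new bills instead of re-drafted from
# scratch. All terms, texts, and names here are invented illustrations.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Definition:
    term: str
    text: str
    source: str            # the vetted statute the definition came from

# A hypothetical library of definitions vetted in prior legislation.
LIBRARY = {
    "electronic communication": Definition(
        term="electronic communication",
        text="any transfer of signs, signals, or data by electronic means",
        source="model definitions v1"),
    "government entity": Definition(
        term="government entity",
        text="any state or local agency, or a person acting on its behalf",
        source="model definitions v1"),
}

@dataclass
class Bill:
    title: str
    definitions: list[Definition] = field(default_factory=list)

    def imports(self, term: str) -> Bill:
        """Reuse a vetted definition rather than drafting a new variant."""
        self.definitions.append(LIBRARY[term])
        return self

bill = Bill("Hypothetical Drone Surveillance Act").imports("government entity")
print([(d.term, d.source) for d in bill.definitions])
```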

Available Media

The Paper Ballot Is Not Enough

4:30 pm–5:00 pm

Ben Adida, VP Engineering, Clever

Ben Adida is a software engineer with a passion for using technology to empower individuals. He has extensive academic and industry experience in identity and payments, voting technology, health IT, security and privacy, and Web architecture.

Ben is VP Engineering at Clever, which securely connects more than 50% of US classrooms to educational software. In his free time, Ben works on Helios, a publicly available, truly verifiable, online voting system. Ben holds a PhD in Cryptography and Information Security from MIT.

Over the last 15 years, election integrity advocates have pushed the paper ballot as the unifying solution to various election audit problems. As long as we have the paper ballot, we thought, we can always recount. The US election of 2016 has given us ample evidence that a paper ballot recount is a lot less likely to occur than we had imagined. We cannot continue to rely solely on paper recounts to ensure the integrity of our elections.

Instead, we should build election systems that inherently provide evidence of their integrity, not via a post-hoc audit process, but by virtue of running the election itself. We should be able to challenge election results not because an election is close, but because there is hard, objective evidence of questionable integrity. We'll focus on end-to-end cryptographic techniques and how they can work in practice, and we'll also touch on the importance of voter registration and what we can do to increase our confidence in the maintenance of voter rolls.
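
As a flavor of those end-to-end techniques, the toy sketch below implements one ingredient, an additively homomorphic tally (exponential ElGamal): anyone can recompute the product of the posted ciphertexts and check the announced total against it. The parameters are tiny and deliberately insecure, and real systems such as Helios add zero-knowledge proofs that each ballot encrypts 0 or 1 and split the decryption key among trustees.

```python
# Toy additively homomorphic tally (exponential ElGamal). Insecure,
# illustration-sized parameters; real systems add proofs that each
# ballot encrypts 0 or 1 and distribute the secret key among trustees.
import random

p = 2 ** 127 - 1                 # a Mersenne prime; NOT a safe real-world choice
g = 3

sk = random.randrange(2, p - 2)  # election secret key (held by trustees)
pk = pow(g, sk, p)               # public key, posted on the bulletin board

def encrypt(vote: int):
    """Encrypt a 0/1 vote as (g^r, g^vote * pk^r)."""
    r = random.randrange(2, p - 2)
    return pow(g, r, p), pow(g, vote, p) * pow(pk, r, p) % p

def add(c1, c2):
    """Multiplying ciphertexts adds the underlying votes."""
    return c1[0] * c2[0] % p, c1[1] * c2[1] % p

ballots = [encrypt(v) for v in (1, 0, 1, 1, 0)]   # posted publicly
total = ballots[0]
for c in ballots[1:]:
    total = add(total, c)

plain = total[1] * pow(total[0], -sk, p) % p       # decrypts to g^(sum of votes)
tally = next(t for t in range(len(ballots) + 1) if pow(g, t, p) == plain)
print("tally =", tally)                            # 3, checkable by anyone
```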

Available Media

What Cybersecurity Can Learn from the Secret Service

5:00 pm–5:30 pm

Nathaniel Gleicher, Head of Cybersecurity Strategy, Illumio

Nathaniel Gleicher is the Head of Cybersecurity Strategy at Illumio, and is trained as a computer scientist and a lawyer. Nathaniel is a regular speaker at leading industry events, and his writing has appeared in industry publications, the popular press, and academic journals. Prior to Illumio, Nathaniel investigated and prosecuted domestic and international cybercrime at the U.S. Department of Justice and served as Director for Cybersecurity Policy on the National Security Council at the White House. He has also taught computer programming, designed and developed custom e-commerce and database solutions, and built and secured computer networks.

It’s easy to break into networks, and it always will be, because a bad actor only has to be right once to get inside. It should be hard to stay hidden once they're in, because each move could expose them. But today it isn't. Intruders spent more than a year inside the DNC and six months inside OPM. The 2013 Yahoo hack wasn't discovered until stolen data appeared for sale in 2016. We've spent the last decade trying to shift our defenses to the interior, but all the defense in depth in the world won't help us if we don't solve this puzzle. The Secret Service offers a compelling model. There's a reason that jumping the wrought iron fence at the White House is easy, but avoiding capture once inside is almost impossible. The Secret Service accepts porous boundaries and has used control of their environment to defend the President for more than a century. This talk will examine how the Secret Service protects the President, how their approach can improve cybersecurity, and what we need to do to get there.

Available Media

5:30 pm–7:00 pm
Conference Reception
East Hall, Sponsored by Netflix
7:00 pm–10:00 pm
Birds-of-a-Feather Sessions
Birds-of-a-Feather sessions (BoFs) are informal gatherings of peers interested in discussing a particular topic and are a great way to maximize the value of your time at a conference. You can always sign up to lead a BoF on site if you're inspired by something you heard in one of the sessions or in the hallway track and want to talk further. See the BoFs page for more information.

Wednesday, February 1, 2017

Continental Breakfast in the Exhibit Foyer
8:55 am–9:00 am
9:00 am–10:00 am

Spam, Abuse, and Criminal Profit

Session Chair: Damon McCoy, New York University

How WhatsApp Reduced Spam while Launching End-to-End Encryption

9:00 am–9:30 am

Matt Jones, Software Engineer, WhatsApp

Matt Jones leads the anti-spam engineering team at WhatsApp, focusing on scalable ways to detect and prevent abuse for 1 billion users. Prior to that, he spent seven years at Facebook working on anti-abuse systems, from reverse-engineering malware and building high-throughput classification systems to defending new products and leading teams in California and London. Along the way, he's protected against malicious ISPs, compromised telephone carriers, and numerous spammers. Before Facebook, Matt earned his BS in Computer Science at Stanford University.

End-to-end encryption, which protects message content so that only the sender and recipient can access it, is gaining popularity in messaging applications. At the same time, there is some concern about potential deleterious effects on spam detection systems. At WhatsApp we have successfully launched such "e2e" encryption for over 1 billion people—while also reducing the amount of spam they receive. This talk will discuss techniques we've found successful for preventing spam without access to message content, and some of the challenges we faced along the way. It should help dispel concerns that e2e encryption necessarily means reduced effectiveness of spam detection.
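
As a minimal sketch of what content-blind detection can look like, the toy scorer below uses only sender-side behavioral metadata; every feature name, weight, and threshold is a hypothetical illustration, not a description of WhatsApp's actual system:

```python
# Minimal sketch of content-blind spam scoring from sender behavior.
# All features, weights, and thresholds are hypothetical illustrations;
# the point is that no message content is needed, so the approach is
# compatible with end-to-end encryption.
from dataclasses import dataclass

@dataclass
class SenderStats:
    msgs_last_hour: int       # sending rate
    unique_recipients: int    # fan-out
    replies_received: int     # signs of two-way conversation
    pct_non_contacts: float   # share of recipients with no prior relationship

def spam_score(s: SenderStats) -> float:
    """Combine behavioral signals into a score in [0, 1]."""
    rate = min(s.msgs_last_hour / 500.0, 1.0)       # bursty, automated sending
    fanout = min(s.unique_recipients / 200.0, 1.0)  # wide but shallow reach
    one_way = 1.0 - min(s.replies_received / max(s.unique_recipients, 1), 1.0)
    return 0.3 * rate + 0.3 * fanout + 0.2 * one_way + 0.2 * s.pct_non_contacts

bulk_sender = SenderStats(msgs_last_hour=450, unique_recipients=300,
                          replies_received=2, pct_non_contacts=0.95)
print(f"score = {spam_score(bulk_sender):.2f}")     # high => rate-limit or verify
```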

Available Media

Drawing the Foul: Operation of a DDoS Honeypot

9:30 am–10:00 am

Damian Menscher, Security Reliability Engineer, Google

Damian Menscher is responsible for DDoS defense at Google, where he studied hundreds of DDoS attacks over the past decade. He uses his experience to design automated defenses for common attacks. Prior to joining Google, Damian completed a PhD in computational particle physics.

Botnet growth allows DDoS attacks to outpace Moore's Law, and novel attack styles create additional challenges. Meanwhile, most defenders share limited technical information about attacks, often limiting reports to exaggerated bandwidth estimates. We propose gathering information by operating a DDoS honeypot -- a victim site likely to attract interesting attacks -- and present our findings from operating KrebsOnSecurity.com.

Available Media

Break with Refreshments in the East Hall
Visit the Enigma Sponsor Showcase!
10:30 am–12:00 pm

Usability and Privacy

Session Chair: Eric Rozier, Iowa State University

Putting Privacy Notices to the Test

10:30 am–11:00 am

Lorrie Faith Cranor, Professor of Computer Science and of Engineering and Public Policy, Carnegie Mellon University

Lorrie Faith Cranor is a Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University where she is director of the CyLab Usable Privacy and Security Laboratory (CUPS) and co-director of the MSIT-Privacy Engineering masters program. In 2016 she served as Chief Technologist at the US Federal Trade Commission. She is also a co-founder of Wombat Security Technologies, Inc. She is a fellow of both ACM and IEEE. She practices yoga, plays soccer, and runs after her three children.

We all know that hardly anybody ever reads privacy notices or security warnings, and when people try to read them, they tend to be long and extremely difficult to understand. In this talk I will start by discussing why privacy notices are important, explain why they are largely failing to inform people, and discuss some of the approaches companies and researchers are taking in an attempt to make privacy notices more useful. Then I’ll present a theory about the cognitive processes that take place when someone encounters a privacy notice or security warning. Finally, I will share several examples in which my students conducted user studies to test the effectiveness of privacy notices or security warnings. I will show some examples of notices that don’t seem to be very effective, as well as some examples of how notices can be improved through an iterative design and testing process.

Available Media

Understanding and Designing for End Users' Security Expectations

11:00 am–11:30 am

Franziska Roesner, University of Washington

Franziska (Franzi) Roesner is an Assistant Professor in Computer Science and Engineering at the University of Washington, where she co-directs the Security and Privacy Research Lab. Her research focuses on understanding and improving computer security and privacy for end users of existing and emerging technologies, including the web, smartphones, and emerging augmented reality and IoT platforms. Her work on application permissions in modern operating systems received the Best Practical Paper Award at the IEEE Symposium on Security and Privacy, her early work on security and privacy for augmented reality was featured on the cover of Communications of the ACM, and her defense against tracking by social media widgets on the web was incorporated into the Electronic Frontier Foundation’s Privacy Badger tool. She received her PhD from the University of Washington in 2014 and her BS from the University of Texas at Austin in 2008.

As technology designers, we must do two things to build systems that are both usable and meaningfully secure. First, we must work to understand the actual security and privacy needs and expectations of our target user groups. Second, we must think not only about how to communicate better with users superficially, but about how to (re)design our systems more fundamentally to better match their needs and expectations. In this talk, I will describe a set of studies illustrating this approach, bringing together usable security and system design. Specifically, I will touch on lessons from our work studying and designing for sensitive communications among journalists/sources and lawyers/clients.

Available Media

Privacy and Security Practices of Individuals Coping with Intimate Partner Abuse

11:30 am–12:00 pm

Sunny Consolvo, Google

Sunny Consolvo leads Google's Security & Privacy User Experience team. Sunny and her team spend most of their time focusing on usable privacy and security. Sunny received her Ph.D. in Information Science from the University of Washington. She is a member of the Editorial Board for IEEE Pervasive Computing and the PACM on Interactive, Mobile, Wearable, and Ubiquitous Technologies (IMWUT). She became a Certified Information Privacy Professional (US) in 2013.

This talk will present results of a qualitative study of the digital privacy and security practices and challenges of survivors of intimate partner abuse (IPA). We propose a framework for organizing survivors' technology practices and challenges into three phases: physical control, escape, and life apart. This framework is intended to help technology creators consider how survivors of IPA can leverage new and existing technologies. Overall, our results suggest that the usability of and control over privacy and security functions should be or continue to be high priorities for technology creators seeking ways to better support survivors of IPA.

Lunch in the East Hall
1:00 pm–3:00 pm

Rise of AI in Security and Privacy

Session Chair: Yan Shoshitaishvili, University of California, Santa Barbara

Deviation from the Norms: The Necessity and Challenges of a Norms-based Approach to Digital Offense

1:00 pm–1:30 pm

Andrea Little Limbago, Chief Social Scientist, Endgame

Dr. Andrea Little Limbago is the Chief Social Scientist at Endgame, directing and contributing to the company’s technical content and blog. She has a background in quantitative social science, and writes extensively on the geo-politics of the cyber domain, policy, and data science. She has previously worked in academia (NYU) and government (Joint Warfare Analysis Center). Andrea holds a PhD in Political Science from the University of Colorado at Boulder, where she taught a variety of international relations and foreign policy courses, and a Bachelor’s degree from Bowdoin College.

One of the few areas in which the tech and policy communities generally agree is the need for norms to guide acceptable behavior in the digital domain. Given the increasingly vocal demand for—and noted cases of—greater retaliatory capabilities within the private sector, this is a necessary discussion and directly impacts security and privacy. With everything from cyber militias to full-fledged hacking back capabilities entering the discourse, there is a significant need for policy innovation and creativity that can set the foundation for the broader establishment of global norms. However, there are significant hurdles to norm implementation and a clear lack of comprehension of those factors that impact norm diffusion, including the technical difficulty of verification, outdated policies, as well as collective action problems and expected utility challenges. Despite these hurdles, the US must continue to attempt to shape global digital norms. Otherwise, other states will fill that void. And to get it right, tech community collaboration is required to avoid another Wassenaar situation. This presentation will discuss the opportunities and challenges of norm diffusion, while embedding the discussion in the recent discourse on offense-based behavior, which has significant implications for both security and privacy.

Available Media

FeatureSmith: Learning to Detect Malware by Mining the Security Literature

1:30 pm–2:00 pm

Tudor Dumitras, University of Maryland, College Park

Tudor Dumitraș is an Assistant Professor in the Electrical and Computer Engineering Department at the University of Maryland, College Park. His research focuses on Big Data approaches to problems in system security and dependability. In his previous role at Symantec Research Labs, he built the Worldwide Intelligence Network Environment (WINE)—a platform for experimenting with Big Data techniques. He received an Honorable Mention in the NSA competition for the Best Scientific Cybersecurity Paper of 2012. He also received the 2011 A. G. Jordan Award from the ECE Department at Carnegie Mellon University, the 2009 John Vlissides Award from ACM SIGPLAN, and the Best Paper Award at ASP-DAC '03. Tudor holds a PhD degree from Carnegie Mellon University.

Companies facing rampant attacks and data breaches have started turning to artificial intelligence techniques, such as machine learning, for security tasks. A machine learning classifier automatically learns models of malicious activity from a set of known-benign and known-malicious observations, without the need for a precise description of the activity prepared in advance. However, the effectiveness of these techniques primarily depends on the feature engineering process, which is usually a manual task based on human knowledge and intuition. Can we automate this process? Can we build an intelligent system that not only learns from examples, but can also help us build other intelligent systems?

We developed a system, called FeatureSmith, that engineers features for malware detectors by synthesizing the knowledge described in thousands of research papers. As a demonstration, we trained a machine learning classifier with automatically engineered features for detecting Android malware and we achieved a performance comparable to that of a state-of-the-art detector for Android malware, which uses manually engineered features. In addition, FeatureSmith can suggest informative features that are absent from the manually engineered set and can link the features generated to human-understandable concepts that describe malware behaviors.
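
To illustrate the core idea at toy scale (FeatureSmith itself builds a semantic network over thousands of papers), the sketch below mines verb-object behavior phrases from a few invented sentences that mention malware; the sentences and the pattern are illustrative assumptions, not the actual system:

```python
# Toy sketch of FeatureSmith's core idea: mine candidate detection
# features from security-paper text. The real system builds a semantic
# network over thousands of papers; this illustration just counts
# verb-object phrases in invented sentences that mention malware.
import re
from collections import Counter

SENTENCES = [
    "The malware sends SMS messages to premium-rate numbers.",
    "Samples of this malware read the device IMEI at startup.",
    "Benign apps rarely send SMS messages without user interaction.",
    "The malware reads the device IMEI to fingerprint victims.",
]

# Hypothetical pattern: a behavior verb followed by up to a three-word object.
PHRASE = re.compile(r"\b(sends?|reads?|writes?|leaks?)\s+((?:[\w-]+ ?){1,3})")

def mine_features(sentences):
    counts = Counter()
    for s in sentences:
        if "malware" not in s.lower():
            continue                      # keep only malicious-behavior sentences
        for verb, obj in PHRASE.findall(s):
            counts[f"{verb.rstrip('s')}:{obj.strip()}"] += 1
    return counts

if __name__ == "__main__":
    for feature, n in mine_features(SENTENCES).most_common():
        print(n, feature)   # e.g. "2 read:the device IMEI" -> candidate feature
```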

Available Media

Classifiers under Attack

2:00 pm–2:30 pm

David Evans, Professor, University of Virginia

David Evans is a Professor of Computer Science at the University of Virginia and leader of the Security Research Group. He is the author of an open computer science textbook, a children's book on combinatorics and computability, and teacher of one of the world's most popular MOOCs. He won the Commonwealth of Virginia’s Outstanding Faculty Award and was Program Co-Chair for the 31st (2009) and 32nd (2010) IEEE Symposia on Security and Privacy. He has SB, SM and PhD degrees in Computer Science from MIT and has been a faculty member at the University of Virginia since 1999.

Machine learning classifiers are widely used in security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can often be thwarted by motivated adversaries who construct evasive variants that are misclassified as benign. The main reason for this is that classifiers are trained on samples collected from previous attacks, which often differ from benign samples in superficial and easily modified ways. Further, many machine learning techniques, including deep neural networks, are inherently fragile. In this talk, I’ll highlight the reasons most classifiers can be evaded by motivated adversaries and demonstrate some successful evasion techniques, including ones that can be fully automated. Then, I’ll talk about methods that could be used to make classifiers less vulnerable to evasion and to evaluate the robustness of deployed classifiers in the presence of adversaries.
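
As a minimal illustration of why superficial features invite evasion, consider a toy linear detector (the weights, features, and numbers are invented for this sketch). Because a linear model's gradient with respect to its input is just the weight vector, an attacker can flip the decision by nudging whichever easily modified feature carries the most weight:

```python
# Toy evasion of a linear detector: small edits to superficial features
# flip the decision without changing malicious behavior. The model and
# numbers are invented for illustration.
import numpy as np

# Hypothetical detector: flag as malicious when w . x + b > 0, where
# x = [suspicious_api_calls, packed_sections] (both easy to alter).
w = np.array([0.8, 0.6])
b = -1.0

def is_flagged(x: np.ndarray) -> bool:
    return float(w @ x + b) > 0

malware = np.array([2.0, 1.0])
print(is_flagged(malware))        # True: 0.8*2 + 0.6*1 - 1 = 1.2 > 0

# The gradient of the score w.r.t. the input is just w, so the attacker
# reduces the heaviest-weighted, easiest-to-change feature (e.g., by
# splitting suspicious API calls across innocuous-looking helpers).
evasive = malware - np.array([1.6, 0.0])
print(is_flagged(evasive))        # False: 0.8*0.4 + 0.6*1 - 1 = -0.08
```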

Available Media

Adversarial Examples in Machine Learning

2:30 pm–3:00 pm

Nicolas Papernot, Google PhD Fellow, The Pennsylvania State University

Nicolas Papernot is a PhD student in Computer Science and Engineering working with Dr. Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security and deep learning. He is supported by a Google PhD Fellowship in Security. In 2016, he received his MS in Computer Science and Engineering from the Pennsylvania State University and his MS in Engineering Sciences from the École Centrale de Lyon.

Machine learning models, including deep neural networks, were shown to be vulnerable to adversarial examples—subtly (and often humanly indistinguishably) modified malicious inputs crafted to compromise the integrity of their outputs. Adversarial examples thus enable adversaries to manipulate system behaviors. Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software.

In fact, the feasibility of misclassification attacks based on adversarial examples has been shown for image, text, and malware classifiers. Furthermore, adversarial examples that affect one model often affect another model, even if the two models are very different. This effectively enables attackers to target remotely hosted victim classifiers with very little adversarial knowledge.

This talk covers adversarial example crafting algorithms operating under varying threat models and application domains, as well as defenses proposed to mitigate such attacks. A practical tutorial will be given throughout the talk, allowing participants to familiarize themselves with adversarial example crafting.
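
As a taste of the tutorial material, here is a minimal numpy sketch of the fast gradient sign method (FGSM), one of the best-known crafting algorithms; the "model" is a stand-in logistic regression with made-up weights rather than a trained network:

```python
# Minimal FGSM (fast gradient sign method) sketch against a logistic
# regression stand-in. Weights and input are random placeholders; the
# same one-step recipe, x' = x + eps * sign(grad_x loss), applies to
# deep networks.
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.0     # "trained" model parameters (made up)
x, y = rng.normal(size=16), 1.0     # input with true label 1

def loss_grad_x(x: np.ndarray, y: float) -> np.ndarray:
    """Gradient of the cross-entropy loss w.r.t. the input (not the weights)."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

eps = 0.25                          # perturbation budget per feature
x_adv = x + eps * np.sign(loss_grad_x(x, y))

print("clean prediction:      ", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```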

Available Media

Break with Refreshments in the East Hall
3:30 pm–5:30 pm

Security in .gov

Session Chair: Eric Mill, 18F, GSA

Cyber Security: A DARPA Perspective

3:30 pm–4:30 pm

John Launchbury, Director, Information Innovation Office (I2O), DARPA

Dr. John Launchbury is the Director of the Information Innovation Office (I2O) at DARPA. In this role he develops strategy and works with I2O program managers to develop new programs and transition program products.

Before joining DARPA, Dr. Launchbury was chief scientist of Galois, Inc., which he founded in 1999 to address challenges in information assurance through the application of functional programming and formal methods. Under his leadership, the company experienced strong growth and was recognized for thought leadership in high-assurance technology development.

Prior to founding Galois, Dr. Launchbury was a full professor at the OGI School of Science and Engineering at OHSU (Oregon). He earned awards for outstanding teaching and gained international recognition for his work on the analysis and semantics of programming languages, the Haskell programming language in particular.

Dr. Launchbury received first-class honors in mathematics from Oxford University, holds a Ph.D. in computing science from the University of Glasgow and won the British Computer Society's distinguished dissertation prize. In 2010, Dr. Launchbury was inducted as a Fellow of the Association for Computing Machinery (ACM).

Cyber threats against our information systems have grown in sophistication and number. Yet progress in the cyber security of best-of-breed systems has been significant over the last few years, giving us hope that we are no longer facing an impossible task. This talk will present an overview of the state of current cyber systems and the anatomy of a cyber attack, and will discuss the cyber R&D portfolio in DARPA I2O.

Available Media

National Scale Cyber Security

3:30 pm–4:00 pm

Dr Ian Levy, National Cyber Security Centre, UK

Ian Levy is the Technical Director of the UK’s new National Cyber Security Centre, and was previously Technical Director for Cyber Security and Resilience at GCHQ. He is the UK Government’s "chief cyber security geek." Together, the NCSC and the new National Cyber Security Strategy mark a fundamental change in the UK’s approach to cybersecurity. That approach isn’t just about protecting Government, critical infrastructure, or big industry. It’s a national-scale approach to cybersecurity, bringing unique challenges but, more importantly, opportunities to do things differently.

In October 2016, the UK National Cyber Security Centre was formally launched, charged with better defending the UK at large from all forms of cyberattack. The National Cyber Security Strategy was launched shortly afterwards and—strangely for a government strategy—talked about some of the national scale technical things that the Centre would lead. The NCSC is determined to get objective evidence and data to better inform cybersecurity decisions and investments, as well as making people’s lives better by reducing both the number of attacks they see and the impact of those that succeed. Ian will talk about some of these interventions and what the NCSC has learned on the way—all supported by real data and analysis.

Available Media

Green Sprouts: Encouraging Signs of Life from the Department of Defense’s 2016 Security Strategy

4:30 pm–5:00 pm

Lisa Wiswell, Bureaucracy Hacker, Defense Digital Service

Lisa Wiswell has worked for the better part of the past decade with the Department of Defense to shift its culture to interact more positively with the hacker community—and to applaud them for their impactful work. She presently works at the Defense Digital Service hacking the Department of Defense bureaucracy and its antiquated and restrictive policies and processes. Previously, she was an Obama Administration appointee, supporting senior DoD leaders by formulating and implementing strategies to improve DoD’s ability to operate in digital space, and worked at the Defense Advanced Research Projects Agency overseeing a portfolio of cyberwarfare initiatives.

Not that long ago, the Department of Defense started to evolve from security through obscurity to more open practices that welcome contributions from the outside world. A new “Hack the Pentagon” bug bounty pilot proved that outside hackers could help secure DoD systems. The DoD then launched a vulnerability disclosure policy so that researchers could point out general security issues. And a recent “Hack the Army” contest has further proven that bug bounties are an effective tool in the DoD’s security toolkit.

This talk will discuss 2016’s rocky road to get to where we are today. Though there is still work to be done, there are signs of life to report. Progress at the DoD will help legitimize practices for other sectors and finally provide more clarity on the gray areas of the Computer Fraud & Abuse Act. Security researchers and prosecutors alike will have a better understanding of what is and isn’t legal in the hacking realm.

Available Media

Understanding and Securing DoD Weapon Systems

5:00 pm–5:30 pm

Lt. Col. Tim Booher, DIUx and CYBERCOM

Mr. Timothy Booher joined DARPA as a program manager in February 2014. His principal R&D focus is on cyberspace technology.

Prior to joining DARPA, Mr. Booher was the deputy technical director for the Air Force Red Team at the Air Force Rapid Capabilities Office and the deputy director for technical policy integration for special programs at the Office of the Undersecretary of Defense for Policy. He has led assessments of advanced technology including sensors for intelligence surveillance and reconnaissance, aircraft survivability, directed energy and cyberspace operations. He has also held positions at Headquarters U.S. Air Force, the Air Force Research Laboratory, the Air Force’s 16th Special Operations Squadron and the Active Materials and Structures Laboratory at the Massachusetts Institute of Technology (MIT).

Mr. Booher holds a Master of Science degree in operations research from the Air Force Institute of Technology and a Bachelor of Science degree in aeronautical and astronautical engineering from MIT.

DARPA Cyber Grand Challenge (CGC) participants demonstrated recent advances in cyber autonomy, including automatic patching and vulnerability discovery. Relying on teams of human hackers doesn't scale, yields no general conclusions, and carries high costs with no guarantee of increased weapon-system survivability. DARPA, DIUx, CYBERCOM, the Air Force, and the Navy are investing in automated tools developed for CGC for application in embedded, real-time, and heterogeneous mission systems. In addition, they plan to demonstrate a suite of advanced technologies, including high-fidelity multi-resolution modeling, high-integrity processing, high-fidelity situational awareness, systems hardening, and data provenance.

Available Media