All the times listed below are in Pacific Standard Time (PST).
Monday, February 1, 2021
7:00 am–7:15 am
Opening Remarks, Day 1
Program Co-Chairs: Lea Kissner, Apple, and Daniela Oliveira, University of Florida
Session Chair: Joseph Lorenzo Hall, Internet Society
Scott Shapiro, Yale University
As cyber-attacks have become increasingly common (e.g., Stuxnet, the DNC hack, NotPetya), lawyers have struggled to determine how these 21st-century weapons fit into the traditional laws of war. Despite the enormous amount of attention paid to cyber-conflict over the past decade, the legal community is more confused than ever. Put bluntly, we are still very far from agreeing on whether cyberwar is legal and, if so, under which circumstances. In this talk, I try to answer these questions by situating current debates within the long history of international law. The modern laws of war have changed dramatically from their initial formulations in the seventeenth century. By seeing how the right of war has evolved over the last four hundred years, I show how cyber conflicts fit into this evolution and the conditions under which states may use cyberweapons to disrupt the digital networks of their adversaries.
Scott Shapiro is the Charles F. Southmayd Professor of Law and Professor of Philosophy at Yale University. His areas of interest include jurisprudence, international law, criminal law and cybersecurity. He is the author of Legality (2011), The Internationalists (2017) (with Oona Hathaway) and editor of The Oxford Handbook of Jurisprudence and Philosophy of Law (2002) (with Jules Coleman). He earned B.A. and Ph.D. degrees in philosophy from Columbia University and a J.D. from Yale Law School. Shapiro is an editor of Legal Theory and the Stanford Encyclopedia of Philosophy. He is also the founding director of the Yale CyberSecurity Lab, which provides cutting-edge cybersecurity and information technology teaching facilities. His next book, entitled Insecurity, details the history and technology of internet hacking.
Jack Cable, Security Researcher and Student, Stanford University
Foreign adversaries interfered in America's 2016 election, and we can expect to see similar activities in 2020. War waged via attacks on state voter registration databases, hack and leak operations targeting political campaigns, and disinformation campaigns against the American people proved that America's elections were not prepared for nation-state adversaries.
The security community, together with election officials, policymakers, and social scientists, can help confront our nation's vulnerabilities. Just as research and advocacy led to increased usage of voter-verified paper ballots and risk-limiting audits across the nation, so too can we help build resilience across other election technologies and among the American people. By addressing the full-stack problem of election security, from the technology used to register voters, cast ballots, and report results, to the infrastructure of political campaigns, and, most importantly, to the people who ultimately judge an election's validity, we can hope to preserve the sanctity of our electoral processes.
Jack Cable is a security researcher and an undergraduate student at Stanford University, currently working as an election security technical advisor at the Cybersecurity and Infrastructure Security Agency (CISA) as well as a research assistant tracking disinformation at the Stanford Internet Observatory. Jack is a top ranked bug bounty hacker, having reported over 350 vulnerabilities to hundreds of companies. Prior to CISA, Jack worked for the Defense Digital Service on the Hack the Pentagon program. Jack also serves as an advisory committee member for the IT-ISAC, aiding election vendors in building bridges with security researchers by establishing vulnerability disclosure policies. At Stanford, Jack studies computer science and launched Stanford’s bug bounty program, one of the first in higher education.
Sunny Consolvo, Google
"Security and politics should be separate... If you're a candidate, you should win or lose on your best day, based on who you are. Not because your email got popped and posted online by a [nation-state cybersecurity team]." –A study participant
Data breaches and hacking in the 2016 election cycle increased awareness of security vulnerabilities among political workers, but campaigns have not yet adequately adapted to the threats. Improving campaign security is a nonpartisan issue that is an important part of preserving democratic institutions. We will share research we conducted to understand the security practices, challenges, and perceptions of people involved in politics, with a focus on U.S. campaigns across the political spectrum. We will provide recommendations to help the audience learn how to help improve the security of campaigns and help protect democracy.
Sunny Consolvo is a researcher at Google where her focus is on security, privacy, and anti-abuse topics. She led Google's Security & Privacy UX team for several years. In recent work, she has investigated the security and privacy practices of people involved with political campaigns, survivors of intimate partner abuse, and people facing financial insecurity, and has explored the technology experiences of women in South Asia. Prior to Google, Sunny worked at Intel Labs Seattle where she investigated how to use mobile technologies to encourage wellness and to help people be more aware of the privacy implications of sensing and inference systems.
Sunny was recently elected to the SIGCHI Academy. She has received a test-of-time award, three 10-year impact awards, and many best paper awards. She has been featured in the press, including Wired and Scientific American Mind. Sunny received her Ph.D. from the University of Washington. She is a member of the Editorial Board for IEEE Pervasive Computing, the Steering Committee for the Symposium on Usable Privacy and Security, and she became a Certified Information Privacy Professional (US) in 2013.
Dr. Christian Folini, Program Chair Swiss Cyber Storm Conference, OWASP CRS Co-Lead
The Swiss experience with online voting serves as a typical example of the iterative development of highly critical IT systems, and of the growing involvement of scientists as a necessary step for a government that is willing to learn from past mistakes.
Switzerland has been experimenting with online voting for over 15 years. Several generations of electronic voting systems have been implemented and almost all of them died along the way because of their profound security problems or when the money ran out.
In 2019, Swiss Post published the source code of its online voting system, the last system that was still in the race. Several highly critical findings were discovered in a matter of weeks and the system was stopped right before the national elections.
In 2020, the government rebooted the process and invited two dozen international researchers into an intense dialogue that lasted several months. The resulting report is the basis for the renewed regulation that will pave the way forward in 2021.
Christian Folini is a security engineer and open source enthusiast. He holds a Ph.D. in medieval history and enjoys defending castles across Europe. Unfortunately, defending medieval castles is not a big business anymore and he turned to defending web servers, which he finds equally challenging. He brings more than ten years of experience with ModSecurity configuration in high-security environments, DDoS defense, and threat modeling.
Christian Folini is the author of the second edition of the ModSecurity Handbook and the best-known teacher on the subject. He co-leads the OWASP ModSecurity Core Rule Set project and serves as the program chair of the "Swiss Cyber Storm" conference, the premier security conference in Switzerland.
He stood in the first line of defenders when the Swiss Post online voting system was put up for a public intrusion test in 2019. The following year, he moderated a dialogue between the Swiss government and a group of two dozen researchers on online voting. This dialogue resulted in a report of several hundred pages.
Christian Folini is a frequent speaker at conferences. When speaking, he tries to use his background in the humanities to explain hardcore technical topics to audiences of different backgrounds.
9:15 am–9:30 am
9:30 am–11:00 am
Session Chair: Rodrigo Branco, Amazon
Nicole Fern, Tortuga Logic
Hardware is the trust anchor for many systems ranging from IoT devices to datacenter infrastructure, but hardware is a double-edged sword. Hardware can provide security services to protect devices, but hardware features often open up systems to attack. This talk will provide an overview of system-level threats hardware architectures are designed to address along with real-world examples of how hardware features added to increase performance and configurability leave systems vulnerable. Insight into the challenges chip vendors and system designers face when trying to detect and prevent hardware vulnerabilities will be provided along with an overview of methodologies and solutions that provide promise in this space.
Dr. Nicole Fern is a Senior Hardware Security Engineer at Tortuga Logic whose primary role is providing security expertise and defining future features and applications for the product line. Before joining Tortuga Logic she was a postdoc at UC Santa Barbara. Her research focused on the topics of hardware verification and security.
Sanghyun Hong, University of Maryland, College Park and Maryland Cybersecurity Center (MC2)
The widespread adoption of machine learning (ML) incentivizes potential adversaries who wish to manipulate systems that include ML components. In consequence, research in the field of adversarial machine learning studies attack surfaces such as training data with bad samples (data poisoning) and predictions manipulated by imperceptible perturbations (adversarial examples). However, most of the prior work focuses on the "soundness of mind" by looking at ML as a mathematical concept, and it overlooks the security threats caused by practical hardware attacks such as fault injection or side-channel attacks.
In this talk, we will propose a new perspective: we view ML as a computational tool running on hardware, a potentially "vulnerable body". We will introduce the emerging research on the vulnerabilities of ML models to practical hardware attacks. These attacks can cause unexpected damage, and ultimately, they shed new light on the dangers of hardware-based attack vectors.
First, we review the impact of fault-injection attacks. We show that, by flipping a single bit in the memory representation of a deep neural network (DNN), we can degrade the prediction accuracy by 90% or more. Contrary to the conventional wisdom in the ML community (that DNNs are resilient to parameter perturbations), we find that this vulnerability is widespread in modern DNNs. An adversary can exploit the vulnerability in practice with a software-induced fault attack, Rowhammer; we demonstrate that, even by flipping bits in memory at random, the attacker can inflict an accuracy drop of more than 10% within a minute.
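To make concrete why a single bit flip can be so damaging, here is a minimal, self-contained sketch (our illustration, not the speakers' code): flipping the most significant exponent bit of an IEEE-754 float32 weight changes its magnitude by dozens of orders of magnitude, which is exactly the kind of parameter corruption a Rowhammer-induced flip can cause.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB) in the IEEE-754 float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

w = 0.5                 # a typical, well-behaved DNN weight
w_hi = flip_bit(w, 30)  # flip the most significant exponent bit
print(w, w_hi)          # 0.5 becomes roughly 1.7e38
```

A weight that large dominates every activation it touches, which is why a single well-placed flip can collapse a model's accuracy.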
Second, we review the impact of side-channel attacks. We show that a typical cache side-channel attacker can reverse-engineer the architecture details of a DNN model. To this end, the attacker exploits the computational regularities in ML frameworks: DNN layers are processed sequentially, and the time it takes to process them depends on the architecture configurations. In practice, our attacker was able to steal two essential components of deep learning systems: a data preprocessing pipeline and a custom DNN architecture. They are potentially proprietary, as their development requires substantial resources. Nevertheless, the leakage from a cache side-channel, while the victim processes a single input, is sufficient for reconstructing them with 0% error.
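The leakage principle behind the side-channel attack can be illustrated without any actual cache probing. In this toy model (our illustration under assumed names, not the speakers' attack code), the attacker observes one per-layer cost, an operation count standing in for measured latency, and recovers the hidden layer widths of a dense network from those costs alone:

```python
def layer_cost(n_in: int, n_out: int) -> int:
    # Multiply-accumulate count of a dense layer: a proxy for its latency.
    return n_in * n_out

def observe(arch):
    # Layers run sequentially, so the attacker sees one cost per layer.
    return [layer_cost(a, b) for a, b in zip(arch, arch[1:])]

def infer_widths(input_dim, costs):
    # Knowing the (public) input size, each cost determines the next width.
    widths = [input_dim]
    for c in costs:
        widths.append(c // widths[-1])
    return widths

trace = observe([784, 128, 64, 10])   # victim's secret architecture
print(infer_widths(784, trace))       # attacker recovers [784, 128, 64, 10]
```

Real attacks must recover these costs from noisy cache timings rather than exact counts, but the computational regularity being exploited is the same.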
Finally, we conclude by emphasizing that the vulnerability of ML to hardware attacks is as yet an under-studied topic; thus, we encourage the community to re-examine the security properties guaranteed by previous works from this new angle. Separately, some properties of ML make the exploitation of hardware attacks particularly easy; therefore, we need to consider additional ML-level defenses that account for these properties. We believe this is the best moment to pursue the ancient wisdom: "a sound mind in a sound body."
Note: our website outlines this research: http://hardwarefail.ml
Sanghyun Hong is a Ph.D. candidate in Computer Science at the University of Maryland, College Park (UMD), advised by Professor Tudor Dumitras. His research interests span the security and privacy of machine learning (ML). In his dissertation research, he exposed the vulnerability of deep learning algorithms to hardware attack vectors, such as Rowhammer or side-channel attacks. He also worked on identifying hidden properties within deep learning algorithms, such as overthinking and gradient-level disparity, whose quantification led to defensive mechanisms against backdoor and data poisoning attacks, respectively. His research outcomes are published in security and ML conferences: USENIX Security, ICLR, ICML, and NeurIPS. He is a recipient of the Ann G. Wylie Dissertation Fellowship and is currently a Future Faculty Fellow in the A. James Clark School of Engineering at UMD. He is on the academic job market this year.
Julian Rrushi, Oakland University
We present research on the discovery of the physics of cyberspace, meaning the laws that govern the dynamics of cyberspace. We discover physical properties of cyberspace in aggregate from a probabilistic examination, via statistical physics and dynamical systems theory, of the underlying computer hardware architectures, operating system designs, code execution, and networking. We encode the inner workings of hardware architectures and operating systems into mathematical models that resemble physical matter such as gases and fluids, while representing the physics of cyberspace with high fidelity. We then apply statistical physics and dynamical systems theory similarly to how they were used to study gases and fluids. We develop new physical-matter-inspired approaches at the intersection of statistical physics and dynamical systems theory, in a quest for the physics of cyberspace. We guide this research with experimental data that we obtain via instrumentation of hardware emulation and operating system virtualization.
Dr. Julian Rrushi is an assistant professor of engineering at Oakland University, Michigan. Julian completed a B.S. in Computer Science, an M.S. in Information Technology, and a Ph.D. in Computer Science at the University of Milan. He researches operating systems, hardware architectures, and artificial intelligence to break new ground in computer security. Julian has received a Young Faculty Award from DARPA to research the physics of cyberspace.
11:00 am–11:30 am
11:30 am–1:00 pm
Session Chair: Melanie Ensign, Discernible, Inc.
Mitch Negus, University of California, Berkeley
Nuclear safeguards protect nuclear material from being diverted from legitimate activities, like nuclear power applications or nuclear physics research, to more sinister purposes. International regulations dictate the safeguards protocols that must be adhered to, and in general, nations meet these obligations as necessary. However, reluctance on the part of participating nations and their businesses to yield more information than is absolutely necessary stands as an obstacle against bringing the full force of modern data analytics to bear in safeguards applications. Privacy-preserving computation, including secure multiparty computation (MPC), presents a solution. Garbled circuits, a specific class of MPC algorithms, allow multiple parties (in this case an international inspector and a nation-state or nuclear facility) to jointly perform calculations without revealing their respective inputs. Having previously shown that garbled circuits can successfully identify anomalies in time series data, we now show that these anomaly-detection circuits can distinguish anomalous events in nuclear-safeguards-relevant time series.
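Garbled circuits themselves are too involved to sketch in a few lines, but the core MPC promise, computing a joint result without revealing either party's input, can be shown with a much simpler additive secret-sharing scheme (an illustrative substitute, not the garbled-circuit protocol the talk describes; all values here are made up):

```python
import random

P = 2**61 - 1  # a public prime modulus

def share(x: int, n: int = 2):
    """Split secret x into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

# Inspector and facility each hold a private sensor reading.
inspector_val, facility_val = 41, 17
a = share(inspector_val)
b = share(facility_val)

# Each party locally combines the shares it holds; only the sum is revealed.
partial = [(ai + bi) % P for ai, bi in zip(a, b)]
total = sum(partial) % P
print(total)  # 58, with neither raw input ever disclosed
```

Garbled circuits generalize this idea from sums to arbitrary Boolean circuits, which is what lets an anomaly-detection function run jointly over data neither side will hand over.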
Mitch Negus is a PhD student in the Department of Nuclear Engineering at UC Berkeley. He researches privacy-preserving computation as it applies to nuclear safeguards, specifically developing software tools that allow safeguards-relevant conclusions to be reached without revealing the underlying data. Mitch is also interested in nuclear energy, loves teaching, and gets excited to learn and share best practices in reproducible scientific computing.
Mitch moved to Berkeley after graduating from the University of Massachusetts Amherst with a BS in Physics. He earned his MS modeling nuclear reactors before transitioning to his current work.
Gone, But Not "Forgotten"—Technical & Practical Challenges In Operationalizing Modern Privacy Rights
Kelly Huang, VP Product, Ethyca
The intersection of data protection law and engineering is still young. In many businesses, engineers building systems for laws like the CCPA and GDPR suffer from a lack of institutional knowledge on privacy, ambiguity around system requirements, and even ambiguity around legal terms: just what is "erasure"? In this presentation, Kelly Huang, VP of Product at privacy startup Ethyca, will give attendees insight into how modern data privacy laws get operationalized in complex systems and, in so doing, help lift the veil on the many outstanding questions that persist regarding privacy in the engineering community.
Kelly is the VP of Product at Ethyca, a data privacy startup that enables all companies to empower their customers to take control of their personal data. She is a startup veteran with over a decade of software development and product strategy experience in fintech, ecommerce, AAA media, and healthcare IT. She has been responsible for the product vision and monetization strategy for B2C and B2B2C digital products at multinational Fortune 100 companies. Kelly is passionate about building great products with responsible intent and is an amateur pizza connoisseur.
Simson Garfinkel, US Census Bureau
Differential Privacy was invented in 2006 to protect the privacy of people who respond to a national census. The U.S. 2020 Census of Population and Housing will mark the first time that differential privacy is used for its existential purpose. Bringing leading-edge privacy technology from the lab into practice required a significant amount of scientific and technical development, and it presented organizational challenges as well to one of the world’s largest statistical organizations. Nevertheless, in three years the Census Bureau assembled a team, developed a reference implementation, transitioned that implementation to Amazon Web Services, and redesigned the implementation’s framework to create a system that made it easy to perform experiments. The Bureau then performed an end-to-end test, used the implementation to re-release data from the 2010 Census, developed new algorithms to address the data quality concerns of stakeholders, and released multiple reference implementations of the code base.
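The mechanism at the heart of differential privacy can be sketched in a few lines (a textbook illustration of the Laplace mechanism, not the Census Bureau's production system): noise calibrated to a query's sensitivity and the privacy budget epsilon is added before release.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a counting query (sensitivity 1) with epsilon-DP:
    noise scale is sensitivity / epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)

noisy = dp_count(1000, epsilon=0.5)  # true count perturbed by Laplace noise
```

Smaller epsilon means stronger privacy and noisier releases; balancing that trade-off against stakeholders' data-quality needs is precisely the tension described above.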
Simson L. Garfinkel is the Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau. He has published research articles in the areas of computer security, digital forensics and privacy. He is a fellow of both the Association for Computing Machinery and the Institute for Electrical and Electronics Engineers, and was awarded a PhD in Computer Science from MIT in 2005. He was previously an Associate Professor at the Naval Postgraduate School and a Computer Scientist at the National Institute of Standards and Technology.
1:00 pm–1:15 pm
1:15 pm–2:45 pm
Humans Making Decisions
Session Chair: Joe Calandrino, Federal Trade Commission
Patrick Kelley, Google
The last five years have been tumultuous for privacy.
Growing awareness of data breaches, scandals around data use such as Cambridge Analytica, and the introduction of new legislation, including the wide-reaching General Data Protection Regulation (GDPR), have led to privacy being an ever-present part of headlines and conversation. However, in our longitudinal, brand-blind, international public opinion surveys of nearly 70,000 respondents across 22 countries, we see remarkably stable attitudes around most privacy concepts.
Understanding global privacy attitudes is key to building privacy-protecting services and products, shaping new legislation, and simply keeping abreast of how the public thinks about these topics. We record public attitudes on topics such as government surveillance, freedom of expression, cyberbullying, whistleblowing, extremist content, and many others.
For the first time, we will present all five waves of survey responses, opening up this important data resource to the Enigma audience: showing the stability of this data, highlighting the areas where we have seen change, defining the range of opinions between different countries, and finally emphasizing where the public is still optimistic about the future of privacy and about its ability to protect its own data.
Patrick Gage Kelley is a researcher at Google focusing on security, privacy, and anti-abuse topics. He has worked on projects on the use and design of standardized, user-friendly privacy displays, passwords, location-sharing, mobile apps, encryption, and technology ethics. Patrick’s work on redesigning privacy policies in the style of nutrition labels was included in the 2009 Annual Privacy Papers for Policymakers event on Capitol Hill. Most recently, Apple revived this work with its newly announced App Privacy Labels.
Previously, he was a professor of Computer Science at the University of New Mexico and faculty at the UNM ARTSLab and received his Ph.D. from Carnegie Mellon University working with the Mobile Commerce Lab and the CyLab Usable Privacy and Security (CUPS) Lab. He was an early researcher at Wombat Security Technologies, now a part of Proofpoint, and has also been at NYU, Intel Labs, and the National Security Agency.
Pardis Emami-Naeini, University of Washington
IoT consumers are concerned about the privacy and security of their smart devices, but they cannot do much about it at the time of purchase. This is due to the unavailability of such information when making a purchase decision, at least in the US and most other countries. In the past few years, Finland, the UK, and Singapore have started designing labels for IoT devices to inform consumers about the privacy and security practices of smart devices at the point of sale. We believe US consumers have the same right to know how their devices handle their information. Therefore, we decided to bring this much-needed transparency to consumers at the time of purchase. By conducting a series of studies and incorporating input from thousands of consumers and experts, we developed an informative and usable privacy and security label for IoT devices. In addition, to help manufacturers easily create these labels for their products, we developed a tool to generate human- and machine-readable formats of the labels.
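To illustrate what a machine-readable label might look like, here is a hypothetical JSON serialization; the field names and values are our invention, not the actual label specification:

```python
import json

# Hypothetical machine-readable IoT security/privacy label entry.
label = {
    "device": "Example Smart Camera",
    "firmware_updates": {"automatic": True, "guaranteed_until": "2025-12-31"},
    "data_collected": ["video", "audio"],
    "data_shared_with_third_parties": False,
    "encryption_in_transit": True,
}

serialized = json.dumps(label, indent=2)
print(serialized)
```

A structured format like this is what allows retailers, search engines, or comparison tools to surface security practices automatically, while the human-readable layer presents the same facts at the point of sale.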
Pardis Emami-Naeini is currently a postdoctoral scholar at University of Washington. She received a B.Sc. degree in computer engineering from Sharif University of Technology, and M.Sc. and Ph.D. degrees in computer science from Carnegie Mellon University (CMU) under Prof. Lorrie Cranor and Prof. Yuvraj Agarwal. As part of her doctoral research, she developed a usable privacy and security label for smart devices to inform consumers’ Internet of Things-related purchase decisions. Pardis was selected as a Rising Star in electrical engineering and computer science in October 2019, and she was awarded the 2019-2020 CMU CyLab presidential fellowship.
Sanchari Das, Assistant Professor, University of Denver
Phishing is one of the most well-known cybersecurity threats, yet every day we see reports of phishing attacks despite the availability of solutions such as training and automatic warnings. Such attacks are detrimental not only to an individual but also to the data of associated people or organizations. Phishing training has been widely adopted by academia and industry without verification that such testing works: the most significant predictor that an employee will fall for a phishing attempt in real life is that they failed the training. This means that people who are not resilient against phishing before training are not resilient during or after training. We propose a radically different human-centered approach, where the interaction focuses on distinguishing the new from the familiar, and the functionality of unfamiliar websites is limited. Our research and open-code pilot implement risk-limiting on websites and concurrent risk communication, taking a cue from safety engineering. The goal is to prevent instant catastrophic losses due to phishing attacks. To accomplish this, we build a functional crumple zone that slows decision time, allowing individuals who would otherwise fall for phishing to walk away unscathed. Our work not only focuses on humans but also provides the technical tools that will help build a phishing-resilient workspace.
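A crude sketch of the "distinguish the new from the familiar" idea (our illustration under assumed names and thresholds, not the authors' open-code pilot): limit functionality on hosts the user has never visited and impose a deliberate delay before credentials can be entered, a software crumple zone.

```python
from urllib.parse import urlparse

# Assumed stand-in for the user's browsing history.
FAMILIAR = {"mail.example.com", "bank.example.com"}

def assess(url: str, familiar=FAMILIAR) -> dict:
    """Return a policy for a page: full access if the host is familiar,
    a risk-limited 'crumple zone' (delay, no credential forms) if new."""
    host = urlparse(url).hostname or ""
    if host in familiar:
        return {"allow_credentials": True, "delay_seconds": 0}
    # Unfamiliar host: slow the decision and limit what can be lost.
    return {"allow_credentials": False, "delay_seconds": 10}

print(assess("https://mail.example.com/login"))      # familiar: full access
print(assess("https://mai1-example.phish.tld/login"))  # new: risk-limited
```

The point is that the look-alike domain never gets to ask for a password on first contact, regardless of whether the user notices the spoofing.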
Sanchari Das is an Assistant Professor at the Department of Computer Science in the Ritchie School of Engineering and Computer Science at the University of Denver. Her research lab - Security and Privacy Research in New-Age Technology (SPRINT) focuses on computer security, privacy, education, human-computer interaction, social computing, accessibility, and sustainability of new-age technologies.
She received her Ph.D. from Indiana University Bloomington under the supervision of Dr. L. Jean Camp. Her dissertation focused on understanding users' risk mental models to help in secure decision-making for authentication technologies. She has also worked on projects related to social media privacy, privacy policies, the economics of security, IoT device security, electronic waste security, the security of AR/VR/MR devices, and others.
She is also working as a User Experience Consultant for the secure technologies at Parity Technology and as a Global Privacy Adviser at XRSI.org.
Earlier, she completed a Master's in Security Informatics at Indiana University Bloomington, a Master's in Computer Applications at Jadavpur University, and a Bachelor's in Computer Applications at The Heritage Academy. She previously worked as a security and software engineer for American Express, Infosys Technologies, and HCL Technologies.
Sanchari's research work has been published in several top-tier academic venues, including CHI, FC, SOUPS, etc. She has also presented at several security conferences, including BlackHat, RSA, BSides, Enigma, and others. These works have also received media coverage in CNET, The Register, VentureBeat, PC Magazine, Iron Geek, and other venues.
Tuesday, February 2, 2021
7:00 am–7:05 am
Opening Remarks, Day 2
Program Co-Chairs: Lea Kissner, Apple, and Daniela Oliveira, University of Florida
9:05 am–9:20 am
9:20 am–10:50 am
Vulnerabilities and Attacks
Session Chair: Andre Gregio, Federal University of Parana, Brazil
Maddie Stone, Google Project Zero
0-day exploitation occurs when an attacker abuses a vulnerability that the defenders don't yet know about. This makes it very hard to protect against 0-day exploits and also makes 0-day vulnerabilities highly valuable. So how do we protect against the exploitation of unknown vulnerabilities? It starts with understanding everything we can about 0-day exploits.
Each time a 0-day exploit is detected in the wild, it's a failure case for attackers. Therefore, as defenders, we should use these "failures" as an opportunity to learn as much as we can about the vulnerabilities targeted, the exploitation methods used, the techniques for discovering the vulnerabilities, and more. As a security and technical community, we can then use this data to prioritize what vulnerability research to undertake, identify gaps in our detection methods, select the exploit mitigations that will have the most return, and, overall, make it harder for attackers to exploit 0-days to harm users.
This talk synthesizes what we can learn from the 0-days that were exploited in-the-wild in 2020. For each of these 0-days, Project Zero performed a root cause analysis, which details the vulnerability exploited and the exploit methodology used. From these facts, we then developed ideas for better detections and systemic fixes, hypothesized on what methods the actors used to discover the vulnerability, and performed variant analysis. In this talk, we'll share what we've learned from these exploits used in 2020 and how to apply it in 2021 to make it that much harder.
Maddie Stone (@maddiestone) is a Security Researcher on Google Project Zero where she focuses on 0-days used in-the-wild. Previously, she was a reverse engineer and team lead on the Android Security team, focusing predominantly on pre-installed and off-Google Play malware. Maddie also spent many years deep in the circuitry and firmware of embedded devices. Maddie has previously spoken at conferences including Black Hat USA, REcon, OffensiveCon, KasperskySAS, and others. She holds a Bachelor of Science, with a double major in Computer Science and Russian, and a Master of Science in Computer Science from Johns Hopkins University.
Hyrum Anderson, Microsoft
Researchers in adversarial machine learning (ML)—which studies the security of ML in the presence of an attacker—have written more than 2,300 papers in just the last 6 years. However, in a recent survey, almost 90% of corporations and government entities reported that they are not equipped to protect their ML models. The fact remains that there are significant gaps between academic advancements and industry needs and practices. First, aside from full-scale security products, ML models get scant security consideration. Second, most organizations who are concerned with ML security do not know how to begin. Third, the tools and techniques produced in academic research often don't translate to business needs.
To demonstrate these gaps, I will review some sobering lessons learned during a Machine Learning Red Team engagement at Microsoft on systems that rely on ML. They include the reality that "traditional" security measures like access control and permissions remain the most significant element for safeguarding ML systems. Furthermore, especially outside of security applications, awareness about ML vulnerabilities remains low and security posture is nearly zero.
Hyrum Anderson is Principal Architect in the Azure Trustworthy Machine Learning group at Microsoft. Prior to joining Microsoft, he was the Chief Scientist at Endgame, and conducted research in information security and situational awareness at FireEye, Mandiant, Sandia National Laboratories, and MIT Lincoln Laboratory. He received his Ph.D. in Electrical Engineering (signal processing + machine learning) from the University of Washington and BS+MS degrees from Brigham Young University. He is co-founder and co-chair for the Conference on Applied Machine Learning in Information Security and has spoken at numerous signal processing, machine learning, and security conferences, including RSA, DEFCon, and BlackHat.
Trey Herr, Ph.D., Atlantic Council
Society has a software problem. Since Ada Lovelace deployed the first computer program on an early mechanical device in the 1840s, software has spread to every corner of human experience. With that software come security flaws and a long tail of updates from vendors and developers. Unlike a physical system that is little modified once it has left the factory, software is subject to continual revision through updates and patches. Software supply chain security remains an underappreciated domain of national security policymaking. This talk explores 115 software supply chain attacks and vulnerability disclosures from the past decade to sum up where we are and how far we still have to go. Software supply chain attacks are popular and impactful, and they are used to great effect by states, especially China and Russia. The implications for the technology industry and cybersecurity policymaking community are a crisis in waiting. The solution is not panic, nor is it a moonshot, but rather a renewed focus on software supply chain security practices, new investment from public and private sectors, and revisions to public policy that emphasize raising the lowest common denominator of security behavior while countering the most impactful attacks.
Dr. Trey Herr is the Director of the Cyber Statecraft Initiative at the Atlantic Council. His team works on the role of the technology industry in geopolitics, cyber conflict, the security of the internet, and cyber safety. Previously, he was a Senior Security Strategist with Microsoft handling cloud computing and supply chain security policy as well as a fellow with the Belfer Cybersecurity Project at Harvard Kennedy School and a non-resident fellow with the Hoover Institution. He holds a Ph.D. in Political Science and BS in Musical Theatre and Political Science.
10:50 am–11:20 am
11:20 am–12:50 pm
Surviving and Thriving
Session Chair: Diane Hofsteader, Mozilla
"I Thought I Was Being Strong with a Complicated Person": The Tales of Intimate Gender-Based Online Abuse in the Global South
Sofía Celi, Cloudflare
This is a tale of a personal experience, but also one that is well known to people identifying as women who have suffered intimate gender-based digital violence. It is a personal tale because this research started from a personal experience: digital tools in the hands of an ex-partner were used to censor, harass, and police behavior. Fueled by this, we launched research on how digital tools are used to enhance intimate gender-based abuse in the Global South.
It is often thought that intimate gender-based online abuse is a problem of the Global North, a problem that only certain countries with high access to technology have, or that it is carried out only with stalkerware or some other kind of malware. But these presumptions dismiss the experiences of the Global South and underestimate the accounts coming from those regions. In this talk, we provide a deep dive into the methods abusers in the Global South (specifically, in Latin America) use to amplify gender-based abuse, providing clear definitions and first-hand accounts. We will also explore the societal factors that support the use of online tools to enhance abuse, and emphasize how little help is available from a legal or policy perspective in those regions.
This talk will also highlight the importance of looking at experiences from different places around the world, as the solutions we offer are oftentimes focused only on a Global North perspective. In the specific case of intimate gender-based online abuse, the accounts that come from the Global South show us that there is still work to be done to support those affected (from a technical perspective) and to further understand their experiences.
Sofía Celi is a cryptography researcher and implementer at Cloudflare. She also currently leads the design and development of version 4 of the Off-the-Record (OTR) messaging protocol. She is very involved in helping end gender-based violence that uses digital tools in the Global South. She is interested in programming languages, compilers, usable security and privacy by design, post-quantum cryptography, and real-world applications of cryptography.
Leigh Honeywell, Tall Poppy
It is uncommon in security incident response to even acknowledge the profound feelings of violation that often accompany security incidents. This is somewhat understandable when we're talking about the systems at people's workplaces, but in working with individuals facing online harassment it's crystal clear that these feelings often edge into the territory of trauma psychology. As incident responders, too, we experience the effects of close contact with trauma—and need tools to manage our own feelings and experiences as a result.
There is a way forward—the body of knowledge established by caregivers and first responders who don't have much to do with computers. We as a field can learn to apply this knowledge—sometimes called "trauma stewardship"—to our work with individuals and systems affected by security incidents.
Leigh has more than a decade of experience in computer security incident response. Prior to co-founding Tall Poppy, she was a Technology Fellow at the ACLU's Project on Speech, Privacy, and Technology. Her industry career included running security incident response at Slack, protecting infrastructure running a million apps at Salesforce.com, shipping patches for billions of computers on a monthly basis at Microsoft, and analyzing malware at Symantec. Leigh has a Bachelor of Science from the University of Toronto where she majored in Computer Science and Equity Studies, and is a frequent keynote speaker at security and software conferences around the world.
Da Da: What Shark Conservation Teaches Us about Emotionality and Effective Communication Strategies for Security and Privacy
Melanie Ensign, Discernible Inc.
Strategies that induce or amplify a sense of crisis are common in the public and private sectors as a way to relieve public anxiety and boost confidence in the wake of an incident, but they do little to address the underlying cause. This talk will discuss lessons learned from the shark science and conservation community that security and privacy professionals can adopt to influence public discourse without relying on emotional manipulation or fear.
Melanie Ensign is the founder and CEO of Discernible Inc., a specialized security and privacy communications consultancy. After managing security and privacy communications for some of the world's most notable brands including Facebook, Uber, and AT&T, she now helps even more organizations adopt effective communication strategies that improve operations and help reduce risk. She counsels executives and technical teams alike on how to cut through internal politics, dysfunctional inertia, and meaningless metrics. Melanie also leads the press department for DEF CON and has logged hundreds of hours diving with sharks all around the world. She holds a degree in communications from the University of Illinois-Chicago and a Master of Science from Boston University.
Wednesday, February 3, 2021
7:00 am–7:15 am
Opening Remarks, Day 3
Program Co-Chairs: Lea Kissner, Apple, and Daniela Oliveira, University of Florida
7:05 am–9:05 am
Session Chair: Christian Folini, Swiss Cyber Storm
Carmela Troncoso, EPFL
When talking about contact-tracing mobile apps, most of the discussion centers on the protocol and its properties. While the protocol is indeed central to the security and privacy of the system, once it is integrated into an app, and into a larger ecosystem including servers and health services, more privacy mechanisms are needed to ensure that the privacy properties hold end to end. When performing this integration, however, the privacy engineer has little control over many of the pieces that are key to operation. In this talk, I will describe how this lack of control raises hurdles for privacy and how we overcame these obstacles in the case of contact-tracing apps.
Carmela Troncoso is an assistant professor at EPFL (Switzerland), where she heads the SPRING Lab. Her research focuses on security and privacy. Carmela holds a Master's degree in Telecommunication Engineering from the University of Vigo (2006) and a Ph.D. in Engineering from KU Leuven (2011). Before arriving at EPFL, she was a faculty member at the IMDEA Software Institute in Spain for two years and the Security and Privacy Technical Lead at Gradiant for four years, working closely with industry to deliver secure and privacy-friendly solutions to the market. Her thesis, Design and Analysis Methods for Privacy Technologies, received the European Research Consortium for Informatics and Mathematics Security and Trust Management Best Ph.D. Thesis Award, and her work on privacy engineering received the CNIL-INRIA Privacy Protection Award in 2017. In 2020, she led the DP3T effort to design and deploy privacy-preserving contact tracing applications. The design principles from DP3T are the basis of dozens of contact tracing apps deployed around the world and led Fortune Magazine to include Carmela in its 40 Under 40 list for 2020.
Merry Ember Mou, Zoom Video Communications
In a remote-first world, end-to-end encrypted (E2EE) communications will become more common. In May 2020, the Zoom team published an incrementally deployable proposal for E2EE in its video conferencing product that describes not only moving user key generation to clients but also building a strongly trusted and user-friendly concept of long-term identity. After all, E2EE is only as secure as the ends: if Alice thinks she is talking to her coworkers, but her competitors are instead participating in the meeting, encryption is not sufficient to protect her. This talk will highlight some of the objectives and challenges of our multi-phased, multi-pronged approach to E2EE, which features strong identity confirmation protocols and minimal server trust.
Integrating an E2EE implementation into an existing system like Zoom, which supports hundreds of millions of meeting participants every day, has required particular consideration of existing architectural constraints, existing user trust models, and user expectations in the UI/UX. Designing with these priorities becomes even more significant in subsequent phases of the E2EE plan, where we aim to establish a consistent and auditable identity designed to tie each user to their Zoom account/organization and their many devices' long-term keys. These user identities will be enforced by several mechanisms to minimize the reliance on server-side security, with the eventual goal of making server compromise of user identities detectable by external auditors. With each phase, we improve the properties of a user's displayed identity in a meeting and aim to make verifying the security of a Zoom meeting as intuitive for the user as possible.
Merry Ember Mou is a software engineer at Zoom working on end-to-end encryption. Their previous experience includes working on Keybase, building backend systems at a network security startup, and researching online community platforms. They hold a master's degree in computer science from MIT.
Mark Funk, Obscure Group
A summary of my experiences designing solutions for verifiable attestation, lab test results, and vaccination history to address public safety requirements for the COVID-19 outbreak. The journey towards designing an application, respecting safety and privacy, in a hypothetical world where lab tests and vaccines are available and freely accessible for all.
Mark runs a small security and privacy consultancy. Their experience spans nearly 15 years of product, systems, security, and privacy engineering across a huge variety of product verticals.
Cooper Quintin, Electronic Frontier Foundation
4G-based IMSI catchers such as the Hailstorm are becoming more popular with governments and law enforcement around the world, as well as with spies and even criminals. Until now, IMSI catcher detection has focused on 2G IMSI catchers such as the Stingray, which are quickly falling out of favor. In this talk, we will demonstrate a brand-new software project to detect fake 4G base stations, using open source software and relatively cheap hardware. We will reveal what we have found so far using our methods. Finally, we will present a plan to dramatically limit the capabilities of IMSI catchers, with the long-term goal of making them useless once and for all.
Cooper is a Senior Security Researcher with the EFF Threat Lab. He has worked on projects such as Privacy Badger and Canary Watch. With his colleagues at Threat Lab, he has helped discover state-sponsored malware and nation-state campaigns such as Dark Caracal and Operation Manul. He has also performed security training for activists, non-profit workers, and ordinary folks around the world. He was also a co-founder of the Hackbloc hacktivist collective and published several issues of the DIY hacker zine "Hack This Zine." In his spare time, he enjoys playing music, playing with his kid, and imagining a better future.
9:05 am–9:20 am
9:20 am–10:50 am
Session Chair: Ian Coldwater, Heroku
Tabitha Sable, Datadog
DevOps has rapidly become the dominant system administration paradigm in successful IT organizations across many industries. A shared-responsibility model, combined with the widespread use of containers, means that many sysadmin tasks are now being performed by software developers unfamiliar with the "missing stairs" of UNIX systems security. This has led to the widespread deployment of insecure systems, but these same tools and social constructs can enable greater uptake of modern system hardening practices. It's taking a little while, but we're getting there.
In this talk, Tabitha will present a brief overview of the journey, review current challenges in container systems security, and describe how people and technologies are helping us get "there." We can run more systems with lower risk and higher change velocity.
Tabitha Sable has been a hacker and cross-platform sysadmin since the turn of the century. As a Systems Security Engineer at Datadog, she has a broad commission to build tools and make friends: writing proof of concept exploits, coordinating the efforts of the infrastructure, security, and product teams, and frequently saying "I wonder what happens if we..." Outside of work, she can often be found organizing or participating in Capture the Flag contests and loves "pretty much anything with wheels." You can follow her on Twitter at @tabbysable.
Alex Gaynor, Alloy, Fish in a Barrel
The fact that C and C++ are not memory safe, leading to vulnerability classes such as use-after-free and buffer overflow, is not new. However, these languages remain in exceptionally wide use, even for new projects. For several years, Fish in a Barrel has been attempting to quantify how common memory-unsafety-induced vulnerabilities are in major projects, and researching which tactics are effective at convincing developers to reconsider C and C++.
This talk presents our results: we show the empirical data which leads us to the conclusion that C and C++ are not tenable for modern secure development, including statistics across a large swath of projects. We also present what we've learned about how developers respond to this fact, in the frame of the Five Stages of Grief.
Alex is a software security engineer. He's a founder and principal at Fish in a Barrel, working on systemic solutions to classes of vulnerabilities. By day he's Chief Information Security Officer at Alloy, and previously of Mozilla and the United States Digital Service. Alex has a long history of contribution in open source, from building a JIT'd Ruby VM to serving on the Board of Directors of the Python Software Foundation. Alex lives in Washington, DC.
Chris Palmer, Google Chrome Security
Privilege separation and reduction ("sandboxing") has significantly improved software security, and in many applications is a baseline requirement for safe design. (In fact, there are still many applications that can and should adopt sandboxing.)
Although necessary, sandboxing is not sufficient by itself. The designs and implementations of real-world operating systems put a ceiling on the effectiveness and applicability of sandboxing. From years of experience shipping Chromium, we have learned that (1) Chromium is at or near the limit of how much safety it can practically provide with privilege separation and reduction; and (2) we still need to provide greater resilience.
Therefore, we must find and develop additional security mechanisms. Our primary approach is now working toward increased memory safety. Where sandboxing limits the value attackers gain from exploiting vulnerabilities, memory-safe(r) code can eliminate vulnerabilities altogether or make it infeasible to use them in an exploit chain.
This talk is about lessons learned in the real world. I'll discuss the nature and particulars of the OS limitations we face, what security gap they leave us with, and what we are doing to make Chromium's large codebase less memory-unsafe. I'll highlight some lessons we've learned that security engineers working on other projects can hopefully make use of.
I work at Google as a software security engineer on Chrome, where I work on hardening Chromium's underpinnings and securing the web platform runtime. (I was previously on the Secure UX sub-team, and before that, I worked on Web PKI.) I used to be on the Android team at Google. Previously, I was the Technology Director at the Electronic Frontier Foundation, a Principal Security Consultant at iSEC Partners (now NCC Group), and a web developer at a couple of small shops.
10:50 am–11:20 am
11:20 am–12:50 pm
Abusing People and Culture
Session Chair: Andrea Limbago, Interos
Kate Starbird, University of Washington
The past year has been a difficult one. A pandemic has taken millions of lives and disrupted "normal" routines across the globe. In the United States, we have experienced an unprecedented political situation, with a sitting President refusing to concede after losing an election. Each of these events has been accompanied by uncertainty and anxiety, as well as massive amounts of false and misleading information. In this talk, I will explore some of the mechanics of online misinformation, explaining why we are particularly vulnerable right now—due in part to the nature of these crises, and in part to the current structure of our information systems. Using examples from both COVID-19 and Election 2020, I will explain how we are living through a "perfect storm" for both misinformation and disinformation. And I will describe how disinformation, in particular, can be an existential threat to democratic societies. After laying out the problems, I aim to end on a more hopeful note, with a call to action for researchers and industry professionals to help "chip away" at this critical societal issue.
Kate Starbird is an Associate Professor at the Department of Human-Centered Design & Engineering (HCDE) at the University of Washington (UW). Dr. Starbird's research sits in the emerging field of crisis informatics—the study of how social media and other communication technologies are used during crisis events. Currently, her work focuses on the production and spread of online rumors, misinformation, and disinformation in the context of crises and other disruptive events. Starbird is a co-founder of the UW Center for an Informed Public.
Gianluca Stringhini, Boston University
Online hate on social media has become a serious problem. Aggression is often not the act of single individuals, but rather the result of coordinated activity among like-minded people who gather in polarized online communities, identify suitable targets, and carry out their attacks. Studying this phenomenon is hard, partly because it is not a purely technical problem, and partly because this malicious activity unfolds across multiple online services while the research community currently lacks effective tools to keep track of information that spans multiple platforms.
In this talk, I will present our research on the modus operandi of attackers who orchestrate and execute coordinated aggression attacks on social media. I will present two case studies, the results of analyzing billions of social media posts with a mixture of quantitative and qualitative methods. First, I will describe coordinated hate attacks against the authors of YouTube videos. Second, I will analyze Zoom bombing attacks, in which miscreants organize to disrupt the online meetings that have become central to our professional and personal lives. As part of this study, we find that most Zoom bombing attacks are instigated by insiders who ask attackers to disrupt their own meetings (for example, online lectures). Finally, I will discuss challenges in developing countermeasures against these phenomena, ranging from the need to protect against attacks in the presence of malicious insiders to the potential unintended consequences of suspending offending accounts on social media, which might push users toward more extreme and unmoderated online communities.
Gianluca Stringhini is an assistant professor in the ECE Department at Boston University. Gianluca works in the area of data-driven security, analyzing large datasets to better understand complex malicious online operations and developing mitigation techniques to fight them. He has received multiple awards, including an NSF CAREER Award in 2020 and a Facebook Secure the Internet Grant in 2018, and his research has won multiple Best Paper Awards, including one at IMC 2018. He has published in top security conferences such as CCS, NDSS, and USENIX Security, as well as top measurement and Web conferences such as IMC, ICWSM, and WWW.
Does Your Threat Model Consider Country and Culture? A Case Study of Brazilian Internet Banking Security to Show That It Should!
Marcus Botacin, Federal University of Paraná (UFPR-Brazil)
Every attack has a story. Uncovering these stories is essential to identifying the gaps that allowed the attack to occur and the countermeasures to prevent it from happening again. Over time, many security players have tried to model these gaps and countermeasures in their threat models, but all these attempts share the same drawback: they generalize everything! Not every threat, however, is global. The threats I used to find in Brazil were distinct from those reported in the global news, and their prevalence differed significantly from what was described in the literature. What was going on? The problem is that the Brazilian scenario has characteristics that make it unique (e.g., the way Internet banks operate, and the way Internet access is provided and charged), and these factors significantly influence how threats are developed and how users are targeted. For instance, even before Web-based systems, attackers exploited the early computerization of the Brazilian banking system to deploy phishing applications mimicking the banks' operations. The movement toward the Web generated a profusion of Java-based malware never seen elsewhere, as the Brazilian banking systems were Java-based. More recently, with the emergence of mobile devices, prepaid data plans with free WhatsApp access motivated the deployment of bank-supported WhatsApp-based banking transactions. Are we prepared to handle these scenarios, or are we overlooking them? Furthermore, these likely-overlooked scenarios might not be limited to Brazil; similar attacks might already be happening elsewhere. Therefore, I invite you to join me in examining a dataset of more than 40 thousand unique malware samples collected in Brazil over 7 years to understand what we miss by not looking at regionalized threats. This talk is a call to action for more personalized threat models and security evaluations.
Marcus is a Ph.D. candidate at the Federal University of Paraná (UFPR), Brazil. He is also a Computer Engineer and holds a Master's in Computer Science from the University of Campinas (UNICAMP), Brazil. His main research interests are malware analysis and reverse engineering, with years of experience in sandbox development.