All the times listed below are in Pacific Standard Time (PST).
Monday, February 1, 2021
7:00 am–7:15 am
Opening Remarks, Day 1
Program Co-Chairs: Lea Kissner, Apple, and Daniela Oliveira, University of Florida
7:15 am–9:15 am
Securing Democracy
Session Chair: Joseph Lorenzo Hall, Internet Society
Is Cyber War Legal: A Four Hundred Year Retrospective
Scott Shapiro, Yale University
As cyber-attacks have become increasingly common (e.g., Stuxnet, the DNC hack, NotPetya), lawyers have struggled to determine how these 21st-century weapons fit into the traditional laws of war. Despite the enormous amount of attention paid to cyber-conflict over the past decade, the legal community is more confused than ever. Put bluntly, we are still very far away from agreeing on whether cyberwar is legal and, if so, under which circumstances. In this talk, I try to answer these questions by situating current debates within the long history of international law. The modern laws of war have changed dramatically from their initial formulations in the seventeenth century. By seeing how the right of war has evolved over the last four hundred years, I show how cyber conflicts fit into this evolution and the conditions under which states may use cyberweapons to disrupt the digital networks of their adversaries.
Scott Shapiro, Yale University
Scott Shapiro is the Charles F. Southmayd Professor of Law and Professor of Philosophy at Yale University. His areas of interest include jurisprudence, international law, criminal law and cybersecurity. He is the author of Legality (2011), The Internationalists (2017) (with Oona Hathaway) and editor of The Oxford Handbook of Jurisprudence and Philosophy of Law (2002) (with Jules Coleman). He earned B.A. and Ph.D. degrees in philosophy from Columbia University and a J.D. from Yale Law School. Shapiro is an editor of Legal Theory and the Stanford Encyclopedia of Philosophy. He is also the founding director of the Yale CyberSecurity Lab, which provides cutting-edge cybersecurity and information technology teaching facilities. His next book, entitled Insecurity, details the history and technology of internet hacking.
Security & U.S. Political Campaigns: A Study & Expert Roundtable
Sunny Consolvo, Google
"Security and politics should be separate... If you're a candidate, you should win or lose on your best day, based on who you are. Not because your email got popped and posted online by a [nation-state cybersecurity team]." –A study participant
Data breaches and hacking in the 2016 election cycle increased awareness of security vulnerabilities among political workers, but campaigns have not yet adequately adapted to the threats. Improving campaign security is a nonpartisan issue and an important part of preserving democratic institutions. We will share research we conducted to understand the security practices, challenges, and perceptions of people involved in politics, with a focus on U.S. campaigns across the political spectrum. We will provide recommendations to help the audience improve the security of campaigns and protect democracy.
Sunny Consolvo, Google
Sunny Consolvo is a researcher at Google where her focus is on security, privacy, and anti-abuse topics. She led Google's Security & Privacy UX team for several years. In recent work, she has investigated the security and privacy practices of people involved with political campaigns, survivors of intimate partner abuse, and people facing financial insecurity, and has explored the technology experiences of women in South Asia. Prior to Google, Sunny worked at Intel Labs Seattle where she investigated how to use mobile technologies to encourage wellness and to help people be more aware of the privacy implications of sensing and inference systems.
Sunny was recently elected to the SIGCHI Academy. She has received a test-of-time award, three 10-year impact awards, and many best paper awards. She has been featured in the press, including Wired and Scientific American Mind. Sunny received her Ph.D. from the University of Washington. She is a member of the Editorial Board for IEEE Pervasive Computing, the Steering Committee for the Symposium on Usable Privacy and Security, and she became a Certified Information Privacy Professional (US) in 2013.
The Adventurous Tale of Online Voting in Switzerland
Dr. Christian Folini, Program Chair Swiss Cyber Storm Conference, OWASP CRS Co-Lead
The Swiss tale of online voting serves as a typical example of the iterative development of highly critical IT systems and of the growing involvement of scientists, a necessary step for a government that is willing to learn from past mistakes.
Switzerland has been experimenting with online voting for over 15 years. Several generations of electronic voting systems have been implemented, and almost all of them died along the way, either because of profound security problems or because the money ran out.
In 2019, Swiss Post published the source code of its online voting system, the last system that was still in the race. Several highly critical findings were discovered in a matter of weeks and the system was stopped right before the national elections.
In 2020, the government rebooted the process and invited two dozen international researchers into an intense dialogue that lasted several months. The resulting report is the basis for the renewed regulation that will pave the way forward in 2021.
Dr. Christian Folini, Program Chair Swiss Cyber Storm Conference, OWASP CRS Co-Lead
Christian Folini is a security engineer and open source enthusiast. He holds a Ph.D. in medieval history and enjoys defending castles across Europe. Unfortunately, defending medieval castles is not a big business anymore and he turned to defending web servers, which he finds equally challenging. He brings more than ten years of experience with ModSecurity configuration in high-security environments, DDoS defense, and threat modeling.
Christian Folini is the author of the second edition of the ModSecurity Handbook and the best-known teacher on the subject. He co-leads the OWASP ModSecurity Core Rule Set project and serves as the program chair of the "Swiss Cyber Storm" conference, the premier security conference in Switzerland.
He stood on the front line of the defenders when the Swiss Post online voting system was put up for a public intrusion test in 2019. The following year, he moderated a dialogue between the Swiss government and a group of two dozen researchers on online voting. This dialogue resulted in a report of several hundred pages.
Christian Folini is a frequent speaker at conferences. When speaking, he tries to use his background in the humanities to explain hardcore technical topics to audiences of different backgrounds.
The Full Stack Problem of Election Security
Jack Cable, Security Researcher and Student, Stanford University
Foreign adversaries interfered in America's 2016 election, and we can expect to see similar activities in 2020. War waged via attacks on state voter registration databases, hack-and-leak operations targeting political campaigns, and disinformation campaigns against the American people proved that America's elections were not prepared for nation-state adversaries.
The security community, together with election officials, policymakers, and social scientists, can help confront our nation's vulnerabilities. Just as research and advocacy led to an increased usage of voter-verified paper ballots and risk-limiting audits across the nation, so too can we help build resilience across other election technologies and the American people. By addressing the full-stack problem of election security—from the technology used to register voters, cast ballots, and report results, to the infrastructure of political campaigns, and, most importantly, to the people that ultimately judge an election's validity—we can hope to preserve the sanctity of our electoral processes.
Jack Cable, Stanford University
Jack Cable is a security researcher and an undergraduate student at Stanford University, currently working as an election security technical advisor at the Cybersecurity and Infrastructure Security Agency (CISA) as well as a research assistant tracking disinformation at the Stanford Internet Observatory. Jack is a top-ranked bug bounty hacker, having reported over 350 vulnerabilities to hundreds of companies. Prior to CISA, Jack worked for the Defense Digital Service on the Hack the Pentagon program. Jack also serves as an advisory committee member for the IT-ISAC, aiding election vendors in building bridges with security researchers by establishing vulnerability disclosure policies. At Stanford, Jack studies computer science and launched Stanford's bug bounty program, one of the first in higher education.
9:15 am–9:30 am
Break
9:30 am–11:00 am
Hardware
Session Chair: Rodrigo Branco, Amazon
Hardware: A Double-Edged Sword for Security
Nicole Fern, Tortuga Logic
Hardware is the trust anchor for many systems ranging from IoT devices to datacenter infrastructure, but hardware is a double-edged sword. Hardware can provide security services to protect devices, but hardware features often open up systems to attack. This talk will provide an overview of the system-level threats hardware architectures are designed to address, along with real-world examples of how hardware features added to increase performance and configurability leave systems vulnerable. It will also offer insight into the challenges chip vendors and system designers face when trying to detect and prevent hardware vulnerabilities, together with an overview of promising methodologies and solutions in this space.
Nicole Fern, Tortuga Logic
Dr. Nicole Fern is a Senior Hardware Security Engineer at Tortuga Logic whose primary role is providing security expertise and defining future features and applications for the product line. Before joining Tortuga Logic, she was a postdoc at UC Santa Barbara, where her research focused on hardware verification and security.
A Sound Mind in a Vulnerable Body: Practical Hardware Attacks on Deep Learning
Sanghyun Hong, University of Maryland, College Park and Maryland Cybersecurity Center (MC2)
The widespread adoption of machine learning (ML) incentivizes potential adversaries who wish to manipulate systems that include ML components. In consequence, research in the field of adversarial machine learning studies attack surfaces such as training data with bad samples (data poisoning) and predictions manipulated by imperceptible perturbations (adversarial examples). However, most of the prior work focuses on the "soundness of mind" by looking at ML as a mathematical concept, and it overlooks the security threats caused by practical hardware attacks such as fault injection or side-channel attacks.
In this talk, we will propose a new perspective: we view ML as a computational tool running on hardware, a potentially "vulnerable body". We will introduce the emerging research on the vulnerabilities of ML models to practical hardware attacks. These attacks can cause unexpected damage, and ultimately, they shed new light on the dangers of hardware-based attack vectors.
First, we review the impact of fault-injection attacks. We show that, by flipping a single bit in the memory representation of a deep neural network (DNN), we can degrade the prediction accuracy by 90% or more. Contrary to the conventional wisdom in the ML community---that DNNs are resilient to parameter perturbations---we find that this vulnerability is widespread in modern DNNs. An adversary can exploit the vulnerability in practice with a software-induced fault attack, Rowhammer; we demonstrate that, even when flipping bits in memory at random, an attacker can inflict an accuracy drop of more than 10% within a minute.
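To illustrate the fragility this first result describes, here is a minimal sketch (not the speakers' attack code; the example weight and bit position are chosen purely for illustration) of how flipping a single bit in the IEEE-754 float32 encoding of a weight can blow it up by dozens of orders of magnitude:

```python
# A single flip of a float32 exponent bit turns a small DNN weight into an
# enormous value that can dominate a layer's output and wreck predictions.
import numpy as np

def flip_bit(weight: float, bit: int) -> np.ndarray:
    """Return `weight` with bit `bit` (0 = LSB) of its float32 encoding flipped."""
    as_int = np.array(weight, dtype=np.float32).view(np.uint32)
    return (as_int ^ np.uint32(1 << bit)).view(np.float32)

w = 0.05                   # a typical small network weight
print(flip_bit(w, 30))     # flipping the top exponent bit yields ~1.7e+37
```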
Second, we review the impact of side-channel attacks. We show that a typical cache side-channel attacker can reverse-engineer the architecture details of a DNN model. To this end, the attacker exploits the computational regularities in ML frameworks: DNN layers are processed sequentially, and the time it takes to process them depends on the architecture configurations. In practice, our attacker was able to steal two essential components of deep learning systems: a data preprocessing pipeline and a custom DNN architecture. They are potentially proprietary, as their development requires substantial resources. Nevertheless, the leakage from a cache side-channel, while the victim processes a single input, is sufficient for reconstructing them with 0% error.
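The timing intuition behind this second result can be sketched in a few lines; the layer widths below are invented, and the real attack observes these runtimes indirectly through a cache side channel rather than measuring them directly as this toy does:

```python
# Toy model: layers run sequentially, and each layer's runtime scales with
# its dimensions, so per-layer timing leaks the architecture configuration.
import time
import numpy as np

rng = np.random.default_rng(0)
widths = [784, 2048, 256, 10]                 # hypothetical MLP architecture
layers = [rng.standard_normal((a, b)) for a, b in zip(widths, widths[1:])]

x = rng.standard_normal((1, widths[0]))
for i, W in enumerate(layers):                # sequential layer processing
    t0 = time.perf_counter()
    x = np.maximum(x @ W, 0.0)                # linear layer + ReLU
    print(f"layer {i}: {(time.perf_counter() - t0) * 1e6:8.1f} us")
```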
Finally, we conclude by emphasizing that the vulnerability of ML to hardware attacks is as yet an under-studied topic; thus, we encourage the community to re-examine the security properties guaranteed by previous work from this new angle. Separately, some properties of ML make hardware attacks particularly easy to exploit; therefore, we need to consider additional ML-level defenses that account for these properties. We believe this is the best moment to pursue the ancient wisdom: "a sound mind in a sound body."
Note: our website outlines this research: http://hardwarefail.ml
Sanghyun Hong, University of Maryland, College Park and Maryland Cybersecurity Center (MC2)
Sanghyun Hong is a Ph.D. candidate in Computer Science at the University of Maryland, College Park (UMD), advised by Professor Tudor Dumitras. His research interests span the security and privacy of machine learning (ML). In his dissertation research, he exposed the vulnerability of deep learning algorithms to hardware attack vectors, such as Rowhammer or side-channel attacks. He also worked on identifying hidden properties within deep learning algorithms, such as overthinking and gradient-level disparity, whose quantification led to defensive mechanisms against backdoor and data poisoning attacks, respectively. His research outcomes are published in security and ML conferences: USENIX Security, ICLR, ICML, and NeurIPS. He is a recipient of the Ann G. Wylie Dissertation Fellowship and is currently a Future Faculty Fellow in the A. James Clark School of Engineering at UMD. He is on the academic job market this year.
A Quest for the Physics of Cyberspace
Julian Rrushi, Oakland University
We present research on the discovery of the physics of cyberspace, meaning the laws that govern the dynamics of cyberspace. We discover physical properties of cyberspace in aggregate from a probabilistic examination, via statistical physics and dynamical systems theory, of the underlying computer hardware architectures, operating system designs, code execution, and networking. We encode the inner workings of hardware architectures and operating systems into mathematical models that resemble physical matter such as gases and fluids, while representing the physics of cyberspace with high fidelity. We then apply statistical physics and dynamical systems theory similarly to how they were used to study gases and fluids. We develop new physical-matter-inspired approaches at the intersection of statistical physics and dynamical systems theory, in a quest for the physics of cyberspace. We guide this research with experimental data that we obtain via instrumentation of hardware emulation and operating system virtualization.
Julian Rrushi, Oakland University
Dr. Julian Rrushi is an assistant professor of engineering at Oakland University, Michigan. Julian completed a B.S. in Computer Science, an M.S. in Information Technology, and a Ph.D. in Computer Science at the University of Milan. He researches operating systems, hardware architectures, and artificial intelligence to break new ground in computer security. Julian has received a Young Faculty Award from DARPA to research the physics of cyberspace.
11:00 am–11:30 am
Break
11:30 am–1:00 pm
Privacy Tech
Session Chair: Melanie Ensign, Discernible, Inc.
No Data, No Problem—Giving Nuclear Inspectors Better Tools without Revealing State Secrets
Mitch Negus, University of California, Berkeley
Nuclear safeguards protect nuclear material from being diverted from legitimate activities, like nuclear power applications or nuclear physics research, to more sinister purposes. International regulations dictate the safeguards protocols that must be adhered to, and in general, nations meet these obligations as necessary. However, reluctance on the part of participating nations and their businesses to yield more information than is absolutely necessary stands as an obstacle against bringing the full force of modern data analytics to bear in safeguards applications. Privacy-preserving computation, including secure multiparty computation (MPC), presents a solution. Garbled circuits, a specific class of MPC algorithms, allow multiple parties (in this case an international inspector and a nation-state or nuclear facility) to jointly perform calculations without revealing their respective inputs. Having previously shown that garbled circuits can successfully identify anomalies in time series data, this work pivots to showing that these anomaly detection circuits can distinguish anomalous events in specific categories of safeguards-relevant time series.
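To give a flavor of privacy-preserving computation in this setting, here is a minimal additive-secret-sharing sketch; note the actual work uses garbled circuits, a different MPC technique, and the readings and threshold below are invented. The facility splits each private reading into two random shares, so the inspector can take part in an aggregate check without ever seeing raw data:

```python
# Each share alone is a uniformly random value; only the recombined aggregate
# is revealed, never the individual readings.
import secrets

MOD = 2**61 - 1   # arithmetic modulo a large prime

def share(value: int) -> tuple[int, int]:
    """Split `value` into two shares that individually reveal nothing."""
    r = secrets.randbelow(MOD)
    return r, (value - r) % MOD

readings = [102, 98, 97, 350, 101]               # facility's private time series
shares = [share(v) for v in readings]

facility_sum = sum(s[0] for s in shares) % MOD   # facility sums its shares
inspector_sum = sum(s[1] for s in shares) % MOD  # inspector sums its shares
total = (facility_sum + inspector_sum) % MOD     # only the total is opened
print("aggregate:", total, "-> anomalous:", total > 5 * 110)
```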
Mitch Negus, University of California, Berkeley
Mitch Negus is a PhD student in the Department of Nuclear Engineering at UC Berkeley. He researches privacy-preserving computation as it applies to nuclear safeguards, specifically developing software tools that allow safeguards-relevant conclusions to be reached without revealing the underlying data. Mitch is also interested in nuclear energy, loves teaching, and gets excited to learn and share best practices in reproducible scientific computing.
Mitch moved to Berkeley after graduating from the University of Massachusetts Amherst with a BS in Physics. He earned his MS modeling nuclear reactors before transitioning to his current work.
Gone, But Not "Forgotten"—Technical & Practical Challenges In Operationalizing Modern Privacy Rights
Kelly Huang, VP Product, Ethyca
The intersection of data protection law and engineering is still young. In many businesses, engineers building systems for laws like the CCPA and GDPR suffer from a lack of institutional knowledge on privacy, ambiguity around system requirements, and even ambiguity regarding legal terms: just what is "erasure"? In this presentation, Kelly Huang, VP of Product at privacy startup Ethyca, will give attendees insight into how modern data privacy laws get operationalized in complex systems and, in so doing, help lift the veil on the many outstanding questions that persist regarding privacy in the engineering community.
Kelly Huang, VP Product, Ethyca
Kelly is the VP of Product at Ethyca, a data privacy startup that enables all companies to empower their customers to take control of their personal data. She is a startup veteran with over a decade of software development and product strategy experience in fintech, ecommerce, AAA media, and healthcare IT. She has been responsible for the product vision and monetization strategy for B2C and B2B2C digital products at multinational Fortune 100 companies. Kelly is passionate about building great products with responsible intent and is an amateur pizza connoisseur.
Implementing Differential Privacy for the 2020 Census
Simson Garfinkel, US Census Bureau
Differential privacy was invented in 2006 to protect the privacy of people who respond to a national census. The U.S. 2020 Census of Population and Housing will mark the first time that differential privacy is used for its existential purpose. Bringing leading-edge privacy technology from the lab into practice required a significant amount of scientific and technical development, and it presented organizational challenges as well to one of the world's largest statistical organizations. Nevertheless, in three years the Census Bureau assembled a team, developed a reference implementation, transitioned that implementation to Amazon Web Services, redesigned the implementation's framework to create a system that made it easy to perform experiments, performed an end-to-end test, used the implementation to re-release data from the 2010 Census, developed new algorithms to address the data quality concerns of stakeholders, and released multiple reference implementations of the code base.
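As a minimal illustration of the core idea (the Census Bureau's production TopDown algorithm is far more elaborate, and the count and epsilon below are invented), a differentially private count adds noise calibrated to the query's sensitivity and the privacy budget:

```python
# Laplace mechanism: noise with scale sensitivity/epsilon makes the released
# count differentially private; smaller epsilon means noisier, more private.
import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0,
                  rng: np.random.Generator = np.random.default_rng()) -> float:
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

print(laplace_count(1234, epsilon=0.5))   # e.g., a noisy block-level count
```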
Simson Garfinkel, US Census Bureau
Simson L. Garfinkel is the Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau. He has published research articles in the areas of computer security, digital forensics and privacy. He is a fellow of both the Association for Computing Machinery and the Institute for Electrical and Electronics Engineers, and was awarded a PhD in Computer Science from MIT in 2005. He was previously an Associate Professor at the Naval Postgraduate School and a Computer Scientist at the National Institute of Standards and Technology.
1:00 pm–1:15 pm
Break
1:15 pm–2:15 pm
Humans Making Decisions
Session Chair: Joe Calandrino, Federal Trade Commission
Privacy, Measurably, Isn't Dead
Patrick Gage Kelley, Google
You've heard that privacy is dead or dying, but we challenge that view with data from over 90,000 respondents across 25+ countries and 6 continents, gathered annually since 2015. In this talk we'll feature six findings from our results, focused on the state of privacy today, attitudes about privacy in the future, how this varies around the world, and what we in the ENIGMA community can do to continue to globally support a future with privacy.
Patrick Kelley, Google
Patrick Gage Kelley is a researcher at Google focusing on security, privacy, and anti-abuse topics. He has worked on projects on the use and design of standardized, user-friendly privacy displays, passwords, location-sharing, mobile apps, encryption, and technology ethics. Patrick's work on redesigning privacy policies in the style of nutrition labels was included in the 2009 Annual Privacy Papers for Policymakers event on Capitol Hill. Most recently, Apple revived this work with its newly announced App Privacy Labels.
Previously, he was a professor of Computer Science at the University of New Mexico and faculty at the UNM ARTSLab and received his Ph.D. from Carnegie Mellon University working with the Mobile Commerce Lab and the CyLab Usable Privacy and Security (CUPS) Lab. He was an early researcher at Wombat Security Technologies, now a part of Proofpoint, and has also been at NYU, Intel Labs, and the National Security Agency.
Privacy and Security Nutrition Labels to Inform IoT Consumers
Pardis Emami-Naeini, University of Washington
IoT consumers are concerned about the privacy and security of their smart devices, but they cannot do much about it at the time of purchase. This is because such information is unavailable when making a purchase decision, at least in the US and most other countries. In the past few years, Finland, the UK, and Singapore have started designing a label for IoT devices to inform consumers about the privacy and security practices of smart devices at the point of sale. We believe US consumers have the same right to know how their devices handle their information. Therefore, we decided to bring this much-needed transparency to consumers at the time of purchase. By conducting a series of studies and incorporating input from thousands of consumers and experts, we developed an informative and usable privacy and security label for IoT devices. In addition, to help manufacturers easily create these labels for their products, we developed a tool to generate the human- and machine-readable formats of the labels.
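A machine-readable label might look something like the sketch below; the field names and values are invented for illustration and are not the project's actual schema:

```python
# Emitting a hypothetical machine-readable IoT label; the same data could
# feed the printed, human-readable label on the box.
import json

label = {
    "device": "Example Smart Camera",
    "security": {
        "automatic_updates": True,
        "update_support_until": "2025-12",
        "access_control": "user-configurable passwords",
    },
    "privacy": {
        "sensor_data_collected": ["video", "audio"],
        "purpose": "device function",
        "shared_with": ["manufacturer"],
        "sold_to": [],
        "retention": "90 days",
    },
}
print(json.dumps(label, indent=2))
```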
Pardis Emami-Naeini, University of Washington
Pardis Emami-Naeini is currently a postdoctoral scholar at the University of Washington. She received a B.Sc. degree in computer engineering from Sharif University of Technology, and M.Sc. and Ph.D. degrees in computer science from Carnegie Mellon University (CMU) under Prof. Lorrie Cranor and Prof. Yuvraj Agarwal. As part of her doctoral research, she developed a usable privacy and security label for smart devices to inform consumers' Internet of Things-related purchase decisions. Pardis was selected as a Rising Star in electrical engineering and computer science in October 2019, and she was awarded the 2019–2020 CMU CyLab presidential fellowship.
Tuesday, February 2, 2021
7:05 am–9:05 am
Panel
Contact Tracing
Moderator: Ben Adida, VotingWorks
Panelists: Mike Judd, CDC COVID-19 Exposure Notification Initiative; Ali Lange, Google; Tiffany C. Li, Boston University School of Law; Marcel Salathé, EPFL
With the unprecedented COVID-19 pandemic came an unprecedented technology and public health collaboration: Google and Apple joined forces with public health departments to deploy a mechanism for notifying people of their possible exposure to the virus. How was this decision made? What were the tradeoffs? How is it going? If we had to do it all over again, what might we do differently? And most importantly, what does the future of app-based contact tracing look like? Join us for a lively discussion bringing together policy, epidemiology, legal, and public health experts on one of the most important cross-disciplinary collaborations of our time.
Ben Adida, VotingWorks
Ben Adida is the Executive Director of VotingWorks, the only non-profit election equipment vendor in the United States, whose mission is to rebuild citizen trust in US elections with secure and affordable voting equipment. Prior to VotingWorks, Ben led product engineering teams in security-centric fields: K-12 single sign-on at Clever, payments at Square, and identity at Mozilla. Ben was previously on the faculty at Harvard Medical School, where he researched security and privacy of personal health data. Ben holds a PhD from MIT's Cryptography and Information Security group, where his work focused on election security.
Mike Judd, CDC COVID-19 Exposure Notification Initiative
Mike is the Agency Lead for the CDC’s COVID-19 Exposure Notification Initiative. He is passionate about the design and implementation of data products and initiatives at the interface of public health and computer science. Prior to his involvement in the agency’s pandemic response, he focused on epidemiologic analysis, data engineering, data science, and IT modernization through his roles as a Foodborne Disease Surveillance Epidemiologist, Informatics Unit Lead, and Surveillance and Informatics Advisor. He is a graduate of the Emory University Rollins School of Public Health.
Ali Lange, Google
Ali Lange is a Public Policy Manager based in Google's California office. Her expertise is in data governance and privacy in product development. Ali previously worked at the Center for Democracy & Technology and in the United States Senate.
Tiffany C. Li, Boston University School of Law
Tiffany C. Li is a technology law professor at Boston University School of Law and a Fellow at Yale Law School’s Information Society Project. Li is an expert on privacy, artificial intelligence, and technology platform governance. She is regularly featured as a legal commentator in national and global news outlets, and her writing has appeared in popular publications including the Washington Post, the Atlantic, NBC News, and Slate. She also writes a recurring column on technology and privacy for MSNBC Daily. Li has a J.D. from Georgetown Law and a B.A. from UCLA.
Marcel Salathé, EPFL
Marcel Salathé is a digital epidemiologist working at the interface of health and computer science. He is an Associate Professor at EPFL, where he heads the Digital Epidemiology Lab at the Campus Biotech in Geneva. In 2016, he founded the EPFL Extension School, whose mission is to provide high-quality online education in digital technology, and where he is the Academic Director. He is the president of the steering committee of the national research program on COVID-19, a member of the Swiss National COVID-19 Science Taskforce, where he led the "Digital Epidemiology" expert group, and a member of the DP-3T group, whose work influenced the Apple & Google Exposure Notification API that powers decentralized digital contact tracing apps around the world.
Marcel spent a few years in the tech industry as a web application developer. He was part of the renowned Y Combinator startup accelerator's Winter 2014 class.
9:05 am–9:20 am
Break
9:20 am–10:50 am
Vulnerabilities and Attacks
Session Chair: Andre Gregio, Federal University of Parana, Brazil
The State of 0-Day in-the-Wild Exploitation
Maddie Stone, Google Project Zero
0-day exploitation occurs when an attacker abuses a vulnerability that the defenders don't yet know about. This makes it very hard to protect against 0-day exploits and also makes 0-day vulnerabilities highly valuable. So how do we protect against the exploitation of unknown vulnerabilities? It starts with understanding everything we can about 0-day exploits.
Each time a 0-day exploit is detected in-the-wild, it's the failure case for attackers. Therefore, as defenders, we should use these "failures" as an opportunity to learn as much as we can about the vulnerabilities targeted, the exploitation methods used, the techniques for discovering the vulnerabilities, and more. As a security and technical community, we can then use this data to prioritize what vulnerability research to undertake, identify gaps in our detection methods, select the exploit mitigations that will give the greatest return, and overall, figure out how to make it harder for attackers to exploit 0-days to harm users.
This talk synthesizes what we can learn from the 0-days that were exploited in-the-wild in 2020. For each of these 0-days, Project Zero performed a root cause analysis, which details the vulnerability exploited and the exploit methodology used. From these facts, we then developed ideas for better detections and systemic fixes, hypothesized on what methods the actors used to discover the vulnerability, and performed variant analysis. In this talk, we'll share what we've learned from these exploits used in 2020 and how to apply it in 2021 to make 0-day exploitation that much harder.
Maddie Stone, Google Project Zero
Maddie Stone (@maddiestone) is a Security Researcher on Google Project Zero where she focuses on 0-days used in-the-wild. Previously, she was a reverse engineer and team lead on the Android Security team, focusing predominantly on pre-installed and off-Google Play malware. Maddie also spent many years deep in the circuitry and firmware of embedded devices. Maddie has previously spoken at conferences including Black Hat USA, REcon, OffensiveCon, KasperskySAS, and others. She holds a Bachelor of Science with a double major in Computer Science and Russian, and a Master of Science in Computer Science from Johns Hopkins University.
The Practical Divide between Adversarial ML Research and Security Practice: A Red Team Perspective
Hyrum Anderson, Microsoft
Researchers in adversarial machine learning (ML)—which studies the security of ML in the presence of an attacker—have written more than 2,300 papers in just the last 6 years. However, in a recent survey, almost 90% of corporations and government entities reported that they are not equipped to protect their ML models. The fact remains that there are significant gaps between academic advancements and industry needs and practices. First, aside from full-scale security products, ML models get scant security consideration. Second, most organizations who are concerned with ML security do not know how to begin. Third, the tools and techniques produced in academic research often don't translate to business needs.
To demonstrate these gaps, I will review some sobering lessons learned during a Machine Learning Red Team engagement at Microsoft on systems that rely on ML. They include the reality that "traditional" security measures like access control and permissions remain the most significant element for safeguarding ML systems. Furthermore, especially outside of security applications, awareness about ML vulnerabilities remains low and security posture is nearly zero.
Hyrum Anderson, Microsoft
Hyrum Anderson is Principal Architect in the Azure Trustworthy Machine Learning group at Microsoft. Prior to joining Microsoft, he was the Chief Scientist at Endgame, and conducted research in information security and situational awareness at FireEye, Mandiant, Sandia National Laboratories, and MIT Lincoln Laboratory. He received his Ph.D. in Electrical Engineering (signal processing + machine learning) from the University of Washington and BS+MS degrees from Brigham Young University. He is co-founder and co-chair for the Conference on Applied Machine Learning in Information Security and has spoken at numerous signal processing, machine learning, and security conferences, including RSA, DEFCon, and BlackHat.
Breaking Trust – Shades of Crisis Across an Insecure Software Supply Chain
Trey Herr, Atlantic Council
Society has a software problem. Since Ada Lovelace wrote the first computer program for an early mechanical device in the 1840s, software has spread to every corner of human experience. With that software come security flaws and a long tail of updates from vendors and developers. Unlike a physical system that is little modified once it has left the factory, software is subject to continual revision through updates and patches. Software supply chain security remains an underappreciated domain of national security policymaking. This talk explores 115 software supply chain attacks and vulnerability disclosures from the past decade to sum up where we are and how far we still have to go. Software supply chain attacks are popular and impactful, and they are used to great effect by states, especially China and Russia. The implications for the technology industry and cybersecurity policymaking community are a crisis in waiting. The solution is not panic, nor is it a moonshot, but rather a renewed focus on software supply chain security practices, new investment from public and private sectors, and revisions to public policy that emphasize raising the lowest common denominator of security behavior while countering the most impactful attacks.
Trey Herr, Atlantic Council
Dr. Trey Herr is the Director of the Cyber Statecraft Initiative at the Atlantic Council. His team works on the role of the technology industry in geopolitics, cyber conflict, the security of the internet, and cyber safety. Previously, he was a Senior Security Strategist with Microsoft handling cloud computing and supply chain security policy, as well as a fellow with the Belfer Cybersecurity Project at Harvard Kennedy School and a non-resident fellow with the Hoover Institution. He holds a Ph.D. in Political Science and a BS in Musical Theatre and Political Science.
10:50 am–11:20 am
Break
11:20 am–12:20 pm
Surviving and Thriving
Session Chair: Diane Hosfelt, Mozilla
"I Thought I Was Being Strong with a Complicated Person": The Tales of Intimate Gender-Based Online Abuse in the Global South
Sofía Celi, Cloudflare
This is a tale of a personal experience, but also one that is well known to people identifying as women who have suffered from intimate gender-based digital violence. It is a personal tale because this research started from a personal experience of digital tools being used in the hands of an ex-partner to censor, harass, and police behavior. Fueled by this, we launched research on how digital tools are used to enhance intimate gender-based abuse in the Global South.
It is often thought that intimate gender-based online abuse is a problem of the Global North, a problem that only certain countries with high access to technology have, or that it is only carried out with stalkerware or some other kind of malware. But these presumptions discount the experiences of the Global South and underestimate the tales from those regions. In this talk, we provide a deep dive into the methods abusers in the Global South (specifically, in Latin America) use to amplify gender-based abuse, providing clear definitions and accounts of them. We will also explore the societal factors that support the use of online tools to enhance abuse, and emphasize the little or no help offered from a legal or policy perspective in those regions.
This talk will also highlight the importance of looking at experiences from different places around the world, as the solutions that we give are, oftentimes, only focused on a Global North perspective. In the specific case of intimate gender-based online abuse, the tales and experiences that come from the Global South show us that there is still work to be done to support them (from a technical perspective) and to further understand them.
Sofía Celi, Cloudflare
Sofía Celi is a cryptography researcher and implementer at Cloudflare. She also currently leads the design and development of version 4 of the Off-the-Record (OTR) messaging protocol. She is very involved in helping end gender-based violence that uses digital tools in the Global South. She is interested in programming languages, compilers, usable security and privacy by design, post-quantum cryptography, and real-world applications of cryptography.
Da Da: What Shark Conservation Teaches Us about Emotionality and Effective Communication Strategies for Security and Privacy
Melanie Ensign, Discernible Inc.
Governing strategies that induce or amplify crisis are common in the public and private sectors in order to relieve public anxiety and boost confidence in the wake of an incident, but they do little to address the underlying cause. This talk will discuss lessons learned from the shark science and conservation community that can be adopted by security and privacy professionals to influence public discourse without relying on emotional manipulation or fear.
Melanie Ensign, Discernible Inc.
Melanie Ensign is the founder and CEO of Discernible Inc., a specialized security and privacy communications consultancy. After managing security and privacy communications for some of the world's most notable brands including Facebook, Uber, and AT&T, she now helps even more organizations adopt effective communication strategies that improve operations and help reduce risk. She counsels executives and technical teams alike on how to cut through internal politics, dysfunctional inertia, and meaningless metrics. Melanie also leads the press department for DEF CON and has logged hundreds of hours diving with sharks all around the world. She holds a degree in communications from the University of Illinois-Chicago and a Master of Science from Boston University.
Wednesday, February 3, 2021
7:05 am–9:05 am
Emerging Topics
Session Chair: Christian Folini, Swiss Cyber Storm
Detecting Fake 4G LTE Base Stations in Real Time
Cooper Quintin, Electronic Frontier Foundation
4G-based IMSI catchers such as the Hailstorm are becoming more popular with governments and law enforcement around the world, as well as with spies and even criminals. Until now, IMSI catcher detection has focused on 2G IMSI catchers such as the Stingray, which are quickly falling out of favor. In this talk, we will demonstrate a brand-new software project to detect fake 4G base stations, built with open source software and relatively cheap hardware. We will reveal what we have found so far using our methods. And finally, we will present a plan to dramatically limit the capabilities of IMSI catchers (with the long-term goal of making them useless once and for all).
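To give a feel for what such detection can look like, here is a hedged sketch of heuristic scoring over observed LTE cells; the fields, thresholds, and weights are invented for illustration and are not the project's actual rules:

```python
# A cell that appears suddenly, broadcasts an implausibly strong signal, or
# claims an operator code not used locally accumulates suspicion points.
from dataclasses import dataclass

KNOWN_OPERATORS = {"310-410", "311-480"}   # example MCC-MNC codes for the area

@dataclass
class ObservedCell:
    cell_id: int
    mcc_mnc: str          # operator the tower claims to belong to
    signal_dbm: float
    seen_before: bool     # present in our long-term survey database?

def suspicion_score(cell: ObservedCell) -> int:
    score = 0
    if not cell.seen_before:
        score += 1        # brand-new tower in a well-surveyed area
    if cell.signal_dbm > -50:
        score += 2        # implausibly strong signal for a macro cell
    if cell.mcc_mnc not in KNOWN_OPERATORS:
        score += 2        # operator code not deployed locally
    return score

cell = ObservedCell(cell_id=0xBEEF, mcc_mnc="001-01",
                    signal_dbm=-40, seen_before=False)
print("suspicion:", suspicion_score(cell))   # high score -> flag for follow-up
```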
Cooper Quintin, Electronic Frontier Foundation
Cooper is a Senior Security Researcher with the EFF Threat Lab. He has worked on projects such as Privacy Badger and Canary Watch. With his colleagues at Threat Lab, he has helped discover state-sponsored malware and nation-state actors such as Dark Caracal and Operation Manul. He has also performed security training for activists, non-profit workers, and ordinary folks around the world. He was also a co-founder of the Hackbloc hacktivist collective and published several issues of the DIY hacker zine "Hack This Zine." In his spare time, he enjoys playing music, playing with his kid, and imagining a better future.
Designing Verifiable Health Solutions For Global Pandemic
Mark Funk, Obscure Group
Hindsight may be 20/20, but uncertainty ran rampant when COVID-19 quickly spread throughout the US last year. Mark is going to share their work from this early period, when they designed an abstract solution for verifiable attestation, lab test results, and vaccination history to address public safety and privacy, in a hypothetical world where lab tests and vaccines would be available and freely accessible for all. While many technical solutions have emerged since then, we are still far from a contained epidemic and a vaccinated population in the US. Mark hopes their story will shed light on the complex considerations in this problem space, so others may be better prepared to evaluate the cost of introducing new technology versus continuing on our current path.
Mark Funk, Obscure Group
Mark Funk (formerly of Google, Square, and others) is a product, security, and privacy software engineer who has been designing, building, and reviewing solutions for around 15 years. While they currently run a small security and privacy consultancy called Obscure Group, they also provide pro bono services to nonprofit organizations in the health and humanitarian sector. Their experience working directly with consumer hardware, social networking, fintech, e-commerce, and other product organizations allows them to bring their technical specialization and wide breadth of expertise to different verticals.
Contact Tracing Apps: Engineering Privacy in Quicksand
Carmela Troncoso, EPFL
When talking about contact tracing mobile apps, most of the discussion centers on the protocol and its properties. While the protocol is indeed central to the security and privacy of the system, once it is integrated into an app, and into a larger ecosystem including servers and health services, more privacy mechanisms are needed to ensure that the privacy properties hold end to end. When performing this integration, however, the privacy engineer has little control over many of the pieces that are key to operation. In this talk, I will describe how this lack of control raises hurdles for privacy, and how we overcame these obstacles in the case of contact tracing apps.
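For context on the protocol layer the talk starts from, here is a simplified sketch of DP-3T-style ephemeral ID derivation (the deployed protocol's exact key derivation differs): a secret day key ratchets forward by hashing, and broadcast IDs are derived from it, so a positive user can publish one day's key without linking any other day's broadcasts.

```python
# Simplified DP-3T-flavored derivation: hash-ratchet the day key, then derive
# short-lived broadcast IDs from it with a keyed PRF.
import hashlib
import hmac

def next_day_key(day_key: bytes) -> bytes:
    return hashlib.sha256(day_key).digest()   # one-way ratchet between days

def ephemeral_ids(day_key: bytes, n: int = 4) -> list[bytes]:
    broadcast_key = hmac.new(day_key, b"broadcast key", hashlib.sha256).digest()
    return [
        hmac.new(broadcast_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
        for i in range(n)
    ]

sk = hashlib.sha256(b"device secret").digest()
for eph in ephemeral_ids(sk):
    print(eph.hex())              # rotated over the day via Bluetooth beacons
sk = next_day_key(sk)             # old keys are not recoverable from new ones
```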
Carmela Troncoso, EPFL
Carmela Troncoso is an assistant professor at EPFL (Switzerland), where she heads the SPRING Lab. Her research focuses on security and privacy. Carmela holds a Master's degree in Telecommunication Engineering from the University of Vigo (2006) and a Ph.D. in Engineering from KU Leuven (2011). Before arriving at EPFL, she was a faculty member at the IMDEA Software Institute in Spain for two years and, before that, the Security and Privacy Technical Lead at Gradiant for four years, working closely with industry to deliver secure and privacy-friendly solutions to the market. Her thesis, Design and Analysis Methods for Privacy Technologies, received the European Research Consortium for Informatics and Mathematics Security and Trust Management Best Ph.D. Thesis Award, and her work on privacy engineering received the CNIL-INRIA Privacy Protection Award in 2017. In 2020, she led the DP-3T effort towards designing and deploying privacy-preserving contact tracing applications. The design principles from DP-3T are the basis of dozens of contact tracing apps deployed around the world and led Fortune Magazine to include Carmela in its 40 Under 40 list for 2020.
Building E2EE and User Identity
Merry Ember Mou, Zoom Video Communications
In a remote-first world, end-to-end encrypted (E2EE) communications will become more common. In May 2020, the Zoom team published an incrementally deployable proposal for E2EE in its video conferencing product that describes not only moving user key generation to clients but also building a strongly trusted and user-friendly concept of long-term identity. After all, E2EE is only as secure as the ends: if Alice thinks she is talking to her coworkers, but instead her competitors are participating in the meeting, encryption is not sufficient to protect her. This talk will highlight some of the objectives and challenges of our multi-phased, multi-pronged approach to E2EE, with strong identity confirmation protocols and minimal server trust.
Integrating an E2EE implementation into an existing system like Zoom, which supports hundreds of millions of meeting participants every day, has required particular consideration of existing architectural constraints, existing user trust models, and user expectations in the UI/UX. Designing with these priorities becomes even more significant in subsequent phases of the E2EE plan, where we aim to establish a consistent and auditable identity designed to tie each user to their Zoom account/organization and their many devices' long-term keys. These user identities will be enforced by several mechanisms to minimize the reliance on server-side security, with the eventual goal of making server compromise of user identities detectable by external auditors. With each phase, we improve the properties of a user's displayed identity in a meeting and aim to make verifying the security of a Zoom meeting as intuitive for the user as possible.
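As a rough sketch of the binding between long-term identity and per-meeting keys (this is not Zoom's actual protocol, and the context string is invented), a device's long-term signing key can vouch for a fresh meeting key, so anyone who already trusts the long-term key can verify the meeting key. The sketch assumes the `cryptography` package is installed.

```python
# A long-term Ed25519 identity key signs a fresh per-meeting public key;
# participants who trust the identity key can detect a swapped meeting key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

long_term = Ed25519PrivateKey.generate()     # device's long-term identity key
meeting_key = Ed25519PrivateKey.generate()   # fresh key for this meeting

meeting_pub = meeting_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
binding = b"meeting:12345|" + meeting_pub    # hypothetical context string
signature = long_term.sign(binding)

# Verification raises InvalidSignature if the binding was tampered with.
long_term.public_key().verify(signature, binding)
print("meeting key verified against long-term identity")
```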
Merry Ember Mou, Zoom Video Communications
Merry Ember Mou is a software engineer at Zoom working on end-to-end encryption. Their previous experience includes working on Keybase, building backend systems at a network security startup, and developing online community platform research. They hold a master's degree in computer science from MIT.
9:05 am–9:20 am
Break
9:20 am–10:20 am
Security Hardening
Session Chair: Bob Lord, Democratic National Committee
Quantifying Memory Unsafety and Reactions to It
Alex Gaynor, Fish in a Barrel
The fact that C and C++ are not memory safe, leading to vulnerability classes such as use-after-free and buffer overflow, is not new. However, these languages remain in exceptionally wide use, even for new projects. For several years, Fish in a Barrel has been attempting to quantify how common memory-unsafety-induced vulnerabilities are in major projects, and researching what tactics are effective at convincing developers to reconsider C and C++.
This talk presents our results: we show the empirical data which leads us to the conclusion that C and C++ are not tenable for modern secure development, including statistics across a large swath of projects. We also present what we've learned about how developers respond to this fact, in the frame of the Five Stages of Grief.
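A hypothetical sketch of the kind of tallying behind such statistics (the CVE descriptions and keyword list below are invented for illustration, not the speaker's dataset or methodology):

```python
# Classify vulnerability descriptions by memory-unsafety keywords and report
# what share of a project's CVEs they account for.
import re

MEMORY_UNSAFETY = re.compile(
    r"use.after.free|buffer overflow|out.of.bounds|heap corruption|double free",
    re.IGNORECASE,
)

cves = [
    "Heap buffer overflow in PNG decoder",
    "Use-after-free in DOM event handling",
    "Integer overflow leads to out-of-bounds write",
    "Improper certificate validation",
]

unsafe = [c for c in cves if MEMORY_UNSAFETY.search(c)]
print(f"{len(unsafe)}/{len(cves)} CVEs "
      f"({100 * len(unsafe) / len(cves):.0f}%) are memory-unsafety bugs")
```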
Alex Gaynor, Fish in a Barrel
Alex is a software security engineer. He's a founder and principal at Fish in a Barrel, working on systemic solutions to classes of vulnerabilities. He's previously been Chief Information Security Officer at Alloy and an engineer at Mozilla and the United States Digital Service. Alex has a long history of contribution in open source, from building a JIT'd Ruby VM to serving on the Board of Directors of the Python Software Foundation. Alex lives in Washington, D.C.
The Limits of Sandboxing and Next Steps
Chris Palmer, Google Chrome Security
Privilege separation and reduction ("sandboxing") has significantly improved software security, and in many applications is a baseline requirement for safe design. (In fact, there are still many applications that can and should adopt sandboxing.)
Although necessary, sandboxing is not sufficient by itself. The designs and implementations of real-world operating systems put a ceiling on the effectiveness and applicability of sandboxing. From years of experience shipping Chromium, we have learned that (1) Chromium is at or near the limit of how much safety it can practically provide with privilege separation and reduction; and (2) we still need to provide greater resilience.
Therefore, we must find and develop additional security mechanisms. Our primary approach is now working toward increased memory safety. Where sandboxing limits the value attackers gain from exploiting vulnerabilities, memory-safe(r) code can eliminate vulnerabilities altogether or make it infeasible to use them in an exploit chain.
This talk is about lessons learned in the real world. I'll discuss the nature and particulars of the OS limitations we face, what security gap they leave us with, and what we are doing to make Chromium's large codebase less memory-unsafe. I'll highlight some lessons we've learned that security engineers working on other projects can hopefully make use of.
Chris Palmer, Google Chrome Security
I work at Google as a software security engineer on Chrome, where I work on hardening Chromium's underpinnings and securing the web platform runtime. (I was previously on the Secure UX sub-team, and before that, I worked on Web PKI.) I used to be on the Android team at Google. Previously, I was the Technology Director at the Electronic Frontier Foundation, a Principal Security Consultant at iSEC Partners (now NCC Group), and a web developer at a couple of small shops.
10:20 am–10:50 am
Break
10:50 am–12:20 pm
Abusing People and Culture
Session Chair: Andrea Limbago, Interos
Online Rumors, Misinformation and Disinformation: The Perfect Storm of COVID-19 and Election2020
Kate Starbird, University of Washington
The past year has been a difficult one. A pandemic has taken millions of lives and disrupted "normal" routines across the globe. In the United States, we have experienced an unprecedented political situation, with a sitting President refusing to concede after losing an election. Each of these events has been accompanied by uncertainty and anxiety, as well as massive amounts of false and misleading information. In this talk, I will explore some of the mechanics of online misinformation, explaining why we are particularly vulnerable right now—due in part to the nature of these crises, and in part to the current structure of our information systems. Using examples from both COVID-19 and Election2020, I will explain how we are living through a "perfect storm" for both misinformation and disinformation. And I will describe how disinformation, in particular, can be an existential threat to democratic societies. After laying out the problems, I aim to end on a more hopeful note, with a call to action for researchers and industry professionals to help "chip away" at this critical societal issue.
Kate Starbird, University of Washington
Kate Starbird is an Associate Professor at the Department of Human-Centered Design & Engineering (HCDE) at the University of Washington (UW). Dr. Starbird's research sits in the emerging field of crisis informatics—the study of how social media and other communication technologies are used during crisis events. Currently, her work focuses on the production and spread of online rumors, misinformation, and disinformation in the context of crises and other disruptive events. Starbird is a co-founder of the UW Center for an Informed Public.
Computational Methods to Understand and Mitigate Online Aggression
Gianluca Stringhini, Boston University
Online hate on social media has become a serious problem. Aggression is often not the act of single individuals, but rather the result of coordinated activity between like-minded people who gather in polarized online communities, identify suitable targets, and carry out their attacks. Studying this phenomenon is hard, partly because it is not a purely technical problem, and partly because this malicious activity unfolds across multiple online services, while the research community currently lacks effective tools to keep track of information that spans multiple platforms.
In this talk, I will present our research on the modus operandi of attackers who orchestrate and execute coordinated aggression attacks on social media. I will present two case studies, the results of analyzing billions of social media posts with a mixture of quantitative and qualitative methods. First, I will describe coordinated hate attacks against the authors of YouTube videos. Second, I will analyze Zoom bombing attacks, in which miscreants organize to disrupt the online meetings that have become central to our professional and personal lives. As part of this study, we find that most Zoom bombing attacks are called in by insiders who invite attackers to disrupt their own meetings (for example, online lectures). Finally, I will discuss challenges in developing countermeasures against these phenomena, ranging from the need to protect against attacks in the presence of malicious insiders to the potential unintended consequences of suspending offending accounts on social media, which might push users to more extreme and unmoderated online communities.
Gianluca Stringhini, Boston University
Gianluca Stringhini is an assistant professor in the ECE Department at Boston University. Gianluca works in the area of data-driven security, analyzing large datasets to better understand complex malicious online operations and developing mitigation techniques to fight them. He has received multiple awards, including an NSF CAREER Award in 2020 and a Facebook Secure the Internet Grant in 2018, and his research has won multiple Best Paper Awards, including one at IMC 2018. He has published in top security conferences such as CCS, NDSS, and USENIX Security, as well as top measurement and Web conferences such as IMC, ICWSM, and WWW.
Does Your Threat Model Consider Country and Culture? A Case Study of Brazilian Internet Banking Security to Show That It Should!
Marcus Botacin, Federal University of Paraná (UFPR-Brazil)
Every attack has a story. Uncovering these stories is essential to identify the gaps that allowed the attack to occur and the countermeasures to prevent it from happening again. Over time, many security players have tried to model these gaps and countermeasures in their threat models, but all these attempts share the same drawback: they generalize everything! However, not every threat is global. The threats I used to find in Brazil were distinct from those reported in the global news, and their prevalence differed significantly from what was described in the literature. What was going on? The problem is that the Brazilian scenario has characteristics that make it unique (e.g., the way Internet banking operates, and the way Internet access is provided and charged), and these factors significantly influence the way threats are developed and how users are targeted. For instance, even before Web-based systems, attackers exploited the early computerization of the Brazilian bank system to deploy phishing applications mimicking the banks' operations. The move to the Web generated a profusion of Java-based malware never seen elsewhere, as the Brazilian bank systems were Java-based. Recently, with the emergence of mobile devices, prepaid data plans with free WhatsApp access motivated the deployment of bank-powered, WhatsApp-based banking transactions. Are we prepared to handle these scenarios, or are we overlooking them? Furthermore, these likely-overlooked scenarios might not be limited to Brazil; similar attacks might already be happening elsewhere. Therefore, I invite you to come with me and take a look at a dataset of more than 40,000 unique malware samples collected in Brazil over 7 years to understand what we missed by not looking at regionalized threats. This talk is a call to action for more personalized threat models and security evaluations.
Marcus Botacin, Federal University of Paraná
Marcus is a Ph.D. candidate at the Federal University of Paraná (UFPR), Brazil. He is also a Computer Engineer and holds a Master's in Computer Science from the University of Campinas (UNICAMP), Brazil. His main research interests are malware analysis and reverse engineering, and he has years of experience in sandbox development.