Enigma 2023 Conference Program

All sessions will be held in Santa Clara Ballroom unless otherwise noted.

Attendee Files 
Enigma 2023 Attendee List (PDF)

Tuesday, January 24, 2023

7:30 am–8:45 am

Continental Breakfast

Mezzanine East/West

8:45 am–9:00 am

Opening Remarks, Day 1

Program Co-Chairs: Melanie Ensign, Discernible, and Joseph Lorenzo Hall, Internet Society

9:00 am–10:00 am

Panel

The Emperor's New Clothes: Location Data and the Promise of Anonymization

Tuesday, 9:00 am–10:00 am

Moderator: Aaron Alva, Federal Trade Commission
Panelists: Brandi M. Bennett; Jennifer King, Stanford University Institute for Human-Centered Artificial Intelligence; Beryl Lipton, Electronic Frontier Foundation

For years, organizations have banked on the promise of "anonymization" techniques and technologies to reduce the risk and liability of collecting or processing geolocation data. But for many technologists, the math just isn't mathing. This session will explore the regulatory and technical advancements driving location data into the privacy spotlight.

Aaron Alva, Federal Trade Commission

Aaron Alva is a technologist and tech advisor at the Federal Trade Commission. At the FTC, he advises the Director of the Bureau of Consumer Protection and case teams on a variety of technology issues, including data minimization and deletion, multifactor authentication, health privacy, and ad tech. Before the FTC, Aaron conducted research on topics ranging from cloud forensics to digital evidence admissibility to surgical robotics security. He is also a recipient of the NSF CyberCorps Scholarship for Service for his joint Master's in Information Management and J.D. at the University of Washington.

Brandi Bennett

Brandi is an experienced technology and data privacy attorney who has worked for everything from baby startups to large multinational media and entertainment companies. Her privacy experience covers a wide variety of laws and jurisdictions, from the EU to APAC to LATAM and North America. She was the first in-house counsel at a geolocation-based advertising network and analytics company, helping build their privacy program and working with the NAI, DAA, and IAB codes and the EU Data Protection Directive. She helped a dating app (the only dating app she has a profile on, thank you very much) through GDPR, launched a Chinese social app globally (not TikTok) that ended up failing to launch, and took Endeavor and UFC through CCPA. She has also been the lead privacy counsel for Legends, which operates stadiums, e-commerce, and brick-and-mortar retail for major sports and entertainment franchises globally. Brandi is licensed in California and is a graduate of the University of Denver Sturm College of Law (2008).

Jennifer King, Stanford University Institute for Human-Centered Artificial Intelligence

Dr. Jennifer King is the Privacy and Data Policy Fellow at the Stanford University Institute for Human-Centered Artificial Intelligence. An information scientist by training, Dr. King is a recognized expert and scholar in information privacy. Sitting at the intersection of human-computer interaction, law, and the social sciences, her research examines the public’s understanding and expectations of online privacy as well as the policy implications of emerging technologies. Most recently, her research explored alternatives to notice and consent (with the World Economic Forum), the impact of California’s new privacy laws, and dark patterns. Her past work includes projects focusing on social media, genetic privacy, mobile application platforms, the Internet of Things (IoT), and digital surveillance. Her scholarship has been recognized for its impact on policymaking by the Future of Privacy Forum, and she has been an invited speaker at several Federal Trade Commission workshops. She has been featured in numerous publications and outlets, including The New York Times, the Washington Post, the Los Angeles Times, Wired, Recode, National Public Radio, CNBC, Bloomberg, CNET, Vox, Consumer Reports, NBC News, and MIT Technology Review.

Dr. King completed her doctorate in Information Management and Systems at the University of California, Berkeley School of Information. Prior to joining HAI, Dr. King was the Director of Consumer Privacy at the Center for Internet and Society at Stanford Law School from 2018 to 2020. Before coming to Stanford, she was a co-director of the Center for Technology, Society, and Policy, a graduate student-led research center at UC Berkeley, and was a privacy researcher at the Samuelson Law, Technology, and Public Policy Clinic at Berkeley Law. She was a member of the California State Advisory Board on Mobile Privacy Policies and the California State RFID Advisory Board. She also received her Master’s in Information Management and Systems from Berkeley’s School of Information, and her undergraduate degree in Political Science and Sociology from the University of California, Irvine. Prior to entering academia, she worked in security and in product management for several Internet companies, most notably Yahoo!.

Beryl Lipton, Electronic Frontier Foundation

Beryl Lipton, Investigative Researcher, focuses her work on government transparency, law enforcement surveillance technology, and other uses of technology by government actors. She has extensive experience using Freedom of Information laws and large-scale public records campaigns in her research.

At EFF, Beryl supports the Atlas of Surveillance, The Foilies, and The Catalog of Carceral Surveillance, among other projects. She enjoys teaching others about the strengths and limitations of public records laws and discussing the potential and real harms of the surveillance state.

Before joining EFF in 2020, she spent seven years as a projects editor at MuckRock, where she focused on prison privatization and other public-private partnerships. She is a board member of Spare Change News, the Boston-area street newspaper, and contributes to Gannett New York, where she has spearheaded the collection and release of police misconduct records throughout New York state.

10:00 am–10:30 am

Break with Refreshments

Mezzanine East/West

10:30 am–12:00 pm

Mixed Signals: There Can't Be Only One Threat Model

Session Chair: Stephan Somogyi, Google and Columbia Journalism School

How Geopolitical Change Causes People to Become (More) Vulnerable

Tuesday, 10:30 am–11:00 am

Lucy Simko, George Washington University

Computer security and privacy are critical for vulnerable and marginalized populations, who might face more determined adversaries or higher consequences for security and privacy breaches. Vulnerability can be caused by change: life-changing events, including environmental or manmade crises, can affect anyone at various stages of their lives. It is therefore critically important for the computer security and privacy community to understand the interactions between change and computer security and privacy. In this talk, I will explore this relationship through four populations that experienced change: (1) refugees who moved to the United States; (2) Sudanese activists in the 2018–2019 Sudanese revolution; (3) those affected by the COVID-19 pandemic; and (4) those who experienced a hurricane in the mainland United States in 2021. Based on my research with these populations, I describe three major themes that characterize the relationship between change and computer security and privacy: (1) change creates different elements of one's threat model (actors, threats, assets) as well as different technical needs; (2) during a period of change, people may reprioritize computer security and privacy in response to other emergent needs; and (3) when technology design is misaligned with the needs and uses of marginalized populations, those populations have to work harder to maintain security and privacy, exacerbating existing systemic inequalities during times of change. Based on this understanding, I recommend how we can design for change: by centering the needs of those affected by change and marginalization, we can reduce vulnerability for all.

Lucy Simko, George Washington University

Dr. Lucy Simko is a research scientist at the George Washington University. Her work focuses on the security and privacy-related needs and practices of populations during times of change. Her research has been published at IEEE Security & Privacy, PETS, CHI, and IDC, and she is the recipient of a National Science Foundation Graduate Research Fellowship. She was recently a remote intern at the Max Planck Institute for Security and Privacy, where she investigated the use of technology during hurricanes. Lucy recently earned her PhD in Computer Science from the University of Washington’s Paul G. Allen School of Computer Science and Engineering.

Technology Abuse Clinics for Survivors of Intimate Partner Violence

Tuesday, 11:00 am–11:30 am

Lana Ramjit, Cornell Tech

Perpetrators of intimate partner violence (IPV) use digital technologies to cause lasting physical, financial, and psychological harm. Advocates and scholars have established the prevalence of technology abuse, but IPV agencies often lack the capacity, training, and resources needed to recognize and respond to it.

The Clinic to End Tech Abuse (CETA) and the Technology Enabled Coercive Control Initiative (TECCI) partner with IPV agencies to offer free consultative services to survivors of IPV who are experiencing tech abuse. Collectively, they have served nearly 700 survivors in the New York City and Seattle areas by identifying points of compromise on their devices and developing technology-specific safety plans. However, both clinics have struggled to keep pace with demand for their services.

In this talk, we present a vision for how a network of localized, community-based technology abuse clinics might be able to meet the challenges of scale. We also share our new toolkit, a resource based on our collected experiences which we intend as a guide for other stakeholders interested in creating and sustaining their own technology abuse clinics.

Lana Ramjit, Cornell Tech

Lana Ramjit is a Computing Innovations Postdoctoral Fellow at Cornell Tech. She serves as the Director of Operations for the Clinic to End Tech Abuse (CETA), a program which provides free consultative services to survivors of intimate partner violence experiencing technology-facilitated abuse. At CETA, she oversees the partnerships, volunteers, policy initiatives, and resource development, in addition to providing direct advocacy support for survivors. She earned her PhD in Computer Science from UCLA in 2021 and prior to that, a BA in Computer Science from Columbia University.

Working on the Frontlines: Privacy and Security with Vulnerable Populations

Tuesday, 11:30 am–12:00 pm

Sarah Aoun

What does it mean to work on the frontlines of privacy and security? Working with vulnerable populations and high-risk individuals on privacy and security is incredibly challenging, and can have real implications for people's safety. More often than not, security and privacy researchers can lack context and understanding of what this looks like in practice. In this talk, we will explore the ways in which journalists and activists around the world are targeted for their work, and what it means to be a public interest technologist working to protect them.

Sarah Aoun

Sarah Aoun is a privacy and security researcher. For the past decade, her work has primarily focused on providing privacy and security support for vulnerable populations around the world. Most recently, she was the CTO and Vice President of Security at the Open Technology Fund, an organization that funds projects and technology focused on countering censorship and surveillance. She led the organization’s efforts to identify and track emerging threats and to advance the field of information security for journalists, human rights defenders, and high-risk individuals. Sarah has worked as an independent operational security and counter-surveillance trainer and has trained countless activists, journalists, and high-risk individuals. She has served as a cybersecurity consultant for dozens of US and international NGOs, helping them implement strategic and operational oversight for information security management, minimize risks, and establish stronger safety norms. She was a Ford-Mozilla Open Web Fellow (2017-2018), an Internet Freedom Fellow (2016-2017), and a technical advisor for the Internet Freedom Festival, the Human Rights Foundation, Global Journalist Security, and Reset.

12:00 pm–1:15 pm

Lunch

Terra Courtyard

1:15 pm–2:00 pm

Fireside Chat with Simson Garfinkel on Quantum Technologies

Tuesday, 1:15 pm–2:00 pm

Moderator: Joseph Lorenzo Hall, Internet Society
Speaker: Simson Garfinkel

"Quantum" is all the rage; from post-quantum cryptography, to strange looking computational apparatuses doing quantum computations, to visions of future quantum communications networks sending quantum information around the world. Where classical computation involves bits and bytes, quantum technologies operate on a different level, using both the precision and fuzziness of quantum mechanics. Despite the fanfare, we may be overly focusing on certain aspects of quantum technologies, and potentially missing serious implications that may arrive at our doorstep much sooner than any significant quantum computer or quantum communication network. Join Joseph Lorenzo Hall in a fireside chat with Simson Garfinkel, co-author with Chris Hoofnagle of the 2021 book, Law and Policy for the Quantum Age, from Cambridge University Press.

Simson Garfinkel

Simson Garfinkel has written extensively on computer security, privacy, and digital forensics. A Fellow of the AAAS, the ACM, and the IEEE, Garfinkel holds a Ph.D. in Computer Science from MIT and a Master of Science in Journalism from Columbia University.

2:00 pm–2:10 pm

Short Break

Mezzanine East/West

2:10 pm–3:40 pm

Advancing the Goal Posts

Session Chair: Kyle Tobener, Copado

The Edge of Developed Practice in Searching Encrypted Data

Tuesday, 2:10 pm–2:40 pm

Kenn White, MongoDB

After 20+ years of academic research in cryptography, it is now possible—and practical—to search fully encrypted data. But the demands of high-performance distributed systems present unique challenges unaddressed by most research models of encrypted search, particularly for running rich, expressive queries. This talk will break down the major milestones along the journey from academia to the modern-day developer ecosystem. We describe a technique called Structured Encryption which addresses the problem of encrypting structured data in such a way that it can be efficiently and privately queried. We will unpack common misconceptions in the security community around private search, touching briefly on several current approaches, with the balance of the talk focused on an implementation of Structured Encryption and lessons learned from recent work to open source and natively integrate the capability into one of the most widely used databases in the world.
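
To give a flavor of the core idea, here is a toy sketch of a searchable encrypted index (plain Python plus the widely used cryptography package). It is my illustration of the general technique, not MongoDB's Queryable Encryption: the client derives deterministic search tokens from a secret key, so the server can match a query token against the index without ever learning keywords or plaintexts.

```python
# Toy structured-encryption index: the server stores only HMAC'd keywords and
# ciphertexts, yet can answer keyword lookups given a client's search token.
import hmac, hashlib
from cryptography.fernet import Fernet

index_key = b"k" * 32              # client-side secret for search tokens (illustrative)
f = Fernet(Fernet.generate_key())  # client-side secret for record encryption

def token(keyword: str) -> bytes:
    # Deterministic token: lets the server match without learning the keyword
    return hmac.new(index_key, keyword.encode(), hashlib.sha256).digest()

# Client builds the encrypted index and store, then uploads both
records = {"r1": "alice likes curry", "r2": "bob likes biscuits"}
server_index, server_store = {}, {}
for rid, text in records.items():
    server_store[rid] = f.encrypt(text.encode())
    for word in text.split():
        server_index.setdefault(token(word), []).append(rid)

# Query: client sends token("likes"); server returns matching ciphertexts
hits = server_index.get(token("likes"), [])
print([f.decrypt(server_store[rid]).decode() for rid in hits])
```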

Kenn White, MongoDB

Kenneth White is a security engineer whose work focuses on networks and global systems. He is co-founder and Director of the Open Crypto Audit Project, where he led formal security reviews of TrueCrypt and OpenSSL, and is a member of the Black Hat Review Board. He currently leads applied encryption engineering in MongoDB's global product group. He has directed R&D and security operations in organizations ranging from startups to nonprofits to defense agencies to the Fortune 50. His work on applied signal analysis has been published in the Proceedings of the National Academy of Sciences, and he created software powering the largest clinical trial and cardiac safety research networks in the world. His work on network security and forensics has been cited by the Wall Street Journal, Reuters, Wired, and the BBC. He is @kennwhite on Twitter, where he tweets about trust, dogs, biscuits, and really good curries.

Adventures in Authentication and Authorization

Tuesday, 2:40 pm–3:10 pm

Ian Haken, Netflix

Zero-trust architectures for microservice ecosystems rely on strong authentication between services, but if you’re looking to implement authentication in your environment there’s an overwhelming number of options: OAuth, mutual TLS, JWTs, macaroons, biscuits, HTTP request signatures, and more. And once you’ve picked one, a robust zero-trust ecosystem needs an authorization system on top of it where there are even more options to choose from. In this presentation I’m going to describe our journey through implementing ubiquitous authentication and authorization in our microservice ecosystem: the requirements informing our technology choices, the pain points and hurdles we encountered along the way, and how we accomplished the somewhat surprising solution of using multiple technologies instead of just one.
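
As a concrete reference point for one of the options named above, here is a minimal sketch of an HMAC-signed bearer token (JWT-like, standard library only). It is deliberately simplified and is not Netflix's implementation; the secret, claim names, and service names are made up for illustration.

```python
# Minimal HMAC-signed bearer token: one building block among the many
# service-to-service authentication options the talk surveys.
import base64, hmac, hashlib, json, time

SECRET = b"shared-service-secret"  # illustrative only; real systems rotate keys

def b64(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(claims: dict) -> bytes:
    body = b64(json.dumps(claims).encode())
    mac = hmac.new(SECRET, body, hashlib.sha256).digest()
    return body + b"." + b64(mac)

def verify(token: bytes) -> dict:
    body, mac = token.rsplit(b".", 1)
    expected = b64(hmac.new(SECRET, body, hashlib.sha256).digest())
    if not hmac.compare_digest(mac, expected):  # constant-time comparison
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body + b"=" * (-len(body) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

t = sign({"sub": "service-a", "aud": "service-b", "exp": time.time() + 60})
print(verify(t))
```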

Ian Haken, Netflix

Ian Haken is a staff security software engineer at Netflix, where he has been working since 2016. His work includes development of tools and services that defend the Netflix platform, such as the implementation of authentication and authorization solutions, access control management platforms, and cryptographic services. He has also contributed to open source and community projects such as BetterTLS and SPIFFE. Prior to working in information security, he received his PhD in mathematics from the University of California, Berkeley in 2014, with a focus on computability theory and algorithmic information theory.

Navigating the Sandbox Buffet

Tuesday, 3:10 pm–3:40 pm

Maxime Serrano, Figma, Inc.

Buggy software is a fact of life, and preventing all security vulnerabilities is near impossible. Organizations often have no choice but to run potentially risky software, such as parsing, thumbnailing, or compression libraries, within their infrastructure to do even basic work. Such software is frequently implemented in memory-unsafe languages, by third-parties, and is seldom designed to handle hostile user input. It’s no surprise that security news is replete with bug names like “ImageTragick.”

Fortunately, sandboxing can be a powerful defense in these scenarios. In the past, sandbox technologies were often expensive, immature, and operationally fickle, so only well-resourced organizations could leverage them effectively at scale. But today, we live in a democratized era of the sandbox buffet, where there is a surfeit of different ways to virtualize, contain, and jail processes. With so many options, it’s easier than ever to pick the right combination of sandboxing techniques that provide the most appropriate set of tradeoffs for isolating your workloads.
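
To make one cheap item from the buffet concrete, the sketch below (Python, POSIX only) drops resource limits on a child process before it touches untrusted input. This is an illustration of the general technique, not Figma's setup; production sandboxes layer limits like these with seccomp filters, namespaces, or microVMs, and the specific limits here are invented.

```python
# Sketch: run a risky parsing step in a child process with capped resources.
import resource, subprocess, sys

def limit_child():
    # Runs in the child just before exec: cap CPU seconds, memory, open files
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
    resource.setrlimit(resource.RLIMIT_NOFILE, (16, 16))

proc = subprocess.run(
    [sys.executable, "-c", "print('parsing untrusted input...')"],
    preexec_fn=limit_child,   # POSIX only
    capture_output=True,
    timeout=10,               # wall-clock backstop in the parent
)
print(proc.stdout.decode())
```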

Maxime Serrano, Figma

Max is an engineer on the production security team at Figma, where he’s contributed to various initiatives improving Figma’s security posture, from securing the core infrastructure and product to mitigating spam on the platform. Previously, he worked on the production security team at Dropbox, in particular on the sandboxing system used in its previewing and content-processing pipelines.

3:40 pm–4:10 pm

Break with Refreshments

Mezzanine East/West

4:10 pm–5:40 pm

Skeletons in the Closet

Session Chair: Cameron Dixon, Cybersecurity & Infrastructure Security Agency

The Dirty Laundry of the Web PKI

Tuesday, 4:10 pm–4:40 pm

Emily Stark, Google

When you type “https://example.com” in your web browser, how do you know that you’re establishing a secure connection to the real example.com? This question is foundational to the web security model, and the answer rests in the web public key infrastructure (PKI). In the web PKI, trusted certificate authorities (CAs) issue certificates that authenticate websites. Sadly, the web PKI – which is so foundational to the communication, collaboration, commerce, and cat memes that we all use the web for every day – is shockingly antiquated, overcomplicated, and crufty. In this talk, I’ll describe some icky inner secrets of how the web PKI works, exposing the fragile security infrastructure on which the web is built. I’ll also outline some properties that we should try to achieve in a leaner next-generation server authentication model for the web.
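
For readers who want to poke at the artifacts in question, this minimal sketch (Python, assuming a recent version of the cryptography package) fetches and inspects the certificate a site presents during the TLS handshake; the issuer it reveals is the entry point into the CA infrastructure the talk dissects.

```python
# Sketch: pull and inspect the leaf certificate a server presents.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject:", cert.subject.rfc4514_string())
print("Issuer: ", cert.issuer.rfc4514_string())
print("Expires:", cert.not_valid_after)
```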

Emily Stark, Google

Emily is a software engineer and manager working on the Google Chrome web browser. She leads Chrome’s secure transport team, which provides a foundation of trustworthy, understandable encrypted and authenticated connections for the web. She works on HTTPS adoption, certificate verification, ecosystem improvements like Certificate Transparency, the TLS stack, and connection security UX (such as site identity in the address bar and certificate warnings). She also leads a cross-functional team of usable security experts who provide consulting and security reviews across Chrome. Emily holds a bachelor’s degree from Stanford University and a master’s degree from MIT, both in computer science.

When Malware Changed Its Mind: How "Split Personalities" Affect Malware Analysis and Detection

Tuesday, 4:40 pm–5:10 pm

Tudor Dumitras, University of Maryland, College Park

We present the first large-scale study of malware samples that change their behavior when executed on different hosts or at different times, using data from 5.6 million hosts around the world. Researchers and practitioners have been aware of this problem for over a decade, but prior to our work the behavior variability had not been measured at scale. We demonstrate how malware with such "split personalities" may confound current techniques for malware analysis and detection. More importantly, we illustrate the unique insights that the security industry can gain by monitoring malware behavior ethically and at scale, on real hosts.

Tudor Dumitras, University of Maryland, College Park

Tudor Dumitraș is an Associate Professor in the Electrical & Computer Engineering Department at the University of Maryland, College Park. His research focuses on data-driven security: he studies real-world adversaries empirically, he builds machine learning systems for detecting attacks and predicting security incidents, and he investigates the security of machine learning in adversarial environments. In his previous role at Symantec Research Labs, he built the Worldwide Intelligence Network Environment (WINE), a data analytics platform for security research. His work on the effectiveness of certificate revocations in the Web PKI was featured in the Research Highlights of the Communications of the ACM in 2018, and his measurement of the duration and prevalence of zero-day attacks received an Honorable Mention in the NSA competition for the Best Scientific Cybersecurity Paper of 2012. Dumitraș frequently collaborates with the security industry to help them incorporate cutting-edge research into their products. As the Program Committee Chair for RAID 2022, he helped introduce the Best Practical Paper award, which aims to raise awareness of research advances among security professionals.

Spotting the Differences: Quirks of Machine Learning (in) Security

Tuesday, 5:10 pm–5:40 pm

Fabrício Ceschin, Federal University of Paraná (UFPR), Brazil

Machine Learning (ML) has been widely applied to cybersecurity and is currently considered state-of-the-art for solving many open issues in the field. However, it is challenging to evaluate how good the resulting solutions are, since security poses challenges that do not appear in other areas, and approaches that work elsewhere may be infeasible for real-world security applications. For instance, a phishing detection model that does not consider a non-stationary distribution would not work, given that 68% of phishing emails blocked by Gmail differ from day to day. In this talk, I will discuss some of the challenges of applying ML to cybersecurity, including: (i) dataset problems, such as dataset definition, where choosing the right size is key to creating a representative model of the task being performed, and class imbalance, where the distribution between classes differs substantially; (ii) adversarial machine learning and concept drift/evolution, where attackers constantly develop adversarial samples to avoid detection, causing the concept in the data to change and rendering defense solutions obsolete due to the volatility of security data; and (iii) evaluation problems, such as delayed labels, where new data do not have ground-truth labels available right after collection, producing a gap between data collection, the labeling process, and model training/testing. My goal is to point out directions for future cybersecurity researchers and practitioners applying ML to their problems. Finally, for each challenge described, I will show how existing solutions may fail under certain circumstances, and propose possible fixes where appropriate.
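
The delayed-label problem in (iii) is easy to demonstrate with a toy. The sketch below (pure Python, synthetic data; my illustration rather than anything from the talk) evaluates a one-parameter classifier prequentially: it must predict on each sample immediately, but only gets to learn from labels that arrive 50 samples later, while the underlying concept drifts mid-stream.

```python
# Toy prequential (predict-then-learn) evaluation with delayed labels.
import random

random.seed(7)
DELAY = 50                      # ground-truth labels arrive 50 samples late
threshold, correct, total = 0.5, 0, 0
pending = []                    # (feature, label) pairs awaiting their labels

for t in range(1000):
    boundary = 0.3 if t < 500 else 0.7     # concept drifts halfway through
    x = random.random()
    y = 1 if x > boundary else 0           # hidden ground truth
    pred = 1 if x > threshold else 0       # model must predict immediately
    correct += (pred == y)
    total += 1
    pending.append((x, y))
    if len(pending) > DELAY:               # a label finally arrives: adapt
        xl, yl = pending.pop(0)
        if yl == 0 and xl > threshold:
            threshold += 0.01              # false positive: raise threshold
        elif yl == 1 and xl <= threshold:
            threshold -= 0.01              # false negative: lower threshold

print(f"prequential accuracy with {DELAY}-sample label delay: {correct / total:.2%}")
```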

Fabrício Ceschin, Federal University of Paraná (UFPR), Brazil

Fabrício is a Ph.D. student (Federal University of Paraná, Brazil), Master in Computer Science (Federal University of Paraná, Brazil, 2017), and Computer Scientist (Federal University of Paraná, Brazil, 2015). His research interests include machine learning, adversarial machine learning, and data streams applied to cyber security. Fabrício published papers in top venues (IEEE Security & Privacy, IEEE TBIOM, ACM ESWA, and others) and already reviewed papers for multiple venues (USENIX Security & Privacy 2022, IEEE Security & Privacy, DIMVA, ARES, ECML, and others). He was awarded the Google Research Awards for the Latin America program in 2017. Fabrício also received the USENIX Enigma 2019 student travel grant and won the Machine Learning Security Evasion Competition (MLSEC) twice (2020 and 2021).

Wednesday, January 25, 2023

7:30 am–8:55 am

Continental Breakfast

Mezzanine East/West

8:55 am–9:00 am

Opening Remarks, Day 2

Program Co-Chairs: Melanie Ensign, Discernible, and Joseph Lorenzo Hall, Internet Society

9:00 am–10:00 am

Fireside Chat: The State of Memory Safety

Wednesday, 9:00 am–10:00 am

Moderator: Amira Dhalla, Consumer Reports
Panelists: Yael Grauer, Consumer Reports; Alex Gaynor, Federal Trade Commission; Josh Aas, Internet Security Research Group and Prossimo

The most ubiquitous kind of vulnerability that plagues modern computing is the memory safety vulnerability—where the underlying programming language doesn't inherently protect data structures in memory. When "memory unsafe" code fails, it can allow attackers access to arbitrary pieces of system memory, and potentially execution of malicious code. Over the years, writing code in memory safe languages, "sandboxing" memory unsafe code, and raising awareness around memory unsafety have been important steps in more generally protecting computerized and networked systems against this class of threats. But where are we at? What is the current state of memory unsafety? Join Yael and Amira from Consumer Reports, who are currently working on a report surveying the landscape of memory safety, in a fireside conversation with Alex Gaynor and Josh Aas, two key informants on the quest to squash this pernicious class of bugs.

Amira Dhalla, Consumer Reports

Amira Dhalla has spent over a decade in technology working on issues related to digital rights, privacy, security, and equity with global organizations and communities. She works with educators and activists to design participatory curricula and resources that make emerging technology more inclusive, open, and safe. She currently works at Consumer Reports as the Director of Impact Partnerships and Programs, with a focus on digital privacy and security. Amira works on projects that improve the cybersecurity and privacy of products and tools in the marketplace, while also tackling topics like discriminatory technologies, deceptive design, and trust and safety.

Yael Grauer, Consumer Reports

Yael Grauer works at Consumer Reports managing Security Planner, a free, easy-to-use guide to staying safer online. Her background is in investigative tech reporting, and she has covered privacy and security for over a decade for various tech publications. She has extensively researched the privacy and security (or lack thereof) of consumer VPNs, teamed up with EFF to research street-level surveillance, broken stories on Zoom misleading users about end-to-end encryption, looked into questionable claims about blockchain voting, and investigated surveillance against China’s Uyghur minority. Yael serves on the Board of Directors of the CyberMed Summit, the world’s only clinically oriented healthcare cybersecurity conference. She also runs a do-it-yourself data broker opt-out list, and is currently writing a book on investigations for No Starch Press.

Alex Gaynor, Federal Trade Commission

Alex Gaynor is a software security engineer. He's a founder and principal at Fish in a Barrel, working on systemic solutions to classes of vulnerabilities. He's previously been Chief Information Security Officer at Alloy and an engineer at Mozilla and the United States Digital Service. Alex has a long history of contribution in open source, from building a JIT'd Ruby VM to serving on the Board of Directors of the Python Software Foundation. Alex lives in Washington, D.C.

Josh Aas, Internet Security Research Group and Prossimo

Josh Aas co-founded and currently runs Internet Security Research Group (ISRG), the nonprofit entity behind Let's Encrypt, the world's largest certificate authority, helping to secure more than 290 million websites. He also spearheaded ISRG’s newest projects: Prossimo, focused on bringing memory-safe code to security-sensitive software, and Divvi Up, a privacy-respecting metrics service. Josh worked in Mozilla’s platform engineering group for many years, improving the Firefox web browser, and later served Mozilla in a senior strategy role, helping to find solutions for some of the Web's most difficult problems. He has deep expertise in software security and ecosystem dynamics, as well as organizational leadership.

10:00 am–10:30 am

Break with Refreshments

Mezzanine East/West

10:30 am–12:00 pm

But ... I Saw It on the Internet

Session Chair: Isedua Oribhabor, Access Now

How the Internet Improves Humanity

Wednesday, 10:30 am–11:00 am

Eric Goldman, Santa Clara University School of Law

Given the time and energy we spend worrying about the Internet's flaws, it’s easy to forget how the Internet makes our lives better in many ways. This talk will highlight one underappreciated aspect: how the Internet has the capacity to improve the human species by increasing pro-social interactions and reducing anti-social ones. The talk will also show how this scenario probably won’t be realized because of misguided regulatory efforts to “fix” the Internet.

Characterizing and Measuring Misleading and Harmful Online Ad Content at Scale

Wednesday, 11:00 am–11:30 am

Eric Zeng, Carnegie Mellon University

Like it or not, online advertising is a core part of the internet, sustaining many free websites and services. However, it is also a potent vector for abuse; malicious actors can use the infrastructure of ad networks to serve scams, malware, and other misleading content to millions of users. Research on online advertising in the computer security and privacy community tends to focus on privacy and web security issues. However, in this talk I argue that additional research and transparency are needed for the content of online ads, where social engineering techniques and dark patterns are regularly used to mislead users, direct them to scams, and achieve other harmful ends.

To illustrate the importance of studying ad content for protecting users, I will present three case studies of problematic ad content that uncover deeper security, privacy, and safety issues in the online advertising ecosystem. First, I will present a user study of people’s perceptions of display advertising on the web, characterizing the types of ad content that people like and dislike, and why. Second, I will present a large-scale study of advertising on news and media websites, showing how some news outlets rely on revenue from clickbait and deceptive advertising via native ad networks. Lastly, I will present a longitudinal study of deceptive political advertising during the 2020 U.S. elections, such as misleading polls and petitions and political clickbait, and how these ads are targeted at partisan news sources.

Eric Zeng, Carnegie Mellon University

Eric Zeng is a Postdoctoral Researcher at the CyLab Security and Privacy Institute at Carnegie Mellon University. He recently graduated with a PhD from the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His research examines security and privacy issues experienced by end users of technology, and how to design tools and create policies to make it easier for people to stay safe on the internet. His work has covered topics such as deceptive online advertising, privacy in smart homes, encrypted email, and online misinformation.

A New Process to Tackle Misinformation on Social Media: Prevalence-Based Gradation

Wednesday, 11:30 am–12:00 pm

Kamesh Shekar, The Dialogue

While misinformation and disinformation are not new threats, they are accelerated by social media platforms. High-stakes information, like election-related and health-related information, has critical real-life consequences for individuals and communities, but it is muddled with mis/disinformation. Platforms use various technological measures and predictive machine learning tools to detect unlawful content like child sexual abuse images, pornography, and dis/misinformation. These technological measures have their merits to an extent, especially where platforms can act faster and at scale. At the same time, we increasingly see content falling through the cracks due to false negatives, and content getting struck or taken down due to false positives.

One of the critical reasons social media posts fall through the cracks is that platforms are presently confined to content-level intervention in the absence of process-level clarity and intervention within the content moderation pipeline. This lack of process-level intervention causes platforms to utilise resources and time inefficiently.

Against this backdrop, in this talk I propose a novel process-level intervention that would refine the content moderation pipeline and enable the efficient use of tools and resources across its entirety: a “prevalence-based gradation” (PBG) process, a system that uses prevalence as an integral element of hard moderation to tackle mis/disinformation. The talk will also show how the PBG process would let social media platforms evaluate content using ex-ante measures and exercise optimal corrective action in a calibrated format, adjusted according to the exposure level of the information.
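
To make the gradation idea concrete, here is a minimal sketch of what calibrating corrective action to exposure could look like in code. The classifier score, prevalence estimate, thresholds, and actions are all invented for illustration; they are not from the talk.

```python
# Sketch of prevalence-based gradation: moderation action escalates with
# how widely a piece of flagged content has already spread.
def moderation_action(classifier_score: float, views: int, shares: int) -> str:
    if classifier_score < 0.5:
        return "no action"                     # not flagged as misinformation
    prevalence = views + 10 * shares           # toy exposure estimate
    if prevalence < 1_000:
        return "attach fact-check label"       # low exposure: soft measure
    if prevalence < 100_000:
        return "downrank and label"            # medium exposure
    return "human review and takedown"         # high exposure: hard moderation

print(moderation_action(0.8, views=250, shares=3))
print(moderation_action(0.9, views=2_000_000, shares=40_000))
```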

Kamesh Shekar, The Dialogue

Kamesh Shekar leads the Privacy and Data Governance Vertical at The Dialogue and was a Fellow at the Internet Society. His research covers informational privacy, surveillance technology, intermediary liability, safe harbour, mis/disinformation on social media, and AI governance. Prior to this, Kamesh worked as a communication associate at Dvara Research. He holds a PGP in Public Policy from the Takshashila Institution, an MA in media and cultural studies, and a BA in social sciences from the Tata Institute of Social Sciences.

12:00 pm–1:15 pm

Lunch

Terra Courtyard

1:15 pm–2:00 pm

Panel

Session Chair: Joseph Lorenzo Hall, Internet Society

The Not Fake News Hour

Wednesday, 1:15 pm–1:30 pm

Riana Pfefferkorn, Stanford Internet Observatory

Security and privacy can be kind of a bummer. Take a break and come watch a fake professor from a real university cover current events in tech policy, “SNL Weekend Update”-style. It’s not fake, it’s not an hour, and it’s probably not news to anybody attending Enigma.

Riana Pfefferkorn, Stanford Internet Observatory

Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. She investigates the U.S. and other governments' policies and practices for forcing decryption and/or influencing the security design of online platforms and services, devices, and products, both via technical means and through the courts and legislatures. Riana also studies novel forms of electronic surveillance and data access by U.S. law enforcement and their impact on civil liberties. This is her third time speaking at Enigma.

Invited Talk

Metric Perversity and Bad Decision-Making

Wednesday, 1:30 pm–2:00 pm

Lea Kissner

People keep making terrible decisions. Shockingly often, bad decisions are because of perverse metrics—metrics that are technically correct but drive people to bad decisions. This is an especially bad problem for security, privacy, abuse, and related fields: we have adversaries, we have unknowns, and we have many small risks with incredibly high impacts. In this talk, we'll go through the types of perverse metrics—and how to avoid them.

Lea Kissner

Lea was the CISO at Twitter and has done privacy, security, and abuse-fighting work at companies including Zoom (as a consultant), Google (as Global Lead of Privacy Technology), and Humu (as CPO). They hold a Ph.D. in computer science (cryptography) from Carnegie Mellon.

2:00 pm–2:10 pm

Short Break

Mezzanine East/West

2:10 pm–3:40 pm

It Takes a Village

Session Chair: Christina Morillo, Trimarc Security

What Public Interest AI Auditors Can Learn from Security Testing: Legislative and Practical Wins

Wednesday, 2:10 pm–2:40 pm

Justin Brookman, Consumer Reports

Public interest researchers (such as journalists, academics, or even concerned citizens) testing for algorithmic bias and other harms can learn much from security testing practices. The Computer Fraud and Abuse Act, while intended to deter hacking, can be a legal barrier to public interest security testing (although much of this has recently been cleared up by the courts). Similarly, researchers trying to test for algorithmic bias and other harms in the AI space run into similar CFAA barriers when tinkering with algorithms. AI researchers can borrow legal and practical techniques that security researchers have used in the past, including applying for DMCA exemptions for narrowly tailored objectives, promoting bug bounty programs aimed at AI harms, and more. We provide practical and policy recommendations, drawn from the experience of security researchers, that AI testing experts can advocate for to remove the legal and practical barriers that prevent this kind of research.

Justin Brookman, Consumer Reports

Justin Brookman is the Director of Consumer Privacy and Technology Policy for Consumer Reports. Justin is responsible for helping the organization continue its groundbreaking work to shape the digital marketplace in a way that empowers consumers and puts their data privacy and security needs first. This work includes using CR research to identify critical gaps in consumer privacy, data security, and technology law and policy. Justin also builds strategies to expand the use and influence of the Digital Standard, developed by CR and partner organizations to evaluate the privacy and security of products and services.

Prior to joining CR, Brookman was Policy Director of the Federal Trade Commission’s Office of Technology Research and Investigation. At the FTC, Brookman conducted and published original research on consumer protection concerns raised by emerging technologies such as cross-device tracking, smartphone security, and the internet of things. He also helped to initiate and investigate enforcement actions against deceptive or unfair practices, including actions against online data brokers and digital tracking companies.

He previously served as Director of Consumer Privacy at the Center for Democracy & Technology, a digital rights nonprofit, where he coordinated the organization’s advocacy for stronger protections for personal information in the U.S. and Europe.

Why Is Our Security Research Failing? Five Practices to Change!

Wednesday, 2:40 pm–3:10 pm

Marcus Botacin, Texas A&M University

Everybody complains about the current state of cybersecurity. Regardless of the reason, nobody is fully satisfied. I don't think "security is broken"; in fact, we have "more" security nowadays than at any time in the past. However, it is true that the field has a lot of room to progress. Therefore, in this talk, I assume the methodological position that security research is failing, and I present my discoveries about which aspects of security research might be failing and could be enhanced. I support my investigation with results from a systematic literature review of 400+ papers from the last 20 years of malware research published in the most reputable venues. I identified more than 20 challenges and pitfalls in security research and categorized them into five high-level categories that will be discussed in this talk: (i) the lack of diversity in study types; (ii) researchers not looking to the market and industry when needed; (iii) researchers focusing too much on industry and the market; (iv) the lack of guidelines in the field; and (v) the reproducibility crisis that cybersecurity, like almost all fields, faces. I close the talk with suggestions one might adopt to mitigate these problems. My recommendations are divided according to the multiple stakeholders in the field, and they range from (i) researchers developing more longitudinal studies with representative populations, to (ii) the field establishing more guidelines for experiment development, and (iii) venues clearly welcoming diversified study types.

Marcus Botacin, UFPR/TAMU

Marcus is an Assistant Professor of Computer Science at Texas A&M University (TAMU). He holds a PhD in Computer Science (Federal University of Paraná, Brazil, 2021), a Master's in Computer Science (University of Campinas, Brazil, 2017), and a Bachelor's in Computer Engineering (University of Campinas, Brazil, 2015). His main research interests are malware analysis, reverse engineering, and the science of security. His master's dissertation was named the best security research work developed in Brazil in 2017 by the Brazilian Computer Society (SBC). Marcus has published papers in top venues (ACM TOPS, DIMVA, IEEE TDSC, and others), and he is currently a PC member for USENIX Security 2022 and 2023. Marcus was also awarded a student grant to attend USENIX Enigma 2019 and was a USENIX Enigma 2021 speaker.

Understanding Trust and Security Processes in the Open Source Software Ecosystem

Wednesday, 3:10 pm–3:40 pm

Dominik Wermke, CISPA Helmholtz Center for Information Security

Open source software has an important role in our everyday lives: as foundation, glue, or tooling, open source constitutes many important links in the software supply chain. But the openness of this ecosystem brings unique (security) challenges, including code submissions from unknown entities, limited developer hours and tooling to review commits or dependencies, and the necessity to vet included open source components. Through the results of interview studies with contributors to open source projects, companies that use open source components, maintainers that distribute their packages on open source package repositories, and developers that create reproducible software, we examined the security and trust processes and considerations in the open source supply chain, especially those that are not directly visible at the data level and can only be understood through engagement with the open source community.

During this talk, I will introduce the different aspects and challenges of security and trust in the open source ecosystem to a wider audience, highlight interviews as a collaborative, less harmful approach for open source research that positively engages with the community and creates excitement for academic research, and share practical advice on how to improve security in the software supply chain by enabling stakeholders such as maintainers and contributors.

Dominik Wermke, CISPA Helmholtz Center for Information Security

Dominik Wermke is a Usable Security & Privacy researcher at CISPA Helmholtz Center for Information Security, a Computer Science PhD student at Leibniz University Hannover, and a visiting scholar at the GWUSEC lab at George Washington University. His research enables developers and administrators to deploy secure, privacy-respecting, and trustworthy software that benefits the security of hundreds of dependent code bases, thousands of real-world deployments, and millions of end users. His research leverages this multiplicative effect by supporting open source maintainers, developers, and system administrators in the shared endeavor towards a more secure and trustworthy software ecosystem. His work has appeared in top-tier security venues such as IEEE S&P, USENIX Security, and ACM CCS, as well as field-specific venues such as SOUPS and ACSAC.

3:40 pm–4:10 pm

Break with Refreshments

Mezzanine East/West

4:10 pm–5:40 pm

Blow the Whistle

Session Chair: Ted Han, Mozilla

Protecting Whistleblower Information

Wednesday, 4:10 pm–4:40 pm

Shanna Devine, Office of Whistleblower Ombuds in the U.S. House of Representatives

From the recent Twitter data privacy breakdowns to the role of Big Brother, whistleblowers are society’s eyes and ears to abuses of power that betray the public trust. Likewise, Congress relies on these courageous individuals to sound the alarm around threats to our civil liberties and conduct effective oversight.

The Office of the Whistleblower Ombuds, an independent and nonpartisan support office for the House of Representatives, will discuss the ways in which it advises Congress on best practices for working with whistleblowers from the public and private sectors – and how whistleblowers can also take precautions to protect their communications.

Shanna Devine, Office of the Whistleblower Ombuds, U.S. House of Representatives

Shanna Devine serves as the inaugural Director of the Office of the Whistleblower Ombuds. In that capacity, she is responsible for advising the House community on best practices for working with whistleblowers from the public and private sectors.

Prior to joining the House, Shanna was already partnering with Congress in a bipartisan capacity to strengthen working relationships with whistleblowers. Her experience began in 2008 with the Government Accountability Project, a nonprofit whistleblower support organization. As legislative director, she led campaigns that resulted in bipartisan passage of landmark whistleblower laws, including Congress’ unanimous passage of the Whistleblower Protection Enhancement Act of 2012. As an investigator, she worked with whistleblowers to safely verify their allegations and channel their knowledge to Congress for effective oversight. She routinely conducted trainings on best practices for working with whistleblowers, and in 2015 and 2016 she played a strategic role in the establishment of the bipartisan Senate and House Whistleblower Protection Caucuses.

Shanna earned her Master of Public Management degree from the University of Maryland’s School of Public Policy. She earned her bachelor’s degree in Critical Theory and Social Justice from Occidental College.

So You're Going to Blow the Whistle? Communications Dos and Don'ts for Whistleblowers.

Wednesday, 4:40 pm–5:10 pm

Emily Schwartz, Bryson Gillette

Deciding to be a whistleblower is not something most people prepare for, but it is undoubtedly a life-changing experience. This presentation will talk through the communications basics for whistleblowers: what to consider, communications must-haves, what not to do, and how to begin to tell your story.

Emily Schwartz, Bryson Gillette

Emily Schwartz is a strategic communications professional who has worked at the intersection of technology, politics, and policy for nearly 15 years. She led the communications rollout for Facebook whistleblower Frances Haugen including preparing her for her public reveal on '60 Minutes' and multiple government testimonies. Prior to joining Bryson Gillette, Emily led global data sharing policy for Uber, advising the company on regulatory issues regarding privacy and geolocation data. Emily has worked with major brands, startups, and nonprofits to advise them on issues ranging from global product rollouts to crisis communication management.

Open Source Software and Journalism: Field Notes from a Global Perspective

Wednesday, 5:10 pm–5:40 pm

Harlo Holmes, Freedom of the Press Foundation

Drawing from the past five years working with public interest and investigative journalists, Harlo Holmes will share a variety of field notes she has gathered regarding the particular challenges journalists face using open source technologies on the job, including the complexities that have arisen before, during, and after COVID-19 lockdowns across the globe.

Harlo Holmes, Freedom of the Press Foundation

Harlo Holmes is the Chief Information Security Officer (CISO) and Director of Digital Security at Freedom of the Press Foundation. She strives to help individual journalists in various media organizations become confident and effective in securing their communications within their newsrooms, with their sources, and with the public at large. She is a media scholar, software programmer, and activist; and was a regular contributor to the open source mobile security collective The Guardian Project.

5:40 pm–7:10 pm

Conference Reception

Sponsored by Google

Mezzanine East/West

Thursday, January 26, 2023

7:30 am–8:55 am

Continental Breakfast

Mezzanine East/West

8:55 am–9:00 am

Opening Remarks, Day 3

Program Co-Chairs: Melanie Ensign, Discernible, and Joseph Lorenzo Hall, Internet Society

9:00 am–10:00 am

Panel

Privacy Policies, by Lawyers, for Lawyers. What about Everyone Else?

Thursday, 9:00 am–10:00 am

Moderator: Hannah Poteat
Panelists: Holly Hogan, Automattic; Miju Han, Google; Emily Jones, Simmons & Simmons; Gabriela Zanfir-Fortuna, Future of Privacy Forum

We still need privacy policies even if nobody reads them, but they fall short as effective communication and education tools. What else can organizations do to ensure customers and users are adequately informed about data use and data rights? Time for privacy to show creative flex.

Hannah Poteat

Hannah Poteat did data protection before data protection was cool. Over 25 years, first in information security and then as an attorney, she has built a career predicated on the idea that mere compliance with security and data protection requirements is the bare minimum: we can always innovate. Most recently, as Senior Manager and Lead Privacy Counsel at Twilio, Hannah led the Privacy Risk team and advised on the nexus of international, domestic, and sectoral privacy and telecommunications laws. This entailed advising on all matters of global privacy law enterprise-wide, including harmonizing Twilio’s binding corporate rules with GDPR, CPRA, LGPD, APPI, and other global privacy frameworks; developing company-wide global privacy policies, procedures, and strategy; guiding incident response and maintaining breach response policy; and coordinating employee privacy during a global pandemic. Prior to Twilio, Hannah built the privacy program at GitHub, moving GitHub from the downfall of Safe Harbor to the advent of GDPR. She is currently taking a break before embarking on a new adventure in building privacy programs. When not arguing over what a “sale” is, Hannah is probably driving a Jeep off-road over very large rocks. Hannah is CIPP/US certified and someday will get around to that CIPP/E. Probably.

Holly Hogan, Automattic

Holly is the General Counsel of Automattic Inc. where she leads the legal team in helping Automattic/WordPress.com make the web open and accessible, one website at a time. She navigates novel legal issues for a global internet company as it has grown in revenue, size, and impact. Holly has expertise in designing legal and business solutions that empower teams to scale globally, make good decisions, and get the job done; building and leading teams in and out of the legal department; and un-complicating international laws with legal strategies that keep operations running smoothly. Prior to Automattic, Holly was a partner at K&L Gates LLP and represented companies as a litigator in a diverse range of cases—from patent and contract disputes to First Amendment cases. She started her career as a prosecutor. Holly is a graduate of Harvard Law School, and has a B.A. in Politics (magna cum laude) from the University of San Francisco.

Miju Han, Google

When transitioning from data scientist to product manager, Miju spent a long time thinking about how to leverage data in products. That led to the beginnings of GitHub Advanced Security, which has helped developers and security teams patch over ten million vulnerabilities to date. She then helped HackerOne realize its vulnerability intelligence product suite before moving over to Twitter after the teenage Bitcoin hack to start its privacy and security product practice. Miju is currently focused on privacy full-time at Google, where she is responsible for how Google Ads uses data.

Emily Jones, Simmons & Simmons LLP

Emily is a partner at international law firm Simmons & Simmons and leads the firm’s recently launched US office in San Francisco. The new office does not practice US law; Emily is an English-qualified attorney who has worked in Silicon Valley for over five years, advising fast-growth technology, fintech, and healthcare companies on data privacy, cybersecurity, and technology laws as they grow their businesses globally. Emily has been advising on data privacy and security issues for over 16 years and works with companies launching new products and services, especially those involving emerging technologies. Her experience includes carrying out GDPR compliance health checks; responding to regulatory investigations, data security incidents, and data subject requests; and negotiating with customers and vendors. She also helps clients find commercial solutions to evolving international data transfer issues. She is CIPP/E certified and regularly speaks at conferences on data privacy and security matters.

Gabriela Zanfir-Fortuna, Future of Privacy Forum

Dr. Gabriela Zanfir-Fortuna is Vice President for Global Privacy at the Washington, DC-based Future of Privacy Forum, where she leads work on global privacy and data protection developments and coordinates FPF's offices in Brussels, Tel Aviv, and Singapore. She is also an Associated Researcher at the LSTS Center of Vrije Universiteit Brussel. Gabriela is a member of the Reference Panel of the Global Privacy Assembly and was a member of the Executive Committee of ACM's Fairness, Accountability, and Transparency (FAccT) Conference. She has experience working for the European Data Protection Supervisor in Brussels, dealing with enforcement, litigation, and policy matters, and contributed to the work of the Article 29 Working Party. She holds a PhD in law, with a thesis on the rights of the data subject from the perspective of their adjudication in civil law, and an LLM in Human Rights. She published a comprehensive volume on the rights of data subjects in 2015 ("Protectia datelor personale. Drepturile persoanei vizate," C.H. Beck, Bucharest, 2015) and is one of the co-authors of "GDPR: A Commentary" (Oxford University Press, 2020).

10:00 am–10:30 am

Break with Refreshments

Mezzanine East/West

10:30 am–12:00 pm

Pull Up a Chair for Privacy

Session Chair: Pilar Garcia, Help Scout

Meaningful Hardware Privacy for a Smart and Augmented Future

Thursday, 10:30 am–11:00 am

Marcus Hodges, Meta

Smart home devices are becoming increasingly popular in our living spaces. Wearable devices that let you capture photos and videos in the moment, without taking out your phone, are designed to go with you everywhere. In the future, augmented reality (AR) glasses will feature a wide variety of cameras and sensors for capturing and understanding your surroundings. Because these devices transcend the keyboard, many of them feature sensors that are always on, such as the wake-word engines of smart voice assistants. Yet despite their many benefits, these emerging computing platforms also present new classes of security and privacy challenges to users and bystanders alike.

As these devices gain adoption, two evolving hardware privacy features promise awareness of and user control over sensor privacy: the Recording Indicator LED and the Privacy “Mute” Switch. However, very little has been published on their design and security. How do these features work? Are they trustworthy? What does the LED actually mean?

This talk will describe these hardware privacy mechanisms and explain in detail their design, purpose, and security properties. We’ll explore the constraints and trade-offs that influence their function and discuss the importance of hardware security assurance. I’ll clarify what privacy LED indicators do and don’t mean, and why augmented reality poses new challenges to their semantics.

These privacy features are deceptively simple and evolving. To preserve their meaning and potential for trustworthy privacy protection, we need to come together as an industry and align on new security and privacy standards.
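
By way of rough illustration of the design principle at stake, the Python sketch below models the property a hardware-coupled indicator is meant to guarantee; it is a toy model under assumptions of my own, not Meta's actual design. The LED state is derived from the sensor's power rail, so no firmware path can record with the indicator off, and the mute switch cuts that rail outright.

    from dataclasses import dataclass

    @dataclass
    class CameraSubsystem:
        sensor_powered: bool = False

        @property
        def indicator_led(self) -> bool:
            # Hardware coupling: the LED is wired to the rail that powers the
            # sensor, so software cannot drive the two states independently.
            return self.sensor_powered

        def start_recording(self) -> None:
            self.sensor_powered = True   # powering the sensor lights the LED

        def mute(self) -> None:
            # Privacy "mute" switch: physically cuts sensor power.
            self.sensor_powered = False

    cam = CameraSubsystem()
    cam.start_recording()
    assert cam.indicator_led        # recording implies the LED is on
    cam.mute()
    assert not cam.indicator_led    # muted implies the sensor and LED are off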

Marcus Hodges, Meta

Marcus Hodges is a Security Partner at Meta, where he supports Reality Labs and leads product security and architecture for Meta’s augmented reality (AR) glasses, from silicon to cloud. Marcus is an accomplished security professional with over 15 years of experience building and breaking systems at every layer of the stack. He recently served on the Board of Directors of the ioXt Alliance, where he established and chaired the IoT Privacy working group. Formerly, as Director of Research and a Principal Security Engineer at Security Innovation, he designed and managed the development of a custom secure Linux distribution for the federal U.S. Judiciary. He is an avid CTF player, has hosted DEF CON's OpenCTF competition, and has spoken at security conferences on topics including binary exploitation and the development of custom Linux rootkits. He holds a comprehensive B.S. in mathematics from the University of Washington.

Technical Metamorphosis: The 3 Stages of DSAR Automation

Thursday, 11:00 am–11:30 am

Alon Levy, Uber

Available Media

Caterpillars do not transform into butterflies overnight. It takes time and a series of steps. In this presentation, we’ll walk through each stage in the journey to automate, standardize, and validate Data Subject Access Request fulfillment engineering operations to address regulatory requirements and provide a good user experience. We’ll share our experience as a technical team building multiple automation tools for Data Subject Access Requests in order to help other engineering teams prepare or refine their own strategies for privacy automation.
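
As a sketch of what the final, fully automated stage can look like, the snippet below registers per-datastore export connectors and fans a single request out across them with a basic validation step; the names and structure are hypothetical, not Uber's internal APIs.

    from typing import Callable, Dict

    # Registry mapping each datastore to a function that exports one user's records.
    CONNECTORS: Dict[str, Callable[[str], dict]] = {}

    def connector(datastore: str):
        """Register an export function for one datastore."""
        def register(fn):
            CONNECTORS[datastore] = fn
            return fn
        return register

    @connector("trips")
    def export_trips(user_id: str) -> dict:
        return {"trips": f"<records for {user_id}>"}   # stand-in for a real query

    @connector("payments")
    def export_payments(user_id: str) -> dict:
        return {"payments": f"<records for {user_id}>"}

    def fulfill_dsar(user_id: str) -> dict:
        """Fan out to every registered connector and validate each result."""
        package = {}
        for datastore, export in CONNECTORS.items():
            data = export(user_id)
            if not data:
                raise ValueError(f"validation failed: {datastore} returned nothing")
            package[datastore] = data
        return package   # ready to package and deliver to the data subject

    print(fulfill_dsar("user-123"))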

Alon Levy, Uber

Alon is an engineering manager on the Technical Privacy team at Uber. He leads the team responsible for developing a privacy platform to enable Data Subject Access Request automation at scale. The platform is geared towards both compliance and building customer trust. Prior to Uber, Alon earned a wealth of experience developing large-scale infrastructure and platforms at Microsoft.

Building an Automated Machine for Discovering Privacy Violations at Scale

Thursday, 11:30 am–12:00 pm

Suchakra Sharma, Privado Inc.

Available Media

While the most advanced digital watch in 1980 asked us to manually enter and store our phone book on the watch, modern smartwatches send our GPS location and heartbeat every second to cloud machines we know nothing about! To tackle this information void about where our data flows, various regulations and privacy frameworks have been developed. While privacy conversations involve multiple stakeholders, such as lawyers and privacy officers, the onus eventually falls on developers to write code that respects those regulations, or to fix issues that were introduced. In this talk we discuss how tried and tested static analysis techniques, such as taint tracking and dataflow analysis, can be used on large code bases at scale to help fix privacy leaks right at the source itself. What does it take to build such tooling? What challenges will we face, and how can you, a developer or privacy engineer, fix privacy bugs in code?
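
To give a feel for the technique, here is a minimal taint-tracking pass over straight-line Python code using the standard-library ast module; the source and sink names are hypothetical, and tools of the kind described work interprocedurally over far larger codebases.

    import ast

    PII_SOURCES = {"get_gps_location", "get_heart_rate"}   # where taint originates
    LEAK_SINKS = {"log", "send_to_third_party"}            # calls treated as leaks

    def find_privacy_leaks(source_code: str):
        """Flag dataflows from a PII source to a leak sink (toy, straight-line only)."""
        tree = ast.parse(source_code)
        tainted = set()   # variable names currently carrying PII
        leaks = []
        for node in ast.walk(tree):
            # Propagate taint through assignments: x = get_gps_location(...)
            if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
                func = node.value.func
                name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
                if name in PII_SOURCES:
                    tainted.update(t.id for t in node.targets if isinstance(t, ast.Name))
            # Report tainted variables reaching a sink: send_to_third_party(x)
            if isinstance(node, ast.Call):
                func = node.func
                name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
                if name in LEAK_SINKS:
                    leaks.extend((name, a.id, node.lineno)
                                 for a in node.args
                                 if isinstance(a, ast.Name) and a.id in tainted)
        return leaks

    sample = "loc = get_gps_location()\nsend_to_third_party(loc)"
    print(find_privacy_leaks(sample))   # [('send_to_third_party', 'loc', 2)]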

Suchakra Sharma, Privado Inc.

Suchakra Sharma is the Chief Scientist at Privado where he helps build code analysis tools for data privacy and data security. He completed his Ph.D. in computer engineering from École Polytechnique de Montréal where he worked on eBPF technology and hardware-assisted tracing techniques for OS analysis. For the last six years, Suchakra has been working on enhancing static analysis tooling for fixing security bugs at scale. He has delivered talks and trainings at venues such as USENIX LISA, SCALE, RSA Conference, BlackHat, Papers We Love, etc. When not playing with computers, he develops film photographs and writes poems.

12:00 pm–1:15 pm

Lunch

Terra Courtyard

1:15 pm–2:00 pm

Panel

True Crime Journalism: Cyber

Thursday, 1:15 pm–2:00 pm

Moderator: Melanie Ensign, Discernible
Panelists: Joseph Menn, Washington Post; Lily Hay Newman, WIRED; Alfred Ng, Politico

Available Media

The role journalists play in reporting cybercrime is often more complicated than we imagine. What if you're contacted by a source involved in a crime? What obligation, if any, do journalists have to law enforcement, victims, and the general public? This panel of seasoned journalists will share their experiences from real investigations.

Joseph Menn, Washington Post

A leading security journalist for two decades, Joseph Menn is the author of the bestseller "Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World," first published in 2019. It revealed that then-presidential candidate Beto O'Rourke had belonged to the oldest surviving and most influential group of U.S. hackers and explained the origins of hacktivism and ethical security work. The New York Times Book Review called it "a hugely important piece of the puzzle for anyone who wants to understand the forces shaping the internet age." It was named one of the 10 best nonfiction works of the year by Hudson Booksellers as well as one of the five cybersecurity books everyone should read by the Wall Street Journal, and it was inducted into the Cybersecurity Canon project's Hall of Fame.

Menn now covers digital threats for the Washington Post, having joined in early 2022 after working at Reuters, the Financial Times, and the Los Angeles Times. Menn also wrote 2010's "Fatal System Error: The Hunt for the New Crime Lords Who Are Bringing Down the Internet," a real-life thriller that brought the modern face of cybercrime to a mainstream audience. Fatal System Error revealed collaboration between major governments and organized crime. It was placed on the official reading list of the US Strategic Command, while The New Yorker compared it to the "Dragon Tattoo" novels of Stieg Larsson. Before that, he wrote the definitive inside account "All the Rave: The Rise and Fall of Shawn Fanning's Napster," named one of the best three books of the year by Investigative Reporters & Editors Inc.

Menn speaks regularly at security conferences including Def Con, Black Hat, and RSA, and tweets as @josephmenn.

Lily Hay Newman, WIRED

Lily Hay Newman is a senior writer at WIRED focused on information security, digital privacy, and hacking. She previously worked as a technology reporter at Slate magazine and was the staff writer for Future Tense, a publication and project of Slate, the New America Foundation, and Arizona State University. Additionally, her work has appeared in Gizmodo, Fast Company, IEEE Spectrum, and Popular Mechanics. She lives in New York City.

Alfred Ng, Politico

Alfred Ng is a privacy reporter at POLITICO and previously covered cybersecurity at CNET. He was also an enterprise reporter on privacy at The Markup. He is known for his short bios and abrupt endings.

2:00 pm–2:30 pm

Break with Refreshments

Mezzanine East/West

2:30 pm–4:00 pm

Questioning the Status Quo

Session Chair: Swathi Joshi, Oracle

Myths and Lies in InfoSec

Thursday, 2:30 pm–3:00 pm

Adrian Sanabria, Tenchi Security

Available Media

In InfoSec, many closely held beliefs, commonly accepted best practices, and received ‘facts’ are just wrong. These myths and lies spread quickly. Collectively, they can point security teams in the wrong direction, give rise to ineffective products, and make their way into legitimate research, clouding results.

"Sixty percent of small businesses close within 6 months of being hacked."

There's a good chance you've seen this stat before. It has no basis in reality. The available evidence suggests quite the opposite.

"Attackers only need to get it right once, defenders have to get it right every single time."

This idea has been repeated so often in InfoSec that it has become generally accepted as a true statement. It isn't just wrong; it's demotivating and encourages defeatist thinking that can sink the morale of a security team.

Most of the myths and lies in InfoSec take hold because they seem correct, or sound logical. Similar cognitive biases make it possible for even the most preposterous conspiracy theories to become commonly accepted in some groups.

This is a talk about the importance of critical thinking and checking sources in InfoSec. Our industry is relatively new and constantly changing. Too often, we operate more off faith and hope than fact or results. Exhausted and overworked defenders often don't have the time to seek direct evidence for claims, question sources, or test theories for themselves.

This talk compiles some of the most interesting research I’ve done over the past decade. My goal is to convince you to treat vendor claims, commonly accepted industry statistics, and best practices with healthy skepticism. You don't need to be a data scientist or OSINT expert to test theories and discover the truth; you just need to sacrifice a bit of your time now and then. I'll show you how.

Adrian Sanabria, Tenchi Security

Adrian is a successful generalist with over two decades of experience hacking, fixing, breaking, building, and teaching in InfoSec. He’s always trying to see the big picture and figure out the best strategy. Despite all these years in the industry, he is still optimistic about our future.

The Slippery Slope of Cybersecurity Analogies

Thursday, 3:00 pm–3:30 pm

Josiah Dykstra

Available Media

Cybersecurity is rich with analogies, from keys and locks to Trojan horses. We look for the “needle in the haystack” and “evict malware resident on our systems.” We debate “baked-in” versus “bolted-on” security. We do not mean all these things literally, of course. The language and analogies we use in this field are borrowed from many different domains. Analogies can help explain basic cybersecurity concepts, but too often they omit or overgeneralize important details. They can mislead, sometimes deliberately, because the experience they purport to connect may be out of proportion. Despite their shortcomings and imprecision, an analogy or an abstraction can still be helpful in appropriate situations. Using analogies, abstractions, and metaphors shapes technology’s development, practice, and policies. Analogies are more than simple figures of speech. They have a normative dimension; sometimes, they can be used to help the imaginary shape reality. This talk explores the use and misuse of analogies and metaphors across cybersecurity. We consider analogies from the physical world, medicine and biology, war and the military, and law before discussing tips for avoiding pitfalls in using analogies and metaphors.

Josiah Dykstra, Independent Security Researcher

Josiah Dykstra is a cybersecurity practitioner, researcher, author, and speaker. He is a Technical Fellow in the Cybersecurity Collaboration Center at the National Security Agency (NSA) and the owner of Designer Security, LLC. He holds a Ph.D. in computer science and previously served as a cyber operator. Josiah is interested in cybersecurity science, especially where humans intersect with technology. He has studied stress in hacking, action bias in incident response, and the economics of knowing when sharing threat intelligence is more work than it is worth. Dr. Dykstra is a frequent speaker, including at Black Hat and RSA Conference. He received the CyberCorps® Scholarship for Service (SFS) fellowship and is one of six in the SFS Hall of Fame. In 2017, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) from former President Barack Obama. Dr. Dykstra is a Fellow of the American Academy of Forensic Sciences and a Distinguished Member of the Association for Computing Machinery (ACM). He is the author of numerous research papers and the books Essential Cybersecurity Science (O’Reilly Media, 2016) and Cybersecurity Myths and Misconceptions (Pearson, forthcoming).

The Very Hungry Defender: Metamorphosing Security Decision-Making by Incorporating Opportunity Cost

Thursday, 3:30 pm–4:00 pm

Kelly Shortridge, Fastly, Inc.

Available Media

Information security is often seen as a drain on the organization, and defenders struggle to prove the value of their decisions to the rest of the business. Security teams usually consider the narrow benefits of implementing a defensive measure for their own goals while ignoring the variegated costs it imposes on the organization, its employees, its users, and even society. As a result, information security is in continual danger of suboptimal outcomes and missed opportunities.

Opportunity cost can help transform information security programs from an organizational drag into an uplifting force by considering the foregone benefits of alternative options. This talk explores the importance of opportunity cost in security decision-making and why defenders must incorporate different forms of cost into every decision. Drawing on cross-disciplinary research on opportunity cost in complex-system domains, we will examine the tangible and intangible costs and effects that practitioners must take into account when evaluating defensive options. Through this lens, we’ll discuss negative externalities begotten by security investments and how “cost” is far more than just money. To make these concepts more concrete, we’ll close with practical examples from the realm of application security to demonstrate why opportunity cost – and an appreciation of the kaleidoscopic nature of "cost" – is essential when making defensive decisions.
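
To see how foregone benefits change the arithmetic, consider a toy comparison of two defensive options where engineering time is itself a cost; the figures below are invented for illustration and are not from the talk.

    # Toy numbers: expected breach-loss reduction, direct spend, and the foregone
    # value of what the same engineering time could have built instead.
    options = {
        "mandatory review gate":      {"risk_reduction": 400_000, "direct_cost": 50_000,  "foregone_value": 300_000},
        "paved-road secure defaults": {"risk_reduction": 350_000, "direct_cost": 120_000, "foregone_value": 30_000},
    }

    for name, o in options.items():
        naive = o["risk_reduction"] - o["direct_cost"]   # ignores opportunity cost
        full = naive - o["foregone_value"]               # includes it
        print(f"{name}: naive net {naive:,}, net incl. opportunity cost {full:,}")

    # The naive view favors the review gate (350,000 vs 230,000); accounting for
    # foregone value flips the decision to paved-road defaults (50,000 vs 200,000).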

Kelly Shortridge, Fastly, Inc.

Kelly Shortridge is a Senior Principal Engineer at Fastly. Kelly is coauthor of Security Chaos Engineering (O'Reilly Media) and an expert in resilience-based strategies for systems defense. Shortridge has been a successful enterprise product leader as well as a startup founder (with an exit to CrowdStrike) and investment banker. Kelly frequently advises Fortune 500s, investors, startups, and federal agencies and has spoken at major technology conferences internationally, including Black Hat USA, O'Reilly Velocity Conference, and RSA Conference. Kelly’s research papers have been featured in Communications of the ACM, IEEE, ACM Queue, and Human Factors and Ergonomics Society publications, spanning behavioral science in information security, deception strategies, and the ROI of resilience. They also serve on the editorial board of ACM Queue.

4:00 pm–4:15 pm

Closing Remarks

Program Co-Chairs: Melanie Ensign, Discernible, and Joseph Lorenzo Hall, Internet Society