All the times listed below are in Pacific Standard Time (PST).
Tuesday, February 1, 2022
8:45 am–9:00 am
Opening Remarks, Day 1
Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter
9:00 am–10:30 am
Session Chair: Kate McKinley
Gillian "Gus" Andrews, Theorem Media and Front Line Defenders
The past few years have seen a surge of interest and funding in fighting disinformation. Rumors and conspiracy theories have disrupted democratic processes from Brazil to India to the halls of Congress in the United States; they have hobbled the fight against COVID. Many proposed solutions hinge either on "fact-checking" or on using AI to identify and defuse disinformation on a large scale.
We can try to scale the fight against disinformation with machine learning. But what is it that we are trying to scale? Are we certain that hearts and minds can meaningfully be changed at scale? What would that effort look like?
This talk will challenge a key assumption currently made in fighting disinformation: that "trustworthiness" is a property of information, not of the people who spread it, and that trust is a human quality that can be generated at scale. Dr. Andrews will lay out findings from science and technology studies, neurocognitive development, and "new literacies" research to point to best practices and new approaches to the disinformation problem.
Patrick Gage Kelley, Google
The Covid-19 pandemic has given us a unique opportunity to investigate how misinformation narratives spread and evolve around the world. Throughout 2020 and 2021, we conducted regular surveys of over 50,000 people from a dozen countries about their self-reported exposure to pandemic-related misinformation and their belief in those narratives. This large-scale, longitudinal measurement provides a unique lens for understanding how misinformation narratives resonate throughout the world, how belief in these narratives evolves over time, and how misinformation ultimately affects personal health decisions such as vaccination. In this talk, we will share the key insights gleaned from this study, which in turn help inform efforts to fight multiple types of misinformation.
10:30 am–11:00 am
11:00 am–12:30 pm
Humans Are Hard
Session Chair: Antonela Debiasi, The Tor Project
Thinking Slow: Exposing Influence as a Hallmark of Cyber Social Engineering and Human-Targeted Deception
Mirela Silva, University of Florida
Influence tactics (persuasion, emotional appeals, gain/loss framing) are key in many human interactions, including advertisements, written requests, and news articles. However, they are also abused in cyber social engineering and human-targeted attacks, such as phishing, disinformation, and deceptive ads. In this emerging deceptive and abusive online ecosystem, important research questions arise: Does deceptive material online leverage influence disproportionately, compared to innocuous, neutral texts? Can machine learning methods accurately expose influence in text, as part of interventions that trigger users' more analytical thinking mode and keep them from being deceived? In this talk, I present my research on Lumen (a learning-based framework that exposes influence cues in texts) and Potentiam (a newly developed dataset of 3,000 texts comprising disinformation, phishing, hyperpartisan news, and mainstream news). Potentiam was labeled by multiple annotators following a carefully designed qualitative methodology. Evaluation of Lumen against other learning models showed that Lumen and an LSTM achieved the best F1-micro score, but Lumen yielded better interpretability. Our results highlight the promise of ML for exposing influence cues in text, toward the goal of automatic labeling tools that improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.
Chloé Messdaghi, Cybersecurity Disruption Consultant and Researcher
The pandemic has changed and transformed us in ways we are still trying to discover. Its effects have caused incredible burnout among colleagues and in personal relationships, and have, in various ways, impacted managers, teams, and company structures and policies. And it is not just burnout: another, deeper issue is becoming prevalent, Post-COVID Stress Disorder (PCSD). As an industry, we need to be aware of the seriousness of burnout and recognize the role we play in mental health. This talk discusses burnout, what it means for security and the well-being of companies, and solutions for supporting one another as we proceed into a new, post-pandemic era.
Dr. Sanchari Das, University of Denver
Even the largest enterprise can be subverted by a small device quietly tunneling through the network boundary. One way to mitigate the damage is to purchase higher-quality IoT devices, increasing security before installation. In this work, we evaluated the purchase of a few devices that appear relatively harmless but create significant risk. Any workplace may have a small crockpot show up in the break room, or an employee with a fitness tracker. These may offer access to all Bluetooth Low Energy (BLE) devices, or real-time audio surveillance. Alternative models of the same devices, without the corresponding risk, show the value of careful IoT selection. Yet an employee cannot be expected to understand the security risks of IoT devices. To address this understanding and motivation gap, we present a security-enhancing interaction that provides an effective, acceptable, and usable framing for non-technical people making IoT purchase decisions. The interface design nudges users to make risk-averse choices by integrating psychological factors into the presentation of the options. Participants using this purchasing interaction consistently avoided low-security, high-risk IoT products, even paying more than twice as much ($17.95 versus $6.99) to select a secure smart device over alternatives. We detail how the nudges were designed and why they are effective. Specifically, our Amazon store wrapper integrated positive framing, risk communication, and the endowment effect in one interaction design. The result is a system that significantly changes human decision-making, making security the default choice. This work was a collaboration between Prof. Sanchari Das at the University of Denver and Shakthidhar Gopavaram and Prof. L. Jean Camp at Indiana University Bloomington.
12:30 pm–1:45 pm
1:45 pm–3:15 pm
Hate and Encryption
Session Chair: Jon Callas, The Electronic Frontier Foundation
Mallory Knodel, Center for Democracy & Technology
End-to-end encryption (E2EE) is an application of cryptography in online communications systems between endpoints. E2EE systems are unique in providing confidentiality, integrity, and authenticity for users, yet these strong privacy and free-expression guarantees create tension with legitimate needs for information controls. This talk proposes formal, feature- and requirement-based, and user-centric definitions of end-to-end encryption that, in aggregate, are able to confront these tensions. Any improvement to E2EE should therefore strive to maximise the system's unique properties (confidentiality, integrity, authenticity) and its security and privacy goals, while balancing user experience through enhanced usability and availability. Concrete proposals for E2EE improvements were analysed in this way, and the results will be presented. Improving mechanisms for user reporting and using existing metadata for platform abuse analysis are the most likely to preserve privacy and security guarantees for end users while also improving user experience. Both provide effective tools that can detect significant amounts of different types of problematic content on E2EE services, including abusive and harassing messages, spam, mis- and disinformation, and CSAM. Future research to improve these tools should measure efficacy for users while preserving E2EE systems' unique guarantees.
Riana Pfefferkorn, Stanford Internet Observatory
In pressuring online service providers to better police harmful content on their services, regulators tend to focus on trust and safety techniques, such as automated systems for scanning or filtering content on a service, that depend on the provider's capability to access the contents of users' files and communications at will. I call these techniques content-dependent. The focus on content analysis overlooks the prevalence and utility of what I call content-oblivious techniques: ones that do not rely on guaranteed at-will access to content, such as metadata-based tools and users' reports flagging abuse which the provider did not (or could not) detect on its own.
This talk presents the results of a survey about the trust and safety techniques employed by a group of online service providers that collectively serve billions of users. The survey finds that abuse-reporting features are used by more providers than other techniques such as metadata-based abuse detection or automated systems for scanning content, but that the providers' abuse-reporting tools do not consistently cover the various types of abuse that users may encounter on their services, a gap I recommend they rectify. Finally, despite strong consensus among participating providers that automated content scanning is the most useful means of detecting child sex abuse imagery, they do not consider it to be nearly so useful for other kinds of abuse.
These results indicate that content-dependent techniques are not a silver bullet against abuse. They also indicate that the marginal impact on providers' anti-abuse efforts of end-to-end encryption (which, controversially, stymies providers' ability to access user content at will) can be expected to vary by abuse type. These findings have implications for policy debates over the regulation of online service providers' anti-abuse obligations and their use of end-to-end encryption.
Kurt Thomas, Google
While most security and anti-abuse protections narrowly focus on for-profit cybercrime today, we show how hate and harassment have grown and transformed the day-to-day threats experienced by Internet users. We provide a breakdown of the different classes of threats (such as coordinated mobs posting toxic content, anonymous peers breaking into a target's account to leak personal photos, or intimate partner violence involving tracking and surveillance) and map these to traditional security or anti-abuse principles where existing solutions might help. We also provide prevalence estimates for each class of attack based on survey results from 22 countries and 50,000 participants. We find over 48% of people have experienced hate and harassment online, with a higher incidence rate among young people (18-24), LGBTQ+ individuals, and active social media users. We also highlight current gaps in protections, such as toxic comment classification, where differing personal interpretations of what constitutes hate and harassment result in uneven protections across users, especially at-risk populations. Our goal with this talk is to raise awareness of the changing abuse landscape online and to highlight the vital role that security practitioners and engineers can play in addressing these threats.
3:15 pm–3:45 pm
3:45 pm–5:15 pm
Make Attacks Hard
Session Chair: Swathi Joshi, Netflix
Pritam Dash, University of British Columbia
Autonomous Robotic Vehicles (RVs) such as drones and rovers rely extensively on sensor measurements to perceive their physical states and the environment. For example, a GPS receiver provides geographic position information, a gyroscope measures angular velocities, and an accelerometer measures linear accelerations. Attacks such as sensor tampering and spoofing can feed in erroneous sensor measurements through external means, deviating RVs from their course and causing mission failures. Attacks such as GPS spoofing have been performed against military drones and marine navigation systems. Prior work on the security of autonomous RVs mainly focuses on attack detection. However, detection alone is not enough, because it does not prevent adverse consequences such as drastic deviation and/or a crash. The key question of how to respond once an attack is detected in an RV remains unanswered.
In this talk, we present two novel frameworks that provide a safe response to attacks and allow RVs to continue the mission despite the malicious intervention. The first technique uses a Feed-Forward Controller (FFC) that runs in tandem with the RV's primary controller and monitors it. When an attack is detected, the FFC takes over to recover the RV. The second technique identifies and isolates the sensor(s) under attack, preventing the corrupted measurements from affecting the actuator signals. It then uses historical states to estimate the RV's current state and ensure stable operation even under attack.
Xiali (Sharon) Hei, University of Louisiana at Lafayette
There are many studies of sensor attacks using sound, light, and electromagnetic signals. However, most adopt a white-box approach and rely on manual adjustments by the attacker. Manual adjustment during an attack increases the burden and does not work well for adaptive attacks. How to automate cyber-physical attacks in a black-box setting is still under investigation; missing real-time system feedback is the main barrier. In this talk, we discuss how to overcome these obstacles and achieve automatic attacks in a black-box setting. As a case study, we then demonstrate an automated approach for physical signal injection on systems based on inertial sensors and actuators in a black-box setting. The attack becomes more challenging when adversaries cannot access any internal state of the embedded system; nevertheless, we show that it is possible to construct a non-intrusive adversarial control loop that controls the target system in real time. With this technology, the threat model of cyber-physical attacks will evolve, and researchers will build more automatic attacks in the future. We will also show demo videos of attacks on mini-robots, self-balanced scooters, and other devices.
Graham Bleaney, Facebook
The security industry has spent decades building up tooling and knowledge for detecting flaws in software that lead to vulnerabilities. To detect a breadth of vulnerabilities, these tools are built to identify general patterns, such as data flowing from a source to a sink. These generalized patterns also map to problems in domains as diverse as performance, compliance, privacy, and data abuse. In this talk, I'll present a series of case studies to show how Facebook engineers have applied our security tools to detect and prevent implementation flaws in domains such as these.
I'll go deep on a case study showing how static taint flow analysis, a tool Facebook first deployed for security purposes, helped us make sure we weren't storing or misusing user locations when we launched Instagram Threads. Then, to show that this case study was not an isolated example, I'll more quickly walk through a half dozen additional examples where tools from our Product Security team have been used to check for implementation flaws in other domains. Finally, we'll discuss the limitations of this approach, stemming from the tools themselves, differing organizational structures, and the ever-present need for defense in depth.
By the end of this talk, you should walk away brimming with ideas on new applications for your organization’s existing security tooling.
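To make the source-to-sink idea concrete, here is a toy sketch of taint tracking in Python. It is a dynamic stand-in for the static analysis described in the talk, and every name in it (`get_user_location`, `log_event`) is hypothetical, not part of Facebook's actual tooling:

```python
# Toy illustration of source-to-sink taint tracking: values read from a
# "source" carry a taint mark, taint propagates through operations, and
# tainted data reaching a "sink" is flagged as a policy violation.

class Tainted(str):
    """A string value carrying a taint mark from a sensitive source."""

def get_user_location():
    # Hypothetical "source": returns sensitive data, so mark it tainted.
    return Tainted("37.7749,-122.4194")

def log_event(value):
    # Hypothetical "sink" (e.g., persistent storage): tainted data must
    # not reach it, mirroring the "don't store user locations" check.
    if isinstance(value, Tainted):
        raise RuntimeError("taint violation: sensitive data reached a sink")
    print("logged:", value)

def concat(a, b):
    result = str(a) + str(b)
    # Propagate taint: anything derived from tainted data stays tainted.
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(result)
    return result

loc = get_user_location()
msg = concat("user at ", loc)
try:
    log_event(msg)  # flagged: location data flowing into a log
except RuntimeError as e:
    print(e)
```

A real static analyzer finds such flows without running the program, by following assignments and calls through the code; the toy above only shows the source/propagation/sink vocabulary the abstract relies on.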
Wednesday, February 2, 2022
8:55 am–9:00 am
Opening Remarks, Day 2
Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter
9:00 am–10:30 am
Moderator: Elissa M. Redmiles, Max Planck Institute for Software Systems
Panelists: Kendra Albert, Harvard Law School; Kate D'Adamo, Reframe Health and Justice; Angela Jones (She/Her/Hers/They/Them/Theirs), State University of New York
In this panel, four experts will discuss the influence of technology and policy on the livelihoods and wellbeing of sex workers. We will discuss the ever-changing landscape of regulation, efforts to remove sex and sex workers from the internet, and the role of digital security and privacy and of the experts who develop technologies to preserve them.
Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She has additionally served as a consultant and researcher at multiple institutions, including Microsoft Research, Facebook, the World Bank, the Center for Democracy and Technology, and the University of Zurich. Dr. Redmiles uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work has been featured in popular press publications such as the New York Times, Scientific American, Rolling Stone, Wired, Business Insider, and CNET and has been recognized with multiple Distinguished Paper Awards at USENIX Security and research awards from Facebook as well as the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (Cum Laude), M.S., and Ph.D. in Computer Science from the University of Maryland.
Kendra Albert is a clinical instructor at the Cyberlaw Clinic at Harvard Law School, where they teach students to practice technology law by working with pro bono clients. Their practice areas include freedom of expression, computer security, and intellectual property law. Kendra also publishes on gender, adversarial machine learning, and power. They hold a law degree from Harvard Law School and serve on the board of the ACLU of Massachusetts and the Tor Project. They also are a legal advisor for Hacking // Hustling, a collective of sex workers, survivors, and accomplices working at the intersection of tech and social justice to interrupt state surveillance and violence facilitated by technology.
Kate D'Adamo is a sex worker rights advocate with a focus on economic justice, anti-policing and incarceration, and public health. Previously, she was the National Policy Advocate at the Sex Workers Project and a community organizer and advocate with the Sex Workers Outreach Project and Sex Workers Action New York. Kate has held roles developing programming, trainings, and technical assistance; providing peer-led interventions to harm; offering direct services; and advancing political advocacy to support the rights and well-being of people engaged in the sex trade, including victims of trafficking.
Angela Jones is Professor of Sociology at Farmingdale State College, State University of New York. Jones's research interests include African American political thought and protest, race, gender, sexuality, sex work, feminist theory, and queer methodologies and theory. Jones is the author of Camming: Money, Power, and Pleasure in the Sex Industry (NYU Press, 2020) and African American Civil Rights: Early Activism and the Niagara Movement (Praeger, 2011). She is a co-editor of the three-volume After Marriage Equality book series (Routledge, 2018). Jones has also edited two other anthologies: The Modern African American Political Thought Reader: From David Walker to Barack Obama (Routledge, 2012), and A Critical Inquiry into Queer Utopias (Palgrave, 2013). Jones is the author of two forthcoming reference books: African American Activism and Political Engagement: An Encyclopedia of Empowerment and Black Lives Matter: A Reference Handbook (ABC-CLIO). She is also the author of numerous scholarly articles, which have been published in peer-reviewed journals such as Gender & Society, Signs: Journal of Women in Culture and Society, Sexualities, and Porn Studies. She also writes for public audiences and has published articles in venues such as Contexts (digital), The Conversation, the Nevada Independent, Peepshow Magazine, PopMatters, and Salon.
10:30 am–11:00 am
11:00 am–12:00 pm
Fairness and Inclusion
Session Chair: Kendra Albert, Harvard University
Seny Kamara, Brown University
Cryptography underpins a multitude of critical security- and privacy-enhancing technologies. Recent advances in modern cryptography promise to revolutionize finance, cloud computing and data analytics. But cryptography does not affect everyone in the same way. In this talk, I will discuss how cryptography benefits some and not others and how cryptography research supports the powerful but not the disenfranchised.
Seny Kamara is an Associate Professor of Computer Science at Brown University. Before joining Brown, he was a researcher at Microsoft Research.
His research is in cryptography and is driven by real-world problems from privacy, security and surveillance. He has worked extensively on the design and cryptanalysis of encrypted search algorithms, which are efficient algorithms to search on end-to-end encrypted data. He maintains interests in various aspects of theory and systems, including applied and theoretical cryptography, data structures and algorithms, databases, networking, game theory and technology policy.
At Brown, he co-directs the Encrypted Systems Lab and the Computing for the People project and is affiliated with the Center for Human Rights and Humanitarian Studies, the Policy Lab and the Data Science Initiative.
Steven Presser, Independent Researcher
hCaptcha, a commercial CAPTCHA product, currently protects 12-15% of websites against automation, including the talk submission website for this conference. It presents humans with a picture-based puzzle to solve and uses the results to label datasets; as a result, it offers only a visual CAPTCHA. To comply with accessibility requirements, hCaptcha provides a special "accessibility workflow" that requires additional information from users. However, this workflow has two major issues: it can be used to de-anonymize users, and it can be fully automated.
In this talk, I will examine how such a system was created. I begin with a brief background on CAPTCHAs, an overview of relevant assistive technologies for people with disabilities, and how the two interact. Next, I will discuss the disparate user experiences between the mainstream workflow and the accessibility workflow – as well as the privacy implications of their differences. I will discuss the design factors and requirements hCaptcha used when designing the accessibility workflow and then summarize the automation attack, including my responsible disclosure of the attack. Finally, I will conclude with a discussion of hCaptcha’s future plans for a more inclusive and privacy-friendly CAPTCHA, as well as asking some larger questions about the future of the CAPTCHA. These include: Is the era of the CAPTCHA at an end? If so, do we replace them and with what? How do we ensure inclusive access without creating security gaps?
12:00 pm–1:15 pm
1:15 pm–2:45 pm
ML Is Hard
Session Chair: Amanda Walker, Nuna
Nidhi Rastogi, Rochester Institute of Technology
An automatic, contextual, and trustworthy explanation of cyberattacks is the immediate goalpost for security experts. Achieving it requires deep knowledge of the system under attack, the attack itself, and real-time data describing environmental conditions. It also requires communicating in a way that leads experts to trust the explanation. Automating the process of communicating contextual and trustworthy explanations of cyberattacks must also handle various attack models, which adds to the existing challenge. However, a scientific approach to explanations can produce a system that offers the desired explanations in most use cases. In this presentation, we discuss the limitations of existing machine-learning-based security solutions and how contextual security solutions can address them. We share specific use cases to support our argument. We present our research on contextual security (threat intelligence using knowledge graphs) and ongoing work on explanation-based security.
Ram Shankar Siva Kumar, Microsoft
If you look under your toaster, you will find a sticker with the letters "UL" on it – this is a certification from "Underwriters Laboratory" promising that the toaster is relatively safe from spontaneous combustion.
Would it not be comforting to see a sticker under your smart device promising that it was robustly tested for security and privacy? Or a seal of approval attesting that it is robust against adversarial manipulation?
After all, if you want to know which security checks your router has passed, you can visit the manufacturer's page, look under the security tab, and get the details. Want to know how your bank's mobile app is keeping your data safe? Just Google your bank's name and the word "security" and you can see detailed information on how they adhere to industry standards to safeguard your data.
So, what gives for AI systems? AI systems are deployed in some of the most critical areas including healthcare, finance, transportation, and even cybersecurity. Why don’t we have a concrete list of assurances from these AI vendors? Moreover, if AI is just software 2.0, shouldn’t all the existing standards and certifications just directly apply? Also, securing AI systems is a universal good, right?
Neither Band-Aids nor Silver Bullets: How Bug Bounties Can Help the Discovery, Disclosure, and Redress of Algorithmic Harms
Camille Francois and Sasha Costanza-Chock, Algorithmic Justice League and Harvard Berkman-Klein Center for Internet and Society
Bug bounty programs for security vulnerabilities have received a great deal of attention in recent years, accompanied by adoption from a wide variety of organizations and a significant expansion in the numbers of participants on major platforms hosting such programs. This talk presents the conclusions of a research effort by the Algorithmic Justice League, looking at the applicability of bug bounties and related vulnerability disclosure mechanisms to the discovery, disclosure, and redress of algorithmic harms. We present a typology of design levers that characterize these different programs in the information security space, and analyze their different tradeoffs. We scrutinize a recent trend of expanding bug bounty programs to socio-technical issues, from data abuse bounties (Facebook, Google) to algorithmic biases (Rockstar Games, Twitter). Finally, we use a design justice lens to evaluate what the algorithmic harms space could borrow from these programs, and reciprocally, what traditional bug bounty programs could learn from the burgeoning algorithmic harms community.
2:45 pm–3:15 pm
3:15 pm–4:45 pm
Privacy Is Hard
Session Chair: Nwokedi Idika, Google
Nicholas Carlini, Google
Current machine learning models are not private: they reveal particular details about the individual examples contained in datasets used for training. This talk studies various aspects of this privacy problem. For example, we have found that adversaries can query GPT-2 (a pretrained language model) to extract personally-identifiable information from its training set.
Preventing this leakage is difficult, and recent ad hoc proposals are not effective. And while provably secure schemes exist (e.g., differentially private gradient descent), they come at a high utility cost. We conclude with potential next steps for researchers (problems that should be solved) and practitioners (practical techniques to test for memorization).
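As a rough illustration of the differentially private training mentioned above, the following sketch shows the core of a DP-SGD-style update: clip each example's gradient to bound its influence, then add Gaussian noise calibrated to that bound. The function name and parameter values are illustrative assumptions, not the speaker's implementation:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.05, rng=None):
    """One differentially private gradient step (DP-SGD style)."""
    rng = rng or np.random.default_rng(0)  # seeded here only for reproducibility
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Clip: scale the gradient down if its norm exceeds clip_norm,
        # so no single training example can dominate the update.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scaled to the clipping bound masks any individual example's
    # contribution; this is what buys the privacy guarantee.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    noisy_mean = (total + noise) / len(per_example_grads)
    return params - lr * noisy_mean

params = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.1, 0.1, 0.1])]
new_params = dp_sgd_step(params, grads)
```

The "high utility cost" in the abstract comes precisely from this added noise and clipping: both distort the gradients the model learns from.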
Reza Shokri, National University of Singapore
In this talk, I will discuss what exactly privacy risk is and what it is not, the difference between privacy and confidentiality (which are easily confused), the reasons models are vulnerable to inference attacks, the methodology for quantifying privacy risk in machine learning, and examples of how ML Privacy Meter and similar tools can enable detailed auditing of ML systems. I will show the fundamental and intuitive relation between auditing mechanisms and defense mechanisms for privacy (e.g., differential privacy).
It is very important for ML engineers, policymakers, and researchers to be aware of the risks, their implications, and the methodology for auditing the privacy risk of different types of machine learning algorithms. This can pave the way for privacy by design in machine learning.
Ghada Almashaqbeh, University of Connecticut
Cryptocurrencies and blockchains introduced an innovative computation model that paved the way for a large variety of applications. However, lack of privacy is a huge concern, especially for permissionless public blockchains. Clients do not want their financial activity to be tracked, their pseudonym addresses to be linked to their real identities, or even worse, disclose their sensitive data when processed by smart contracts. This talk will shed light on this issue, explore current solutions and technology trends, define the gaps, and then explore the road ahead towards viable privacy solutions for private computations over blockchains.
Thursday, February 3, 2022
8:55 am–9:00 am
Opening Remarks, Day 3
Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter
10:30 am–11:00 am
11:00 am–12:00 pm
Session Chair: Melanie Ensign, Discernible, Inc.
Vaibhav Garg, Comcast Cable
Two decades of economics research have repeatedly asserted that organizations, as well as individuals, do not have adequate incentives to invest in cybersecurity: the costs of insecurity are imposed on third parties rather than on its producers. Cybersecurity is thus a private good with externalities, one that will require regulation to prevent market failure. Underlying this body of research is the assumption that all organizations have the same business drivers, a similar attack surface, and a uniformly informed consumer base. This talk questions these assumptions and outlines seven naturally occurring incentives for organizations to invest in cybersecurity. Furthermore, I provide examples of how these incentives have driven investment in cybersecurity across different sectors. While the applicability of these incentives differs both across and within sectors, any cybersecurity public policy intervention must consider the resulting nuances. Cybersecurity covenants established without the sword of regulation may be both more effective and more sustainable, as they evolve with the experience and exposure of the stakeholders.
Anthony Vance, Virginia Tech
There are many teams in security—blue teams, red teams, purple teams, etc. This talk is about the security team that few people think about but has the potential to be the most powerful and influential security team in the organization: the board of directors. Through in-depth interviews of board directors, CISOs, and senior-level consultants who advise boards on security, we illustrate challenges that boards face in providing meaningful oversight of security. We also show how CISOs are gaining strategic importance in supporting and advising the board. Finally, we describe ways that security practitioners can help boards realize their potential as the most powerful security team in the company.
12:00 pm–12:45 pm
12:45 pm–2:00 pm
2:00 pm–3:30 pm
Following the Rules
Session Chair: Wendy Seltzer, W3C
Joy Forsythe, Alto Pharmacy
As the news fills up with ransomware attacks on health systems, the HealthTech startup space is booming. Why can’t our healthcare be as modern and friendly as ordering a pair of shoes or getting dinner reservations?
I made the leap from building enterprise security products to HealthTech startups five years ago, with idealism about how technology could fix things, and I'm here to share my "lessons learned." Like many other "disruptors," HealthTech companies are discovering that a lot of difficult security choices were made for a reason (sometimes because the alternative is life-threatening) or will take a long time to change.
Most importantly, healthcare is its own ecosystem that we have to understand before we can reason about it. Once I understood who the different entities were and how new startups fit into that system, I began to identify places where security can innovate and do better.
Justin Brookman, Consumer Reports
New privacy laws around the world give consumers the right to stop unwanted processing of their personal information. But how can we be expected to tell thousands of different companies that we don't want our data sold? "Do Not Track" was an early effort to give consumers scalable privacy rights, but the effort foundered without the weight of the law behind it. Now, new privacy laws are both creating new rights and letting consumers delegate to others the ability to exercise those rights. Universal signals and global settings may now legally bind companies and expose them to liability for ignoring user preferences. The Global Privacy Control is one effort to allow consumers to transmit to every website they visit a request not to have their data shared with others. This and similar efforts may illuminate how to make privacy rights practically workable in the future.
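The Global Privacy Control works by attaching a simple signal, the `Sec-GPC: 1` request header, to every HTTP request a browser sends. A minimal sketch of how a site might honor that signal server-side, with a hypothetical handler (the header name comes from the GPC proposal; everything else here is illustrative):

```python
# Sketch: honoring a Global Privacy Control opt-out signal server-side.
# The `Sec-GPC: 1` request header is defined by the GPC proposal; the
# handler and helper names below are hypothetical, not any site's API.

def gpc_opted_out(headers: dict) -> bool:
    """Return True if the request carries a GPC opt-out signal."""
    return headers.get("Sec-GPC") == "1"

def handle_request(headers: dict) -> str:
    if gpc_opted_out(headers):
        # Under laws recognizing universal signals, treat this as a
        # legally binding do-not-sell/do-not-share request.
        return "data sharing disabled"
    return "default behavior"
```

The appeal of this design is scalability: the consumer sets the preference once, and every site receives it automatically, rather than the consumer filing requests with thousands of companies.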
Justin Brookman is the Director of Consumer Privacy and Technology Policy for Consumer Reports. Justin is responsible for helping the organization continue its groundbreaking work to shape the digital marketplace in a way that empowers consumers and puts their data privacy and security needs first. This work includes using CR research to identify critical gaps in consumer privacy, data security, and technology law and policy. Justin also builds strategies to expand the use and influence of the Digital Standard, developed by CR and partner organizations to evaluate the privacy and security of products and services.
Prior to joining CR, Brookman was Policy Director of the Federal Trade Commission’s Office of Technology Research and Investigation. At the FTC, Brookman conducted and published original research on consumer protection concerns raised by emerging technologies such as cross-device tracking, smartphone security, and the internet of things. He also helped to initiate and investigate enforcement actions against deceptive or unfair practices, including actions against online data brokers and digital tracking companies.
He previously served as Director of Consumer Privacy at the Center for Democracy & Technology, a digital rights nonprofit, where he coordinated the organization’s advocacy for stronger protections for personal information in the U.S. and Europe.
Cillian Kieran, Ethyca
Most current approaches to enterprise data privacy suffer from the ex-post nature of their application. Applications purporting to orchestrate crucial privacy tasks like access control, rights fulfillment, or risk assessment get bolted on to pre-existing systems and must dynamically respond to an underlying web of data flows that is poorly described, ever-evolving, and complex. It's a Sisyphean challenge that afflicts some of the most sophisticated technology enterprises operating today, to say nothing of non-digitally native legacy enterprises.
In this presentation, Cillian Kieran, Founder and CEO of Ethyca, will argue that the only way to meaningfully solve this important problem is to apply privacy protections at the start of the software delivery lifecycle rather than at the finish, and will propose one approach for doing so.
He'll demonstrate the benefits of ex-ante privacy by walking through a set of annotation and risk evaluation tools built on top of an open-source privacy taxonomy derived from the ISO/IEC 27701 standard. Cillian's presentation will show how an engineer can annotate projects, evaluate privacy risks in CI pipelines, and enable privacy rights to be enacted on data stored in annotated systems.
This will be the first public walkthrough of an open-source project that has been years in development and has received interest from data engineering teams at some of the world's largest companies.
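The ex-ante approach described above can be sketched in miniature: data fields are annotated with taxonomy categories and declared uses, and a CI-style check fails the build when a declared use is not permitted by policy. All names below are illustrative assumptions, not the actual project's taxonomy or API:

```python
# Hypothetical sketch of ex-ante privacy evaluation: fields carry
# taxonomy annotations, and a CI check flags policy violations
# before code ships. Categories, uses, and field names are invented.

ANNOTATIONS = {
    "users.email": {"category": "user.contact", "use": "marketing"},
    "users.gov_id": {"category": "user.government_id", "use": "analytics"},
}

POLICY = {  # taxonomy category -> uses permitted by policy
    "user.contact": {"service", "marketing"},
    "user.government_id": {"service"},
}

def evaluate(annotations: dict, policy: dict) -> list:
    """Return annotated fields whose declared use violates the policy."""
    violations = []
    for field, meta in annotations.items():
        allowed = policy.get(meta["category"], set())
        if meta["use"] not in allowed:
            violations.append(field)
    return violations

# A CI pipeline would run evaluate() and fail the build on any violation.
```

Because the annotations live alongside the code, the check runs at the start of the delivery lifecycle rather than being bolted on after systems are deployed.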
3:30 pm–3:45 pm