Monday, January 27, 2020
7:30 am–8:45 am
8:45 am–9:00 am
Opening Remarks, Day 1
Program Co-Chairs: Ben Adida, VotingWorks, and Daniela Oliveira, University of Florida
10:15 am–10:45 am
Break with Refreshments
10:45 am–12:15 pm
Other People's Code
Filippo Valsorda, Google
Modern software development relies increasingly on code reuse in the form of third-party dependencies from the open-source ecosystem. Although each programming language has its own tooling and culture, all of them encourage a widespread model of adoption without detailed review, and of eager updates to new versions.
This transitive trust in dependency authors has led to a string of high-profile availability incidents and attacks: the recent rest-client Ruby gem compromise, the similar event-stream Node package compromise, the infamous left-pad incident, and many more. These episodes follow patterns we can learn from as an industry: they involve either attackers compromising developers' credentials and uploading new, compromised versions, or the ecosystem losing access to the contents of existing versions.
The new Go checksum database—deployed in 2019—was designed to secure the Go modules ecosystem without requiring any extra work by module authors, like extra key management. It provides a centralized log for the checksums of all versions of all public modules. It then deploys the same technology as Certificate Transparency to keep this central authority accountable. It does not introduce any new accounts that can be compromised, and it enables third-party auditors to offer new version notifications to authors. Finally, it's designed to be easily cacheable, enabling a tradeoff in resources and privacy, from simple proxies all the way to full mirrors that don't leak any information about what modules are in use.
This talk will look at the high level design of the checksum database, and how it can be applied to other software package ecosystems to help secure the software supply chain.
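The verification model the abstract describes boils down to a client-side check: anyone can recompute a module version's checksum from its contents and compare it to the value recorded in the central log, so no proxy or mirror can serve modified code undetected. The sketch below is illustrative only; the real go tool uses its own dirhash algorithm and signed transparency-log proofs, and the module contents here are hypothetical.

```python
import hashlib

def module_checksum(files):
    """Hash a module's file tree into one digest, in the spirit of Go's
    dirhash: hash each file, then hash the sorted list of
    (per-file digest, path) lines. Illustrative, not the exact algorithm."""
    lines = []
    for path, content in sorted(files.items()):
        file_digest = hashlib.sha256(content).hexdigest()
        lines.append(f"{file_digest}  {path}\n")
    return hashlib.sha256("".join(lines).encode()).hexdigest()

def verify(files, pinned_checksum):
    """Client-side check: recompute the checksum of the downloaded module
    and compare it to the entry pinned in the transparency log
    (or in a local go.sum-style file)."""
    return module_checksum(files) == pinned_checksum

# A module's contents as served by any proxy or mirror (hypothetical).
module = {"go.mod": b"module example.com/m\n", "m.go": b"package m\n"}
pinned = module_checksum(module)  # what the checksum database recorded

assert verify(module, pinned)            # untampered content: accepted
tampered = {**module, "m.go": b"package m // backdoor\n"}
assert not verify(tampered, pinned)      # modified content: rejected
```

Because the checksum is deterministic, any cache along the way, from a simple proxy to a full mirror, can serve the bytes while clients retain end-to-end verification.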
Filippo Valsorda (@FiloSottile) is a cryptography engineer on the Go team at Google. He acts as primary security coordinator for the Go Project and owns the Go cryptography standard libraries. Since joining the team, he has introduced TLS 1.3 support in the Go standard library and co-designed the Go module authentication system, the Go Checksum Database. Previously, at Cloudflare, he developed its experimental TLS 1.3 stack and kicked DNSSEC until it became something deployable.
Sarah Harvey, Square Inc
Microservice architecture is becoming increasingly common with the democratization of cloud computing power, and more and more organizations are realizing that it is often simpler to pay for a particular service than to build it from scratch. The result is that many large organizations often have to grapple with hundreds if not thousands of such third-party integrations. However, performing risk analysis about these interactions—especially when it relates to the sharing of data—can be extremely time-consuming if not impossible.
In this talk, we will briefly cover typical third-party integration flows within an organization, from request to implementation. We will identify common gaps in security visibility and access, and discuss various solutions with the degree of efficacy we have measured for each. We will argue that it is through these improvements that you will be able to build not just a more holistic but also a more consistent risk map of your organization's assets.
The aim of this talk is to show that the boring, grueling work in security is just as important as exciting 0-days! We hope to also show that there are still new exciting metrics and incident response systems you can derive from these processes.
Sarah is a software engineer on a privacy engineering team at Square. Her background includes 4+ years of industry experience in security/privacy infrastructure design and engineering, and 4 years of academic privacy research. She has a variety of event organizing and speaking experience; highlights include speaking at and co-organizing BSidesSF 2019, organizing and presenting a 300+ person CTF workshop at Grace Hopper, and giving a series of funny lightning talks on infrastructure security and privacy challenges.
She also has given talks as a hologram, and in general never takes herself seriously.
She can be followed for cats and tech humor on Twitter: @worldwise001.
Felix Fischer, Technical University of Munich
Stack Overflow helps software developers from all over the world get their daily programming tasks done. Knowledge and source code shared via this platform shape digital services and applications that are used by billions of people every day. The tremendous impact Stack Overflow has had on today's software urges us and many other researchers to investigate to what extent information security is part of the discussions on Stack Overflow, what the biggest security problems are, and how developers solve them.
Our results tell a tale of two stories. In the first, Stack Overflow seems to be the source of all evil. It's responsible for unintentionally marketing and distributing severe software vulnerabilities we traced in high-profile applications installed by billions of people. It's been demonstrated that these vulnerabilities would allow practical attacks and theft of credentials, credit cards, and other private data. The second tale tells the complete opposite story, where Stack Overflow becomes one of the most usable and effective tools in helping developers get security right. The moral of both stories is that it only takes small design tweaks to get from one to the other.
We are deeply convinced that these kinds of modifications could have an enormous positive effect on software security in general due to the pervasive use of Stack Overflow. Therefore, we want to highlight the most important results from usable security research over the last years to set the ball rolling. These include identified major security problems, what impact they had on real-world applications, and how we modified Stack Overflow to effectively help people develop secure software.
Felix Fischer is a Research Associate and PhD student of Jens Grossklags at the Chair of Cyber Trust at Technical University of Munich. He studies the interaction of people with information security and privacy technologies. His most recent publications focus on software engineers struggling with getting cryptography right and explore machine learning as a tool for usable security and privacy. His work has frequently been published at top-tier venues for security and privacy research, such as IEEE S&P, ACM CCS, and USENIX Security.
12:15 pm–1:30 pm
Sponsored by Salesforce
1:30 pm–3:30 pm
Fundamentals and Infrastructure
Francis Guibernau, Deloitte
Just like great escape artists captivate an innocent audience with perfectly measured and planned escapes, these extraordinary illusionists of the cyber world also aim for the same goal. Using meticulous, innovative maneuvers and their specially crafted malware pieces, they are able to analyze their surroundings to detect and evade sandbox environments. At this point, they can choose to conceal their real behavior to carry out their grand finale without being detected. But, how can we see beyond the surface? How can we harden our sandbox systems in order to prevent such evasion techniques?
In this talk, we are going to reveal the techniques used by these attackers to evade sandboxes and avoid being analyzed. We will walk you through the different approaches malware takes in order to achieve this and remain undetected. Additionally, we will show you unique malware samples to examine how they implement these techniques. Finally, we will demonstrate how, thanks to the use of MITRE ATT&CK Framework, we are able to document these techniques and improve our detection and analysis systems.
Francis is a Security Research Analyst at Deloitte Argentina's Cyber Threat Intelligence (CTI) team, specializing in tracking APT groups' activities worldwide by analyzing their tools, tactics, and techniques with the help of the MITRE ATT&CK Framework. He is currently finishing his studies in Information Systems Engineering at the Universidad Tecnológica Nacional (UTN).
Maya Kaczorowski, Google
Containers and microservices are increasingly being used to deploy applications, and with good reason, given their portability, simple scalability and lower management burden. In changing from an architecture based on monolithic applications to one using distributed microservices, known as a "cloud-native" architecture, there are changes not only to operations but also to security.
Where BeyondCorp states that user trust should be dependent on characteristics like the context-aware state of devices and not the ability to connect to the corp network, BeyondProd states that service trust should be dependent on characteristics like code provenance and service identity, not the location in the production network, such as IP or hostname identity.
Just like the security model evolved beyond the castle walls with BeyondCorp, BeyondProd proposes a cloud-native security architecture that assumes no trust between services, provides isolation between multi-tenant workloads, verifiable enforcement of what applications are deployed, automated vulnerability management, and strong access controls to critical data. These principles led Google to innovate several new systems in order to meet these requirements.
In this talk, we will cover what a cloud-native architecture is, and why it's different from a security point of view; design principles for security in a cloud-native world; how Google addressed these requirements and the internal tools used as part of this architecture; and how your organization might approach the same requirements. You'll come away with a better understanding of how to think about cloud-native security, and more capably decide what tools you might need to secure your infrastructure.
Maya is a Product Manager in Security & Privacy at Google, focused on container security. She previously worked on encryption at rest and encryption key management. Prior to Google, she was an Engagement Manager at McKinsey & Company, working in IT security for large enterprises, and before that, completed her Master's in mathematics focusing on cryptography and game theory. She is bilingual in English and French.
Outside of work, Maya is passionate about ice cream, making ice cream for friends at home, attending the Penn State Ice Cream Short Course in January 2014, and researching ice cream headaches. She also enjoys puzzling, running, and reading nonfiction.
Kenn White, MongoDB
Most databases in use today have an implicit central trust model—the idea being that system operators have full privilege to access and manage the information being processed in order to perform their work. This poses a problem in at least two particular cases: one, when the workload contains highly sensitive or confidential information, and two, when data are being processed and stored on third-party infrastructure such as a public cloud provider. In a central (or server-side) trust model, a live database breach or leak from publicly-exposed backups or logs can be catastrophic. One approach to protect both data-at-rest and data-in-use is client-side end-to-end encryption, in which sensitive data are encrypted at the application level before ever being sent to the server. Unfortunately, for mature modern databases, few options for native client-side encryption have existed for developers, particularly in the open-source world.
This talk will present lessons learned from nearly two years of engineering work spanning every major programming language, hardware platform, and operating system, to bring simple, usable authenticated encryption as a first-class citizen to the most widely deployed NoSQL database in the world. Insights from simple use cases of small stand-alone servers to some of the most demanding global distributed mission systems will be discussed. We'll review promising emerging cryptography and discuss the practical impact to developers and system designers.
Kenneth White is a security engineer whose work focuses on networks and global systems. He is co-founder and Director of the Open Crypto Audit Project and led formal security reviews on TrueCrypt and OpenSSL. He currently leads applied encryption engineering in MongoDB's global product group. He has directed R&D and security Ops in organizations ranging from startups to nonprofits to defense agencies to the Fortune 50. His work on applied signal analysis has been published in the Proceedings of the National Academy of Sciences. His work on network security and forensics has been cited by the Wall Street Journal, Reuters, Wired, and the BBC. He tweets about security, privacy, cryptography, and biscuits @kennwhite.
What do almost all recent cell network attacks that affect mobile user privacy have in common? They exploit the fact that cell phones have no way of authenticating towers during the initial connection bootstrapping phase. This includes everything from older IMSI catcher-style attacks to the newer spoofing attacks against the Presidential Alerts emergency broadcast system.
In this talk, we'll cover the distinct types of attacks that pre-authentication messages used in cell connection bootstrapping enable, how this ended up being such a prevalent issue, some of the efforts underway to try and fix this, and why this is ultimately such a hard problem to solve.
Yomna is a research engineer whose focus is cell network security. She is a Technology Fellow at EFF, was previously a core contributor to Certbot and a research fellow at Harvard Law, and has a degree in mathematics from the University of Waterloo.
3:30 pm–4:00 pm
Break with Refreshments
4:00 pm–5:30 pm
Kavya Pearlman, XR Safety Initiative
New technologies inevitably bring new risks. Virtual Reality (VR) is one such technology, slowly creeping into our daily digital lives; however, not much attention has been paid to the risks it brings along. As the industry looks toward mass adoption of Virtual Reality, with an expected $40 billion market size and over 200 million active users by the year 2020, attacks on this new domain have already begun making headlines. Kavya Pearlman, founder of the XR Safety Initiative, is busy building processes and standards and finding novel cyberattacks to stay ahead of the bad guys coming for this rising domain of Virtual Reality.
Well known as the "Cyber Guardian", Kavya Pearlman is an Award-winning cybersecurity professional with a deep interest in immersive and emerging technologies. Kavya is the founder of non-profit, XR Safety Initiative (XRSI). XRSI is the very first global effort that promotes privacy, security, ethics and develops standards and guidelines for Virtual Reality, Augmented Reality and Mixed Reality (VR/AR/MR) collectively known as XR.
Kavya advised Facebook on third-party security risks during the 2016 US presidential elections. As a Global Cybersecurity Strategist, she currently advises Wallarm, a global security company whose artificial-intelligence-powered application security platform protects hundreds of customers across e-commerce, fintech, health tech, and SaaS.
Kavya is constantly exploring new technologies to solve current cybersecurity challenges. She has been named one of the top cybersecurity influencers for two consecutive years, 2018 and 2019, by IFSEC Global. Kavya has won many awards for her work and contributions to the security community, including 40 Under 40 Top Business Executives 2019 by the San Francisco Business Times, Rising Star of the Year 2019 by the Women in IT Awards Series, and Minority CISO of the Year 2018 by ICMCP. For her work with the XR Safety Initiative, the Middle East CISO Council recently awarded her the CISO 100 Women Security Leader award in Dubai.
Kavya holds a master's degree in network security from DePaul University, Chicago, and many prestigious information security certifications, including CISM (Certified Information Security Manager) from ISACA, PCI DSS ISA (Internal Security Assessor), and PCIP from the Payment Card Industry Security Standards Council. Kavya is truly passionate about her work and inspires many around the world, including women and underrepresented communities in security and emerging technologies. She gives back to the tech community by mentoring women through the "Million Women Mentor" program, serves on the board of directors of the non-profit "Minorities in Cybersecurity," and is an advisory board member for the "CISO Council North America."
Nicolas Papernot, University of Toronto and Vector Institute
The attack surface of machine learning is large: training data can be poisoned, predictions manipulated using adversarial examples, models exploited to reveal sensitive information contained in training data, and more. This is in large part due to the absence of security considerations in the design of ML algorithms. Yet adversaries have clear incentives to target these systems. Thus, there is a need to ensure that computer systems that rely on ML are trustworthy.
Fortunately, we are at a turning point where ML is still being adopted, which creates a rare opportunity to address the shortcomings of the technology before it is widely deployed. Designing secure ML requires that we have a solid understanding as to what we expect legitimate model behavior to look like.
In this talk, we lay the basis of a framework that fosters trust in deployed ML algorithms. The approach uncovers the influence of training data on test time predictions, which helps identify poison in training data but also adversarial examples or queries that would potentially result in a leak of private information. Beyond immediate implications to security and privacy, we demonstrate how this helps interpret and cast some light on the model's internal behavior. We conclude by asking what data representations need to be extracted at training time to enable trustworthy machine learning.
Nicolas Papernot is an Assistant Professor of Electrical and Computer Engineering at the University of Toronto and Canada CIFAR AI Chair at the Vector Institute. His research interests span the security and privacy of machine learning. Nicolas received a best paper award at ICLR 2017. He is also the co-author of CleverHans, an open-source library widely adopted in the technical community to benchmark machine learning in adversarial settings, and TF Privacy, an open-source library for training differentially private models. He serves on the program committees of several conferences including ACM CCS, IEEE S&P, and USENIX Security. He earned his Ph.D. at the Pennsylvania State University, working with Professor Patrick McDaniel and supported by a Google Ph.D. Fellowship. Upon graduating, he spent a year as a research scientist at Google Brain.
Sadia Afroz, ICSI, Avast
Given the existence of adversarial attacks and fairness biases, one might ask whether machine learning is useful for security at all. In this talk, we will discuss how to build robust machine learning systems to defend against real-world attacks. We focus on building machine learning-based malware detectors. We address the necessity of considering ROC curves where the false positive rates need to lie well below 1%. Achieving this in the presence of a polluted ground-truth set, where 10–30% of data is unlabeled and 2–5% of labels are incorrect, is a true challenge. When a dynamic model is built, testing it against a repository of malware is impossible, since most malware is ephemeral and may no longer exhibit the malicious property. Finally, we discuss how to model realistic adversaries for adversarial attacks and defenses.
Sadia Afroz is a research scientist at the International Computer Science Institute (ICSI) and Avast Software. Her work focuses on anti-censorship, anonymity, and adversarial learning. Her work on adversarial authorship attribution received the 2013 Privacy Enhancing Technology (PET) award, the best student paper award at the 2012 Privacy Enhancing Technologies Symposium (PETS), and the 2014 ACM SIGSAC dissertation award (runner-up). More about her research can be found at http://www1.icsi.berkeley.edu/~sadia/.
5:30 pm–7:00 pm
Sponsored by Google
Tuesday, January 28, 2020
8:00 am–8:55 am
8:55 am–9:00 am
Opening Remarks, Day 2
Program Co-Chairs: Ben Adida, VotingWorks, and Daniela Oliveira, University of Florida
10:15 am–10:45 am
Break with Refreshments
10:45 am–12:15 pm
An Alternative Lens
Annalisa Nash Fernandez, Intercultural Strategist
Privacy and security are cultural constructs. We process and interpret them differently depending on our cultural framework. As technology yields unprecedented access across borders, frameworks designed for a few markets are ultimately deployed globally. Yet we still face linguistic and cultural barriers. Explore the geo-cultural dimensions of privacy, security, and communication that frame the global data frontier, unlocking global innovation in products and on multicultural teams. Understand the cultural values associated with approaches to trust, timing, change management, and data privacy and security. This engaging and informative presentation decodes how cultural differences present themselves as challenges on multicultural teams and in cross-border business transactions, but also provide opportunities for innovation and global excellence.
Annalisa Nash Fernandez is a specialist in world cultures, focusing on cultural elements in technology and business strategy. An experienced corporate strategic planning director who worked globally as an expatriate executive based in emerging markets, she bridges her dual background as a sociolinguist to navigate the cultural elements in digital communication, privacy, artificial intelligence, and the digital economy. Her expert quotes are featured widely, including by CIO magazine and the BBC, and her articles are published in trade journals and in leading media. Annalisa held various roles at Philip Morris International and Kraft Foods, based in São Paulo, Brazil, and investment banks, including Bankers Trust, based in New York City and Santiago, Chile. In her freelance consulting career she is a linguist for Transperfect, an intercultural strategist for multinational companies, a speaker at global conferences, and a pro bono interpreter and advocate. Annalisa holds an M.A. in language and translation from the University of Wisconsin, and a B.S. in international finance from Georgetown University.
L Jean Camp, Indiana University
Why don't people use security, protect their data, or adopt privacy-enhancing technologies? Is it that people don't care? Or people don't understand security and privacy? Is it a question of usability? Or is it a combination of all three? Individuals may rationally choose not to invest in security to benefit others, may underestimate their own risks, and may simultaneously find solutions to be unusable.
The solution to the lack of adoption of security (and the corresponding privacy paradox) depends upon the research thread one follows. For a classic economist, privacy means a less efficient market. Given that market efficiency is contingent on more information, individuals are rationally unconcerned: the value from information sharing outweighs the costs of privacy loss. Thus, the solution is to ensure that the value of the information being transacted is realized by the individual.
Economics of security is often empirical and analytical, addressing the cost of crime and amounts of business. Economics of security is also focused on incentive-aligned design where the person investing in security obtains the benefit. Earlier work addressed the conversion of economic information into goods; for example, creating markets for vulnerabilities.
In this presentation, I focus on the economic component of failures of adoption and acceptability in security. I will provide references to the research that addresses these dimensions in-depth. I will include specific examples of both successes and failures.
Jean Camp is a Professor at the School of Informatics and Computing at Indiana University. She joined Indiana after eight years at Harvard's Kennedy School, where her courses were also listed in Harvard Law, Harvard Business, and the Engineering Systems Division of MIT. She spent the year after earning her doctorate from Carnegie Mellon as a Senior Member of the Technical Staff at Sandia National Laboratories. She began her career as an engineer at Catawba Nuclear Station, with an MSEE from the University of North Carolina at Charlotte. Her research focuses on the intersection of human and technical trust, leveraging economic models and human-centered design to create safe, secure systems. She has authored more than two hundred publications, with peer-reviewed publications on security and privacy at every layer of the OSI model. She has alumni in the private, public, and nonprofit sectors. She is a Fellow of the Institute of Electrical and Electronics Engineers, as well as a Fellow of the American Association for the Advancement of Science.
Steven Buccini, Aspen Institute Tech Policy Hub
Our presentation outlines several state-of-the-art technical strategies to enable data access for public interest research while complying with privacy regulations like the EU General Data Protection Regulation. Platforms often hold large-scale, high-quality datasets that researchers cannot compile on their own. While GDPR contains exemptions intended to allow platforms to share data with third-party researchers, regulatory "gray zones" that exist within the law—including the concept of data "anonymity," the role and obligation of so-called "data controllers" in public interest research, and the standards for informed consent—are hindering the sharing of substantive datasets. We examine technical strategies being considered to deal with these ambiguities while maintaining user privacy and control, discuss where these strategies are useful and where they fall short, and what challenges still need to be solved. Finally, we propose a set of potential industry standards, both technical and philosophical, that companies, researchers, and users around the world can employ to ensure the privacy and security of data for public interest research.
Steven Buccini is a fellow at the Aspen Institute Tech Policy Hub, where he investigated GDPR-compliant data sharing partnerships, fought to make North Carolina's voting machines more secure, and worked to protect seniors online. Previously, Steven worked as a software engineer for several companies based in the Bay Area before moving back to his hometown to run for the North Carolina House of Representatives. He earned his Bachelor's in Electrical Engineering and Computer Science from UC Berkeley. He has sampled every BBQ spot in San Francisco and holds very strong positions in the never-ending debate on the merits of Eastern- vs Western-style Carolina pulled pork BBQ.
12:15 pm–1:30 pm
1:30 pm–3:30 pm
Matt Bishop, Department of Computer Science, University of California, Davis
Data anonymization focuses on hiding specific fields of records. Adversaries, however, view the records as a collection of fields and see what they can glean from the unanonymized fields that will impart information about the anonymized fields. In reality, the problem is one of relationships—which relationships can be exploited to reveal anonymized information. There is always some external information that enables the relationships to be uncovered. This talk examines the question of relationships and their role in anonymizing and deanonymizing data, and treats this as a problem of risk—can the adversaries characterize that external data and find it?
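The kind of exploitable relationship the abstract describes can be made concrete with a toy linking attack: a table with names removed is joined to public external data on its unanonymized quasi-identifier fields, revealing the hidden values. All records and field choices below are hypothetical and deliberately simplified.

```python
# "Anonymized" table: names removed, but zip, birth year, and sex remain.
medical = [
    {"zip": "53715", "birth_year": 1965, "sex": "F", "diagnosis": "flu"},
    {"zip": "53703", "birth_year": 1972, "sex": "M", "diagnosis": "asthma"},
]

# External public data an adversary can obtain (e.g., a voter roll).
voter_roll = [
    {"name": "Alice", "zip": "53715", "birth_year": 1965, "sex": "F"},
    {"name": "Bob",   "zip": "53703", "birth_year": 1972, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(anonymized, external):
    """Join the two tables on the quasi-identifier fields, re-attaching
    names to records whose identifying field was removed."""
    index = {tuple(r[k] for k in QUASI_IDENTIFIERS): r["name"] for r in external}
    revealed = {}
    for rec in anonymized:
        key = tuple(rec[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            revealed[index[key]] = rec["diagnosis"]
    return revealed

# The unanonymized fields alone are enough to recover each person's diagnosis.
assert link(medical, voter_roll) == {"Alice": "flu", "Bob": "asthma"}
```

The defender's risk question is exactly the one the talk poses: how likely is it that an adversary can characterize and obtain external data that forms such a join?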
Matt Bishop received his Ph.D. from Purdue University in 1984 and is a Professor of Computer Science at the University of California at Davis. His main research area is the analysis of vulnerabilities in computer systems, and he works on data sanitization, elections and e-voting systems, policy, formal modeling, the insider threat, and computer and information security education. He co-led the California Top-to-Bottom Review of electronic voting systems certified in California, and also co-led the Joint Task Force that developed the ACM/IEEE/ASIS SIGSAC/IFIP WG10.8 Cybersecurity Curricular Guidelines. The second edition of his textbook, "Computer Security: Art and Science", was published in November 2018 by Addison-Wesley Professional. Among other topics, he teaches programming and computer security.
Dr. Lea Kissner, Humu
We live in a world of failure and I think we're all heartily sick of it. The systems we build hurt people, both when they work as designed and when they break. Some of those failures are because of bugs, some because of design flaws, but so many of our failures are because we didn't build for the complex spectrum which constitutes actual humanity. People are not all the same. They don't have the same desires or needs or threats. We, as security and privacy professionals, are not living up to our ethical obligations when we fail to build with respect for humans. It requires more comfort with ambiguity. It requires putting aside purity. But this is how we can be truly effective.
Lea is the Chief Privacy Officer of Humu. She works to build respect for users into everything that Humu does, such as product design, privacy-enhancing infrastructure, application security, and novel research into both theoretical and practical aspects of privacy. She was previously the Global Lead of Privacy Technology at Google, working for over a decade on projects including logs anonymization, infrastructure security, privacy infrastructure, and privacy engineering. She earned a Ph.D. in computer science (with a focus on cryptography) at Carnegie Mellon University and a B.S. in electrical engineering and computer science from UC Berkeley.
Engin Bozdag, Uber
The concept of privacy by design (PbD) is more than 20 years old and a common element in both regulatory and technical discussions. While many Privacy by Design strategies focus on product development with a traditional waterfall-style methodology, today's agile development processes do not follow the historically clear-cut and distinct design, planning, implementation, and release phases. Many privacy risk mitigation strategies were created for the waterfall-style methodology and focus on the planning phase; the implementation phase then consists of taking the planned actions in the hope that they are enough to avoid the identified risks.
In an agile methodology, software is released in an iterative and feedback-driven fashion, which emphasizes short development cycles, continuous testing, user-centricity, and greater simplicity of design. Agile programming practices allow developers across services to continuously tweak, remove, or add new features using "build-measure-learn" feedback loops. This includes experimental features, minimum viable products, and alpha releases. While agility requires quick software development sprints, privacy analysis is usually a slow and time-consuming activity. In addition, technical privacy assessments are based on the architectural description of the system, but in agile development, there is often no grand design upfront and the documentation is limited. It might be possible to assess the privacy readiness of each feature, but when these features are combined, there is no guarantee that the service itself or the entire supply chain that underlies it fulfills all the privacy requirements. The latter is the case due to the modular, microservice-oriented architectures that are favored in current-day software ecosystems.
In this talk, we will demonstrate an approach to technical privacy where privacy by design is applied in a hyper-connected service environment. We will walk through some of the principles coming from GDPR, industry standards such as ISO29100 and Data Protection Authority guidelines. We will also demonstrate how those principles can be applied to a complex agile environment.
Engin is a senior privacy architect at Uber, where he leads the technical privacy review process to ensure privacy is embedded into products and services as early as possible. Prior to Uber, Engin worked for health tech leader Philips and led its technical GDPR implementation program. He holds a Ph.D. in algorithmic bias and technology ethics and an M.S. in software engineering, both from Delft University of Technology, the Netherlands' top technical university and one of the leading engineering schools in the world. Engin is a member of the ISO/PC 317 working group creating a global standard on privacy by design. He is also affiliated with the 4TU Centre for Ethics & Technology (the major research center in the Netherlands on technology ethics) and is a regular guest lecturer at Delft University of Technology.
Dr. Jeremy Gillula, Electronic Frontier Foundation
Web browsers are finally starting to take privacy seriously. Almost every major browser has now announced a privacy initiative, but which ones are serious and which ones are snake oil? Are any of the alternative browsers like Brave or Tor Browser serious contenders? Do browser privacy protections on desktop differ from mobile? In this talk, we'll look at a high-level overview of the technical details behind the major browsers' privacy pushes, and cut through the techno-jargon to see which browsers are actually trying to protect your privacy, and which are just pretending.
Dr. Gillula began his career in academia doing research in the fields of robotics and machine learning. As a participant in the DARPA Desert Grand Challenge, he did work on computer vision systems and sensor fusion systems for unmanned autonomous ground vehicles. During his doctorate, his research focused on how to design guaranteed safe control algorithms for hybrid systems, with a focus on unmanned aerial vehicles. His thesis focused on the design of guaranteed-safe machine learning systems, fusing control theoretic and machine learning techniques.
Since finishing his Ph.D., Dr. Gillula has turned his attention to the intersection of technology and civil liberties issues, including mobile devices, big data, net neutrality, and algorithmic fairness and transparency. He provides technical expertise to lawyers and activists who work on digital civil liberties, and has given a multitude of talks to conferences, invited groups, and policymakers.
A strong believer in never taking the straightforward path to anything, Dr. Gillula went to Caltech for undergrad, then got his Ph.D. in computer science from Stanford University by working on robotics projects with a professor in electrical engineering from UC Berkeley.
3:30 pm–4:00 pm
Break with Refreshments
4:00 pm–5:30 pm
Kathryn Kosmides, Founder, CEO of Garbo.io
We want to open the door for a conversation on what exactly public records are in today's digital age and how they can be used to prevent crimes, with an emphasis on gender-based crimes like sexual assault, domestic violence, and sexual harassment.
Kathryn Kosmides is the founder and CEO of Garbo.io, a nonprofit that provides access to data that prevents domestic violence, sexual assault, and other crimes against vulnerable populations while holding systems and individuals accountable.
Sanchari Das, PhD Candidate and Information Security Engineer
The Internet of Things (IoT) is a phenomenon that has penetrated the global market in virtually all devices capable of connecting to the internet. Smart toys are one such emerging device class, combining the traditional toy experience with internet-enabled features for playing and interacting with one's child. Worldwide, smart toy sales reached 5 billion in 2017 and are expected to exceed 15 billion by 2022. Though useful, exposure to the internet also brings exposure to risks and vulnerabilities. Because common knowledge of IoT functionality is lacking, home IoT devices pose a serious concern for users across the world, and the risks are especially concerning for parents protecting their families' privacy and security.
Our research investigates smart toy vulnerabilities: we perform penetration testing on toy products, present a summary of the risks and vulnerabilities, and provide users with employable mitigation practices to secure the private spaces, data, and members of their homes. A smart toy was selected as a demonstration model due to its popularity among younger audiences, its brand trust among parents, and design decisions that make it an overpowered and under-protected target. Acting as attackers, we were able to gain root access to the device, take pictures, record videos, create 30 GB of hidden storage space, and add software for remote control of the device, or any other Android-based application for port scanning, emailing, or other network attacks. Additionally, we changed gameplay to inappropriate games intended to steal credit card data or other sensitive data from the child owner, who is told it is all a game. All attacks function without the user knowing that the device has been compromised. As a defense, we have developed a threat model that educates users in home-based self-mitigation, and we offer actionable recommendations to the manufacturer to make the device safer through two software update options and one physical modification.
Sanchari Das is a Ph.D. Candidate in the School of Informatics, Computing, and Engineering at Indiana University Bloomington. A security track researcher, her research interests include multi-factor authentication, usable security and privacy, user experience, social media research, third party privacy, user risk perception, online harassment, risk communication, and human-computer interaction.
She currently works for American Express as an Information Security Engineer and Project Manager on the Identity and Access Management team (Identity Services), and also serves as a Global Privacy Adviser at XRSI.org. She has presented her research at several conferences, including RSA, Black Hat, Financial Cryptography, HAISA, SOUPS, and SM&S.
She holds dual Master's degrees from Jadavpur University, Kolkata, India (Computer Applications) and Indiana University Bloomington (M.S. in Informatics). She received her Bachelor's from The Heritage Academy, Kolkata, India, where she was the gold medalist in her batch.
Earlier, she worked at Infosys Limited and HCL Technologies.
Jennifer Helsby, Freedom of the Press Foundation
SecureDrop is a whistleblowing platform originally created in 2012 for journalists to accept leaked documents from anonymous sources. It is currently in use by dozens of news organizations, including NBC News, The Washington Post, and The New York Times. The goals of the project are to (1) protect the identity of sources and (2) provide a secure environment for journalists to read documents and respond to sources. This talk is about a new QubesOS-based (Xen) workstation for journalists and other users who need to open potentially malicious documents. The threat of malware submitted through a SecureDrop server is handled via compartmentalization, i.e., opening each potentially malicious document in a separate VM. As journalists increasingly face attacks, including attempts we have observed to phish people through SecureDrop, this workstation can make it significantly safer for them to work with source materials.
Jennifer Helsby (@redshiftzero) has been Lead Developer of SecureDrop at Freedom of the Press Foundation (FPF) since 2017. Prior to joining FPF, she was a postdoctoral researcher at the Center for Data Science and Public Policy at the University of Chicago. Jennifer is also a co-founding member of Lucy Parsons Labs, a non-profit that focuses on police accountability and surveillance oversight.
5:30 pm–7:00 pm
Sponsored by Netflix
Wednesday, January 29, 2020
8:00 am–8:55 am
8:55 am–9:00 am
Opening Remarks, Day 3
10:15 am–10:45 am
Break with Refreshments
10:45 am–12:45 pm
Joey Dodds, Galois and Free & Fair
It is discussed in the media every day. It is the focus of congressional investigations and DEF CON villages. It is a core concern of our nation. Our democracy must be resistant to adversarial influences, both domestic and foreign.
Hundreds of millions vote on outdated computers that no cybersecurity professional trusts. Tens of millions vote with no paper ballot record. Government agencies responsible for the correctness and security of election computing systems—primarily the Election Assistance Commission (EAC) and National Institute of Standards and Technology (NIST)—are under-resourced. Elected officials and electoral officials already have their plates full with IT challenges such as database management and ransomware attacks. The costs of recertification make voting system vendors hesitant to make significant changes to their products, especially if they don't see universal demand across their customer base.
These groups understand that things can be better, but they need help.
This talk will explain in plain language (i) how we got to where we are today in elections in the USA, (ii) the aspects of the elections systems landscape that make change difficult, and (iii) practical actions we can take to break this cycle.
We will describe what we are doing in the Microsoft ElectionGuard project and in the DARPA SSITH project to help create a new generation of trustworthy election technologies.
Dr. Joey Dodds is a Principal Researcher at Galois and the co-founder of Free & Fair. Joey is leading the ElectionGuard project and is one of the core experts in the world on matters of trustworthy election technologies.
Laurin B. Weissinger, Yale University
It is time to reckon with the security implications of the laissez-faire approach that has dominated Internet regulation. Since the late 1980s, this US-led, hands-off approach has facilitated unprecedented technical innovation. Competition and technological progress have driven down the price of resources like hosting and domains. While cheaper prices do benefit everyday users, near-general availability and low prices have the unintended consequence of enabling the inevitable elements of the human condition that are often kept in check by law and regulations. In short, laissez-faire governance was reasonable for infrastructures used by a small group of expert users but now comes at the cost of real harm and threats to individuals, organizations, and society at large.
In this talk, we focus on the multi-stakeholder approach to governance of Internet domain names and addresses that in part results from this laissez-faire approach. While technically open to all, meaningful participation in multi-stakeholder fora like ICANN and standard-setting bodies has always required time and money. Naturally, large vested interests like corporations will be heavily involved in, and often try to steer, governance and policymaking concerning the processes on which their operating environment and profit margins depend. Less profit-driven stakeholders, including academics and independent researchers, consumer protection agencies and advocacy organizations, as well as civil society in general, have fewer resources and are thus less able to have their interests represented or to exert an equivalent impact on policy. Recently, tensions among key actors have risen, along with familiar but escalating criticism by both insiders and outsiders regarding the imbalanced representation of stakeholders, volunteer burnout, slow progress, high cost, and unscalable results of policy development.
Due to the technically open but heavily stratified nature of internet governance, goals like public security and safety have often been neglected, and their proponents struggle to tackle these issues through existing policy avenues. Furthermore, independent researchers or public interest bodies have difficulties when trying to comprehensively study end-user security, or the relationships between policy, organizational arrangements, pricing, costs, and abuse.
In the short term, we must recognize that the current lack of data and access undermines our understanding of the status quo, and thus inhibits possible preparations for a more secure "cyberfuture." In the medium term, we argue that these fora will have to be reorganized to provide a stronger voice to consumer protection interests, and the independent experts and researchers that support them. In the long term, we need the regulatory function—or at least some form of oversight—to be (financially) independent from the industry it regulates.
Laurin Weissinger is a Lecturer in Law and the Cybersecurity Fellow at Yale Law School. He works on the problem of trust assurance in cybersecurity, covering both technical and socio-political questions, as well as cooperation in international and organizational cybersecurity. Laurin received his D.Phil. from the University of Oxford in 2018 and has over 15 years of work experience in IT. Much of his recent work focuses on policy questions related to internet security. He serves as a vice chair on ICANN's second Security, Stability, and Resiliency of the Domain Name System (SSR2) Review Team.
Eileen McFarland, Lucy Parsons Labs
Increased state use of facial recognition technology threatens to widen the power imbalance between the state and disenfranchised communities. However, what happens when individuals use facial recognition to watch the watchers? OpenOversight is an open-source project that promotes police accountability through public data. Users can sort through photos to identify an officer against whom they would like to complain. A lack of officer identification information has stifled previous attempts to monitor police; in Chicago from 2011–2015, 28% of complaints against police were immediately dropped due to a lack of identification. The OpenOversight team has leveraged facial recognition to make the project more effective. For example, we use the AWS tool Rekognition, which has been used by various police departments, to identify with 90% accuracy whether police or military officers are present in a photo. This pre-processing has made the volunteer job of photo sorting more efficient. We also use facial recognition to match new photos against existing images in the database. After a discussion of this technical implementation, the talk will close by discussing the legal considerations that we encountered, such as department-specific guidelines and compliance with Illinois' Biometric Information Privacy Act.
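The pre-processing step described in the abstract can be sketched as a confidence-threshold filter over label detections. This is only an illustration: the dict below mirrors the general shape of AWS Rekognition's detect-labels response, but the label names, threshold, and sample values are assumptions, not OpenOversight's actual configuration.

```python
# Hedged sketch: flag photos that likely contain an officer, based on
# label detections shaped like AWS Rekognition's detect_labels output.
# TARGET_LABELS and the sample data are illustrative assumptions.

CONFIDENCE_THRESHOLD = 90.0
TARGET_LABELS = {"Police", "Military Uniform"}  # hypothetical label names

def likely_contains_officer(response: dict) -> bool:
    """Return True if any target label meets the confidence threshold."""
    return any(
        label["Name"] in TARGET_LABELS
        and label["Confidence"] >= CONFIDENCE_THRESHOLD
        for label in response.get("Labels", [])
    )

# Illustrative response, shaped like what a real Rekognition call
# (boto3.client("rekognition").detect_labels(Image=...)) might return.
sample = {"Labels": [{"Name": "Person", "Confidence": 99.1},
                     {"Name": "Police", "Confidence": 93.4}]}
print(likely_contains_officer(sample))  # True: "Police" at 93.4 >= 90.0
```

Filtering like this lets volunteers skip photos with no likely officer present, which is how a high-confidence detector makes manual photo sorting more efficient.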
Eileen is a member of the data liberation and civic transparency organization Lucy Parsons Labs, where she focuses on web app development. She has worked as a software developer on a variety of projects, using Java/Spring, Go, and AWS. With a background in legal aid, Eileen remains involved in civic tech projects, such as OpenOversight. When not programming, she enjoys doing yoga, reading, and people-watching. She once dropped her camera into a waterfall in Ecuador after rappelling down the aforementioned waterfall. The camera still works, and Eileen likes to view it as a reminder of the importance of resilience.
12:45 pm–2:00 pm
2:00 pm–4:00 pm
Preparing and Responding
Birhanu Eshete, University of Michigan, Dearborn
When it comes to improving the state of defense in the cybercrime arms race, the all-too-common advice is to be more proactive than reactive. However, close examination of the modus operandi of cybercriminals reveals a great deal of pragmatism and adaptability to defensive moves. Among other blindspots, the exploitable opportunities pursued by cybercriminals typically stem from flaws in the design, implementation, configuration, and deployment of systems. In essence, cybercriminals monetize these blindspots to steer the arms race in their favor.
Using a multi-faceted analysis of pre-packaged cybercrime tools called exploit kits, this talk argues and illustrates that defenders should be just as pragmatic and adaptive, turning the weakest links of cybercriminals into concrete opportunities to counter cybercrime. We use the exploit kit phenomenon to highlight how defenders can combine reactive, proactive, and offensive strategies towards pragmatic defense.
On the reactive front, we describe how seemingly simple yet identifying configuration and deployment artifacts are used to detect active exploit kits in the wild. On the offensive side, we illustrate how access to exploit kits' source code is leveraged towards automated infiltration and legally authorized takedown of live exploit kits. On the proactive front, we highlight how lessons learned from the reactive and offensive strategies are combined toward real-time threat detection. The talk leaves the audience with key takeaways on pragmatic defense strategies in the face of an adaptive cybercriminal with motives and means.
Birhanu Eshete is an Assistant Professor of Computer Science at the University of Michigan, Dearborn, where he leads the Data-Driven Security and Privacy Lab. Prior to that, he was a Postdoctoral Researcher in the Systems and Internet Security Lab at the University of Illinois at Chicago. His research focuses on cybercrime analysis, cyber threat intelligence, and adversarial machine learning. His work on automated exploit generation received the distinguished paper award at the 2018 USENIX Security Symposium. The same work was one of the finalists in the 2018 NYU Applied Research Competition across the United States and Canada. Birhanu holds a Ph.D. degree in Computer Science from the University of Trento, and M.S. and B.S. in Computer Science from Addis Ababa University.
Munish Walther-Puri, Director of Cyber Risk, New York City Cyber Command (NYC3)
As we face cyber threats at the geopolitical and national levels, municipalities experience the impact of cyber risks on a local, tangible level. This talk will explore the technology and systems that serve New Yorkers, outline New York City Cyber Command's (NYC3) approach to cyber risk, and extract lessons from previous cyber attacks targeting municipalities, such as ransomware. NYC3 believes that cybersecurity is a public safety issue and this talk will explore several aspects of protecting the public and the city services that they use.
This presentation will address the following questions:
- How does one "compare" cybersecurity postures at vastly different agencies?
- How does assessing cyber risk at the city level differ from the state and federal levels?
- How does NYC3 safeguard the privacy of New Yorkers while protecting their digital lives?
- How can existing cyber risk management frameworks be applied, adopted, and synthesized?
- What lessons can we learn from cyber-attacks in other municipalities?
- What are the key emerging risks for municipalities?
Munish Walther-Puri is the Director of Cyber Risk for New York City Cyber Command, where he leads the cyber risk management program.
Previously, he founded Presearch Strategy, a firm dedicated to applying technology and analytics to geopolitical risk, strategic intelligence, and cybersecurity, and was the Chief Research Officer at Terbium Labs, a dark web data intelligence startup, where he led strategy and operations of the company's analysis, intelligence, and reporting. He also spent time at Citigroup, working on the intersection of fraud, cyber investigations, and terrorism.
He is a member of the Intelligence and National Security Alliance’s Cyber Council, the International Consortium of Minority Cybersecurity Professionals, and the ODNI-DHS Analyst Exchange Program. Munish also co-founded and is an advisor to the Fletcher Political Risk Group and a technical advisor to the Do No Digital Harm Initiative. He previously served on Women 2.0’s Men as Allies Committee and the State Department’s Overseas Security Advisory Council.
Swathi Joshi, Netflix
Scaling incident response is inherently hard. Incidents happen in waves and have sporadic surges. In 2018, we witnessed this first hand with a "December to Remember," where on average each responder had to handle multiple incidents across different time zones. In an ideal world, you would have a large incident response team on standby, but hiring enough people to match the occasional surge is expensive and impractical. How do you manage the demand without adding massive headcount?
In this talk, I will describe how we have approached this problem at Netflix: a complex environment with a small incident response team and growing needs. I will delve into how we created the Reservist Program, a pool of auxiliary Crisis Managers that supplement our security incident response function. At the end of the talk, the audience will be equipped to build their own program with simple steps.
Swathi Joshi leads Netflix's Detection and Response team which focuses on managing the inevitable security incidents that arise and building detection pipelines to minimize risk to Netflix. Prior to Netflix, she was an Engagement Manager and Escalations Manager at Mandiant/FireEye helping companies defend against Advanced Persistent Threats (APT). Swathi was born in Mangalore, India. She received her Master's degree in Information Security and Assurance from George Mason University and sits on the board of https://sdie.org.
David Freeman, Facebook
Fighting spam, phishing, and other forms of abuse on the internet is typically seen as a detection problem: find signals that will identify the bad guys and then use these signals to block them. In this talk, I argue that the most difficult part of fighting abuse is not detecting and blocking the bad guys—it's figuring out whether they're there in the first place. What's the "background level" of spam and fake accounts? How can we figure out what our detection systems are missing? Which abuse problem is the most important one to work on right now?
In this talk, I will show how good measurement of abuse unlocks both prioritization of work and analysis of impact. I will present several approaches that Facebook's integrity teams have used to measure and prioritize their problems, including user reports, human labeling, and automated labeling, and offer scenarios in which each of these should and shouldn't be used.
I will also introduce the "Abuse Uncertainty Principle" which says that you can use a metric for measurement or detection, but not both. The Uncertainty Principle implies that measurement is never a finished project, but I will offer strategies for ensuring that your metrics are good enough to inform key decisions. Armed with these tools, you can go back to your product and find out how much abuse it's attracting, how good you are at stopping it, and where you need to invest next.
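The measurement side of this argument can be made concrete with a small sketch: estimate the background prevalence of abuse by human-labeling a uniform random sample of content, and report a confidence interval rather than a point estimate. This is a generic illustration of sampling-based measurement, not Facebook's actual methodology; the counts below are made up.

```python
# Hedged sketch: prevalence estimation from a human-labeled random sample,
# reported as a 95% Wilson score interval. All numbers are illustrative.
import math

def wilson_interval(violations: int, sample_size: int, z: float = 1.96):
    """95% Wilson score interval for abuse prevalence, estimated from a
    uniform random sample that human reviewers have labeled."""
    p = violations / sample_size
    denom = 1 + z ** 2 / sample_size
    center = (p + z ** 2 / (2 * sample_size)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / sample_size + z ** 2 / (4 * sample_size ** 2)
    )
    return center - margin, center + margin

# Illustrative: reviewers flag 12 violating items in a random sample of 1,000.
low, high = wilson_interval(12, 1000)
print(f"estimated prevalence: {low:.2%} to {high:.2%}")
```

Because the sample is drawn independently of any detection system, the estimate is not biased by what the classifiers already catch, which is exactly the separation between measurement and detection metrics that the "Abuse Uncertainty Principle" above calls for.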
David Freeman is a research scientist/engineer at Facebook working on integrity problems, with a particular focus on fake engagement, scraping, and automation detection. He previously led anti-abuse engineering and data science teams at LinkedIn. He is an author, presenter, and organizer at international conferences on machine learning and security, such as Enigma, NDSS, WWW, and AISec, and has written (with Clarence Chio) a book on Machine Learning and Security published by O'Reilly. He holds a Ph.D. in mathematics from UC Berkeley and did postdoctoral research in cryptography and security at CWI and Stanford University.
Program Co-Chairs: Ben Adida, VotingWorks, and Daniela Oliveira, University of Florida