Enigma 2018 Conference Program
All sessions will be held in Grand Ballroom ABCFGH unless otherwise noted.
Tuesday, January 16, 2018
7:30 am–8:45 am
8:45 am–9:00 am
Opening Remarks, Day 1
9:00 am–10:30 am
Security and Human Factors
Session Chair: Eva Galperin, EFF
Social Cybersecurity: Reshaping Security through an Empirical Understanding of Human Social Behavior
Sauvik Das, Georgia Institute of Technology
How can we design systems that encourage better cybersecurity behaviors? Despite important improvements to the usability of cybersecurity systems, much security advice goes ignored and many security systems remain underutilized. I argue that this disconnect can partially be explained by the fact that there’s a largely unconsidered cost to engaging in good security behaviors: costs of social face. For example, by using two-factor authentication, one might be perceived as “paranoid.” By encrypting one’s phone, one might be perceived as having something to hide. More generally, by caring too strongly about cybersecurity, one may give off the impression of being shady. In this talk, I present evidence in support of the following claim: Social influences strongly affect cybersecurity behaviors, and it is possible to encourage better cybersecurity behaviors by designing security systems that are more social.
First, I empirically modeled how social influences affect the adoption of security behaviors and systems of 1.5 million Facebook users. Second, I designed a notification that informs Facebook users that their friends use optional security systems to protect their own accounts and evaluated these “social” notifications in a randomized, controlled experiment with 50,000 Facebook users. In so doing, I provide some of the first direct evidence that security behaviors are strongly driven by social influence, and that the design of a security system strongly influences its potential for social spread. Specifically, security systems that are more observable, inclusive, and stewarded are positively affected by social influence, while those that are not are often negatively affected by social influence.
Taken together, my work argues for a future of socially intelligent security systems that understand and accommodate basic human behaviors, desires and capabilities.
Dr. Sauvik Das is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. His research, which intersects HCI, data science and cybersecurity, aims to empower people with novel security systems that mitigate costs of time, effort and social capital. His work has won three best paper and best paper honorable mention awards at premier venues (UbiComp 2013, CHI 2016 and 2017) as well as an Honorable Mention for the NSA’s Best Scientific Cybersecurity Paper Award in 2014. His work has also been widely covered by the popular press, including features in the Financial Times, Slate, Ars Technica and The Atlantic. In addition, he was an NDSEG Fellow, a Qualcomm Innovation Fellow, a Stu Card Graduate Fellow, and an NSF EAPSI Fellow. Sauvik earned his Ph.D. and M.S. in Human-Computer Interaction at Carnegie Mellon University and his B.S. in Computer Science at Georgia Institute of Technology.
Gennie Gebhart, Researcher, Electronic Frontier Foundation / Postdoctoral Researcher, UC Berkeley; Bill Marczak, Senior Research Fellow, Citizen Lab
When we as technologists design a new product or provide advice and support, our assumptions and worldview necessarily color the work we do—and by extension the “rules” that we expect users to follow. We assume that we know how our users employ technology to communicate. We assume that we know what the concepts of "security" and "privacy" mean to our users. But those assumptions—and the security advice and design that they motivate—can fall apart when confronted with the on-the-ground realities that users face, often with harmful results. The world's most popular large-scale Internet platforms, often created and incubated in a monoculture, are not immune to this phenomenon. When users face risk beyond that anticipated by developers, the response is often that users should consider their threat model, change their behavior, or stop using certain tools altogether. In this talk, we argue that some of the assumptions we routinely bring to our work should be treated as security bugs, as they can relegate to second-class status those users who differently adapt technology to their lives. We question several common assumptions in the security community and tell stories of security issues that can stem from them, drawn from our extensive work with targeted communities around the world.
Gennie Gebhart does research and advocacy for the Electronic Frontier Foundation on consumer privacy, surveillance, and security issues. Gennie earned a Master of Library and Information Science from the University of Washington, where her research with the Department of Computer Science & Engineering's Security & Privacy Research Lab investigated user reactions to censorship.
Bill Marczak is a Senior Research Fellow at Citizen Lab, a co-founder of Bahrain Watch, and a Postdoctoral Researcher at UC Berkeley, where he received his PhD in Computer Science. His work focuses on defending against novel technological threats to Internet freedom, including new censorship and surveillance tools employed by well-resourced actors against activists and civil society.
Masha Sedova, Co-Founder, Elevate Security
For decades security awareness programs focused on one dimension: improving people’s security ability. The assumption has been that employees don’t know the correct course of action and that with the right amount of training, they’ll start performing more securely. There is a second dimension that needs to be considered in security behavior change: motivation. This talk will explore how and when to motivate employees to security action. It will also discuss how to "surf" motivation generated by both predictable and unpredictable security events to drive security behavior change in a workforce.
Masha Sedova is an industry-recognized people-security expert, speaker and trainer focused on engaging people to be key elements of secure organizations. She is the co-founder of Elevate Security, which delivers a behavioral-science-based platform that can measure, motivate, and educate employees on security behaviors that prevent breaches. Before Elevate, Masha Sedova was a security executive at Salesforce, where she built and led the security engagement team focused on improving the security mindset of employees, partners and customers. In addition, Masha has been a board member of the National Cyber Security Alliance and a regular presenter at conferences such as Black Hat, RSA, ISSA and SANS.
10:30 am–11:00 am
Break with Refreshments
Grand Ballroom DE
Visit the Sponsor Showcase!
11:00 am–12:00 pm
Session Chair: Eric Rozier, Iowa State University
Joe Near, Postdoctoral Researcher, University of California, Berkeley
Much of the research in differential privacy to date falls short of private industry requirements, both in its ability to scale and in demonstrated success on real data. However, most researchers do not have access to real-world data on which to prove new techniques. In this talk, Uber's privacy engineering team and Berkeley researchers discuss the story behind their pragmatic collaboration and how it led to multiple open source releases from their differential privacy stack.
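The workhorse of most differential privacy stacks is the Laplace mechanism. As a rough illustration only (this is not code from the Uber/Berkeley releases; the function names, dataset, and epsilon value are invented for the example), a counting query can be privatized like this:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two exponentials with mean `scale`
    # is Laplace(0, scale)-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(rows, predicate, epsilon: float) -> float:
    """Epsilon-differentially-private count of rows matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one row
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for row in rows if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: a noisy count of trips longer than 10 km.
trips_km = [3.2, 12.5, 8.1, 14.0, 9.9, 11.3]
noisy = private_count(trips_km, lambda km: km > 10, epsilon=0.5)
```

Real deployments need considerably more machinery (sensitivity analysis of arbitrary SQL queries, privacy budget accounting), which is what the open source releases the talk describes address.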
Joe Near is a postdoctoral researcher in Dawn Song's group at UC Berkeley. His research interests include security, data privacy, program analysis and programming languages. As part of his PhD work at MIT, where he was the recipient of an NSF Graduate Research Fellowship, Joe worked on static techniques for uncovering access control bugs in web applications.
Carrie Jiang, Software Engineer, Instagram
Instagram is a social network that allows people to share the world's moments as they happen. It now has 800 million monthly and 500 million daily active users and continues to grow rapidly. With this growth comes the responsibility of making Instagram a place where people feel comfortable sharing content and connecting with other people, because they feel in control of their information and trust us to look out for them.
This talk will cover how Instagram handles these privacy concerns while experiencing internal and external growth of users, engineers, and systems. It will focus on how Instagram treats privacy from the data layer via the Data Privacy Framework. This includes the design principles followed in creating this framework, strategies for deployment and adoption, and lessons learned along the way.
Carrie is a member of Instagram’s Core Services Infrastructure team, where she focuses on providing intuitive, reliable and performant interfaces and system services to ship Instagram products at scale. In this capacity, she has most recently focused on Instagram’s Data Privacy Framework and scaling it across the organization. Carrie’s background is in building scalable data and backend infrastructure, including stints at Placed, Microsoft, and IBM.
12:00 pm–1:00 pm
Santa Clara Ballroom
1:00 pm–3:00 pm
Security and Society
Session Chair: David Evans, University of Virginia
Kendra Albert, Clinical Fellow, Harvard Law School
This is a joint work with Elizabeth Anne Watkins.
Computer security often takes place in an institutional context: large organizations in fields like finance, healthcare, law, and journalism set best practices and provide critical training for non-experts. Many tools used to protect users assume that security is administered by benevolent overlords who can set appropriate permissions, answer questions, and in general save workers from themselves. However, as the future of work shifts towards contingent labor models and precarious employment, institutions may no longer play such a primary role. What then?
In a gig-based economy, independent workers are cut off from organizational sources of security training. Security costs are externalized to the individual worker, who is no longer conceptualized as part of the organization. This in itself may have an outsized impact on marginalized communities due to their overrepresentation in gig work, portending a class-based disparity in security awareness.
But even more impactful, gig workers have been relegated to outsider status, often categorized as another source of risk. What were once trust-based relationships may become new sources of vulnerability to be managed. The logical outcome of identifying users as a threat is a widening of mistrust of groups of people who were once trusted insiders, and with that, we risk fundamental changes to cultural conceptions of security.
Kendra Albert is a technology lawyer who specializes in computer security, and an affiliate at the Berkman Klein Center for Internet and Society. They work at the Cyberlaw Clinic at Harvard Law School, where they provide pro bono legal services to clients on issues related to the Internet and emerging technologies. They previously worked at Zeitgeist Law, a boutique technology law firm in San Francisco, and have interned at the EFF, Cloudflare, and Public Citizen. Kendra holds a J.D. from Harvard Law School and a B.H.A. from Carnegie Mellon University. Their paper “Computer Security Tools and Concepts for Lawyers” was published in the Green Bag in 2017 and won an award for Best Legal Hack from DC Legal Hackers. The Precarious Security work is a collaboration between Kendra and Elizabeth Anne Watkins, a PhD student in Communications at Columbia University.
Cara Marie, Offensive Security Lead, Datadog; Andy Grant, Regional Vice President, NCC Group
It's become more common for individuals to be forced to grant access to their "personal" online accounts/devices. There has been recent media focus on border crossings where individuals are required to temporarily turn over laptops/devices, often being coerced into giving decryption secrets or biometrics. Similar demands are made when individuals are arrested. It's also been reported that some employers request employees' credentials for social media accounts—there has even been consideration of adding similar requirements to the U.S. visa application process.
These are all obvious invasions of privacy with at-best questionable legality, and yet not complying has dire consequences—prolonged detainment, missed work, or barred entry to the country. As these measures are put in place domestically, it's only reasonable to expect similar policies in other nations, particularly those with fewer individual freedoms.
What options are left for an individual? This talk explores three programmatic options to help activists, dissidents, travelers, foreign nationals, and everyday citizens better protect their sensitive data. These proofs of concept (released post-talk) focus on what should be done to implement a "nuclear" option: revoking access to everything—possibly while under threat—and wiping data when necessary. These solutions are not intended to be answers to the issues at large, but proofs of concept that we hope will start a greater conversation regarding acceptable (and legal) privacy protections.
Cara Marie is an Offensive Security Lead at Datadog. Prior to joining Datadog, Cara Marie worked as a security consultant performing penetration tests against a wide variety of products, applications, environments, and infrastructure. Cara has performed security research in the following areas: Linux rootkits, compression bombs, and browser security. She has given talks at Black Hat, InfoSeCon, ZonCon, and ShellCon. Cara has compiled and released a "bomb" arsenal (https://bomb.codes) which aids security researchers and developers performing compression bomb testing.
Andy Grant is a Regional Vice President for NCC Group. While at NCC Group, Andy has worked on a wide variety of projects. He has performed numerous mobile application assessments on Android, iOS and WP7, internal and external network penetration tests, web application security assessments, and widget/third-party platform reviews. Andy has worked with small tech start-ups, small and large software development groups, and large financial institutions. Andy has a BS in Computer Science and an Advanced Computer Security Certificate from Stanford University.
Robert McArdle, Director of Forward Looking Threat Research (FTR), Trend Micro
Are your opinions really your own—or have they been carefully shaped until you merely think they are? Today’s digital connectivity makes it possible to share information like never before—traditional borders and constraints of distance simply do not exist anymore. But this same connectivity also makes it easier to manipulate the public’s perception of reality. The term “fake news” has become increasingly common in the past year—but it is only one facet of a much larger problem: the use of technology to alter public opinion, and ultimately affect the real world.
This is not a new phenomenon. At every point in human history where a new disruptive communications technology has replaced another, propaganda and manipulation have flourished. Each communication method has its associated societal norms and customs, but these take time to form. What is allowed on television, radio, and print is well established—but the Internet has not reached that level of maturity yet.
I will demonstrate the techniques and methods that have been actively used to this end—and not exclusively in the field of politics. I will review the global availability and costs of such underground services, from Russia to China, Ukraine and the Middle East and demonstrate the affiliation of these activities with other Black Market commodities and services. I will discuss several techniques of identifying such campaigns by processing social media data and show how it is possible to trace those campaigns to the original perpetrators. Finally, I will walk through a number of case studies, demonstrating the links between artificially fabricated events and the executors carrying out those orders.
Note: The video for this talk is currently unavailable.
Robert is a Director of Trend Micro's Forward Looking Threat Research team, where he is involved in analysing the latest Cybercrime threats, specializing in researching the future threat landscape, Open Source Intelligence (OSINT) and coordinating investigations with international law enforcement. Robert is a regular presenter for the press and at security conferences. He also lectures in Malware Analysis and Cybercrime Investigations on MSc modules at Cork IT and University College Dublin (UCD). He worries that his hobby and job are one and the same, and constantly wonders if "normal" people have that problem. He enjoys long walks on the beach, puppies, and Guinness.
Arvind Narayanan, Princeton University
The open web has led to a privacy and security arms race: the ad tech industry has come up with sneaky ways to track people’s activities online, but informed users can control ads and tracking using sophisticated client-side tools. The open web also allows researchers to study tracking on an unprecedented scale using automated browsers. In this talk, I’ll describe my research group’s findings on the state of online tracking, and our efforts to build tools to shift the balance in users’ favor. I’ll conclude with recommendations for policy makers and browser vendors. In particular, I’ll argue that there is no meaningful "neutral" stance on privacy; browser vendors who value user privacy should bundle technical defenses against tracking.
Arvind Narayanan is an Assistant Professor of Computer Science at Princeton. He leads the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. Narayanan also leads a research team investigating the security, anonymity, and stability of cryptocurrencies as well as novel applications of blockchains. He co-created a Massive Open Online Course as well as a textbook on Bitcoin and cryptocurrency technologies. His doctoral research showed the fundamental limits of de-identification, for which he received the Privacy Enhancing Technologies Award.
Narayanan is an affiliated faculty member at the Center for Information Technology Policy at Princeton and an affiliate scholar at Stanford Law School's Center for Internet and Society. You can follow him on Twitter at @random_walker.
3:00 pm–3:30 pm
Break with Refreshments
Grand Ballroom DE
Visit the Sponsor Showcase!
3:30 pm–5:30 pm
Session Chair: Bryan Payne, Netflix
Tanya Janca, Application Security Technical Advisor
A lot is expected of software developers these days; they are expected to be experts in everything despite very little training. Throw in the IT security team (often with little-to-no knowledge of how to build software) telling developers what to do and how to do it, and the situation is further strained. This silo-filled, tension-laced situation, coupled with short deadlines and mounting pressure from management, often leads to stress, anxiety and less-than-ideal reactions from developers and security people alike. This talk will explain how people's personal insecurities can be brought out by leadership decisions in the way we manage our application security programs, and how this can lead to real-life vulnerabilities in software and other IT products. This is not a soft talk about "feelings"; it is a talk about creating programs, governance and policies that ensure security throughout the entire SDLC. No more laying blame and pointing fingers; it's time to put our egos aside and focus on building high-quality software that is secure. The talk will cover the cause and effect of insecurities and other behavioural influencers, along with several detailed, specific solutions that can be implemented at your own workplace immediately. No more ambiguity or uncertainty; from now on, only crystal-clear expectations.
Tanya Janca is an application security evangelist, a web application penetration tester and vulnerability assessor, trainer, public speaker, an ethical hacker, the Co-Leader of the OWASP Ottawa chapter, an effective altruist, and has been developing software since the late ’90s. She has worn many hats and done many things, including: Web App PenTesting, Technical Training, Custom Apps, Ethical Hacking, COTS, Incident Response, Enterprise Architecture, Project and People Management, and even Tech Support. She is currently helping the Government of Canada secure their web applications.
Rob Clark, Cloud Security Leader at IBM
What do all cloud compromises (probably) have in common? They took place in SOC 2 data centres and on systems with ISO 27k security management—and we all know that Compliant != Secure. So why at IBM did we completely restructure our approach to security and model the way we do business around two sets of NIST guidance!?
In this talk I will describe how we built a security team to support the DevSecOps approach to secure development, decreased complexity in our cloud deployments, and drove security feature functionality into platform offerings. We built a security organisation that aligned to NIST 800-53 revision 5 (draft) guidance and measured itself using the NIST Cyber Security Framework. We made individuals personally responsible for understanding how particular controls are met across the entire cloud (200+ services and acquisitions, 53 data centres) and, more importantly, for devising how these controls can be measured.
Rob Clark is a Distinguished Engineer at IBM, where he is the CTO for cloud infrastructure security and responsible for the overall security posture of the IBM Cloud. Rob has a passion for building teams and solving hard problems. Rob is a keen contributor to open source software, having previously led the OpenStack security project, which won the Linux Foundation's Core Infrastructure award for best practices around security, quality and stability.
Bryan Zimmer, Senior Security Engineer
Many companies treat their buildings as trusted locations, granting devices on their office networks access to sensitive resources. This often renders logical security controls only as effective as the company's ability to prevent tailgating into the building. Zero Trust networks address this issue, with the most well-known being Google’s BeyondCorp. We'll review the Location Independent Security Approach (LISA), a Zero Trust architecture pioneered at Netflix. It removes trust of the office network, replaces it with trust of authenticated users and healthy devices, and is simple enough that many companies can implement it. High level implementation steps will be provided so attendees can evaluate and plan similar programs in their own organizations.
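The core decision in such a Zero Trust model can be sketched as an access check that consults user authentication and device health but deliberately ignores network location. This is an illustrative sketch only, not Netflix's implementation; the type names and the specific health checks are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Device:
    managed: bool          # enrolled in device management
    disk_encrypted: bool
    os_patched: bool

@dataclass
class Request:
    user_authenticated: bool   # e.g. SSO with a second factor
    device: Device
    on_office_network: bool    # deliberately ignored below

def allow(request: Request) -> bool:
    """Grant access based only on user identity and device health.

    `on_office_network` never appears in the decision: being inside
    the building confers no trust.
    """
    d = request.device
    healthy = d.managed and d.disk_encrypted and d.os_patched
    return request.user_authenticated and healthy
```

Under this policy an authenticated user on a healthy laptop in a coffee shop is allowed in, while an unmanaged device plugged into the office network is not.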
Bryan is a senior security engineer. He previously worked in the federal, finance, and education sectors. He writes short, to-the-point bios, despises superfluous multisyllabic words, and enjoys dry humor.
Melody Hildebrandt, 21st Century Fox
As the media industry rapidly evolves, so must the security model that protects it. As we embrace digital platforms for delivering content, we open new risk surface. As we evolve to a modern cloud-first architecture to deliver content and collaborate globally at scale, the notion of a perimeter starts to change. We need to protect our content in this complex environment. But we are a company of creatives and consumers who naturally expect an excellent user experience, an expectation that only grows as consumer technology becomes easier to use, so our approach can't be about walls and gates, or people will find ways around them. This talk will cover how we position security within the overall enterprise to protect our crown jewels while also delivering platforms employees want to use.
Melody Hildebrandt is the Global Chief Information Security Officer at 21st Century Fox. In this capacity she is responsible for the cyber security posture of 21CF businesses including 20th Century Fox, Fox Networks Group, National Geographic Partners, Fox News, Star India and others.
Before joining 21CF, she was an executive vice president of Palantir Technologies. An early employee of the company, she was instrumental in opening and growing the company's commercial business and running its New York operations. She led Palantir’s work in cyber security, anti-money laundering, and regulatory response.
Prior to that, she consulted for US and international governments with Booz Allen Hamilton, where she designed military and strategy wargames. She also previously worked with the French Ministry of Education while living in France, where she also studied. She is the founder of Transcapitalist, a blog that explored the intersection of technology and free market mechanisms.
5:30 pm–7:00 pm
Santa Clara Ballroom
Sponsored by Google
Wednesday, January 17, 2018
8:00 am–8:55 am
8:55 am–9:00 am
Opening Remarks, Day 2
9:00 am–10:30 am
Session Chair: Josiah Dykstra, National Security Agency
Greg Conti, Director of Security Research, IronNet Cybersecurity
How do people with an army-sized cyber force think about cyber operations? One thing is for sure: it is not how we grew up thinking about traditional information security. Information security, while important, fails to capture the scale, audacity, capabilities, and resources of government-backed cyber operations.
Over the past 10 years we have seen the rise of massive nation-state cyber forces and we’ve felt the impact, from catastrophic data spills to destructive cyber attacks to influence on the U.S. presidential election. Like it or not, cyber conflict is raging now and is likely to get worse before it gets better.
From Clausewitz and Jomini to the present, militaries have learned how to fight at scale on the physical battlefield. For example, the United States fielded and coordinated actions of almost 100 divisions of 10,000-15,000 people each to fight around the globe in World War II. Militaries are now organizing, training, and equipping forces and learning to fight at scale in cyberspace. What we’ve experienced so far is only a first taste.
In this talk we will look at time-tested military techniques and study how they may be used to support offensive and defensive cyber operations. We’ll focus on key examples: effects based operations, targeting, deception, center of gravity analysis, command and control, and cyber-enabled information operations, and we’ll provide pointers to many more.
These insights will help prepare network defenders and security researchers for what is coming, help you to better understand the mindset of those attacking you, and convince you we aren’t thinking big enough.
Gregory Conti is Director of Security Research at IronNet Cybersecurity. Formerly he served on the West Point faculty for more than a decade where he led their cybersecurity research and education efforts. During his career in the Army he served in a variety of Signals Intelligence and Cyber Operations assignments, deploying to Iraq as Officer-in-Charge of U.S. Cyber Command's Expeditionary Cyber Support Team and as a Senior Advisor in the U.S. Cyber Command Commander's Action Group where he co-developed the Joint Advanced Cyber Warfare Course. Gregory is the author of the new book, On Cyber: Towards an Operational Art for Cyber Operations (Kopidion Press), as well as approximately 75 articles and papers covering online privacy, usable security, cyber conflict, and security visualization. He has spoken at numerous security conferences, including Black Hat, DEFCON, RSA, ShmooCon, HOPE, Google Ideas, and the NATO Conference on Cyber Conflict. Conti holds a B.S. from West Point, an M.S. from Johns Hopkins University, and a Ph.D. from the Georgia Institute of Technology, all in computer science. He may be found online at www.gregconti.com and on Twitter as @cyberbgone.
Brandon Valeriano, Donald Bren Chair of Armed Politics, Marine Corps University
This project examines the efficacy of various cyber strategies in the digital domain. We develop a theory that cyber operations are a form of covert coercion typically seeking to send ambiguous signals or demonstrate resolve. Cyber coercion from this perspective is neither as revolutionary nor as novel as it seems when evaluated against evidence. We examine cyber strategies in their varying forms through quantitative analysis, finding that cyber disruptions, short-term and long-term espionage, and degradation operations all usually fail to produce political concessions. When states do compel a rival, which is measured as a change in behavior in the target that is strategically advantageous to the initiator, the cyber operation tends to occur alongside more traditional coercive instruments such as diplomatic pressure, economic sanctions, and military threats and displays. Our findings suggest that before we develop recommendations for sound foreign policy responses to state-backed cyber intrusions or craft international frameworks that constrain the proliferation of politically motivated malware, we should theoretically and empirically investigate cyber strategies and their efficacy.
Brandon Valeriano (Donald Bren Chair of Armed Politics, Marine Corps University) has published five books and dozens of articles. His two most recent books are Cyber War versus Cyber Realities (2015) and Cyber Strategy (2018), both with Oxford University Press. Ongoing research explores cyber coercion, biological examinations of cyber threat, and repression in cyberspace.
Paul Waller, Technical Director for Platform Security Research at NCSC
As connected devices and systems grow ever more complex, security experts recognise the need to protect the most critical functions. Standards and products have been developed to provide ‘roots of trust’; isolated components to manage identities and other keys, verify updates and measure/report the status of a device. We also see a growing list of processor features allowing arbitrary code to run in a protected environment, both on our devices and also in the cloud. Despite all this innovation, very few applications actually support these features. Market support for hardware security is low. This talk will explore why that is, and what we might do about it.
Paul has worked in cryptography and hardware security since graduating with a degree in mathematics in 2001. He has represented the NCSC and its predecessor organisation in various standards bodies, including the Trusted Computing Group, Global Platform and FIDO. His current role as Technical Director for Platform Security Research allows him to spend time with academic and industry partners learning what the future holds for security technology, and also to help user communities take advantage of new features. Outside of work Paul likes to cycle up small hills in summer, and ski down bigger ones in winter.
10:30 am–11:00 am
Break with Refreshments
Grand Ballroom DE
Visit the Sponsor Showcase!
11:00 am–12:00 pm
Session Chair: Ben Adida, Clever
Grzegorz Milka, Software Engineer, Google
With billions of usernames and passwords readily accessible via the black market, account takeover poses a significant threat to services that rely solely on passwords for authentication. In this talk, we provide a deep dive into the ecosystem supporting account takeovers, the danger it poses to users, and the importance of automatic, defense-in-depth risk detection systems as a fundamental defense in identity solutions. We start by exploring the relative likelihood that users fall victim to data breaches, phishing, or malware using a dataset of over 3.3 billion stolen credentials; and how hijackers subsequently use these credentials for spam, financial theft, and stepping-stone attacks. We then turn to examine how identity providers can use risk analysis, in conjunction with ‘login challenges’, to bridge the security gap between two-factor authentication and password-only users with minimal additional friction. We show the practical weaknesses of certain login challenges (SMS and email) and evidence of attackers now collecting risk profile data and challenge responses to weaken user security. Finally, we discuss ongoing challenges such as how public opinion (as measured by our user studies) may be at odds with actions identity providers should take to improve overall account security, and how the industry should try to change this.
Grzegorz Milka is a security engineer at Google working on account takeover prevention technologies. His mission is to build state-of-the-art, safe automated systems that stop hijackers from getting access to users’ accounts. Grzegorz received an M.S. degree in Computer Science and B.S. degree in Mathematics from the University of Warsaw.
Travis McPeak, Sr. Security Engineer at Netflix
The principle and benefits of Least Privilege have long been established in computer security, dating back to the 1970s. Despite this, it is far from universally adopted. The technologies used to define and enforce Least Privilege policies are arcane to most in the computing industry. Software developers are incentivized to ship products and features, so they focus on what helps them work fast: wildcards in policies, if they even have one.
Traditional attempts to counter this typically require system administrators or security staff to perform manual reviews and craft security policies in response. As application complexity and development velocity increase, it becomes impractical to determine Least Privilege manually ahead of time. A central policy gatekeeper doesn't scale efficiently and is likely to negatively impact delivery velocity.
Our approach at Netflix combines gathering data about how applications interact with their environment and automatically adjusting the permissions in their security policies. Unused permissions are automatically removed from application policies across our environment without manual effort from developers or the security team. This approach gives us the best of both worlds: the security team gets least privilege policies and developers maintain high velocity. During this talk we’ll describe how this works in our environment, the challenges we’ve overcome along the way, and other applications of the same methodology that we recommend.
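The repo-and-remove loop described above can be sketched in a few lines. This is an illustrative toy, not Netflix's actual Repokid implementation; the permission names, the idle window, and the data shapes are invented for the example.

```python
from datetime import datetime, timedelta

def repo_policy(granted, last_used, now, max_idle=timedelta(days=90)):
    """Keep only permissions observed in use within the idle window.

    granted:   set of permission strings in the current policy
    last_used: mapping permission -> datetime of last observed use
    """
    return {p for p in granted
            if p in last_used and now - last_used[p] <= max_idle}

now = datetime(2018, 1, 16)
granted = {"s3:GetObject", "s3:PutObject", "sqs:SendMessage"}
last_used = {
    "s3:GetObject": now - timedelta(days=3),    # actively used -> kept
    "s3:PutObject": now - timedelta(days=200),  # stale -> removed
}                                               # sqs:SendMessage never observed
assert repo_policy(granted, last_used, now) == {"s3:GetObject"}
```

Run periodically against access data, a loop like this tightens every application's policy toward least privilege with no manual review.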
Travis is a Senior Cloud Security Engineer at Netflix. He enjoys applying automation to solving complex problems that can’t be addressed manually. Travis is currently the OWASP Bay Area chapter lead and a core developer of several open-source security tools including Bandit and Repokid. In the past he has led several initiatives on the OpenStack Security Team and was a founding member of the Cloud Foundry Security Team. When not geeking out on security he enjoys traveling and quality beer and food.
12:00 pm–1:00 pm
Santa Clara Ballroom
1:00 pm–3:00 pm
Session Chair: Teddy Reed, Facebook
David Brumley, CEO, ForAllSecure
We need to move to a fully autonomous world for software security. Today, software security attack and defense are done by humans, on human timelines. Cyber-autonomy research and development is creating technology that makes fully autonomous cyber operations possible. Cyber-autonomy promises to scale better and to make defense possible within machine-scale time.
In this talk, I will describe the Cyber Grand Challenge and the system Mayhem. Mayhem is a fully autonomous cyber system that can find new vulnerabilities, generate exploits, and self-heal off-the-shelf software. Mayhem is the result of 10 years of academic research and 3 years of commercial development. Mayhem competed in and won a $2 million prize in the US Cyber Grand Challenge competition, co-hosted at DEF CON 2016. I will describe how Mayhem works, the Cyber Grand Challenge competition, and how Mayhem fared against the world's best hacking teams. I will also describe how I think Mayhem, and other autonomous systems like it, will change the security landscape in the next decade.
David Brumley is the CEO and co-founder of ForAllSecure, a company bent on securing the world's software against exploitable bugs. David is on leave as the Director of CyLab, the Carnegie Mellon Security and Privacy Institute, and a Professor of ECE and CS at CMU. His research interests include all areas of security, with a specialization in software security. Prof. Brumley received his Ph.D. in Computer Science from Carnegie Mellon University, an M.S. in Computer Science from Stanford University, and a B.A. in Mathematics from the University of Northern Colorado. Brumley's honors include a United States Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, a 2013 Sloan Foundation award and numerous best paper awards. Prof. Brumley is also advisor and a founding member of PPP, one of the world's most elite competitive hacking teams.
Giovanni Vigna, Professor, UCSB / CTO, Lastline; Davide Balzarotti, Professor at Eurecom Institute, France
Malware uses packing and other forms of obfuscation in order to foil analysis by anti-virus systems. However, few realize that benign applications use packing and obfuscation as well, in order to protect intellectual property and prevent license abuse.
This talk will discuss how packing works and show, with experimental data, that many leading anti-virus products do not really determine whether programs are malicious, only whether they are packed. This is a by-product of the (incorrect) pervasive use of machine learning and AI in malware detection, and it results in substantial false positives that plague the anti-virus community.
The inconvenient truth: unless the AV industry does better than detecting packers, we are doomed to live in a world in which good and bad programs are misclassified, causing pain to users and eventually resulting in alert fatigue and missed detections.
The authors have recently performed a systematization of the behavior of packers (published at the IEEE Symposium on Security and Privacy in 2015) and have now applied this analysis to a large corpus of real-world malware observed across thousands of corporate networks worldwide.
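One reason "detecting packers" is easy to conflate with detecting malware is that packed or encrypted code looks statistically random. A common heuristic scanners key on (shown here purely for illustration; this is not the authors' methodology) is byte-level Shannon entropy:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (range 0..8)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Packed or encrypted sections look nearly random (entropy close to 8 bits
# per byte), while ordinary code and text sit much lower. A classifier
# leaning on this signal flags packing, not maliciousness -- which is
# exactly the failure mode the talk describes.
assert byte_entropy(b"AAAA" * 64) == 0.0          # one symbol: no entropy
assert byte_entropy(bytes(range(256))) == 8.0     # uniform bytes: maximal
```

Since legitimate software uses packers too, any detector dominated by this kind of feature will misclassify benign packed programs.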
Giovanni Vigna is a Professor in the Department of Computer Science at the University of California in Santa Barbara and the CTO at Lastline, Inc. His research interests include malware analysis, vulnerability assessment, the underground economy, binary analysis, web security, and mobile phone security. He has been the Program Chair of the International Symposium on Recent Advances in Intrusion Detection (RAID 2003), of the ISOC Symposium on Network and Distributed Systems Security (NDSS 2009), and of the IEEE Symposium on Security and Privacy in 2011. He is known for organizing and running an inter-university Capture The Flag hacking contest, called iCTF, that every year involves dozens of institutions around the world. Giovanni Vigna is also the founder of the Shellphish hacking group, which has participated in more DEF CON CTF competitions than any other group in history.
Casey Ellis, Founder of Bugcrowd
The current state of the industry and “what happens next” is keeping security practitioners up at night. The interaction between companies and security researchers is a fragile equation: can they like each other or should they hate each other? Or both?
Questions about this love/hate relationship and its future abound: If we like each other, what are our groups’ strengths and weaknesses? How can this newly formulated partnership be celebrated? Controlled? Secured? When the honeymoon period is over, what happens if things go wrong? Can potential issues be predicted before we agree to partner together? How is trust established: Have we asked the right questions? How do we build long-term rapport and respect? What regulations or legislation, if any, do we need to learn and follow?
One misstep in a bug bounty program could shut down this harmonious “marriage,” but at the same time more and more companies are taking on this perceived risk because they see that this new way of doing things is necessary and inevitable. Arming companies with more ammunition is necessary to defeat their attackers.
If we make it through this fragile security landscape, what will that future look like?
Our current approach to security assessment is inherently flawed. In this talk, Casey will examine how we got here, and how the "unlikely romance" between whitehats and enterprise organizations is changing everything.
As Founder of Bugcrowd, Casey Ellis brings over 14 years of information security experience to lead the company’s technology vision and strategic operation. Prior to Bugcrowd, he served as chief security officer at ScriptRock and as an information security specialist and account manager for Vectra Corporation Ltd. A former penetration tester, Casey has taken on the role of “white hat” to connect organizations large and small with the power of Bugcrowd’s platform for a revolutionary approach to cybersecurity. Casey has presented at several top security shows including RSA, DerbyCon, BSides, Converge, SOURCE Conference and the AISA National Summit.
Shannon Lietz, DevSecOps Leader, Intuit
Hats off to those who have cracked the code towards Software Safer Sooner. Figuring out how to convert from DevOps to DevSecOps is non-trivial but extremely valuable. And the journey from mindless security controls to ones that fend off the bad guys is no small feat. Using transformation as a goal, hear more about how teams across the globe are migrating to security at scale out of sheer necessity and how you can learn from their mistakes. Come learn simple hat tricks that can make your journey easier.
Shannon Lietz is an award-winning innovator with over two decades of experience pursuing advanced security defenses and next generation security solutions. Ms. Lietz is currently the DevSecOps Leader for Intuit where she is responsible for setting and driving the company’s security engineering strategy and cloud security support for product innovation. She is passionate about leading the charge for security transformation and change management in large environments, leveraging Agile and Rugged principles. Prior to joining Intuit, Ms. Lietz worked for several Fortune 500 companies, most notably Sony and ServiceNow, where she supported mission critical teams engaged in advanced security programs. She also has previous experience as a serial entrepreneur working to solve some of the world’s hard problems. Ms. Lietz holds a Bachelor of Science degree in Biological Sciences from Mount St. Mary’s College. She is an avid blogger and evangelist for all things DevSecOps, HackerGirl and Rugged Software.
3:00 pm–3:30 pm
Break with Refreshments
Grand Ballroom DE
Visit the Sponsor Showcase!
3:30 pm–4:30 pm
Session Chair: Carrie Gates, Securelytix
Jeremy Howard, fast.ai and USF
If you want to be accepted into the ML hipster clique, just say the magic words: "deep learning is over-hyped," and watch those around you nod their heads sagely. But as a deep learning researcher I see little sign of that in practice. Approaches that are well understood in academia are not being well used in the infosec community, despite clear advantages. Perhaps it is actually the case that "deep learning is over-hyped" is over-hyped.
Jeremy Howard is an entrepreneur, business strategist, developer, and educator. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. He is also a Distinguished Research Scientist at the University of San Francisco, a faculty member at Singularity University, and a Young Global Leader with the World Economic Forum.
Jeremy’s most recent startup, Enlitic, was the first company to apply deep learning to medicine, and has been selected one of the world’s top 50 smartest companies by MIT Tech Review two years running. He was previously the President and Chief Scientist of the data science platform Kaggle, where he was the top-ranked participant in international machine learning competitions two years running. He was the founding CEO of two successful Australian startups (FastMail and Optimal Decisions Group—purchased by Lexis-Nexis). Before that, he spent eight years in management consulting at McKinsey & Co and AT Kearney. Jeremy has invested in, mentored, and advised many startups, and has contributed to many open source projects.
He has many television and other video appearances, including as a regular guest on Australia’s highest-rated breakfast news program, a popular talk on TED.com, and data science and web development tutorials and discussions.
Allison Woodruff, User Experience Researcher, Google
We explore the topic of algorithmic fairness and how it relates to user trust. We conducted participatory design workshops and interviews with 44 participants from several populations that have traditionally been marginalized, specifically, Black, Hispanic, and low socioeconomic status participants in the United States. While the concept of algorithmic fairness was largely unfamiliar to these participants and they tended to underestimate the scale, complexity, and impact of algorithmic systems, participants also indicated that algorithmic fairness (or perceived lack thereof) could substantially affect their trust in a company or product. Our findings suggest that in addition to providing important benefits to society and individuals, improving algorithmic fairness can also enhance user trust. We outline positive steps companies and organizations can take to include algorithmic fairness as a value in product design and development.
Allison Woodruff is a user experience researcher on Google’s Security & Privacy team. She received her PhD in Computer Science from UC Berkeley. Prior to working at Google, Allison worked at the Palo Alto Research Center (PARC) and Intel Labs Berkeley. Allison is a co-inventor on 20 issued patents and has published over 60 papers on topics such as privacy, mobile computing, domestic technology, sustainability, citizen science, and information visualization. She has conducted research in a wide range of settings, such as green homes, low-income neighborhoods, religious environments, museums, amusement parks, traditional work environments, and street sweeper maintenance yards.
4:30 pm–5:30 pm
Security Meets Politics
Session Chair: Eli Sugarman, William and Flora Hewlett Foundation
Tim Maurer, Co-director, Cyber Policy Initiative, Carnegie Endowment for International Peace
In their March 2017 communique, the G20 Finance Ministers and Central Bank Governors warned that “The malicious use of Information and Communication Technologies could…undermine security and confidence and endanger financial stability.” That is why the Carnegie Endowment has proposed that the G20 explicitly commit not to engage in offensive cyber operations that could undermine financial stability, namely manipulating the integrity of data of financial institutions or undermining the availability of critical systems, and to cooperate when such incidents occur. Such an agreement by the world’s leading economies would send a clear signal condemning such activity and enable future cooperation. The G20 has been discussing such a commitment by its member states, which, if adopted, will require collaboration between governments, financial institutions, and technical experts to be effective.
Tim Maurer is the Co-director of the Cyber Policy Initiative at the Carnegie Endowment for International Peace. His work focuses on the Internet and international affairs, namely cybersecurity, human rights online, and Internet governance, currently with a specific focus on cybersecurity and financial stability. He is a member of several U.S. track 1.5 cyber dialogues and the research advisory group of the Global Commission on the Stability of Cyberspace. Previously, he was part of the Freedom Online Coalition’s working group “An Internet Free and Secure” and the Research Advisory Network of the Global Commission on Internet Governance. He co-chaired the Advisory Board of the Global Conference on CyberSpace in The Hague and supported the OSCE’s cyber confidence-building efforts by developing the Global Cyber Definitions Database for the chair of the OSCE. His work has been published by Slate, Lawfare, Just Security, Jane’s Intelligence Review, TIME, Foreign Policy, CNN, and other academic and media venues. Prior to joining Carnegie, Maurer was the director of the Global Cybersecurity Norms and Resilience Project at New America and head of research of New America’s Cybersecurity Initiative. He also spent several years focusing on humanitarian assistance and the coordination of the UN system, gaining experience with the United Nations in Rwanda, Geneva, and New York. He holds an MPP from the Harvard Kennedy School. Cambridge University Press will publish his forthcoming book, Cyber Mercenaries: The State, Hackers, and Power, in January 2018.
Jason Truppi, Director, Endpoint Detection and Response, Tanium, Inc.
From the Enigma machine to the DES and RSA algorithms, encryption has engendered a long, drawn-out war between governments. In the last few decades, however, it has evolved into a topic that is quickly dividing the world into privacy and security advocates. As industry continues to provide enhanced encryption options to the consumer, the government is losing visibility into threat actors who are perpetrating crimes and exploiting the security of nation states. The move toward end-to-end encryption is not only impacting government, but the overall security posture of corporations as well. This raises security and risk concerns for the entire community. How are the government and private sector planning to maintain security and privacy in a fully encrypted world? How will governments maintain foreign intelligence collection requirements? What are tech companies inventing to counteract emerging threats while maintaining the privacy of their users? I will also present some creative solutions for how we can move the encryption and privacy debate forward and create reasonable common ground that will align parties instead of increasing the cryptographic divide.
Jason Truppi is a career technologist turned FBI agent and now tech entrepreneur. Jason has many years of experience working in information systems and security. Jason was an FBI Cyber Agent in New York City, where he worked on some of the nation's largest national security and criminal cyber intrusions. He was later promoted to Supervisory Special Agent at FBI Cyber Division, where he was responsible for major data breach, hacktivism, and cyber extortion cases across the country. As a Director at Tanium and a CSIS cybersecurity fellow, Jason is helping to advance the security industry and enable corporate network defenders on an enterprise scale. He is applying his skills and experience in incident response, investigations, penetration testing, analysis, and threat intelligence to help solve the cybercrime epidemic that we face today.
5:30 pm–7:00 pm
Santa Clara Ballroom
Sponsored by Netflix
Thursday, January 18, 2018
8:00 am–8:55 am
8:55 am–9:00 am
Opening Remarks, Day 3
9:00 am–10:00 am
Law & Regulation
Session Chair: Eli Sugarman, William and Flora Hewlett Foundation
Travis LeBlanc, Partner, Boies Schiller Flexner LLP
There is no denying the massive benefits that internet-connected devices can provide. But, along with smart products, the Internet of Things brings attendant risks to privacy, cybersecurity, and even physical safety. These risks have only multiplied as the number of internet-connected devices has grown enormously over the last two decades. And—unsurprisingly, given this explosive growth—policymakers, consumers, and even the tech industry have not yet caught up with the privacy and security challenges raised by the Internet of Things. This talk will examine the vulnerabilities that internet-connected devices introduce, the roadblocks that make it difficult to address these challenges, and the possibilities for creative policymaking that can mitigate the risks.
Travis is a leading authority on consumer protection, cybersecurity, privacy, telecommunications, and the regulation of emerging and disruptive technologies. Drawing on his broad experience in federal and state government, he helps clients manage their litigation, regulatory risk, and strategic responses to government enforcement efforts, including the growing number of enforcement matters before state attorneys general. Travis also advises clients on crisis response, internal investigations, and compliance. In August 2017, Travis was selected by the U.S. Department of Commerce and the European Commission as an arbitrator for the EU-U.S. Privacy Shield Framework.
Amit Elazari, Doctoral Candidate, Berkeley Law, Research Fellow, CTSP, Berkeley School of Information
While the bug bounty economy is booming, a novel survey of bug bounty terms reveals that platforms and companies sometimes put hackers in “legal” harm’s way, shifting the risk for civil and criminal liability towards hackers instead of authorizing access and creating “safe harbors.” This is a call for action to hackers to unite, negotiate, and influence the emerging landscape of cyberlaw, since hackers’ actions speak louder than scholars’ words. I suggest simple steps that could and should be taken in order to minimize the legal risks of thousands of hackers participating in bug bounties and create a “race-to-the-top” competition over the quality of bug bounty terms. I further suggest that the industry should move towards standardization of legal terms, especially in light of the recent DOJ framework. Hackers will learn not only which terms they should beware of in light of recent developments in anti-hacking laws, but which terms they, individually and through the platform, should demand to see to ensure “authorized access.” Most importantly, this is a case study of how a united front of hackers could demand and negotiate important rights, similar to what is done by organizations in other industries. Contracts and laws will continue to play a role in the highly regulated cyber landscape, and conflicts of interest will inevitably arise; therefore, hackers should not only pay attention to the fine print but also unite and negotiate for better terms.
Amit is a doctoral law candidate at Berkeley Law, one of the world’s leading institutions in law and technology, and a Research Fellow at CTSP, Berkeley School of Information. Her work on cyberlaw and intellectual property has been published in the Canadian Intellectual Property Journal and on the Berkeley Technology Law Journal (BTLJ) and Berkeley Business Law Journal blogs, and presented at leading security, Internet law, and IP conferences. Additionally, Amit serves as the submissions editor of BTLJ, the world’s leading IP and tech law journal, and as a research assistant to Professors Deirdre Mulligan and Kenneth Bamberger, leaders in cyber law, and Professor Peter Menell, a leader in copyright law. In 2017, Amit moderated a keynote session at the Women in Cyber Security (WiCyS) 2017 conference and presented her research on bug bounties at BSidesLV and DEF CON (Skytalks).
10:00 am–10:30 am
Break with Refreshments
Grand Ballroom DE
Visit the Sponsor Showcase!
10:30 am–12:00 pm
Session Chair: Zachary Peterson, Cal Poly, San Luis Obispo
Steve Weis, Software Engineer, Facebook
This talk discusses emerging cryptographic technologies including secure enclaves, homomorphic encryption, secure multiparty computation, and functional cryptography. We'll focus on the potential impact to everyday security and privacy.
For background, we'll recap trends in cryptographic adoption and how it has affected real world users. We'll cover both positive and negative examples and suggest areas of development most beneficial to the next billion users coming online. We'll then discuss how emerging cryptography may enable new models of computation, while better protecting people's sensitive data.
Steve Weis is a software engineer on the Facebook Security Infrastructure team, where he focuses on securing people's data with applied cryptography. His past work includes building PGP support for Facebook, Google's 2-step verification, and the Keyczar cryptographic library. Previously, Steve co-founded a security startup, PrivateCore, which was acquired by Facebook, worked on Google's applied security team, and earned a PhD in computer science from MIT.
Mayank Varia, Research Scientist, Boston University
This talk describes our experience deploying a web application that securely analyzed the (sensitive!) salary data of 112,600 employees in the Greater Boston Area in order to calculate pay disparity across gender and race. We use a cryptographic tool called secure multi-party computation in order to balance transparency with confidentiality; this tool allows us to analyze the data without ever learning any person or company's salary information. Our experiences demonstrate that cryptographically secure data analysis over sensitive data can provide significant social benefits in contexts where data sharing is constrained or prevented by legal, ethical, or privacy restrictions.
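The core idea behind computing an aggregate over secret inputs can be illustrated with additive secret sharing. The sketch below is a toy model of that principle, not the actual protocol deployed for the Boston study; the party counts and salary figures are invented for the example.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split a value into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each employer splits its secret total into shares, sending one share to
# each compute party. Any single share is uniformly random, so no single
# party learns any employer's input.
salaries = [70_000, 85_000, 60_000]
all_shares = [share(s, 3) for s in salaries]

# Each compute party locally sums the shares it received...
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
# ...and only combining the partial sums reveals the aggregate.
total = sum(partial_sums) % PRIME
assert total == sum(salaries)
```

Only the final sum, which is what the pay-disparity analysis needs, is ever reconstructed; the individual inputs are never assembled anywhere.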
Dr. Mayank Varia is a Research Scientist at Boston University. His research interests span theoretical and applied cryptography and their application to problems throughout computer science. He currently directs the Modular Approach to Cloud Security project (bu.edu/macs), an NSF Frontier project to build cloud computing systems with meaningful, multi-layered, composable security guarantees. He received a PhD in Mathematics from MIT for his work on cryptographically secure program obfuscation.
Charles Wright, Assistant Professor, Portland State University
Governments around the world are demanding more access to encrypted data, but it has been difficult to build a system that allows the authorities some access without providing unlimited access in practice. And so the public policy debate around encryption has been framed as a binary choice between two absolutist positions: either we allow law enforcement no access at all to encrypted data, or we must effectively give them complete, unrestricted access to all our communications. In the first half of this talk, I begin by summarizing the current state of the debate around the availability of strong end-to-end encryption. How did we get here, and why did key escrow fail in the 90’s? What are the current arguments for and against strong end-to-end crypto? What are the motivations of the various parties on each side of the debate? Where is public opinion on this issue? What is the current status quo—in the US, in Europe, and in developing nations around the world?
In the second half of the talk, I argue that the crypto and security community should investigate better technical solutions to provide more security for people in countries where unbreakable, end-to-end encryption is already illegal (e.g., India) or where it may soon be banned (the UK). As a first step toward this goal, I present two new constructions for crumpling cryptographic keys that maximize user privacy while still allowing the authorities some limited “exceptional access” to encrypted data, making it possible, although arbitrarily expensive, for a government to recover the plaintext of targeted messages. In contrast to previous work on this topic (e.g., key escrow), our approach places most of the responsibility for achieving exceptional access on the government rather than on the users or developers of cryptographic tools. I conclude with some thoughts on remaining challenges and directions for future work, including: issues of internationalization; creating disincentives for misuse; and some possible countermeasures.
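To make "arbitrarily expensive" recovery concrete, here is a toy sketch of the crumpling idea using a hash puzzle. It is not the paper's actual construction; the parameter names and the tag-based verification step are invented for illustration, and a real deployment would set the difficulty vastly higher.

```python
import hashlib
import os
import secrets

DIFFICULTY_BITS = 16  # recovery cost ~ 2**16 hashes here; a real system
                      # would tune this to make access targeted and costly

def make_crumpled_key():
    """Derive a message key whose recovery costs ~2**DIFFICULTY_BITS work."""
    puzzle = os.urandom(16)                           # public per-message salt
    solution = secrets.randbelow(2**DIFFICULTY_BITS)  # discarded after use
    key = hashlib.sha256(puzzle + solution.to_bytes(8, "big")).digest()
    tag = hashlib.sha256(key).digest()  # lets recovery verify a guess
    return key, (puzzle, tag)

def recover_key(puzzle, tag):
    """Targeted exceptional access: brute-force the missing solution."""
    for guess in range(2**DIFFICULTY_BITS):
        k = hashlib.sha256(puzzle + guess.to_bytes(8, "big")).digest()
        if hashlib.sha256(k).digest() == tag:
            return k
    return None

key, hint = make_crumpled_key()
assert recover_key(*hint) == key  # succeeds, but only after brute force
```

Because the cost is paid per message, mass surveillance is priced out while targeted, lawful recovery of individual messages remains feasible, which is the trade-off the abstract describes.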
Dr. Wright is an assistant professor at Portland State University. His research focuses on security and privacy in multimedia, inference attacks on encrypted data, and efficient encrypted databases. Prior to joining PSU, he was a member of the technical staff at MIT Lincoln Laboratory.
12:00 pm–1:00 pm
Santa Clara Ballroom
1:00 pm–3:00 pm
Web and Mobile
Session Chair: Astha Singhal, Salesforce
Nick Sullivan, Head of Cryptography, Cloudflare
The encrypted web is built on top of a few fundamental protocols: HTTP, TLS, and DNS. These protocols were written with some fundamental assumptions about the architecture of the internet in mind, like the idea that different IP addresses correspond to separate physical machines. However, some of these assumptions are changing, and changing quickly. The popularity of technologies like IP anycast, Layer 4 load balancing, and the consolidation of massive portions of the web behind a small set of reverse proxy services mean that the architecture of the web today is very different than what is taught in computer networking classes. In this talk, I will examine some of the impacts of these changes and how internet standards such as HTTP/2 are being adapted to take advantage of the new architecture. I will also debate the tradeoffs between the complexity added by these changes and the privacy and latency benefits they provide to users of the web.
Nick Sullivan is a leading cryptography and security technologist. As Head of Cryptography at Cloudflare, a top Internet performance and security company, he is responsible for overseeing all cryptographic products and strategy for the company. He was instrumental in building Cloudflare’s security engineering team and led major projects including Keyless SSL and TLS 1.3. Prior to joining Cloudflare, he was a digital rights management pioneer, helping build and secure Apple’s multi-billion dollar iTunes store. He holds an MSc in Cryptography, is the author of more than a dozen computer security patents, and regularly speaks at major security conferences.
Julien Vehent, Firefox Operations Security Lead at Mozilla
Guaranteeing the integrity of data sent from web services to Firefox is difficult due to transport intermediaries that have the ability to modify data on-the-fly, and the risk of a web server compromise sending fraudulent data to end-users. Content signature is a new signing protocol built into Firefox to protect the integrity of the data exchanged between Mozilla and millions of Firefox installations in the wild.
Julien leads the Firefox Operations Security team at Mozilla, tasked with defining, implementing and operating the security of Firefox's backend services and release engineering infrastructure. Julien's background is in web applications security, services architecture, cryptography and risk management. Julien is the author of "Securing DevOps", published at Manning Editions.
Erik Derr, PhD Student, CISPA, Saarland University
Third-party libraries are an indispensable aspect of modern software development. They ease the developer's job through code re-use but, at the same time, increase the apps' attack surface by adding vulnerable code. On Android, there is an imminent risk of misuse by libraries as they inherit the access rights of their host apps. Correctly attributing improper app behavior either to app or library code or isolating library code from their host apps would be highly desirable to mitigate these problems, but is impeded by the absence of a third-party library detection that is effective in spite of commonly used code obfuscation and minification techniques.
In this talk, I'll present a library detection approach that overcomes these obstacles and that is capable of pinpointing exact library versions in Android applications. Applied to apps from Google Play, we measure the outdatedness of libraries and show that app developers are slow to adopt new library versions, exposing their end-users to large windows of vulnerability. We discover that even long-known security vulnerabilities in popular libraries are still present in current apps. A subsequent updatability study reveals that the vast majority of vulnerable versions could be patched automatically. I'll conclude the talk by highlighting potential obstacles to improving this unsatisfactory status quo.
Erik Derr is a PhD student in Computer Science working with Dr. Michael Backes at the Center for IT-Security, Privacy and Accountability (CISPA) in Germany. He received a BSc and MSc in Computer Science from Saarland University. His research focuses on mobile security and code analysis.
Developing a Science of Internet Censorship Resistance: Opportunities and Challenges for Network Measurement
Phillipa Gill, Assistant Professor, Computer Science, University of Massachusetts—Amherst
The Internet has become a critical communication infrastructure for citizens to obtain accurate information, organize political actions, and express dissatisfaction with their governments. This fact has not gone unnoticed, with governments clamping down on this medium via censorship, surveillance, and even large-scale Internet take-downs. As online information controls become more common, circumvention researchers are left working tirelessly to stay one step ahead. In this talk, I will present my research, which leverages network measurement as a basis to stay one step ahead in the censorship arms race. I will give an overview of how we measure Internet censorship and describe ICLab, our platform for measuring it. I will then describe two case studies in which ICLab has helped us understand filtering in Yemen and Iran. Specifically, I will describe how ICLab has been able to pinpoint specific products used for censorship and how it helped us understand how sanctions impact users' Internet experience in Iran.
Phillipa Gill is an assistant professor in the Computer Science Department at the University of Massachusetts—Amherst. Her work focuses on many aspects of computer networking and security with a focus on designing novel network measurement techniques to understand online information controls, network interference, and interdomain routing. She currently leads the ICLab project, which is working to develop a network measurement platform specifically for online information controls. She was included on N2Women’s list of 10 women in networking to watch in 2016 and was recognized on MIT Tech Review’s list of top 35 innovators in 2017. She has received the NSF CAREER award, a Google Faculty Research Award, and best paper awards at the ACM Internet Measurement Conference (characterizing online aggregators) and the Passive and Active Measurement Conference (characterizing interconnectivity of large content providers).
3:00 pm–3:30 pm
Break with Refreshments
3:30 pm–5:00 pm
Emerging Security Issues
Session Chair: Nwokedi Idika, Google
Joshua Corman, Founder, I am The Cavalry
Healthcare cybersecurity is in critical condition. There is both promise and peril in connected medicine: while its benefits are undeniable, increased and premature hyper-connectivity now exposes us to a bevy of accidents and adversaries. Representing a sixth of our economy, and given its vital public safety role, critical infrastructure, and national security interests, healthcare is both too big to fail and at serious risk of failure. We will outline some of the uncomfortable truths (and uncomfortable recommendations) uncovered by the Congressional Task Force on Healthcare Cybersecurity, detail the profound impacts of a spate of 2016/2017 attacks affecting patient care, and highlight the experiences and opportunities revealed by the Cyber Med Summit, a first-ever clinical hacking simulation involving physicians, hackers, city/state/federal government, and medical stakeholders. We were always prone, we were prey, but we had lacked predators. That relative obscurity is now over. We will eventually strike the right balance. Our belief is that we can be safer, sooner, if we work together.
Joshua Corman is a founder of I am The Cavalry (dot org) and Director of the Cyber Statecraft Initiative for the Atlantic Council. Corman previously served as CTO for Sonatype, Director of Security Intelligence for Akamai, and in senior research and strategy roles for The 451 Group and IBM Internet Security Systems. He co-founded Rugged Software and I am The Cavalry to encourage new security approaches in response to the world’s increasing dependence on digital infrastructure. Josh's unique approach to security in the context of human factors, adversary motivations, and social impact has helped position him as one of the most trusted names in security. He also serves as adjunct faculty for Carnegie Mellon’s Heinz College and served on the 2016 HHS Cybersecurity Task Force.
Emily McReynolds, Tech Policy Lab, University of Washington
As people and companies bring more devices that are "always on" into homes, workplaces, and hotels, the nature of the experience of surveillance is changing. The introduction of the Amazon Echo, Google Home, and similar products has demonstrated a growing misunderstanding of these devices' capabilities and of whether to trust them (for example, "Can an Echo Testify Against You?" and "Google Home Ends a Domestic Dispute by Calling the Police"). For individuals, these devices are often insecure on the front end: others can give the device commands, anyone present would hear your password if the device requires one, and the device can be physically hacked. The devices are also confusing and lack transparency on the back end: How does one access and delete recordings? How are the recordings stored and used by the device maker? If it isn't "my" device, what happens to my data and recordings? Are there data deletion policies? How does the individual find out this information? Using examples from work on internet-connected toys and examinations of "always on" devices, this talk explores the growing impact of artificially intelligent devices on how we experience privacy and security. Consumer education and design choices will be key to the wide adoption of these devices and to how secure they can be.
Emily McReynolds is a researcher at the University of Washington Tech Policy Lab. The Lab is an interdisciplinary research collaboration of the UW's School of Law, Information School, and School of Computer Science & Engineering focused on emerging technology with the goal to strengthen and inform tech policy. Her research centers on privacy, anonymity, and security, with an emphasis on policy surrounding emerging technologies. Emily's work includes analyses of the privacy and security impact of Internet of Things devices, privacy in the technical process of big data, and the law and policy implications of Bitcoin. Emily went to law school planning to work on tech policy and previously taught people to use computers back when there were still floppy disks.
Lilly Ryan, Software and Systems Engineer
People have been using technology to try to contact ghosts for over a hundred years, but now, for the first time, we are leaving behind seeds for a genuine digital afterlife. Trailing personal information in our wake every time we touch the Internet, it becomes increasingly possible to create a digital presence that will use these (and future) data points to respond and react to events after our deaths much as we might have in life.
We have already begun to build this reality, but we still need to ask some tough ethical questions about our digital ghosts. Are they technically 'us'? Are they subject to the law? Who owns your digital remains after you die? Could a hacker spin up a doppelgänger to plague you in life? Could we donate our metadata to science as we can already donate our physical selves?
Outside of 'Black Mirror' episodes and art installations, the question of personal data and digital legacies is rarely seriously considered, and it leads to uncomfortable gaffes as digital services grapple with what to do when users die. This talk is a space to take stock of how the software we write today could be used in fifty years, and what design decisions we should make to ensure we can respect the wishes of the dead.
Lilly Ryan is a software engineer and privacy advocate from Australia who spends her days building and breaking corporate identity systems. Following a stint as an academic specialising in the surveillance mechanisms of medieval Europe, she has spent more recent years writing code, teaching practical tech privacy to the public, speaking about the history and ethics of technology, and camouflaging herself in libraries.