All sessions will be held in Grand Ballroom D unless otherwise noted.
Papers are available for download below to registered attendees now and to everyone beginning Tuesday, August 13, 2019. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author[s].
Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)
Tuesday, August 13
8:00 am–9:00 am
Continental Breakfast
Grand Ballroom Foyer
9:00 am–9:10 am
Opening Remarks
Program Co-Chairs: Susan McGregor, Columbia University, and Michael Carl Tschantz, International Computer Science Institute (ICSI)
10:00 am–10:30 am
Break with Refreshments
Grand Ballroom Foyer
10:30 am–12:00 pm
Session Chair: Masashi Nishihata, University of Toronto
Measuring I2P Censorship at a Global Scale
Nguyen Phong Hoang, Stony Brook University; Sadie Doreen, The Invisible Internet Project; Michalis Polychronakis, Stony Brook University
The prevalence of Internet censorship has prompted the creation of several measurement platforms for monitoring filtering activities. An important challenge faced by these platforms revolves around the trade-off between depth of measurement and breadth of coverage. In this paper, we present an opportunistic censorship measurement infrastructure built on top of a network of distributed VPN servers run by volunteers, which we used to measure the extent to which the I2P anonymity network is blocked around the world. This infrastructure provides us with not only numerous and geographically diverse vantage points, but also the ability to conduct in-depth measurements across all levels of the network stack. Using this infrastructure, we measured at a global scale the availability of four different I2P services: the official homepage, its mirror site, reseed servers, and active relays in the network. Within a period of one month, we conducted a total of 54K measurements from 1.7K network locations in 164 countries. Using different techniques for detecting domain name blocking, network packet injection, and block pages, we discovered I2P censorship in five countries: China, Iran, Oman, Qatar, and Kuwait. We conclude by discussing potential approaches to circumventing censorship of I2P.
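The three detection techniques mentioned in the abstract can be illustrated with a toy per-measurement classifier. This is only a sketch: the control DNS answer and block-page fingerprint below are hypothetical stand-ins, not the study's actual data or pipeline.

```python
# Toy classifier for one vantage-point measurement of an I2P service.
# The control DNS answer and block-page fingerprint are hypothetical.

CONTROL_IPS = {"geti2p.net": {"91.143.92.136"}}            # made-up control answer
BLOCKPAGE_FINGERPRINTS = ("Access to this site is blocked",)

def classify(domain, resolved_ips, tcp_reset, http_body):
    """Label a measurement with one of the three blocking signals, or accessible."""
    control = CONTROL_IPS.get(domain)
    if control is not None and resolved_ips and not (resolved_ips & control):
        return "domain name blocking"      # DNS answer differs from control
    if tcp_reset:
        return "packet injection"          # forged RST tore down the connection
    if http_body and any(f in http_body for f in BLOCKPAGE_FINGERPRINTS):
        return "block page"
    return "accessible"

print(classify("geti2p.net", {"10.10.34.36"}, False, None))   # domain name blocking
```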
An Efficient Method to Determine which Combination of Keywords Triggered Automatic Filtering of a Message
Ruohan Xiong and Jeffrey Knockel, Citizen Lab, University of Toronto
WeChat, the most popular social media platform in China, has over one billion monthly active users. China-based users of the platform are subject to automatic filtering of chat messages, limiting their ability to freely communicate. WeChat is one among many Chinese Internet platforms that automatically filter content using keyword combinations, where a message is filtered if every keyword component belonging to a blacklisted keyword combination appears in it. Discovering these sensitive combinations has previously been performed by sending messages containing potentially sensitive news articles and, if the article is filtered, attempting to isolate the triggering keyword combination from the article by sending additional messages over the platform. However, due to increasing restrictions on account registration, this testing has become less and less economical. To reduce its cost, we analyzed the algorithm previously used to extract keyword combinations from news articles and found substantial room for improvement in addition to subtle flaws. We evaluate multiple approaches borrowing concepts from the group testing literature and present an algorithm that eliminates the aforementioned flaws and requires, on average, only 10.3% as many messages as the one previously used.
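The isolation step can be sketched with a naive greedy minimizer. This illustrates the problem setting only, not the paper's algorithm; the blacklist and oracle are mock stand-ins for sending real chat messages and observing deletion.

```python
# Mock oracle: a message is filtered iff every keyword of some
# blacklisted combination appears in it. The blacklist is made up.
BLACKLIST = [{"protest", "square"}, {"leader", "resign", "now"}]

def is_filtered(words):
    ws = set(words)
    return any(combo <= ws for combo in BLACKLIST)

def isolate_combination(article_words):
    """Greedily drop words not needed to trigger filtering; what remains
    is exactly one blacklisted combination. This costs O(n^2) oracle
    queries (i.e., messages) -- the point of the paper is doing far better."""
    assert is_filtered(article_words)
    candidate = list(dict.fromkeys(article_words))   # dedupe, keep order
    changed = True
    while changed:
        changed = False
        for w in list(candidate):
            trial = [x for x in candidate if x != w]
            if is_filtered(trial):       # w was not essential to trigger filtering
                candidate = trial
                changed = True
    return set(candidate)

article = "the leader spoke at the square about protest permits".split()
print(isolate_combination(article))
```

Because each trial is a real message on the platform, the query count translates directly into cost, which is why a group-testing-style algorithm that needs a small fraction of these messages matters.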
On the Importance of Encrypted-SNI (ESNI) to Censorship Circumvention
Zimo Chai, Amirhossein Ghafari, and Amir Houmansadr, University of Massachusetts Amherst
With the increasing use of TLS encryption for web traffic, censors have started deploying SNI filtering for more effective censorship. Specifically, a censor can identify the web domain being accessed by a client via the SNI extension in the TLS ClientHello message. In response, a new extension called ESNI (Encrypted SNI) was proposed for TLS 1.3 in August 2018, aiming to fix this server-name leakage.
In this paper, we first characterize SNI-based censorship in China by measuring its prevalence and effectiveness. We outline its assisting role in censorship by comparing it with other commonly used censorship methods. We then measure the deployment prevalence of ESNI and further analyze its current and potential effectiveness for censorship circumvention. We also monitor the censorship associated with ESNI from 14 areas around the world. Based on our analysis, we discuss the key factors in the success of ESNI and potential problems in a post-ESNI era. We hope our work will help make ESNI a more promising and effective censorship circumvention strategy.
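Why SNI filtering works at all can be shown with a toy byte-level model: the server name travels in cleartext in the ClientHello, so a censor only needs a byte search. The record format below is heavily simplified (not a spec-complete ClientHello), and the blocked name is an example.

```python
# Toy model of SNI-based filtering. The hostname is sent in cleartext
# inside the TLS ClientHello, so on-path equipment can match it with a
# plain substring search over the packet bytes.

BLOCKED = (b"blocked.example",)

def toy_client_hello(hostname: str) -> bytes:
    name = hostname.encode("ascii")
    # server_name extension body: name_type=0 (host_name), 2-byte length, name
    sni_ext = b"\x00" + len(name).to_bytes(2, "big") + name
    return b"\x16\x03\x01" + sni_ext       # TLS handshake record header + SNI

def censor_drops(packet: bytes) -> bool:
    return any(name in packet for name in BLOCKED)

print(censor_drops(toy_client_hello("blocked.example")))   # True
print(censor_drops(toy_client_hello("allowed.example")))   # False
```

With ESNI, the name field is encrypted under a key published in DNS, so the byte search above finds nothing to match, which is exactly the circumvention potential the paper evaluates.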
12:00 pm–1:30 pm
1:30 pm–3:00 pm
Session Chair: Rob Jansen, U.S. Naval Research Laboratory
Artifice: A Deniable Steganographic File System
Austen Barker, Staunton Sample, Yash Gupta, Anastasia McTaggart, Ethan L. Miller, and Darrell D. E. Long, University of California Santa Cruz
The challenge of deniability for sensitive data can be a life-or-death issue depending on location. Plausible deniability directly impacts groups such as democracy advocates relaying information in repressive regimes, journalists covering human rights stories in a war zone, and NGO workers hiding food shipment schedules from violent militias. All of these users would benefit from a plausibly deniable data storage system. Previous deniable storage systems offer only pieces of an implementable solution. Artifice is the first tunable, operationally secure, self-repairing, and fully deniable steganographic file system.
Artifice operates through the use of a virtual block device driver stored separately from the hidden data. It uses external entropy sources and erasure codes to deniably and reliably store data within the unallocated space of an existing file system. A set of data blocks to be hidden are combined with entropy blocks through erasure codes to produce a set of obfuscated carrier blocks that are indistinguishable from other pseudorandom blocks on the disk. A subset of these blocks may then be used to reconstruct the data. Artifice presents a truly deniable storage solution through its use of external entropy and erasure codes, while providing better durability than other deniable storage systems.
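The combine/reconstruct round trip described above can be sketched in its simplest form, with XOR standing in for the erasure code. Artifice itself uses real erasure codes to get tunable redundancy and self-repair; the block size and contents here are arbitrary.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# An entropy block from an external source makes the stored carrier
# block indistinguishable from the other pseudorandom blocks on disk.
entropy = os.urandom(16)
secret = b"meet at dawn\x00\x00\x00\x00"      # one 16-byte data block

carrier = xor(secret, entropy)                # written into free space
recovered = xor(carrier, entropy)             # XOR is its own inverse
assert recovered == secret
```

With a real erasure code, several carrier blocks are produced per data block, and any sufficiently large subset of them reconstructs the data, which is what gives Artifice durability that this two-block XOR sketch lacks.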
SPINE: Surveillance Protection in the Network Elements
Trisha Datta, Nick Feamster, Jennifer Rexford, and Liang Wang, Princeton University
Internet Protocol (IP) addresses can reveal information about communicating Internet users and devices, even when the rest of the traffic between them is encrypted. At the same time, IP addresses serve as endpoints for network-layer communication and, as a result, are typically visible to the intermediate routers to allow them to forward traffic to its ultimate destination. Previous approaches to obfuscate the IP addresses of the sender and receiver commonly depend on either custom user software (e.g., Tor browser) or significant modifications to network hardware along the end-to-end path (which has proved to be a major roadblock). SPINE, on the other hand, conceals IP addresses and relevant TCP fields from intermediate—and potentially adversarial—autonomous systems (ASes) but requires only two participating ASes and no cooperation from end hosts. To demonstrate SPINE’s practicality, we have implemented it on commodity programmable switches using the P4 programming language. Our evaluation shows that SPINE can run at hardware rates on commodity switches, paving the way to real-world deployment.
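The concealment idea can be caricatured in a few lines. This is a toy model only: real SPINE runs in P4 on programmable switches, uses proper cryptographic transformations rather than a static XOR, and also protects TCP fields. The key and addresses below are made up.

```python
import ipaddress

# Two cooperating ASes share a key and rewrite addresses at their edge
# switches, so intermediate ASes never see the real endpoints.
KEY = 0xC0FFEE42   # made-up shared secret

def conceal(ip: str) -> str:
    """Map a real IPv4 address to an obfuscated one."""
    return str(ipaddress.IPv4Address(int(ipaddress.IPv4Address(ip)) ^ KEY))

def reveal(ip: str) -> str:
    return conceal(ip)   # XOR with the same key is its own inverse

hidden = conceal("203.0.113.7")
assert hidden != "203.0.113.7"
assert reveal(hidden) == "203.0.113.7"
```

The design point worth noting is that only the two edge ASes need to apply the mapping; everything in between still forwards ordinary-looking IPv4 packets, which is why no end-host changes are required.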
Improving Meek With Adversarial Techniques
Steven Sheffey and Ferrol Aderholdt, Middle Tennessee State University
As the Internet becomes increasingly crucial to distributing information, Internet censorship has become more pervasive and advanced. Tor aims to circumvent censorship, but adversaries are capable of identifying and blocking access to Tor. Meek, a traffic obfuscation method, protects Tor users from censorship by hiding traffic to the Tor network inside an HTTPS connection to a permitted host. However, machine learning attacks using side-channel information against Meek pose a significant threat to its ability to obfuscate traffic. In this work, we develop a method to efficiently gather reproducible packet captures from both normal HTTPS and Meek traffic. We then aggregate statistical signatures from these packet captures. Finally, we train a generative adversarial network (GAN) to minimally modify statistical signatures in a way that hinders classification. Our GAN successfully decreases the efficacy of trained classifiers, increasing their mean false positive rate (FPR) from 0.183 to 0.834 and decreasing their mean area under the precision-recall curve (PR-AUC) from 0.990 to 0.414.
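The "statistical signatures" that the classifiers and the GAN operate on can be sketched as simple per-flow aggregates over packet sizes and inter-arrival gaps. The feature names and the sample flow here are illustrative, not the paper's feature set.

```python
import statistics

# Per-flow side-channel features of the kind a traffic classifier
# (or an adversarial perturbation model) would consume.
def signature(sizes, gaps):
    return {
        "pkt_count": len(sizes),
        "bytes_total": sum(sizes),
        "size_mean": statistics.mean(sizes),
        "size_stdev": statistics.pstdev(sizes),
        "gap_mean": statistics.mean(gaps),
    }

# A short synthetic flow: packet sizes in bytes, gaps in seconds.
print(signature([1500, 1500, 520, 64], [0.01, 0.02, 0.40]))
```

Because these aggregates, not raw payloads, are what distinguish Meek from ordinary HTTPS, minimally perturbing them is enough to degrade a classifier, which is the role the paper's GAN plays.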
3:00 pm–3:30 pm
Break with Refreshments
Grand Ballroom Foyer
3:30 pm–4:50 pm
Session Chair: Nick Feamster, University of Chicago
Eric Goldman, Santa Clara University School of Law
This talk will consider how Internet companies respond to rule violations by their users. Typically, the responses are viewed as binary: remove the content or keep it; terminate the user's account or not. However, there is a wide range of responses between those binary options. The talk will identify these intermediate options and explore some design principles for choosing appropriate responses to rule violations. These design principles should be useful both to Internet companies and to those who regulate them.
Eric Goldman is a Professor of Law, and Co-Director of the High Tech Law Institute, at Santa Clara University School of Law. Before he became a full-time academic in 2002, he practiced Internet law for eight years in Silicon Valley. His research and teaching focus on Internet, IP, and advertising law topics, and he blogs on these topics at the Technology & Marketing Law Blog.
Entanglements and Exploits: Sociotechnical Security as an Analytic Framework
Matt Goerzen, Data & Society Research Institute; Elizabeth Anne Watkins, Columbia University; Gabrielle Lim, Data & Society Research Institute
The rise of social media platforms has produced novel security threats and vulnerabilities. Malicious actors can now exploit entanglements of once-disparate technical and social systems to target exposed communities. These exploits pose a challenge to legacy security frameworks drawn from technical and state-based conceptions of referent objects and bounded information systems. In this paper we propose a new framework of analysis to meet this challenge, Sociotechnical Security (STsec), which acknowledges how the interplay between actors produces emergent threats to participant communities. This exploratory paper offers an overview of sociotechnical systems, explains why these threats and vulnerabilities require us to expand our understanding of security with regard to participatory technology, and shows how sociotechnical security can be operationalized as a framework for analysis.
4:50 pm–5:00 pm
5:00 pm–5:30 pm
Session Chair: Roger Dingledine, The Tor Project
6:00 pm–7:00 pm
Tuesday Happy Hour
Sponsored by Intel
Mingle with other attendees while enjoying snacks and beverages. Attendees of USENIX Security '19 and all co-located events are welcome.