All sessions will be held in Grand Ballroom IX–X unless otherwise noted.
Papers are available for download below to registered attendees now and to everyone beginning August 14, 2018. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author[s].
Tuesday, August 14, 2018
8:00 am–9:00 am
Grand Ballroom Foyer
9:00 am–9:10 am
Program Co-Chairs: Lex Gill, The Citizen Lab, and Rob Jansen, U.S. Naval Research Laboratory
9:10 am–10:10 am
Arturo Filastò and Simone Basso, Open Observatory of Network Interference (OONI)
A lot has changed in the OONI-verse since we first presented a paper at FOCI in 2012.
OONI has gone from being a network measurement project aimed primarily at technical users with a Unix-only CLI tool to being a global community that measures and responds to internet censorship events worldwide. Today, hundreds of thousands of OONI Probe users in more than 200 countries measure internet censorship from thousands of local vantage points every month.
This has led to many new and interesting opportunities, such as working with a growing community of human rights advocates and supporting their work with network measurement data. But this has also presented challenges, such as scaling our platform to meet demand and adapting to the ever-changing network landscapes.
As part of this session, we will discuss some solutions to problems that are common to network measurement instruments. Join us to learn all about what OONI has been up to over the last 6 years and what we’re building next!
10:10 am–10:40 am
Break with Refreshments
Grand Ballroom Foyer
10:40 am–11:55 am
Takedowns and Shutdowns
Session Chair: Phillipa Gill, University of Massachusetts Amherst
Ryan Shandler, University of Haifa
In the modern age, digital communication tools play a central role in the conduct of protests and assemblies during periods of social unrest. The foremost illustration of this was the Arab Spring, where digital platforms were leveraged to promote and sustain public protests. In response, countries attempted to dispel gatherings and limit public protest by ‘shutting down’ the Internet or blocking online access to social forums. Since then, dozens of states have begun to employ cyber-shutdowns; one study tracked 81 distinct instances of Internet shutdowns in 19 countries over a single year, and the trend is only escalating. While significant resources have been invested in tracking the technical contest between governments and activists, no research has succeeded in acquiring individual-level data on the political and social effects of Internet deprivation. This research paper reflects on two novel experimental designs that quantitatively measure how citizens respond to cyber blackouts. The first is a controlled experiment that tests the ability of participants to complete tasks related to political participation under simulated treatments of Internet access or deprivation. The empirical findings support our hypotheses that the absence of Internet access significantly restricts the ability to engage in political activity. The second experiment extends this research beyond a laboratory setting by tracking the activity of Internet users in multiple countries during periods of government-initiated Internet shutdowns. This allows us to determine whether modern forms of digital political participation have supplanted traditional offline avenues of political activity, and whether citizens are able to circumvent cyber blackouts.
Austin Hounsel, Prateek Mittal, and Nick Feamster, Princeton University
Internet censorship measurements rely on lists of websites to be tested, or “block lists,” that are curated by third parties. Unfortunately, many of these lists are not public, and those that are tend to focus on a small group of topics, leaving other types of sites and services untested. To increase and diversify the set of sites on existing block lists, we use natural language processing and search engines to automatically discover a much wider range of websites that are censored in China. Using these techniques, we create a list of 1,125 websites outside the Alexa Top 1,000 that cover Chinese politics, minority human rights organizations, oppressed religions, and more. Importantly, none of the sites we discover are present on the current largest block list. The list that we develop not only vastly expands the set of sites that current Internet measurement tools can test, but it also deepens our understanding of the nature of content that is censored in China. We have released both this new block list and the code for generating it.
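The discovery pipeline pairs NLP with search-engine queries. As a rough, hypothetical illustration of the NLP step (not the authors' actual code or method details), one can rank a seed page's terms by TF-IDF against a background corpus and feed the top terms to a search engine as queries:

```python
import math
import re
from collections import Counter

def top_keywords(doc, corpus, k=5):
    """Rank terms in a seed document by TF-IDF against a background corpus.
    High-scoring terms can then be used as search queries to discover
    topically similar (and potentially censored) candidate sites."""
    tokenize = lambda text: re.findall(r"[a-z]+", text.lower())
    tf = Counter(tokenize(doc))                    # term frequency in the seed page
    doc_sets = [set(tokenize(d)) for d in corpus]  # vocabulary of each background doc
    n = len(corpus)
    # Terms frequent in the seed page but rare in the background score highest.
    scores = {t: f * math.log((n + 1) / (1 + sum(t in s for s in doc_sets)))
              for t, f in tf.items()}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
```

Generic background words ("report", "news") score near zero, so the queries that emerge are the topic-specific terms that make a censored seed page distinctive.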
Michael Carl Tschantz, ICSI; Sadia Afroz, ICSI and UC Berkeley; Shaarif Sajid, Shoaib Asif Qazi, and Mobin Javed, LUMS; Vern Paxson, ICSI and UC Berkeley
This paper examines different reasons for web inaccessibility. Prior work on accessibility mostly focuses on censorship by nation states. We look at three forms of server-side blocking: blocking visitors from the EU to avoid GDPR compliance, blocking based upon the visitor’s country, and blocking due to security concerns. We argue that these and other forms of blocking warrant more research.
11:55 am–1:30 pm
Grand Ballroom VI
1:30 pm–2:45 pm
Session Chair: Susan E. McGregor, Columbia Journalism School
Ruba Abu-Salma, University College London; Elissa M. Redmiles, University of Maryland; Blase Ur and Miranda Wei, University of Chicago
End-to-end (E2E) encrypted communication tools can help users keep their communications secure from government or corporate surveillance. In this work, we conduct a quantitative survey (n=125) to assess general mental models and understandings of a hypothetical E2E encrypted communication tool. We find that the vast majority of respondents had adopted E2E encrypted tools in the real world, but lacked confidence and accuracy in their mental models of E2E encryption. Two key misconceptions stand out: (1) three-quarters of respondents believed that their E2E encrypted communications could be accessed by unauthorized entities, and (2) one-half of respondents felt that SMS and landline phone calls were more secure than, or as secure as, E2E encrypted communications. These findings raise concerns that respondents may not feel threatened by proposals of "backdoors" since they already feel that different entities can access their communications. More broadly, our findings suggest that the primary user-related challenge for E2E encrypted tools may no longer be adoption, but helping users who already have these tools avoid sending sensitive information over less secure channels.
A Cryptographic Airbag for Metadata: Protecting Business Records Against Unlimited Search and Seizure
Charles V. Wright, Portland State University; Mayank Varia, Boston University
Governments around the world require that electronic service providers, including telecoms, ISPs, and even online services like Twitter and Facebook, provide law enforcement agencies with broad access to so-called "business records," including communications metadata. Metadata is data about data; it does not include the contents of the users' communications, but it does typically show who each user communicated with, at what times, and for how long. Metadata is actually surprisingly powerful, especially at a time when more and more messages are encrypted end to end.
In this paper, we present a new approach for protecting communications metadata and other business records against unwarranted, bulk seizure. Our approach is designed from the start to be robust against a new class of political and legal attack. To achieve this, we borrow the recent notion of cryptographic crumple zones, i.e., encryption that can be broken, but only at a substantial monetary cost. We propose that a service provider who wishes to protect their users' privacy should encrypt each business record with its own unique, crumpled, symmetric key. Then, a law enforcement agency that compels disclosure of the records learns only ciphertext until it expends the necessary resources to recover keys for the records of interest. We show how this approach can be easily applied to protect metadata in the form of network flow records. We describe how a service provider might select the work factor of the crumpling algorithm to defend against political and legal attacks by allowing legitimate investigations, while still preventing use of the same metadata for mass surveillance.
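The per-record idea can be sketched with standard-library primitives. This is a toy illustration, not the authors' construction: here the "crumpling" is just a low-entropy puzzle stretched through PBKDF2, and WORK_BYTES, KDF_ITERS, and the record format are invented for the example, with parameters far smaller than any real deployment would use:

```python
import hashlib
import os

WORK_BYTES = 2  # puzzle entropy: 2**16 candidate keys (toy value)
KDF_ITERS = 50  # per-guess stretching cost (toy value)

def _keystream(key, n):
    # SHA-256 in counter mode as a simple stream cipher for the sketch.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def _xor(data, pad):
    return bytes(a ^ b for a, b in zip(data, pad))

def crumple_encrypt(record):
    """Encrypt one business record under its own unique, crumpled key."""
    salt = os.urandom(8)             # per-record salt: no amortizing work across records
    puzzle = os.urandom(WORK_BYTES)  # low-entropy secret, then discarded
    key = hashlib.pbkdf2_hmac("sha256", puzzle, salt, KDF_ITERS)
    check = hashlib.sha256(b"check" + key).digest()[:8]  # lets a breaker confirm a guess
    return salt + check + _xor(record, _keystream(key, len(record)))

def crumple_break(blob):
    """Recover one record by paying the full brute-force cost for its key."""
    salt, check, ct = blob[:8], blob[8:16], blob[16:]
    for guess in range(256 ** WORK_BYTES):
        key = hashlib.pbkdf2_hmac("sha256", guess.to_bytes(WORK_BYTES, "big"),
                                  salt, KDF_ITERS)
        if hashlib.sha256(b"check" + key).digest()[:8] == check:
            return _xor(ct, _keystream(key, len(ct)))
    raise ValueError("key space exhausted")
```

Because every record carries its own salt and puzzle, recovering N records costs roughly N times the per-record work; that linear pricing is what separates a targeted investigation from bulk seizure.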
Paul Syverson and Matthew Traudt, U.S. Naval Research Laboratory
HTTP Strict Transport Security (HSTS) was introduced to force clients to use HTTPS connections on sites that support it, thus preventing man-in-the-middle and other attacks. HSTS has always been understood to potentially allow sites to track visiting clients, but this security threat has been considered outweighed by the security benefits it provides. With specific examples, verified on a website constructed to test them, we show that tracking is far more significant than previously recognized. We also demonstrate how to use our approach to censor individuals or classes of visiting clients. Further, we describe and demonstrate how third parties, such as site analytics services, can track clients across multiple domains. We discuss possible changes to allow users to control HSTS settings and better manage their security, and we compare and complement HSTS with HTTPS Everywhere, a popular browser extension with similar goals.
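The underlying tracking primitive is simple to state: each of N tracker-controlled hostnames either does or does not set HSTS, and whether the browser later auto-upgrades plain-HTTP requests to each hostname reads the bit back. A minimal simulation of that store-and-readback logic (the Browser class and the b*.tracker.example hostnames are invented for the sketch; a real attack observes upgrades via actual subresource requests):

```python
class Browser:
    """Toy model of a browser's HSTS store: a set of hosts pinned to HTTPS."""
    def __init__(self):
        self.hsts = set()

    def visit(self, host, sets_hsts=False):
        # An HTTPS response carrying a Strict-Transport-Security header pins the host.
        if sets_hsts:
            self.hsts.add(host)

    def upgrades(self, host):
        # Plain-HTTP requests to a pinned host are silently upgraded to HTTPS.
        return host in self.hsts

# Hypothetical tracker-controlled hostnames, one per identifier bit.
HOSTS = ["b%d.tracker.example" % i for i in range(8)]

def write_id(browser, ident):
    # "Write" page load: only the hosts for 1-bits send an HSTS header.
    for i, host in enumerate(HOSTS):
        browser.visit(host, sets_hsts=bool((ident >> i) & 1))

def read_id(browser):
    # "Read" page load: probe each host over plain HTTP and see which upgrade.
    return sum(browser.upgrades(host) << i for i, host in enumerate(HOSTS))
```

Because the HSTS store persists across visits and is consulted before any network request, the recovered identifier behaves like a supercookie, and the same readback can be used to single out individual clients for censorship.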
2:45 pm–3:15 pm
Break with Refreshments
Grand Ballroom Foyer
3:15 pm–4:30 pm
Session Chair: Michael Carl Tschantz, International Computer Science Institute
Mingming Zhang, Baojun Liu, Chaoyi Lu, and Jia Zhang, Tsinghua University; Shuang Hao, University of Texas at Dallas; Haixin Duan, Tsinghua University
HTTP transparent proxies are widely deployed in mobile networks and can lead to potential security and privacy issues. HTTP traffic is increasingly subject to in-path manipulation, especially in cellular networks. Although traffic manipulation behavior has long been studied, little is understood about the manipulation and privacy concerns that arise on networks in China, due to limited measurement vantage points. In this work, we aim to fill this research gap by collecting large-scale HTTP sessions originating from China-wide mobile networks and investigating potential privacy threats caused by HTTP transparent proxy devices. Our findings are multi-faceted. First, proxy devices can modify the contents of web pages, replacing content with or injecting advertisements. Second, HTTP headers carrying user-related and device-related data are injected into HTTP requests, which raises privacy concerns. In particular, we also find HTTP headers that embed exploit code. Our study sheds light on HTTP traffic manipulation in China-wide mobile networks and discusses the related privacy threats.
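The measurement at the heart of such studies is a comparison between what the client sent and what a controlled server received; anything extra or altered was added in-path. A minimal sketch of that comparison (the header names in the example are hypothetical stand-ins for the subscriber identifiers the paper describes):

```python
def diff_headers(sent, received):
    """Compare headers a client sent with headers a controlled server received.
    Returns (injected, modified): headers added in-path, and headers whose
    values were rewritten in-path. Names are compared case-insensitively."""
    sent_by_name = {k.lower(): v for k, v in sent.items()}
    injected, modified = {}, {}
    for name, value in received.items():
        if name.lower() not in sent_by_name:
            injected[name] = value
        elif sent_by_name[name.lower()] != value:
            modified[name] = value
    return injected, modified
```

Run at scale across many clients and carriers, the `injected` side of this diff is where carrier-inserted subscriber IDs, advertising markers, and exploit-bearing headers show up.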
Jeffrey Knockel, Lotus Ruan, and Masashi Crete-Nishihata, Citizen Lab
We report results from a series of experiments that uncover mechanisms used to filter images on WeChat, the most popular social media platform in China. Our results inform strategies for evading image filtering on the application. By performing tests on a collection of politically sensitive images filtered by WeChat, we found that WeChat uses two different algorithms to filter: an Optical Character Recognition (OCR)-based algorithm that filters images containing sensitive text, and a visual-based algorithm that filters images that are visually similar to those on an image blacklist. The OCR-based algorithm has implementation similarities to many common OCR algorithms that allow us to create text images that evade filtering. We found that the visual-based algorithm does not use a machine-learning approach that performs high-level classification of an image to determine whether it is sensitive; however, we discovered multiple implementation details of the visual-based algorithm that inform the creation of images that are visually similar to blacklisted ones but that evade filtering. This study is the first in-depth technical analysis of image filtering on WeChat, and we hope that our methods will serve as a road map for studying image censorship on other platforms.
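WeChat's visual matcher is proprietary, but the behavior described, matching against a blacklist by visual similarity rather than by semantic classification, is in the family of perceptual hashing. A generic average-hash sketch (not WeChat's algorithm) shows why small edits survive matching while different images land far apart:

```python
def average_hash(img, size=8):
    """Perceptual hash of a 2D list of grayscale values: downscale to
    size x size by block averaging, then emit one bit per cell
    (1 if the cell is brighter than the global mean)."""
    h, w = len(img), len(img[0])
    cells = []
    for r in range(size):
        for c in range(size):
            block = [img[y][x]
                     for y in range(r * h // size, (r + 1) * h // size)
                     for x in range(c * w // size, (c + 1) * w // size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(a, b):
    """Bit distance between two hashes; small distance = visually similar."""
    return sum(x != y for x, y in zip(a, b))
```

A uniformly brightened copy hashes identically, because every cell average and the global mean shift together, while a visually different image is far away in Hamming distance. Evasion techniques of the kind the paper derives work by pushing a sensitive image just past whatever distance threshold the matcher applies.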
Nathalie Marechal, University of Southern California
This paper offers a political history of Telegram, a platform that combines aspects of social networking with secure messaging, and whose vocal commitment to user privacy and freedom of expression has brought it into open conflict with a number of governments, most recently in Iran and Russia. A detailed project history traces Telegram’s roots to Pavel Durov’s ouster from Vkontakte, the social networking site he had founded, at the behest of the Kremlin. The paper then analyzes Telegram’s ideology and politics by focusing, in turn, on Telegram’s emergence in the context of Vladimir Putin’s crackdown on technologically-enabled civil society; on Pavel Durov’s cyber-libertarianism; and on Telegram’s peculiar business model. The analysis shows that while Telegram’s rhetoric emphasizes user security, privacy, and freedom of expression, the company fails to demonstrate that it actually lives up to these commitments. Rather than earning user trust through transparency and accountability, Telegram’s value proposition hinges on blind trust in Pavel Durov’s good intentions and his team’s stated credentials.
4:30 pm–4:40 pm
4:40 pm–5:30 pm
Session Chair: Roya Ensafi, University of Michigan
Victoria Manfredi and Pi Songkuntham, Wesleyan University
Most approaches to circumventing Internet censorship and monitoring use conventional proxies, which are accessed directly by their IP addresses and so are easily blocked. Decoy routing is an alternative approach that deploys a proxy in association with a router, called a decoy router, that is only accessible indirectly when traffic traverses the router. In this work, we design MultiFlow, a new decoy routing protocol that reuses the TLS protocol's session resumption mechanism to enable the decoy router itself to resume a client's session. As a consequence, MultiFlow is able to (1) authenticate a client without blocking traffic inline on the decoy router, and (2) use information provided by the client to bootstrap the establishment of additional secure connections for covert communication. The client and decoy router then use a message board-like tunnel to communicate across multiple connections in a way that mitigates probing and traffic analysis attacks.
Arun Dunna, Ciarán O'Brien, and Phillipa Gill, University of Massachusetts Amherst
At the end of 2011, China's Great Firewall (GFW) began to block unpublished Tor bridges. Past studies of this blocking have found that the firewall implements both deep packet inspection (DPI) and active probing in order to identify and block usage of the Tor protocol. We build upon the information from previous studies conducted in 2012 and 2015, using a vantage point in China and bridge relays that we deploy in the US, Canada, and the UK. We determine the extent to which both published and unpublished relays, specifically bridges, are currently blocked by the GFW. We also analyze the active scanners employed by the GFW, and determine the viability of various deployed circumvention methods. We specifically observe that a simple technique to identify and not respond to the GFW's scanners is effective in keeping a bridge relay from being blocked. We conclude by discussing current circumvention methods and how best to implement them, based on our measurements, to improve the accessibility of the Tor network.
5:30 pm–6:00 pm
Rump Session: Special Onion Edition™
Session Chairs: Roya Ensafi, University of Michigan, and Roger Dingledine, The Tor Project