All sessions will be held in Grand Ballroom D unless otherwise noted.
The workshop papers are available for download below to registered attendees now and to everyone beginning Monday, August 14. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author(s).
Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)
Monday, August 14
8:00 am–9:00 am
9:00 am–10:30 am
Censorship: Methods, Measurements, and Impact
Measuring Decentralization of Chinese Keyword Censorship via Mobile Games
Jeffrey Knockel, Lotus Ruan, and Masashi Crete-Nishihata, Citizen Lab, Munk School of Global Affairs, University of Toronto
China has the world’s largest mobile gaming market. Like other technology and Internet companies operating in the country, the gaming industry must follow strict content control policies, including submitting lists of blacklisted keywords to regulators. In this paper we provide a first look at how content regulations over the gaming industry are implemented in practice by analyzing over 180,000 unique blacklisted keywords collected from more than 200 games in Chinese app stores.
Internet censorship in China is often presented as a uniformly enforced, top-down system. However, we find that content control responsibilities are pushed down to companies, resulting in varied implementations. Among the hypotheses we tested, the only consistent predictor of keyword list similarity is whether games share the same publisher and developer, which suggests that no central state or provincial authority controls the generation of keyword lists and that companies have a degree of flexibility in implementing controls. These results suggest a decentralized and fractured regime of control.
The Decline of Social Media Censorship and the Rise of Self-Censorship after the 2016 Failed Turkish Coup
Rima Tanash, Zhouhan Chen, Dan Wallach, and Melissa Marschall, Rice University
In this work, we examine the effect of the 2016 Turkish coup attempt on social media censorship, both through the government ordering Twitter to conduct censorship and through people removing their own tweets. We compared 5.5M tweets collected from Turkey pre-coup to 8.5M tweets collected post-coup. Although self-censorship of the press is not a novel practice following past military coups in Turkey, in this work we examine and quantify social media self-censorship, and empirically compare its effect relative to government-implemented censorship of social media.
Our measurements following the coup show a 72% decline in publicly identifiable government-censored tweets. We attribute this in part to an estimated 43% decline in overall Twitter usage in Turkey, and in part to users’ self-censorship. Supporting this theory, we detected that 41% of all users in our pre-coup dataset voluntarily removed 18% of their old tweets by switching their accounts to protected mode, deleting their accounts, or deleting individual tweets. Using NLP and graph metrics, we identify a new focus of Turkish government censorship on the Gülen movement. Our analysis shows pro-Gülen tweets being widely self-censored. Additionally, we detected 40% more publicly accessible anti-Gülen tweets. Unlike activists, who regularly tweet political content and are more likely to be censored by the government, self-censoring users appear to be more typical users who normally post neutral tweets, with only 6% political tweets on average.
Autosonda: Discovering Rules and Triggers of Censorship Devices
Jill Jermyn, Columbia University; Nicholas Weaver, International Computer Science Institute, University of California, Berkeley
Using censorship to forbid access to certain content on the Internet is common throughout the world today. Some censorship mechanisms are well studied, but many techniques remain unknown. Furthermore, many censorship implementations are dynamic, evolving to defeat new circumvention techniques. Current tools often tell us when something is censored, but they neither analyze the blocking approach automatically nor provide clues about the rule sets used by censorship devices. This paper presents Autosonda, a tool for discovering and studying the decision models of censorship devices. Through network traffic alone, Autosonda fingerprints censorship devices by discovering the models and mechanisms by which they enforce rule sets. We demonstrate Autosonda’s strength in a study of 76 web filters currently in use in the New York City metropolitan area. In our study we encounter a great variety of behaviors and implementation techniques for blocking prohibited web content. Autosonda not only helps us find implementation flaws and rule sets, it also allows us to find circumvention paths for 100% of our test subjects. Being able to perform this type of detailed analysis automatically and at scale is a substantial contribution to understanding censorship and classifying device behavior.
10:30 am–11:00 am
Break with Refreshments
11:00 am–12:30 pm
Political Discourse, Activism, and Security
Ambivalence in the (Private) Public Sphere: How Global Digital Activists Navigate Risk
Sarah Myers West, Annenberg School for Communication & Journalism, University of Southern California
This paper seeks to provide insight into how digital activists navigate the risks posed to them in online environments. I examine how a group of activists across ten different non-Western countries adapt and respond to threats posed by two types of powerful actors: not just the state, but also the technology companies that run the social media platforms on which many activists rely to conduct their advocacy. Through a series of interviews, I examine how resistance against censorship and surveillance manifests through their everyday practices, not only by using encryption and circumvention technologies, but also by using commercial social media platforms to their advantage despite considerable ambivalence about the risks they pose.
Creative and Set in Their Ways: Challenges of Security Sensemaking in Newsrooms
Elizabeth Anne Watkins, Columbia University; Mahdi Nasrullah Al-Ameen, Clemson University School of Computing; Franziska Roesner, University of Washington Department of Computer Science and Engineering; Kelly Caine, Clemson University School of Computing; Susan McGregor, Columbia University Graduate School of Journalism
Maintaining computer security in an organization requires navigating a thorny landscape of adversaries, devices, and systems. As organizations grow more complex, integrating remote workers and networked, third-party tools, security risks multiply and become more difficult to fully comprehend. News organizations are exemplary of this type of risk-laden workplace, as they combine the technical and complexity issues typical of bureaucratic systems with the creative, autonomous decision-making of journalists. As more industries face changing labor models, shifting to remote workers and building more of their computing needs on third-party platforms, journalists can serve as a critical early-warning population, a canary-in-the-coal-mine look at the management of cybersecurity in the future of work. As a first step in building our social-science-based research, we drew on the sensemaking literature from organization theory to study how journalists who work in organizations "make sense" of cybersecurity. After analyzing interviews with a range of journalists with diverse priorities and obligations, and testing for an array of sensemaking frameworks, we found fragmented sensemaking to be pervasive. This is a hazardous condition for security in a networked organization, because such a framework correlates with misaligned and scattered behaviors. We conclude with a discussion of questions that emerged during this study, and propose next steps in research.
Measuring Offensive Speech in Online Political Discourse
Rishab Nithyanand, Brian Schaffner, and Phillipa Gill, University of Massachusetts, Amherst
The Internet and online forums such as Reddit have become an increasingly popular medium for citizens to engage in political conversations. However, the online disinhibition effect resulting from the ability to use pseudonymous identities may manifest in the form of offensive speech, consequently making political discussions more aggressive and polarizing than they already are. Such environments may drive harassment and self-censorship among their targets. In this paper, we present preliminary results from a large-scale temporal measurement aimed at quantifying offensiveness in online political discussions.
To enable our measurements, we develop and evaluate an offensive speech classifier. We then use this classifier to quantify and compare offensiveness in the political and general contexts. We perform our study using a database of over 168M Reddit comments made by over 7M pseudonyms between January 2015 and January 2017 – a period covering several divisive political events including the 2016 US presidential elections.
12:30 pm–2:00 pm
2:00 pm–3:30 pm
Censorship and Surveillance Circumvention Applied
SecurePost: Verified Group-Anonymity on Social Media
Michael Nekrasov, Daniel Iland, Miriam Metzger, Ben Zhao, and Elizabeth Belding, UC Santa Barbara
As Internet freedoms are increasingly threatened both at home and abroad, marginalized groups such as journalists, activists, and government watchdogs require new tools to retain free and open discourse online. In this paper, we introduce SecurePost, a tool for verified group anonymity on social media. SecurePost gives social media posters anonymity while safeguarding group credibility through the use of revocable asymmetric keys and an anonymizing proxy. It provides trust to readers via HMAC verification signatures appended to posts, verifying the integrity and authenticity of each post. We root our work in survey-based research and ethnographic interviews conducted with marginalized groups in Mongolia, Turkey, and Zambia from 2014 to 2016. SecurePost widens the toolkit of security applications by giving vulnerable communities a way of balancing individual anonymity and safety with group credibility.
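The verification signatures the abstract describes can be illustrated with a minimal sketch. This is not SecurePost's actual format or key-management scheme (which uses revocable asymmetric keys and an anonymizing proxy); it only shows the general idea of an HMAC tag appended to a post, which readers holding the group's verification key can use to check integrity and authenticity:

```python
import hashlib
import hmac

# Hypothetical shared group verification key -- SecurePost's real key
# management (revocable asymmetric keys, anonymizing proxy) is richer.
GROUP_KEY = b"example-shared-group-key"

def sign_post(key: bytes, post: str) -> str:
    """Compute an HMAC-SHA256 tag over the post body."""
    return hmac.new(key, post.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_post(key: bytes, post: str, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_post(key, post), tag)

post = "Statement published on behalf of the group."
tag = sign_post(GROUP_KEY, post)

print(verify_post(GROUP_KEY, post, tag))                # True: tag matches
print(verify_post(GROUP_KEY, post + " (edited)", tag))  # False: tampering detected
```

A tag computed this way lets any reader with the verification key detect modified or forged posts, though unlike the asymmetric scheme SecurePost describes, a shared HMAC key would also let any verifier forge tags.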
An ISP-Scale Deployment of TapDance
Sergey Frolov, University of Colorado; Fred Douglas, University of Illinois Urbana-Champaign; Will Scott, Allison McDonald, and Benjamin VanderSloot, University of Michigan; Rod Hynes and Adam Kruger, Psiphon; Michalis Kallitsis, Merit Network; David G. Robinson, Upturn; Nikita Borisov, University of Illinois Urbana-Champaign; Alex Halderman, University of Michigan; Eric Wustrow, University of Colorado Boulder
We report initial results from the world’s first ISP-scale field trial of a refraction networking system. Refraction networking is a next-generation censorship circumvention approach that locates proxy functionality in the middle of the network, at participating ISPs or other network operators. We built a high-performance implementation of the TapDance refraction networking scheme and deployed it on four ISP uplinks with an aggregate bandwidth of 100 Gbps. Over one week of operation, our deployment served more than 50,000 real users. The experience demonstrates that TapDance can be practically realized at ISP scale with good performance and at a reasonable cost, potentially paving the way for long-term, large-scale deployments of TapDance or other refraction networking schemes in the future.
Alice and Bob, who the FOCI are they?: Analysis of end-to-end encryption in the LINE messaging application
Antonio M. Espinoza, William J. Tolley, and Jedidiah R. Crandall, UNM; Masashi Crete-Nishihata and Andrew Hilts, Citizen Lab
End-to-end encryption (E2EE) is becoming a standard feature in many popular chat apps, but independent security assessments of these implementations are limited. In this paper we provide the first independent analysis of the E2EE features in LINE, a messaging application popular in Asian markets, and identify a replay attack and an attack exploiting a lack of forward secrecy. Based on our analysis and communications with LINE about the vulnerabilities, we discuss challenges and new research directions to better bridge vendors, researchers, and end-users around security issues.
3:30 pm–4:00 pm
Break with Refreshments
4:00 pm–4:30 pm
Security and Online Freedom: Legal Perspectives
Best We Forget: Expressive Freedom and the Right to be Forgotten
Maryna Polataiko, McGill Faculty of Law
The right to be forgotten is often framed as being at odds with the freedom of expression. This paper aims to challenge that position. First, I briefly summarize Google Spain as an introduction to the right to be forgotten. Next, I review recent cases in the Canadian jurisprudence and argue that they potentially set the stage for a Canadian right to be forgotten. I then turn to an analysis of how the right to be forgotten is consonant with the three core purposes of the freedom of expression: democracy and self-government, the search for the truth and the marketplace of ideas, and individual autonomy and self-actualization. Given the Canadian Supreme Court’s acceptance of the core justifications of expressive rights in the Charter jurisprudence, I suggest that constitutional considerations of the right to be forgotten be enriched by an understanding of how forgetting bolsters expressive freedom in the digital age.