USENIX Security '21 Technical Sessions

All the times listed below are in Pacific Daylight Time (PDT). Each paper presentation is 15 minutes inclusive of Q&A.

Papers and Proceedings

The full Proceedings published by USENIX for the symposium are available for download below. Individual papers can also be downloaded from their respective presentation pages. Copyright to the individual works is retained by the author(s).

Proceedings Front Matter
Proceedings Cover | Title Page and List of Organizers | Message from the Program Co-Chairs | Table of Contents

Full Proceedings PDFs
USENIX Security '21 Full Proceedings (PDF, 346 MB)
USENIX Security '21 Proceedings Interior (PDF, 344.1 MB, best for mobile devices)

Attendee Files 
USENIX Security '21 Attendee List (PDF)
USENIX Security '21 Wednesday Paper Archive 1 of 2 (67.28 MB ZIP, includes Proceedings front matter and attendee lists)
USENIX Security '21 Wednesday Paper Archive 2 of 2 (58.33 MB ZIP)
USENIX Security '21 Thursday Paper Archive 1 of 2 (88.96 MB ZIP)
USENIX Security '21 Thursday Paper Archive 2 of 2 (71.24 MB ZIP)
USENIX Security '21 Friday Paper Archive 1 of 2 (80.85 MB ZIP)
USENIX Security '21 Friday Paper Archive 2 of 2 (61.4 MB ZIP)

Wednesday, August 11

7:00 am–7:30 am

Opening Remarks and Awards

Program Co-Chairs: Michael Bailey, University of Illinois at Urbana–Champaign, and Rachel Greenstadt, New York University

7:30 am–9:15 am

Track 1

Usability: Authentication

Session Chairs: Elissa M. Redmiles, Max Planck Institute for Software Systems, and Tamara Bonaci, Northeastern University

Effect of Mood, Location, Trust, and Presence of Others on Video-Based Social Authentication

Cheng Guo and Brianne Campbell, Clemson University; Apu Kapadia, Indiana University; Michael K. Reiter, Duke University; Kelly Caine, Clemson University

Available Media

Current fallback authentication mechanisms are unreliable (e.g., security questions are easy to guess) and need improvement. Social authentication shows promise as a novel form of fallback authentication. In this paper, we report the results of a four-week study that explored people's perceived willingness to use video chat as a form of social authentication. We investigated whether people's mood, location, trust, and the presence of others affected their perceived willingness to use video chat to authenticate. We found that participants who were alone, reported a more positive mood, and had more trust in others were more willing to use video chat as an authentication method. Participants also reported more willingness to help others authenticate via video chat than to initiate a video chat authentication session themselves. Our results provide initial insights into human-computer interaction issues that could stem from using video chat as a fallback authentication method within a small social network of people (e.g., family members and close friends) who know each other well and trust each other.

'Passwords Keep Me Safe' – Understanding What Children Think about Passwords

Mary Theofanos and Yee-Yin Choong, National Institute of Standards and Technology; Olivia Murphy, University of Maryland, College Park

Available Media

Children use technology from a very young age, and often have to authenticate. The goal of this study is to explore children's practices, perceptions, and knowledge regarding passwords. Given the limited work to date and the fact that the world's cyber posture and culture will be dependent on today's youth, it is imperative to conduct cybersecurity research with children. We conducted the first large-scale survey of 1,505 3rd to 12th graders from schools across the United States. Not surprisingly, children have fewer passwords than adults. We found that children have complicated relationships with passwords: on one hand, their perceptions about passwords and statements about password behavior are appropriate; on the other hand, they simultaneously do not tend to make strong passwords and practice bad password behaviors such as sharing passwords with friends. We conclude with a call for cybersecurity education to bridge the gap between students' password knowledge and their password behavior, while continuing to provide and promote security understanding.

On the Usability of Authenticity Checks for Hardware Security Tokens

Katharina Pfeffer and Alexandra Mai, SBA Research; Adrian Dabrowski, University of California, Irvine; Matthias Gusenbauer, Tokyo Institute of Technology & SBA Research; Philipp Schindler, SBA Research; Edgar Weippl, University of Vienna; Michael Franz, University of California, Irvine; Katharina Krombholz, CISPA Helmholtz Center for Information Security

Available Media

The final responsibility to verify whether a newly purchased hardware security token (HST) is authentic and unmodified lies with the end user. However, recently reported attacks on such tokens suggest that users cannot take the security guarantees of their HSTs for granted, despite widely deployed authenticity checks. We present the first comprehensive market review evaluating the effectiveness and usability of authenticity checks for the most commonly used HSTs. Furthermore, we conducted a survey (n=194) to examine users' perceptions and usage of these checks.

We found that, due to a lack of transparency and information, users often do not carry out essential checks, or are not even aware of them, and instead rely on less meaningful methods. Moreover, our results confirm that currently deployed authenticity checks suffer from misperceived effectiveness and cannot mitigate all variants of distribution attacks. Furthermore, some authenticity concepts of different manufacturers contradict each other. To address these challenges, we suggest (i) a combination of conventional and novel authenticity checks, and (ii) a user-centered, transparent design.

Inexpensive Brainwave Authentication: New Techniques and Insights on User Acceptance

Patricia Arias-Cabarcos, KASTEL/KIT; Thilo Habrich, Karen Becker, and Christian Becker, University of Mannheim; Thorsten Strufe, KASTEL/KIT

Available Media

Brainwaves have proved to be unique enough across individuals to be useful as biometrics. They also provide promising advantages over traditional means of authentication, such as resistance to external observability, revocability, and intrinsic liveness detection. However, most of the research so far has been conducted with expensive, bulky, medical-grade helmets, which offer limited applicability for everyday usage. With the aim of bringing brainwave authentication and its benefits closer to real-world deployment, we investigate brain biometrics with consumer devices. We conduct a comprehensive experiment that compares five authentication tasks on a user sample up to 10 times larger than those of previous studies, introducing three novel techniques based on cognitive semantic processing. We analyze both the performance and usability of the different options and use this evidence to elicit design and research recommendations. Our results show that it is possible to achieve Equal Error Rates of 14.5% (a 37%–44% reduction relative to existing approaches) based on brain responses to images with current inexpensive technology. With regard to adoption, users call for simpler devices, faster authentication, and better privacy.
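
For readers unfamiliar with the metric, the short Python sketch below shows how an Equal Error Rate is computed from matcher scores: it is the operating point where the false acceptance and false rejection rates coincide. The score distributions here are synthetic stand-ins, not data from the paper.

    import numpy as np

    def equal_error_rate(genuine, impostor):
        """EER: the threshold at which the false rejection rate (FRR)
        equals the false acceptance rate (FAR); lower is better."""
        best_gap, eer = 1.0, 1.0
        for t in np.sort(np.concatenate([genuine, impostor])):
            frr = np.mean(genuine < t)    # genuine attempts rejected
            far = np.mean(impostor >= t)  # impostor attempts accepted
            if abs(frr - far) < best_gap:
                best_gap, eer = abs(frr - far), (frr + far) / 2
        return eer

    rng = np.random.default_rng(0)
    genuine = rng.normal(1.0, 0.5, 1000)   # synthetic matcher scores
    impostor = rng.normal(0.0, 0.5, 1000)
    print(f"EER = {equal_error_rate(genuine, impostor):.1%}")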

Why Older Adults (Don't) Use Password Managers

Hirak Ray, Flynn Wolf, and Ravi Kuber, University of Maryland, Baltimore County; Adam J. Aviv, The George Washington University

Available Media

Password managers (PMs) are considered highly effective tools for increasing security, and a recent study by Pearman et al. (SOUPS '19) highlighted the motivations and barriers to adopting PMs. We expand these findings by replicating Pearman et al.'s protocol and interview instrument applied to a sample of strictly older adults (>60 years of age), as the prior work focused on a predominantly younger cohort. We conducted n=26 semi-structured interviews with PM users, built-in browser/operating system PM users, and non-PM users. The average participant age was 70.4 years. Using the same codebook from Pearman et al., we showcase differences and similarities in PM adoption between the samples, including fears of a single point of failure and the importance of having control over one's private information. Meanwhile, older adults were found to have higher mistrust of cloud storage of passwords and cross-device synchronization. We also highlight PM adoption motivators for older adults, including the power of recommendations from family members and the importance of education and outreach to improve familiarity.

"It's Stored, Hopefully, on an Encrypted Server'': Mitigating Users' Misconceptions About FIDO2 Biometric WebAuthn

Leona Lassak, Ruhr University Bochum; Annika Hildebrandt, University of Chicago; Maximilian Golla, Max Planck Institute for Security and Privacy; Blase Ur, University of Chicago

Available Media

While prior attempts at passwordless authentication on the web have required specialized hardware, FIDO2's WebAuthn protocol lets users sign into websites with their smartphone. Users authenticate locally via the phone's unlock mechanism. Their phone then uses public-key cryptography to authenticate to the website. Using biometrics (e.g., fingerprint, face) for this local authentication can be convenient, yet may engender misconceptions that discourage adoption. Through three complementary studies, we characterized and sought to mitigate misconceptions about biometric WebAuthn. We also compared it to non-biometric WebAuthn and traditional passwords. First, 42 crowdworkers used biometric WebAuthn to sign into a website and then completed surveys. Critically, 67% of participants incorrectly thought their biometrics were sent to the website, creating security concerns. In remote focus groups, 27 crowdworkers then co-designed short notifications to mitigate biometric WebAuthn misconceptions. Through a 345-participant online study, we found that some notifications improved perceptions of biometric WebAuthn and partially addressed misconceptions, yet key misconceptions about where the biometric is stored partially persisted. Nonetheless, participants were willing to adopt biometric WebAuthn over non-biometric WebAuthn or passwords. Our work identifies directions for increasing the adoption of biometric WebAuthn by highlighting its security and usability.
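
The misconception the study targets (that biometrics are sent to the website) contrasts with how WebAuthn actually works. The hedged Python sketch below, using the pyca/cryptography package with Ed25519 standing in for whatever algorithm a real authenticator negotiates, shows the challenge-response core: the website only ever sees a public key and signatures, while the biometric merely unlocks the locally stored private key.

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Registration: the authenticator generates a key pair; only the PUBLIC
    # key is sent to the website. The biometric never leaves the device: it
    # only gates access to the private key.
    private_key = Ed25519PrivateKey.generate()  # stays on the phone
    public_key = private_key.public_key()       # stored by the website

    # Authentication: the website sends a fresh random challenge; after the
    # local biometric check succeeds, the phone signs the challenge.
    challenge = os.urandom(32)
    signature = private_key.sign(challenge)

    # The website verifies with the stored public key; verify() raises
    # InvalidSignature on failure, so reaching the print means success.
    public_key.verify(signature, challenge)
    print("signature valid; no biometric data ever left the device")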

Driving 2FA Adoption at Scale: Optimizing Two-Factor Authentication Notification Design Patterns

Maximilian Golla, Max Planck Institute for Security and Privacy; Grant Ho, University of California San Diego; Marika Lohmus, Cleo AI; Monica Pulluri, Facebook; Elissa M. Redmiles, Max Planck Institute for Software Systems

Available Media

Two-factor authentication (2FA) is one of the primary mechanisms for defending end-user accounts against phishing and password reuse attacks. Unfortunately, getting users to adopt 2FA remains a difficult challenge. While prior work at the intersection of measurement and usability has examined how to persuade people to avoid dangerous behavior (e.g., clicking through TLS warnings), relatively little work has conducted measurements at industry scale about how to persuade people to adopt protective behaviors.

In this work, we focus on improving end-user security in the wild by examining whether (1) messaging that addresses users' motivations, mental models, and concerns about 2FA and (2) UX design patterns found effective in other fields can effectively improve 2FA adoption. To do so, we conduct a series of large-scale, in-the-wild, controlled messaging experiments on Facebook, with an average of 622,419 participants per experiment. Based on our results, we distill a set of best-practice design patterns for most effectively encouraging protective behavior through carefully communicating with users about 2FA. Finally, we suggest concrete directions for future work on encouraging digital security behavior through security prompts.

Track 2

Cryptography: Attacks

Session Chairs: Sze Yiu Chau, The Chinese University of Hong Kong, and Juraj Somorovsky, Paderborn University

Hiding the Access Pattern is Not Enough: Exploiting Search Pattern Leakage in Searchable Encryption

Simon Oya and Florian Kerschbaum, University of Waterloo

Available Media

Recent Searchable Symmetric Encryption (SSE) schemes enable secure searching over an encrypted database stored in a server while limiting the information leaked to the server. These schemes focus on hiding the access pattern, which refers to the set of documents that match the client's queries. This provides protection against current attacks that largely depend on this leakage to succeed. However, most SSE constructions also leak whether or not two queries aim for the same keyword, also called the search pattern.

In this work, we show that search pattern leakage can severely undermine current SSE defenses. We propose an attack that leverages both access and search pattern leakage, as well as some background and query distribution information, to recover the keywords of the queries performed by the client. Our attack follows a maximum likelihood estimation approach, and is easy to adapt against SSE defenses that obfuscate the access pattern. We empirically show that our attack is efficient, outperforms other proposed attacks, and completely thwarts two out of the three defenses we evaluate it against, even when these defenses are set to high privacy regimes. These findings highlight that hiding the search pattern, a feature that most constructions lack, is key to providing practical privacy guarantees in SSE.
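
To give a flavor of why search pattern leakage matters, here is a deliberately simplified Python sketch; it is not the paper's attack (which also exploits access-pattern information), but it shows the core step: because the search pattern reveals which queries repeat, the attacker can tally per-token frequencies and match tokens to keywords by maximum likelihood against an assumed background query distribution.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    keywords = ["password", "invoice", "salary", "meeting"]
    kw_prob = np.array([0.4, 0.3, 0.2, 0.1])  # assumed query distribution

    rng = np.random.default_rng(1)
    true_map = rng.permutation(len(keywords))  # secret: token j -> keyword true_map[j]
    inv = np.argsort(true_map)                 # keyword k is queried via token inv[k]

    # The search pattern lets the attacker group 5,000 encrypted queries by
    # token and count how often each token occurs.
    drawn_keywords = rng.choice(len(keywords), size=5000, p=kw_prob)
    counts = np.bincount(inv[drawn_keywords], minlength=len(keywords))

    # Maximum likelihood: assigning token j to keyword k contributes
    # counts[j] * log(kw_prob[k]); solve for the best one-to-one assignment.
    loglik = counts[:, None] * np.log(kw_prob)[None, :]
    tokens, assigned = linear_sum_assignment(-loglik)  # Hungarian algorithm
    for j, k in zip(tokens, assigned):
        print(f"token {j} -> '{keywords[k]}'",
              "(correct)" if k == true_map[j] else "(wrong)")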

A Highly Accurate Query-Recovery Attack against Searchable Encryption using Non-Indexed Documents

Marc Damie, University of Technology of Compiègne, France; Florian Hahn and Andreas Peter, University of Twente, The Netherlands

Available Media

Cloud data storage solutions offer customers cost-effective storage and simplified data management. While attractive, data security issues remain a core concern. Traditional encryption protects stored documents but hinders simple functionalities such as keyword search. Therefore, searchable encryption schemes have been proposed to allow searching on encrypted data. Efficient schemes leak at least the access pattern (the accessed documents per keyword search), which is known to be exploitable in query recovery attacks assuming the attacker has a significant amount of background knowledge on the stored documents. Existing attacks can only achieve decent results under strong adversary models (e.g., at least 20% of the documents known in advance, or additional knowledge such as query frequencies), and they give no metric to evaluate the certainty of recovered queries. This hampers their practical utility and questions their relevance in the real world. We propose a refined score attack which achieves query recovery rates of around 85% without requiring exact background knowledge of the stored documents; a distributionally similar, but otherwise different (i.e., non-indexed), dataset suffices. The attack starts with very few known queries (around 10 in our experiments over different datasets of varying size) and then iteratively recovers further queries by adding previously recovered queries that had high confidence scores to the set of known queries. In addition to high recovery rates, our approach yields interpretable results in terms of confidence scores.
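
The following toy Python sketch illustrates the refinement loop described above under strong simplifying assumptions (synthetic co-occurrence data, one accepted prediction per round, and query i truly corresponding to keyword i): each unknown query is scored against every candidate keyword by comparing co-occurrence vectors, the margin to the runner-up serves as the confidence score, and the most confident recovery is added to the known set before the next round.

    import numpy as np

    rng = np.random.default_rng(2)
    n_kw = 50
    # Symmetric keyword co-occurrence estimated from the attacker's similar,
    # non-indexed document set; the victim's observed query co-occurrence is
    # a noisy version of it.
    kw_cooc = rng.random((n_kw, n_kw))
    kw_cooc = (kw_cooc + kw_cooc.T) / 2
    q_cooc = kw_cooc + rng.normal(0, 0.01, kw_cooc.shape)

    known = {q: q for q in range(10)}  # ~10 known query->keyword pairs
    unknown = [q for q in range(n_kw) if q not in known]

    while unknown:
        kq = list(known)                # query ids with known keywords
        kk = [known[q] for q in kq]     # the matching keyword ids
        scored = []
        for q in unknown:
            # Distance between query q's co-occurrence with known queries
            # and each candidate keyword's co-occurrence with known keywords.
            d = np.linalg.norm(kw_cooc[:, kk] - q_cooc[q, kq], axis=1)
            first, second = np.argsort(d)[:2]
            scored.append((d[second] - d[first], q, first))  # margin = confidence
        conf, q, kw = max(scored)  # accept only the most confident prediction
        known[q] = kw
        unknown.remove(q)

    print(f"recovered {np.mean([known[q] == q for q in range(n_kw)]):.0%} of queries")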

Fragment and Forge: Breaking Wi-Fi Through Frame Aggregation and Fragmentation

Mathy Vanhoef, New York University Abu Dhabi

Available Media

In this paper, we present three design flaws in the 802.11 standard that underpins Wi-Fi. One design flaw is in the frame aggregation functionality, and another two are in the frame fragmentation functionality. These design flaws enable an adversary to forge encrypted frames in various ways, which in turn enables exfiltration of sensitive data. We also discovered common implementation flaws related to aggregation and fragmentation, which further worsen the impact of our attacks. Our results affect all protected Wi-Fi networks, ranging from WEP all the way to WPA3, meaning the discovered flaws have been part of Wi-Fi since its release in 1997. In our experiments, all devices were vulnerable to one or more of our attacks, confirming that all Wi-Fi devices are likely affected. Finally, we present a tool to test whether devices are affected by any of the vulnerabilities, and we discuss countermeasures to prevent our attacks.

Card Brand Mixup Attack: Bypassing the PIN in non-Visa Cards by Using Them for Visa Transactions

David Basin, Ralf Sasse, and Jorge Toro-Pozo, Department of Computer Science, ETH Zurich

Available Media

Most EMV transactions require online authorization by the card issuer. Namely, the merchant's payment terminal sends an authorization request to the card issuer over a payment network, typically operated by the company that brands the card such as Visa or Mastercard. In this paper we show that it is possible to induce a mismatch between the card brand and the payment network, from the terminal's perspective. The resulting card brand mixup attack has serious security consequences. In particular, it enables criminals to use a victim's Mastercard contactless card to pay for expensive goods without knowing the card's PIN. Concretely, the attacker fools the terminal into believing that the card being used is a Visa card and then applies the recent PIN bypass attack that we reported on Visa. We have built an Android application and successfully used it to carry out this attack for transactions with both Mastercard debit and credit cards, including a transaction for over 400 USD with a Maestro debit card. Finally, we extend our formal model of the EMV contactless protocol to machine-check fixes to the issues found.

Partitioning Oracle Attacks

Julia Len, Paul Grubbs, and Thomas Ristenpart, Cornell Tech

Available Media

In this paper we introduce partitioning oracles, a new class of decryption error oracles which, conceptually, take a ciphertext as input and output whether the decryption key belongs to some known subset of keys. Partitioning oracles can arise when encryption schemes are not committing with respect to their keys. We detail adaptive chosen ciphertext attacks that exploit partitioning oracles to efficiently recover passwords and de-anonymize anonymous communications. The attacks utilize efficient key multi-collision algorithms—a cryptanalytic goal that we define—against widely used authenticated encryption with associated data (AEAD) schemes, including AES-GCM, XSalsa20/Poly1305, and ChaCha20/Poly1305.

We build a practical partitioning oracle attack that quickly recovers passwords from Shadowsocks proxy servers. We also survey early implementations of the OPAQUE protocol for password-based key exchange, and show how many could be vulnerable to partitioning oracle attacks due to incorrectly using non-committing AEAD. Our results suggest that the community should standardize and make widely available key-committing AEAD to avoid such vulnerabilities.
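
To convey the core idea without reproducing the paper's GCM key multi-collision construction, here is a hedged Python simulation: the multi-collision is abstracted into an oracle that answers "does the victim's key lie in this set?", which already suffices to binary-search a password space with logarithmically many decryption attempts.

    import hashlib

    candidates = [f"pw{i:04d}" for i in range(1024)]  # toy password space
    secret = "pw0717"                                 # the victim's password
    derive = lambda pw: hashlib.sha256(pw.encode()).digest()  # stand-in KDF

    def partitioning_oracle(key_set):
        """With a non-committing AEAD, the attacker can craft ONE ciphertext
        that decrypts successfully under every key in key_set; observing
        whether the victim's decryption succeeds thus answers a membership
        query. The multi-collision construction itself is abstracted away."""
        return derive(secret) in key_set

    pool, queries = candidates, 0
    while len(pool) > 1:
        half = pool[: len(pool) // 2]
        queries += 1
        if partitioning_oracle({derive(pw) for pw in half}):
            pool = half
        else:
            pool = pool[len(pool) // 2:]

    print(f"recovered {pool[0]!r} with {queries} oracle queries")  # log2(1024) = 10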

Raccoon Attack: Finding and Exploiting Most-Significant-Bit-Oracles in TLS-DH(E)

Robert Merget and Marcus Brinkmann, Ruhr University Bochum; Nimrod Aviram, School of Computer Science, Tel Aviv University; Juraj Somorovsky, Paderborn University; Johannes Mittmann, Bundesamt für Sicherheit in der Informationstechnik (BSI), Germany; Jörg Schwenk, Ruhr University Bochum

Available Media

Diffie-Hellman key exchange (DHKE) is a widely adopted method for exchanging cryptographic key material in real-world protocols like TLS-DH(E). Past attacks on TLS-DH(E) focused on weak parameter choices or missing parameter validation. The confidentiality of the computed DH share, the premaster secret, was never questioned; DHKE is used as a generic method to avoid the security pitfalls of TLS-RSA.

We show that due to a subtle issue in the key derivation of all TLS-DH(E) cipher suites in versions up to TLS 1.2, the premaster secret of a TLS-DH(E) session may, under certain circumstances, be leaked to an adversary. Our main result is a novel side-channel attack, named Raccoon attack, which exploits a timing vulnerability in TLS-DH(E), leaking the most significant bits of the shared Diffie-Hellman secret. The root cause for this side channel is that the TLS standard encourages non-constant-time processing of the DH secret. If the server reuses ephemeral keys, this side channel may allow an attacker to recover the premaster secret by solving an instance of the Hidden Number Problem. The Raccoon attack takes advantage of uncommon DH modulus sizes, which depend on the properties of the used hash functions. We describe a fully feasible remote attack against an otherwise-secure TLS configuration: OpenSSL with a 1032-bit DH modulus. Fortunately, such moduli are not commonly used on the Internet.

Furthermore, with our large-scale scans we have identified implementation-level issues in production-grade TLS implementations that allow for executing the same attack by directly observing the contents of server responses, without resorting to timing measurements.
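
The root cause is easy to see in code. The hedged Python sketch below (illustrative values, not the paper's measurements) contrasts the TLS 1.2-style variable-length encoding of the DH shared secret, which strips leading zero bytes and therefore makes subsequent KDF hashing time depend on the secret's most significant bits, with a fixed-length encoding that closes the channel.

    import secrets

    P_BYTES = 1024 // 8  # size of a hypothetical 1024-bit DH modulus

    def tls12_premaster(Z: int) -> bytes:
        # TLS <= 1.2: leading zero bytes of the shared secret Z are
        # stripped, so the premaster secret's length (and the time spent
        # hashing it) leaks whether the top bits of Z are zero.
        return Z.to_bytes(max(1, (Z.bit_length() + 7) // 8), "big")

    def fixed_length(Z: int) -> bytes:
        # Constant-length encoding (as in TLS 1.3) removes the difference.
        return Z.to_bytes(P_BYTES, "big")

    Z = secrets.randbits(1016)  # a secret whose top byte happens to be zero
    print(len(tls12_premaster(Z)), "bytes vs", len(fixed_length(Z)), "bytes")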

A Side Journey To Titan

Thomas Roche and Victor Lomné, NinjaLab, Montpellier, France; Camille Mutschler, NinjaLab, Montpellier, France and LIRMM, Univ. Montpellier, CNRS, Montpellier, France; Laurent Imbert, LIRMM, Univ. Montpellier, CNRS, Montpellier, France

Available Media

The Google Titan Security Key is a FIDO U2F hardware device proposed by Google (available since July 2018) as a two-factor authentication token for signing in to applications such as a Google account. In this paper, we present a side-channel attack that targets the Google Titan Security Key's secure element (the NXP A700x chip) by observing its local electromagnetic radiation during ECDSA signature computations. This work shows that an attacker can clone a legitimate Google Titan Security Key. As a side observation, we identified a novel correlation between the elliptic curve group order and the lattice-based attack success rate.

Track 3

Embedded Security & SW Sec

Session Chairs: Hamed Okhravi, MIT Lincoln Laboratory, and Alvaro A. Cardenas, University of California, Santa Cruz

PASAN: Detecting Peripheral Access Concurrency Bugs within Bare-Metal Embedded Applications

Taegyu Kim, Purdue University; Vireshwar Kumar, Indian Institute of Technology, Delhi; Junghwan Rhee, University of Central Oklahoma; Jizhou Chen and Kyungtae Kim, Purdue University; Chung Hwan Kim, University of Texas at Dallas; Dongyan Xu and Dave (Jing) Tian, Purdue University

Available Media

Concurrency bugs are among the most challenging software defects to detect and debug due to their non-deterministic triggers caused by task scheduling and interrupt handling. While different tools have been proposed to address concurrency issues, protecting peripherals in embedded systems from concurrent accesses imposes unique challenges. A naïve lock protection on a certain memory-mapped I/O (MMIO) address still allows concurrent accesses to other MMIO addresses of the same peripheral. Meanwhile, embedded peripherals such as sensors often employ internal state machines to achieve certain functionalities. As a result, improper locking can lead to the corruption of a peripheral's ongoing jobs (which we call transaction corruption), and thus to corrupted sensor values or failed jobs.

In this paper, we propose a static analysis tool, PASAN, to detect peripheral access concurrency issues in embedded systems. PASAN automatically finds the MMIO address range of each peripheral device using parser-ready memory layout documents, extracts the peripheral's internal state machines from the corresponding device drivers, and detects concurrency bugs in peripheral accesses automatically. We evaluate PASAN on seven different embedded platforms, including multiple real-time operating systems (RTOSes) and robotic aerial vehicles (RAVs). PASAN found 17 true positive concurrency bugs in total across three different platforms, with bug detection rates ranging from 40% to 100%. We have reported all our findings to the corresponding parties. To the best of our knowledge, PASAN is the first static analysis tool to detect these intrinsic problems in concurrent peripheral accesses for embedded systems.
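
As a rough illustration of the kind of check involved (not PASAN's actual analysis, which works on firmware binaries and device-driver state machines), the Python sketch below flags two accesses that touch the same peripheral's MMIO range while holding no lock in common; the peripheral map and access log are hypothetical.

    # Minimal lockset-style check in the spirit of PASAN: two accesses to
    # the SAME peripheral's MMIO range with NO common lock may interleave
    # and corrupt an ongoing peripheral transaction.
    peripherals = {"i2c0": range(0x4000_0000, 0x4000_1000),
                   "spi1": range(0x4001_0000, 0x4001_1000)}

    # (function, MMIO address, locks held), as a static analysis might
    # collect along each feasible path.
    accesses = [
        ("sensor_read",  0x4000_0004, {"i2c_mutex"}),
        ("sensor_reset", 0x4000_0008, set()),          # no lock held!
        ("flash_write",  0x4001_0010, {"spi_mutex"}),
    ]

    def owner(addr):
        return next((p for p, r in peripherals.items() if addr in r), None)

    for i, (f1, a1, l1) in enumerate(accesses):
        for f2, a2, l2 in accesses[i + 1:]:
            if owner(a1) == owner(a2) and not (l1 & l2):
                print(f"possible race on {owner(a1)}: {f1} vs {f2}")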

On the Design and Misuse of Microcoded (Embedded) Processors — A Cautionary Note

Nils Albartus and Clemens Nasenberg, Ruhr University Bochum and Max Planck Institute for Security and Privacy, Germany; Florian Stolz, Ruhr University Bochum, Germany; Marc Fyrbiak, Max Planck Institute for Security and Privacy, Germany; Christof Paar, Ruhr University Bochum and Max Planck Institute for Security and Privacy, Germany; Russell Tessier, University of Massachusetts, Amherst, USA

Available Media

Today's microprocessors often rely on microcode updates to address issues such as security or functional patches. Unfortunately, microcode update flexibility opens up new attack vectors through malicious microcode alterations. Such attacks share many features with hardware Trojans and have similar devastating consequences for system security. However, due to microcode's opaque nature, little is known in the open literature about the capabilities and limitations of microcode Trojans.

We introduce the design of a microcoded RISC-V processor architecture together with a microcode development and evaluation environment. Even though microcode typically has almost complete control of the processor hardware, the design of meaningful microcode Trojans is not straightforward. This somewhat counter-intuitive insight is due to the lack of information at the hardware level about the semantics of executed software. In three security case studies we demonstrate how to overcome these issues and give insights on how to design meaningful microcode Trojans that undermine system security. To foster future research and applications, we publicly release our implementation and evaluation platform.

M2MON: Building an MMIO-based Security Reference Monitor for Unmanned Vehicles

Arslan Khan and Hyungsub Kim, Purdue University; Byoungyoung Lee, Seoul National University (SNU); Dongyan Xu, Antonio Bianchi, and Dave (Jing) Tian, Purdue University

Available Media

Unmanned Vehicles (UVs) often consist of multiple MicroController Units (MCUs) as peripherals to interact with the physical world, including GPS sensors, barometers, motors, etc. While the attack vectors for UVs vary, a number of UV attacks aim to impact the physical world either from the cyber or the physical space, e.g., hijacking the mission of UVs via malicious ground control commands or GPS spoofing. This provides an opportunity to build a unified and generic security framework defending against multiple kinds of UV attacks by monitoring the system's I/O activities. Accordingly, we build a security reference monitor for UVs, named M2MON, by hooking into the memory-mapped I/O (MMIO). Instead of building upon an existing RTOS, we implement M2MON as a microkernel running in the privileged mode intercepting MMIOs, while pushing the RTOS and applications into the unprivileged mode. We further instantiate an MMIO firewall using M2MON and demonstrate how to implement a secure Extended Kalman Filter (EKF) within M2MON. Our evaluation on a real-world UV system shows that M2MON incurs an 8.85% runtime overhead. Furthermore, the M2MON-based firewall is able to defend against different cyber and physical attacks. The M2MON microkernel contains less than 4K LoC, compared to the 3M LoC of the RTOS used in our evaluation. We believe M2MON provides the first step towards building a trusted and practical security reference monitor for UVs.
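
The sketch below gives a toy flavor of the reference-monitor idea in Python: every MMIO access traps into the privileged monitor and is checked against a whitelist before reaching the device. Subjects, addresses, and rules are invented for illustration; the real M2MON enforces this at the instruction level beneath an RTOS.

    # Toy MMIO firewall policy in the spirit of M2MON; all rules hypothetical.
    ALLOW = {
        ("gps_driver",   0x4800_0000, "r"),   # GPS data register, read-only
        ("motor_driver", 0x4900_0004, "w"),   # throttle register
    }

    def mmio_trap(subject: str, addr: int, op: str, value=None):
        """Called by the monitor on every trapped MMIO instruction."""
        if (subject, addr, op) not in ALLOW:
            raise PermissionError(f"blocked {op} of {hex(addr)} by {subject}")
        # ... forward the access to the real device register here ...
        return 0 if op == "r" else None

    mmio_trap("gps_driver", 0x4800_0000, "r")            # permitted
    try:
        mmio_trap("ground_ctrl", 0x4900_0004, "w", 255)  # hijack attempt
    except PermissionError as e:
        print(e)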

Sharing More and Checking Less: Leveraging Common Input Keywords to Detect Bugs in Embedded Systems

Libo Chen, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University; Yanhao Wang, QI-ANXIN Technology Research Institute; Quanpu Cai and Yunfan Zhan, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University; Hong Hu, Pennsylvania State University; Jiaqi Linghu, QI-ANXIN Technology Research Institute; Qinsheng Hou, QI-ANXIN Technology Research Institute and Shandong University; Chao Zhang and Haixin Duan, BNRist & Institute for Network Science and Cyberspace, Tsinghua University, and Tsinghua University-QI-ANXIN Group JCNS; Zhi Xue, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University

Available Media

IoT devices have brought invaluable convenience to our daily life. However, their pervasiveness also amplifies the impact of security vulnerabilities. Many popular vulnerabilities of embedded systems reside in their vulnerable web services. Unfortunately, existing vulnerability detection methods can neither effectively nor efficiently analyze such web services: they either introduce heavy execution overheads or produce many false positives and false negatives.

In this paper, we propose a novel static taint checking solution, SaTC, to effectively detect security vulnerabilities in web services provided by embedded devices. Our key insight is that string literals on web interfaces are commonly shared between front-end files and back-end binaries to encode user input. We thus extract such common keywords from the front-end, and use them to locate reference points in the back-end, which indicate the input entry. Then, we apply targeted data-flow analysis to accurately detect dangerous uses of the untrusted user input. We implemented a prototype of SaTC and evaluated it on 39 embedded system firmware images from six popular vendors. SaTC discovered 33 unknown bugs, of which 30 have been confirmed by CVE/CNVD/PSV. Compared to the state-of-the-art tool KARONTE, SaTC found significantly more bugs on the test set, showing that SaTC is effective at discovering bugs in embedded systems.
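
A minimal Python sketch of the keyword-sharing insight follows; the paths and regex are hypothetical, and the real SaTC then follows the located reference points with taint analysis toward dangerous sinks such as system() or sprintf().

    import re
    from pathlib import Path

    def frontend_keywords(www_dir: str) -> set:
        """Collect candidate input keywords (e.g., form-field names such as
        name="username") from front-end files."""
        kw = set()
        for f in Path(www_dir).rglob("*"):
            if f.is_file() and f.suffix in {".html", ".htm", ".js", ".asp"}:
                text = f.read_text(errors="ignore")
                kw.update(re.findall(r'name=["\']([A-Za-z_][\w-]*)["\']', text))
        return kw

    def shared_keywords(binary_path: str, keywords: set) -> set:
        """Keep only keywords that also occur as string literals in the
        back-end binary: these mark where user input enters the binary."""
        blob = Path(binary_path).read_bytes()
        return {k for k in keywords if k.encode() in blob}

    # Hypothetical unpacked-firmware layout:
    hits = shared_keywords("squashfs-root/bin/httpd",
                           frontend_keywords("squashfs-root/www"))
    print(sorted(hits))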

Jetset: Targeted Firmware Rehosting for Embedded Systems

Evan Johnson, University of California, San Diego; Maxwell Bland, YiFei Zhu, and Joshua Mason, University of Illinois at Urbana–Champaign; Stephen Checkoway, Oberlin College; Stefan Savage, University of California, San Diego; Kirill Levchenko, University of Illinois at Urbana–Champaign

Available Media

The ability to execute code in an emulator is a fundamental part of modern vulnerability testing. Unfortunately, this poses a challenge for many embedded systems, where firmware expects to interact with hardware devices specific to the target. Getting embedded system firmware to run outside its native environment, termed rehosting, requires emulating these hardware devices with enough accuracy to convince the firmware that it is executing on the target hardware. However, full fidelity emulation of target devices (which requires considerable engineering effort) may not be necessary to boot the firmware to a point of interest for an analyst (for example, a point where fuzzer input can be injected). We hypothesized that, for the firmware to boot successfully, it is sufficient to emulate only the behavior expected by the firmware, and that this behavior could be inferred automatically.

To test this hypothesis, we developed and implemented Jetset, a system that uses symbolic execution to infer what behavior firmware expects from a target device. Jetset can generate device models for hardware peripherals in C, allowing an analyst to boot the firmware in an emulator (e.g., QEMU). We successfully applied Jetset to thirteen distinct pieces of firmware together representing three architectures, three application domains (power grid, avionics, and consumer electronics), and five different operating systems. We also demonstrate how Jetset-assisted rehosting facilitates fuzz-testing, a common security analysis technique, on an avionics embedded system, in which we found a previously unknown privilege escalation vulnerability.

LIGHTBLUE: Automatic Profile-Aware Debloating of Bluetooth Stacks

Jianliang Wu and Ruoyu Wu, Purdue University; Daniele Antonioli and Mathias Payer, EPFL; Nils Ole Tippenhauer, CISPA Helmholtz Center for Information Security; Dongyan Xu, Dave (Jing) Tian, and Antonio Bianchi, Purdue University

Available Media

The Bluetooth standard is ubiquitously supported by computers, smartphones, and IoT devices. Due to its complexity, implementations require large codebases, which are prone to security vulnerabilities, such as the recently discovered BlueBorne and BadBluetooth attacks. Although defined by the standard, much of the Bluetooth functionality, organized into different Bluetooth profiles, is not required in common usage scenarios.

Starting from this observation, we implement LIGHTBLUE, a framework performing automatic, profile-aware debloating of Bluetooth stacks, allowing users to automatically minimize their Bluetooth attack surface by removing unneeded Bluetooth features. LIGHTBLUE starts with a target Bluetooth application, detects the associated Bluetooth profiles, and applies a combination of control-flow and data-flow analysis to remove unused code within the Bluetooth host code. Furthermore, to debloat the Bluetooth firmware, LIGHTBLUE extracts the used Host Controller Interface (HCI) commands and patches the HCI dispatcher in the Bluetooth firmware automatically, so that the Bluetooth firmware avoids processing unneeded HCI commands.

We evaluate LIGHTBLUE on four different Bluetooth hosts and three different Bluetooth controllers. Our evaluation shows that LIGHTBLUE achieves between 32% and 50% code reduction in the Bluetooth host code and between 57% and 83% HCI command reduction in the Bluetooth firmware. This code reduction leads to the prevention of attacks responsible for 20 known CVEs, such as BlueBorne and BadBluetooth, while introducing no performance overhead and without affecting the behavior of the debloated application.

PACStack: an Authenticated Call Stack

Hans Liljestrand, University of Waterloo; Thomas Nyman and Lachlan J. Gunn, Aalto University; Jan-Erik Ekberg, Huawei Technologies and Aalto University; N. Asokan, University of Waterloo and Aalto University

Available Media

A popular run-time attack technique is to compromise the control-flow integrity of a program by modifying function return addresses on the stack. So far, shadow stacks have proven to be essential for comprehensively preventing return address manipulation. Shadow stacks record return addresses in integrity-protected memory secured with hardware assistance or software access control. Software shadow stacks incur high overheads or trade off security for efficiency. Hardware-assisted shadow stacks are efficient and secure, but require the deployment of special-purpose hardware.

We present authenticated call stack (ACS), an approach that uses chained message authentication codes (MACs). Our prototype, PACStack, uses the ARMv8.3-A general purpose hardware mechanism for pointer authentication (PA) to implement ACS. Via a rigorous security analysis, we show that PACStack achieves security comparable to hardware-assisted shadow stacks without requiring dedicated hardware. We demonstrate that PACStack's performance overhead is small (≈3%).
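
The chained-MAC idea can be shown compactly in Python; this is a conceptual sketch, not PACStack's implementation, with HMAC standing in for the ARMv8.3-A PAC instruction and a list standing in for the call stack. Each return address is bound to the entire call history through the previous token, so overwriting any stored return address invalidates the chain.

    import hashlib, hmac, os

    KEY = os.urandom(16)  # per-process key; PACStack keeps PA keys in the CPU

    def token(ret_addr: int, prev: bytes) -> bytes:
        """auth_i = MAC(key, ret_i || auth_{i-1}): binds ret_i to the whole
        call history (PACStack computes this with one PA instruction)."""
        return hmac.new(KEY, ret_addr.to_bytes(8, "little") + prev,
                        hashlib.sha256).digest()[:8]

    stack = [(0, b"\x00" * 8)]  # genesis (address, auth token) pair

    def call(ret_addr: int):    # function prologue
        stack.append((ret_addr, token(ret_addr, stack[-1][1])))

    def ret() -> int:           # function epilogue
        ret_addr, auth = stack.pop()
        if not hmac.compare_digest(auth, token(ret_addr, stack[-1][1])):
            raise RuntimeError("return address tampered with")
        return ret_addr

    call(0x400500)
    call(0x400abc)
    stack[-1] = (0xdeadbeef, stack[-1][1])  # attacker overwrites a return address
    try:
        ret()
    except RuntimeError as e:
        print("detected:", e)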

9:15 am–9:30 am

Break

9:30 am–11:15 am

Track 1

Usable Security and Privacy: User Perspectives

Session Chairs: Patrick Gage Kelley, Google, and Elissa M. Redmiles, Max Planck Institute for Software Systems

"It's stressful having all these phones": Investigating Sex Workers' Safety Goals, Risks, and Practices Online

Allison McDonald, University of Michigan; Catherine Barwulor, Clemson University; Michelle L. Mazurek, University of Maryland; Florian Schaub, University of Michigan; Elissa M. Redmiles, Max Planck Institute for Software Systems

Distinguished Paper Award Winner

Available Media

We investigate how a population of end-users with especially salient security and privacy risks --- sex workers --- conceptualizes and manages their digital safety. The commercial sex industry is increasingly Internet-mediated. As such, sex workers are facing new challenges in protecting their digital privacy and security and avoiding serious consequences such as stalking, blackmail, and social exclusion. Through interviews (n=29) and a survey (n=65) with sex workers in European countries where sex work is legal and regulated, we find that sex workers have well-defined safety goals and clear awareness of the risks to their safety: clients, deficient legal protections, and hostile digital platforms. In response to these risks, our participants developed complex strategies for protecting their safety, but use few tools specifically designed for security and privacy. Our results suggest that if even high-risk users with clear risk conceptions view existing tools as insufficiently effective to merit the cost of use, these tools are not actually addressing their real security needs. Our findings underscore the importance of more holistic design of security tools to address both online and offline axes of safety.

"Now I'm a bit angry:" Individuals' Awareness, Perception, and Responses to Data Breaches that Affected Them

Peter Mayer, Karlsruhe Institute of Technology; Yixin Zou and Florian Schaub, University of Michigan; Adam J. Aviv, The George Washington University

Available Media

Despite the prevalence of data breaches, there is a limited understanding of individuals' awareness, perception, and responses to breaches that affect them. We provide novel insights into this topic through an online study (n=413) in which we presented participants with up to three data breaches that had exposed their email addresses and other personal information. Overall, 73% of participants were affected by at least one breach, with 5.36 breaches on average. Many participants attributed the cause of being affected by a breach to their poor email and security practices; only 14% correctly attributed the cause to external factors such as breached organizations and hackers. Participants were unaware of 74% of displayed breaches and expressed various emotions when learning about them. While some reported intending to take action, most participants believed the breach would not impact them. Our findings underline the need for user-friendly tools to improve consumers' resilience against breaches and accountability for breached organizations to provide more proactive post-breach communications and mitigations.

"It's the Company, the Government, You and I": User Perceptions of Responsibility for Smart Home Privacy and Security

Julie Haney, National Institute of Standards and Technology; Yasemin Acar, National Institute of Standards and Technology and Leibniz University Hannover; Susanne Furman, National Institute of Standards and Technology

Available Media

Smart home technology may expose adopters to increased risks to network security, information privacy, and physical safety. However, users may lack understanding of the privacy and security implications. Additionally, manufacturers often fail to provide transparency and configuration options, and the few government-provided guidelines have yet to be widely adopted. This results in little meaningful mitigation action to protect users’ security and privacy. But how can this situation be improved, and by whom? It is currently unclear where perceived responsibility for smart home privacy and security lies. To address this gap, we conducted an in-depth interview study of 40 smart home adopters to explore where they assign responsibility and how their perceptions of responsibility relate to their concerns and mitigations. Results reveal that participants’ perceptions of responsibility reflect an interdependent relationship between consumers, manufacturers, and third parties such as the government. However, perceived breakdowns and gaps in this relationship leave users concerned about their security and privacy. Based on our results, we suggest ways in which these actors can address gaps and better support each other.

The Role of Computer Security Customer Support in Helping Survivors of Intimate Partner Violence

Yixin Zou and Allison McDonald, University of Michigan; Julia Narakornpichit, Nicola Dell, and Thomas Ristenpart, Cornell Tech; Kevin Roundy, Norton Research Group; Florian Schaub, University of Michigan; Acar Tamersoy, Norton Research Group

Available Media

Technology plays an increasingly salient role in facilitating intimate partner violence (IPV). Customer support teams at computer security companies are receiving cases that involve tech-enabled IPV but might not be well equipped to handle them. To assess customer support's existing practices and identify areas for improvement, we conducted five focus groups with professionals who work with IPV survivors (n=17). IPV professionals made numerous suggestions, such as using trauma-informed language, avoiding promises to solve problems, and making referrals to resources and support organizations. To evaluate the practicality of these suggestions, we conducted four focus groups with customer support practitioners (n=11). Support practitioners expressed interest in training agents for IPV cases, but mentioned challenges in identifying potential survivors and frontline agents' limited capacity to help. We conclude with recommendations for computer security companies to better address tech-enabled IPV by training support agents, tracking the prevalence of these cases, and establishing partnerships with IPV advocates.

Evaluating In-Workflow Messages for Improving Mental Models of End-to-End Encryption

Omer Akgul, Wei Bai, Shruti Das, and Michelle L. Mazurek, University of Maryland

Available Media

As large messaging providers increasingly adopt end-to-end encryption, private communication is readily available to more users than ever before. However, misunderstandings of end-to-end encryption's benefits and shortcomings limit people's ability to make informed choices about how and when to use these services. This paper explores the potential of using short educational messages, built into messaging workflows, to improve users' functional mental models of secure communication. A preliminary survey study (n=461) finds that such messages, when used in isolation, can effectively improve understanding of several key concepts. We then conduct a longitudinal study (n=61) to test these messages in a more realistic environment: embedded into a secure messaging app. In this second study, we do not find statistically significant evidence of improvement in mental models; however, qualitative evidence from participant interviews suggests that if made more salient, such messages could have potential to improve users' understanding.

PriSEC: A Privacy Settings Enforcement Controller

Rishabh Khandelwal and Thomas Linden, University of Wisconsin–Madison; Hamza Harkous, Google Inc.; Kassem Fawaz, University of Wisconsin–Madison

Available Media

Online privacy settings aim to provide users with control over their data. However, in their current state, they suffer from usability and reachability issues. The recent push towards automatically analyzing privacy notices has not been accompanied by a similar effort for the more critical case of privacy settings. So far, the best efforts have targeted the special case of making opt-out pages more reachable. In this work, we present PriSEC, a Privacy Settings Enforcement Controller that leverages machine learning techniques towards a new paradigm for automatically enforcing web privacy controls. PriSEC goes beyond finding the webpages with privacy settings to discovering fine-grained options, presenting them in a searchable, centralized interface, and – most importantly – enforcing them on demand with minimal user intervention. We overcome the open nature of web development through novel algorithms that leverage the invariant behavior and rendering of webpages. We evaluate the performance of PriSEC and find that it is able to precisely annotate the privacy controls for 94.3% of the control pages in our evaluation set. To demonstrate the usability of PriSEC, we conduct a user study with 148 participants. We show an average 3.75x reduction in the time taken to adjust privacy settings as compared to the baseline system.

Are Privacy Dashboards Good for End Users? Evaluating User Perceptions and Reactions to Google's My Activity

Florian M. Farke, Ruhr University Bochum; David G. Balash, The George Washington University; Maximilian Golla, Max Planck Institute for Security and Privacy; Markus Dürmuth, Ruhr University Bochum; Adam J. Aviv, The George Washington University

Available Media

Privacy dashboards and transparency tools help users review and manage the data collected about them online. Since 2016, Google has offered such a tool, My Activity, which allows users to review and delete their activity data from Google services. We conducted an online survey with n = 153 participants to understand whether Google's My Activity, as an example of a privacy transparency tool, increases or decreases end-users' concerns and perceived benefits regarding data collection. While most participants were aware of Google's data collection, its volume and detail surprised them. After exposure to My Activity, participants were significantly more likely to be less concerned about data collection and to view data collection as more beneficial. Only 25% indicated that they would change any settings in the My Activity service or change any behaviors. This suggests that privacy transparency tools are quite beneficial for online services, as they garner trust with their users and improve their perceptions without necessarily changing users' behaviors. At the same time, it remains unclear whether such transparency tools actually improve end-user privacy by sufficiently assisting or motivating users to change or review data collection settings.

Track 2

Cryptographic Proof Systems, Analysis, and Applications

Session Chair: Sherman S. M. Chow, The Chinese University of Hong Kong

Mystique: Efficient Conversions for Zero-Knowledge Proofs with Applications to Machine Learning

Chenkai Weng, Northwestern University; Kang Yang, State Key Laboratory of Cryptology; Xiang Xie, Shanghai Key Laboratory of Privacy-Preserving Computation and MatrixElements Technologies; Jonathan Katz, University of Maryland; Xiao Wang, Northwestern University

Available Media

Recent progress in interactive zero-knowledge (ZK) proofs has improved the efficiency of proving large-scale computations significantly. Nevertheless, real-life applications (e.g., in the context of private inference using deep neural networks) often involve highly complex computations, and existing ZK protocols lack the expressiveness and scalability to prove results about such computations efficiently.

In this paper, we design, develop, and evaluate a ZK system (Mystique) that allows for efficient conversions between arithmetic and Boolean values, between publicly committed and privately authenticated values, and between fixed-point and floating-point numbers. Targeting large-scale neural-network inference, we also present an improved ZK protocol for matrix multiplication that yields a 7× improvement compared to the state-of-the-art. Finally, we incorporate Mystique in Rosetta, a TensorFlow-based privacy-preserving framework.

Mystique is able to prove correctness of an inference on a private image using a committed (private) ResNet-101 model in 28 minutes, and can do the same task when the model is public in 5 minutes, with only a 0.02% decrease in accuracy compared to a non-ZK execution when testing on the CIFAR10 dataset. Our system is the first to support ZK proofs about neural-network models with over 100 layers with virtually no loss of accuracy.

Poseidon: A New Hash Function for Zero-Knowledge Proof Systems

Lorenzo Grassi, Radboud University Nijmegen; Dmitry Khovratovich, Ethereum Foundation and Dusk Network; Christian Rechberger, IAIK, Graz University of Technology; Arnab Roy, University of Klagenfurt; Markus Schofnegger, IAIK, Graz University of Technology

Available Media

The area of practical computational integrity proof systems, such as SNARKs, STARKs, and Bulletproofs, is seeing very dynamic development, with several recent constructions offering improved properties and relaxed setup requirements. Many use cases of such systems involve, often as their most expensive part, proving the knowledge of a preimage under a certain cryptographic hash function, which is expressed as a circuit over a large prime field. A notable example is the zero-knowledge proof of coin ownership in the Zcash cryptocurrency, where the inadequacy of the SHA-256 hash function for such a circuit caused a huge computational penalty.

In this paper, we present a modular framework and concrete instances of cryptographic hash functions which work natively with GF(p) objects. Our hash function Poseidon uses up to 8x fewer constraints per message bit than Pedersen Hash.

Our construction is not only expressed compactly as a circuit, but can also be tailored for various proof systems using specially crafted polynomials, thus bringing another boost in performance. We demonstrate this by implementing a 1-out-of-a-billion membership proof with Merkle trees in less than a second using Bulletproofs.
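
The structure is simple enough to sketch. The toy Python permutation below keeps Poseidon's round shape (add round constants, x^5 S-box, MDS-style linear layer) but uses a small prime and placeholder parameters; real instances also use partial rounds and carefully generated constants and matrices over the proof system's ~256-bit field.

    p = 2**31 - 1  # small prime with gcd(5, p-1) = 1, so x^5 is a permutation
    t = 3          # state width, e.g. capacity + 2 message field elements
    MDS = [[2, 3, 1], [1, 2, 3], [3, 1, 2]]  # placeholder mixing matrix
    ROUND_CONSTANTS = [[(7919 * (i * t + j + 1)) % p for j in range(t)]
                       for i in range(8)]    # placeholder constants

    def full_round(state, rc):
        state = [(x + c) % p for x, c in zip(state, rc)]    # AddRoundConstants
        state = [pow(x, 5, p) for x in state]               # S-box: x -> x^5
        return [sum(m * x for m, x in zip(row, state)) % p  # MDS linear layer
                for row in MDS]

    state = [0, 1, 2]  # toy input: capacity plus two field elements
    for rc in ROUND_CONSTANTS:
        state = full_round(state, rc)
    print("digest element:", state[0])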

Dynamic proofs of retrievability with low server storage

Gaspard Anthoine, Jean-Guillaume Dumas, Mélanie de Jonghe, Aude Maignan, and Clément Pernet, Université Grenoble Alpes; Michael Hanling and Daniel S. Roche, United States Naval Academy

Available Media

Proofs of Retrievability (PoRs) are protocols which allow a client to store data remotely and to efficiently ensure, via audits, that the entirety of that data is still intact. A dynamic PoR system also supports efficient retrieval and update of any small portion of the data. We propose new, simple protocols for dynamic PoR that are designed for practical efficiency, trading decreased persistent storage for increased server computation, and show that this tradeoff is in fact inherent via a time-space lower bound for any PoR scheme. Notably, ours is the first dynamic PoR which does not require any special encoding of the data stored on the server, meaning it can be trivially composed with any database service or with existing techniques for encryption or redundancy. Our implementation and deployment on Google Cloud Platform demonstrates that our solution is scalable: for example, auditing a 1TB file takes just under 5 minutes and costs less than $0.08 USD. We also present several further enhancements, reducing the amount of client storage, or the communication bandwidth, or allowing public verifiability, wherein any untrusted third party may conduct an audit.
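
For readers unfamiliar with PoR audits, the hedged Python sketch below shows the basic spot-checking logic audits build on; it is not this paper's scheme (which specifically avoids storing extra verification data on the server), but it makes the audit soundness arithmetic concrete: a server missing a fraction f of the data survives c random challenges with probability (1-f)^c.

    import hashlib, hmac, os, random

    KEY = os.urandom(16)
    def tag(i: int, blk: bytes) -> bytes:
        return hmac.new(KEY, i.to_bytes(8, "big") + blk, hashlib.sha256).digest()

    # The "server" stores 1,000 blocks plus per-block tags (a classical
    # layout, unlike the paper's low-storage scheme).
    blocks = [os.urandom(4096) for _ in range(1000)]
    tags = [tag(i, b) for i, b in enumerate(blocks)]

    def audit(challenges: int = 30) -> bool:
        """Spot-check random blocks; any corrupt challenged block fails."""
        return all(tag(i, blocks[i]) == tags[i]
                   for i in random.sample(range(len(blocks)), challenges))

    print("honest server passes:", audit())
    blocks[123] = os.urandom(4096)  # server silently corrupts one block
    print(f"per-audit detection probability: {1 - (1 - 1/1000) ** 30:.1%}")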

Where's Crypto?: Automated Identification and Classification of Proprietary Cryptographic Primitives in Binary Code

Carlo Meijer, Radboud University; Veelasha Moonsamy, Ruhr University Bochum; Jos Wetzels, Midnight Blue Labs

Available Media

The continuing use of proprietary cryptography in embedded systems across many industry verticals, from physical access control systems and telecommunications to machine-to-machine authentication, presents a significant obstacle to black-box security-evaluation efforts. In-depth security analysis requires locating and classifying the algorithm in often very large binary images, rendering manual inspection, even when aided by heuristics, time-consuming.

In this paper, we present a novel approach to automate the identification and classification of (proprietary) cryptographic primitives within binary code. Our approach is based on Data Flow Graph (DFG) isomorphism, previously proposed by Lestringant et al. Unfortunately, their DFG isomorphism approach is limited to known primitives only, and relies on heuristics for selecting code fragments for analysis. By combining this approach with symbolic execution, we overcome all of these limitations and are able to extend the analysis into the domain of unknown, proprietary cryptographic primitives. To demonstrate that our proposal is practical, we develop various signatures, each targeted at a distinct class of cryptographic primitives, and present experimental evaluations for each of them on a set of binaries, both publicly available (and thus providing reproducible results) and proprietary. Lastly, we provide a free and open-source implementation of our approach, called Where's Crypto?, in the form of a plug-in for the popular IDA disassembler.

Towards Formal Verification of State Continuity for Enclave Programs

Mohit Kumar Jangid, The Ohio State University; Guoxing Chen, Shanghai Jiao Tong University; Yinqian Zhang, Southern University of Science and Technology; Zhiqiang Lin, The Ohio State University

Available Media

Trusted Execution Environments such as Intel SGX provide software applications with hardware support for preventing attacks from privileged software. However, these applications are still subject to rollback or replay attacks due to their lack of state continuity protection from the hardware. Therefore, maintaining state continuity has become a burden of software developers, which is not only challenging to implement but also difficult to validate. In this paper, we make the first attempt towards formally verifying the property of state continuity for SGX enclave programs by leveraging the symbolic verification tool, Tamarin Prover, to model SGX-specific program semantics and operations, and verify the property of state continuity with respect to monotonic counters, global variables, and sealed data, respectively. We apply this method to analyze these three types of state continuity issues exhibited in three open-source SGX applications. We show that our method can successfully identify the flaws that lead to failures of maintaining state continuity, and formally verify the corrected implementation with respect to the desired property. The discovered flaws have been reported to the developers and some have been addressed.

Protecting Cryptography Against Compelled Self-Incrimination

Sarah Scheffler and Mayank Varia, Boston University

Available Media

The information security community has devoted substantial effort to the design, development, and universal deployment of strong encryption schemes that withstand search and seizure by computationally-powerful nation-state adversaries. In response, governments are increasingly turning to a different tactic: issuing subpoenas that compel people to decrypt devices themselves, under the penalty of contempt of court if they do not comply. Compelled decryption subpoenas sidestep questions around government search powers that have dominated the Crypto Wars and instead touch upon a different (and still unsettled) area of the law: how encryption relates to a person's right to silence and against self-incrimination.

In this work, we provide a rigorous, composable definition of a critical piece of the law that determines whether cryptosystems are vulnerable to government compelled disclosure in the United States. We justify our definition by showing that it is consistent with prior court cases. We prove that decryption is often not compellable by the government under our definition. Conversely, we show that many techniques that bolster security overall can leave one more vulnerable to compelled disclosure.

As a result, we initiate the study of protecting cryptographic protocols against the threat of future compelled disclosure. We find that secure multi-party computation is particularly vulnerable to this threat, and we design and implement new schemes that are provably resilient in the face of government compelled disclosure. We believe this work should influence the design of future cryptographic primitives and contribute toward the legal debates over the constitutionality of compelled decryption.

CSProp: Ciphertext and Signature Propagation Low-Overhead Public-Key Cryptosystem for IoT Environments

Fatemah Alharbi, Taibah University, Yanbu; Arwa Alrawais, Prince Sattam Bin Abdulaziz University; Abdulrahman Bin Rabiah, University of California, Riverside, and King Saud University; Silas Richelson and Nael Abu-Ghazaleh, University of California, Riverside

Available Media

Cryptographic operations can be prohibitively expensive for IoT and other resource-constrained devices. We introduce a new cryptographic primitive, Ciphertext and Signature Propagation (CSProp), to deliver security to weak end devices. CSProp is a cryptographic propagation algorithm whereby an untrusted machine sitting upstream of a lightweight device can modify an authenticated message so that it can be efficiently verified. Unlike proxy-based solutions, this upstream machine is stateless and untrusted (making it possible for any device to serve that role), and the propagated signature is mathematically guaranteed to be valid only if the original signature is also valid. CSProp relies on RSA security and can be used to optimize any operations using the public key, such as signature validation and encryption, which our experiments show are the most common public-key operations in IoT settings. We test CSProp by using it to extend DNSSEC to edge devices (validation), and to optimize the performance of TLS (validation and encryption) on a range of resource-constrained devices. CSProp reduces DNSSEC validation latency by 78x and energy consumption by 47x on the Raspberry Pi Zero. It reduces TLS handshake latency and energy by an average of 8x each. On an Arduino-based IoT board, CSProp significantly outperforms traditional RSA public-key operations (e.g., 57x and 36x reductions in latency and energy consumption, respectively, for encryption).

Track 3

Hardware Side Channel Attacks

Session Chairs: Xiali (Sharon) Hei, University of Louisiana at Lafayette, and Sara Rampazzi, University of Michigan and University of Florida

Automatic Extraction of Secrets from the Transistor Jungle using Laser-Assisted Side-Channel Attacks

Thilo Krachenfels and Tuba Kiyan, Technische Universität Berlin; Shahin Tajik, Worcester Polytechnic Institute; Jean-Pierre Seifert, Technische Universität Berlin and Fraunhofer SIT

Available Media

The security of modern electronic devices relies on secret keys stored on secure hardware modules as the root-of-trust (RoT). Extracting those keys would break the security of the entire system. As shown before, sophisticated side-channel analysis (SCA) attacks using chip failure analysis (FA) techniques can extract data from on-chip memory cells. However, since the chip's layout is unknown to the adversary in practice, secret key localization and reverse engineering are onerous tasks. Consequently, hardware vendors commonly believe that the ever-growing physical complexity of integrated circuit (IC) designs can be a natural barrier against potential adversaries. In this work, we present a novel approach that can extract the secret key without any knowledge of the IC's layout, independently of the memory technology employed as key storage. We automate the traditionally very labor-intensive reverse engineering and data extraction process. To that end, we demonstrate that black-box measurements, captured using laser-assisted SCA techniques from a training device with a known key, can be used to profile the device for later key prediction on other victim devices with unknown keys. To showcase the potential of our approach, we target keys on three different hardware platforms, which are utilized as RoT in different products.
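
The profiling workflow described above can be sketched as a template-style attack; everything below is synthetic (a toy signal model and nearest-centroid templates rather than the authors' actual measurement and learning pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(bit, n_points=64):
    """Synthetic optical side-channel response of one memory cell."""
    signal = np.sin(np.linspace(0, 3, n_points)) * (1.0 if bit else 0.2)
    return signal + rng.normal(0, 0.3, n_points)

# Profiling phase: traces from a training device with known key bits.
train_bits = rng.integers(0, 2, 500)
traces = np.array([measure(b) for b in train_bits])
template = {b: traces[train_bits == b].mean(axis=0) for b in (0, 1)}

# Attack phase: classify victim traces by the nearest template.
secret = rng.integers(0, 2, 128)
recovered = []
for s in secret:
    trace = measure(s)
    recovered.append(min((0, 1),
                         key=lambda b: np.linalg.norm(trace - template[b])))
print("key bits recovered:", (np.array(recovered) == secret).mean() * 100, "%")
```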

Lord of the Ring(s): Side Channel Attacks on the CPU On-Chip Ring Interconnect Are Practical

Riccardo Paccagnella, Licheng Luo, and Christopher W. Fletcher, University of Illinois at Urbana-Champaign

Available Media

We introduce the first microarchitectural side channel attacks that leverage contention on the CPU ring interconnect. There are two challenges that make it uniquely difficult to exploit this channel. First, little is known about the ring interconnect's functioning and architecture. Second, information that can be learned by an attacker through ring contention is noisy by nature and has coarse spatial granularity. To address the first challenge, we perform a thorough reverse engineering of the sophisticated protocols that handle communication on the ring interconnect. With this knowledge, we build a cross-core covert channel over the ring interconnect with a capacity of over 4 Mbps from a single thread, the largest to date for a cross-core channel not relying on shared memory. To address the second challenge, we leverage the fine-grained temporal patterns of ring contention to infer a victim program's secrets. We demonstrate our attack by extracting key bits from vulnerable EdDSA and RSA implementations, as well as inferring the precise timing of keystrokes typed by a victim user.
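
Schematically, the receiver of such a channel decodes bits by thresholding its own load latency; in the sketch below the latencies are simulated, whereas the real attack measures actual ring-interconnect timings:

```python
import numpy as np

rng = np.random.default_rng(1)
SAMPLES_PER_BIT, BASE_LATENCY, CONTENTION = 100, 100.0, 15.0

def transmit(bits):
    """Sender: create interconnect contention for 1-bits (simulated)."""
    lat = []
    for b in bits:
        mean = BASE_LATENCY + (CONTENTION if b else 0.0)
        lat.extend(rng.normal(mean, 5.0, SAMPLES_PER_BIT))
    return np.array(lat)

def receive(latencies):
    """Receiver: threshold the mean load latency within each bit window."""
    per_bit = latencies.reshape(-1, SAMPLES_PER_BIT).mean(axis=1)
    return (per_bit > per_bit.mean()).astype(int)

message = rng.integers(0, 2, 64)
print("bit error rate:", (receive(transmit(message)) != message).mean())
```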

Frontal Attack: Leaking Control-Flow in SGX via the CPU Frontend

Ivan Puddu, Moritz Schneider, Miro Haller, and Srdjan Čapkun, ETH Zurich

Available Media

We introduce a new timing side-channel attack on Intel CPUs. Our Frontal attack exploits timing differences that arise from how the CPU frontend fetches and processes instructions while being interrupted. In particular, we observe that in modern Intel CPUs, some instructions' execution times depend on which operations precede and succeed them, and on their virtual addresses. Unlike previous attacks, which could only profile branches if they contained different code or had known branch targets, the Frontal attack allows the adversary to distinguish between instruction-wise identical branches. As the attack requires OS capabilities to schedule the interrupts, we use it to exploit SGX enclaves. Our attack further demonstrates that secret-dependent branches should not be used, even alongside defenses against current controlled-channel attacks. We show that the adversary can use the Frontal attack to extract a secret from an SGX enclave if that secret was used as a branching condition for two instruction-wise identical branches. We successfully tested the attack on all available Intel CPUs with SGX (up to 10th gen) and used it to leak information from two commonly used cryptographic libraries.

Charger-Surfing: Exploiting a Power Line Side-Channel for Smartphone Information Leakage

Patrick Cronin, Xing Gao, and Chengmo Yang, University of Delaware; Haining Wang, Virginia Tech

Available Media

Touchscreen-based mobile devices such as smartphones and tablets are used daily by billions of people for productivity and entertainment. This paper uncovers a new security threat posed by a side-channel leakage through the power line, called Charger-Surfing, which targets these touchscreen devices. We reveal that while a smartphone is charging, its power trace, which can be measured via the USB charging cable, leaks information about the dynamic content on its screen. This information can be utilized to determine the location on the touchscreen where an animation is played by the mobile OS to indicate, for instance, that a button press has been registered. We develop a portable, low-cost power trace collection system for the side-channel construction. This leakage channel is thoroughly evaluated on various smartphones running Android or iOS, equipped with the two most commonly used screen technologies (LCD and OLED). We validate the effectiveness of Charger-Surfing by conducting a case study on a passcode unlock screen. Our experiments show that an adversary can exploit Charger-Surfing across a wide range of smartphone models to achieve an average accuracy of 98.7% for single button inference, and an average of 95.1% or 92.8% accuracy on the first attempt when cracking a victim's 4-digit or 6-digit passcode, respectively. The inference accuracy increases to 99.3% (4-digit) or 96.9% (6-digit) within five trials. We further demonstrate the robustness of Charger-Surfing in realistic settings and discuss countermeasures against it.

VoltPillager: Hardware-based fault injection attacks against Intel SGX Enclaves using the SVID voltage scaling interface

Zitai Chen, Georgios Vasilakis, Kit Murdock, Edward Dean, David Oswald, and Flavio D. Garcia, School of Computer Science, University of Birmingham, UK

Available Media

Hardware-based fault injection attacks such as voltage and clock glitching have been thoroughly studied on embedded devices. Typical targets for such attacks include smartcards and low-power microcontrollers used in IoT devices. This paper presents the first hardware-based voltage glitching attack against a fully-fledged Intel CPU. The transition to complex CPUs is not trivial due to several factors, including: a complex operating system, large power consumption, multi-threading, and high clock speeds. To this end, we have built VoltPillager, a low-cost tool for injecting messages on the Serial Voltage Identification bus between the CPU and the voltage regulator on the motherboard. This allows us to precisely control the CPU core voltage. We leverage this powerful tool to mount fault-injection attacks that breach confidentiality and integrity of Intel SGX enclaves. We present proof-of-concept key-recovery attacks against cryptographic algorithms running inside SGX. We demonstrate that VoltPillager attacks are more powerful than recent software-only undervolting attacks against SGX (CVE-2019-11157) because they work on fully patched systems with all countermeasures against software undervolting enabled. Additionally, we are able to fault security-critical operations by delaying memory writes. Mitigation of VoltPillager is not straightforward and may require a rethink of the SGX adversarial model where a cloud provider is untrusted and has physical access to the hardware.

CIPHERLEAKS: Breaking Constant-time Cryptography on AMD SEV via the Ciphertext Side Channel

Mengyuan Li, The Ohio State University; Yinqian Zhang, Southern University of Science and Technology; Huibo Wang and Kang Li, Baidu Security; Yueqiang Cheng, NIO Security Research

Available Media

AMD's Secure Encrypted Virtualization (SEV) is a hardware extension available in AMD's EPYC server processors to support confidential cloud computing. While various prior studies have demonstrated attacks against SEV by exploiting its lack of encryption in the VM control block or the lack of integrity protection of the encrypted memory and nested page tables, these issues have been addressed in the subsequent releases of SEV-Encrypted State (SEV-ES) and SEV-Secure Nested Paging (SEV-SNP).

In this paper, we study a previously unexplored vulnerability of SEV, including both SEV-ES and SEV-SNP. The vulnerability, which we dub the ciphertext side channel, allows a privileged adversary to infer a guest VM's execution states or recover certain plaintext. To demonstrate the severity of the vulnerability, we present the CIPHERLEAKS attack, which exploits the ciphertext side channel to steal private keys from the constant-time implementations of RSA and ECDSA in the latest OpenSSL library.
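
The root cause is that SEV-style memory encryption is deterministic for a given physical address, so an adversary who can read ciphertext learns when a plaintext value recurs. A toy model of this leakage (a keyless hash stands in for the hardware's tweaked encryption; no real SEV interfaces are involved):

```python
import hashlib

def enc(addr, block):
    """Deterministic per-address encryption (toy stand-in)."""
    return hashlib.sha256(addr.to_bytes(8, "big") + block).digest()[:16]

addr = 0x7F000000                     # one physical address in the guest
seen = {}
for secret_bit in [1, 0, 1, 1, 0]:    # values a victim writes over time
    ct = enc(addr, secret_bit.to_bytes(16, "big"))
    label = seen.setdefault(ct, f"plaintext #{len(seen)}")
    print(ct.hex()[:12], "->", label)  # equal plaintexts collide: leakage
```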

Cross-VM and Cross-Processor Covert Channels Exploiting Processor Idle Power Management

Paizhuo Chen, Lei Li, and Zhice Yang, ShanghaiTech University

Available Media

To achieve power-efficient computing, processors engage idle power management mechanisms to turn on/off idle components according to the dynamics of the workload. A processor's hardware components are classified and managed through the core and the uncore. The uncore is the supporting hardware shared by the cores, hence the decision of turning it on/off depends on the cores' activities. Such dependency implies a covert channel threat in multi-core platforms. Specifically, the power status of the uncore reflects the workload pattern of the active core, and it can be probed by any process running on the processor. This allows the process to infer the workload information of the active core. We show this covert channel can work across processors and violate VM isolation. We validate the channel in in-house testbeds as well as proprietary cloud servers.

11:15 am–11:45 am

Break

11:45 am–1:30 pm

Track 1

Permissions and Passwords

Session Chairs: Hamza Harkous, Google, and Kassem Fawaz, University of Wisconsin—Madison

Can Systems Explain Permissions Better? Understanding Users' Misperceptions under Smartphone Runtime Permission Model

Bingyu Shen, University of California, San Diego; Lili Wei, The Hong Kong University of Science and Technology; Chengcheng Xiang, Yudong Wu, Mingyao Shen, and Yuanyuan Zhou, University of California, San Diego; Xinxin Jin, Whova, Inc.

Available Media

Current smartphone operating systems enable users to manage permissions according to their personal preferences with a runtime permission model. Nonetheless, the systems provide very limited information when requesting permissions, making it difficult for users to understand permissions' capabilities and potentially induced risks.

In this paper, we first investigated to what extent current system-provided information can help users understand the scope of permissions and their potential risks. We took a mixed-methods approach, collecting real permission settings from 4,636 Android users, an interview study of 20 participants, and large-scale Internet surveys of 1,559 users. Our study identified several common misunderstandings of the runtime permission model among users. We found that only a very small percentage (6.1%) of users can infer the scope of permission groups accurately from the system-provided information. This indicates that the information provided by current systems is far from sufficient.

We then explored what extra information systems can provide to help users make more informed permission decisions. By surveying users' common concerns about apps' permission requests, we identified five types of information (i.e., decision factors) that are helpful for users' decisions. We further studied the impact and helpfulness of these factors on users' permission decisions under both positive and negative message framings. Our study shows that the background access factor helps the most while the grant rate helps the least. Based on the findings, we provide suggestions for system designers to enhance future systems with more permission information.

"Shhh...be quiet!" Reducing the Unwanted Interruptions of Notification Permission Prompts on Chrome

Igor Bilogrevic, Balazs Engedy, Judson L. Porter III, Nina Taft, Kamila Hasanbega, Andrew Paseltiner, Hwi Kyoung Lee, Edward Jung, Meggyn Watkins, PJ McLachlan, and Jason James, Google

Available Media

Push notifications can be a very useful feature. On web browsers, they allow users to receive timely updates even if the website is not currently open. On Chrome, the feature has become extremely popular since its inception in 2015, but it is also the least likely to be accepted by users. Chrome telemetry shows that, although 74% of all permission prompts are about notifications, they are the least likely to be granted, with only a 12% grant rate on desktop and a 23% grant rate on Android. To preserve the feature's utility for websites while reducing unwanted interruptions and potential abuse for users, we designed and tested both a novel UI and its activation mechanism for notification permission prompts in Chrome.

To understand how users interact with such prompts, we conducted two large-scale studies with more than 300 million users in the wild. The first study showed that most of them block or ignore the prompts across all types of websites, which prompted us to rethink its UI and activation logic. The second study, based on an A/B test using behavioral data from more than 40 million users who interacted with more than 100 million prompts on more than 70 thousand websites, showed that the new prompt is very effective at reducing unwanted interruptions and their frequency (up to 30% fewer unnecessary actions on the prompts), with a minimal impact (less than 5%) on the grant rates, across all types of users and websites. We achieve these results thanks to a novel adaptive activation mechanism coupled with a block list of interrupting websites, which is derived from crowd-sourced telemetry from Chrome clients.
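
As a purely hypothetical illustration of what an adaptive activation rule of this shape might look like (Chrome's actual logic is not reproduced here; all names and thresholds below are invented):

```python
BLOCKED_SITES = {"spammy.example"}    # hypothetical crowd-sourced block list

def use_quiet_prompt(site, user_grants, user_prompts):
    """Hypothetical rule: quiet UI for listed sites or prompt-averse users."""
    if site in BLOCKED_SITES:
        return True
    grant_rate = user_grants / user_prompts if user_prompts else 1.0
    return user_prompts >= 3 and grant_rate < 0.1

print(use_quiet_prompt("spammy.example", 5, 10))  # True: site is listed
print(use_quiet_prompt("news.example", 0, 5))     # True: user never grants
print(use_quiet_prompt("news.example", 2, 5))     # False: full prompt shown
```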

Explanation Beats Context: The Effect of Timing & Rationales on Users' Runtime Permission Decisions

Yusra Elbitar, CISPA Helmholtz Center for Information Security, Saarland University; Michael Schilling, CISPA Helmholtz Center for Information Security; Trung Tin Nguyen, CISPA Helmholtz Center for Information Security, Saarland University; Michael Backes and Sven Bugiel, CISPA Helmholtz Center for Information Security

Available Media

Current mobile platforms leave it up to the app developer to decide when to request permissions (timing) and whether to provide explanations of why and how users' private data are accessed (rationales). Given these liberties, it is important to understand how developers should use timing and rationales to effectively assist users in their permission decisions. While guidelines and recommendations for developers exist, no study has systematically investigated the actual influence of timing, rationales, and their combinations on users' decision-making process. In this work, we conducted a comparative online study with 473 participants who were asked to interact with mockup apps drawn from a pool of 120 variations of 30 apps. The study design was guided by developers' current permission request practices, derived from a dynamic analysis of the top apps on Google Play. Our results show that there is a clear interplay between timing and rationales on users' permission decisions and the evaluation of their decisions: the effect of rationales is stronger when they are shown upfront, and the effect of timing is limited when rationales are present. We therefore suggest adaptations to the available guidelines. We also find that permission decisions depend on the individuality of users, indicating that there is no one-size-fits-all permission request strategy; based on this, we suggest better individual support and outline one possible solution.

A Large Scale Study of User Behavior, Expectations and Engagement with Android Permissions

Weicheng Cao and Chunqiu Xia, University of Toronto; Sai Teja Peddinti, Google; David Lie, University of Toronto; Nina Taft, Google; Lisa M. Austin, University of Toronto

Available Media

We conduct a global study on the behaviors, expectations and engagement of 1,719 participants across 10 countries and regions towards Android application permissions. Participants were recruited using mobile advertising and used an application we designed for 30 days. Our app samples user behaviors (decisions made), rationales (via in-situ surveys), expectations, and attitudes, as well as some app provided explanations. We study the grant and deny decisions our users make, and build mixed effect logistic regression models to illustrate the many factors that influence this decision making. Among several interesting findings, we observed that users facing an unexpected permission request are more than twice as likely to deny it compared to a user who expects it, and that permission requests accompanied by an explanation have a deny rate that is roughly half the deny rate of app permission requests without explanations. These findings remain true even when controlling for other factors. To the best of our knowledge, this may be the first study of actual privacy behavior (not stated behavior) for Android apps, with users using their own devices, across multiple continents.

Reducing Bias in Modeling Real-world Password Strength via Deep Learning and Dynamic Dictionaries

Dario Pasquini, Sapienza University of Rome, Institute of Applied Computing CNR; Marco Cianfriglia, Institute of Applied Computing CNR; Giuseppe Ateniese, Stevens Institute of Technology; Massimo Bernaschi, Institute of Applied Computing CNR

Available Media

Password security hinges on an in-depth understanding of the techniques adopted by attackers. Unfortunately, real-world adversaries resort to pragmatic guessing strategies, such as dictionary attacks, that are inherently difficult to model in password security studies. To be representative of the actual threat, dictionary attacks must be thoughtfully configured and tuned. However, this process requires domain knowledge and expertise that cannot be easily replicated. The consequence of inaccurately calibrated dictionary attacks is unreliable password security analyses, impaired by severe measurement bias.

In the present work, we introduce a new generation of dictionary attacks that is consistently more resilient to inadequate configurations. Requiring no supervision or domain knowledge, this technique automatically approximates the advanced guessing strategies adopted by real-world attackers. To achieve this: (1) we use deep neural networks to model the proficiency of adversaries in building attack configurations; (2) we then introduce dynamic guessing strategies within dictionary attacks, which mimic experts' ability to adapt their guessing strategies on the fly by incorporating knowledge about their targets.

Our techniques enable more robust and sound password strength estimates within dictionary attacks, ultimately reducing overestimation in modeling real-world threats in password security.
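
To illustrate just the "dynamic" half of this idea: the toy below adapts its guess ordering using simple structural masks, standing in for the paper's neural models; all passwords here are made up.

```python
from collections import Counter

def mask(pw):
    """Structural mask, e.g. 'Summer19' -> 'ULLLLLDD'."""
    return "".join("U" if c.isupper() else "L" if c.islower()
                   else "D" if c.isdigit() else "S" for c in pw)

def dynamic_attack(dictionary, targets, budget):
    hits, mask_score = set(), Counter()
    queue = list(dictionary)
    for _ in range(budget):
        if not queue:
            break
        queue.sort(key=lambda p: -mask_score[mask(p)])  # adapt the ordering
        guess = queue.pop(0)
        if guess in targets:                       # a "cracked" password
            hits.add(guess)
            mask_score[mask(guess)] += 1           # promote similar guesses
    return hits

targets = {"Summer19", "Winter20", "Autumn21"}     # made-up victim set
dictionary = ["Summer19", "!!!###", "QWERTY", "Winter20", "Autumn21", "xyz"]
print(dynamic_attack(dictionary, targets, budget=4))  # all three in 4 guesses
```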

Using Amnesia to Detect Credential Database Breaches

Ke Coby Wang, University of North Carolina at Chapel Hill; Michael K. Reiter, Duke University

Available Media

Known approaches for using decoy passwords (honeywords) to detect credential database breaches suffer from the need for a trusted component to recognize decoys when entered in login attempts, and from an attacker's ability to test stolen passwords at other sites to identify user-chosen passwords based on their reuse at those sites. Amnesia is a framework that resolves these difficulties. Amnesia requires no secret state to detect the entry of honeywords and additionally allows a site to monitor for the entry of its decoy passwords elsewhere. We quantify the benefits of Amnesia using probabilistic model checking and the practicality of this framework through measurements of a working implementation.
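
For contrast, here is a minimal sketch of the classic honeyword design and its trusted "honeychecker", the secret-state component that Amnesia eliminates:

```python
# Stored at the login server: k "sweetwords" per account, only one real.
SWEETWORDS = {"alice": ["tiger12", "sunset88", "maple07"]}
# Stored at the separate, trusted honeychecker: index of the real password.
HONEYCHECKER = {"alice": 1}

def login(user, password):
    words = SWEETWORDS[user]
    if password not in words:
        return "wrong password"
    if words.index(password) != HONEYCHECKER[user]:
        return "ALARM: honeyword entered -> credential database breached"
    return "login ok"

print(login("alice", "sunset88"))   # the real password
print(login("alice", "tiger12"))    # a decoy: breach detected
```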

Incrementally Updateable Honey Password Vaults

Haibo Cheng, Wenting Li, and Ping Wang, Peking University; Chao-Hsien Chu, Pennsylvania State University; Kaitai Liang, Delft University of Technology

Available Media

Password vault applications allow a user to store multiple passwords in a vault and choose a master password to encrypt the vault. In practice, attackers may steal the storage file of the vault and further compromise all stored passwords by offline guessing the master password. Honey vaults have been proposed to address the threat. By producing plausible-looking decoy vaults for wrong master passwords, honey vaults force attackers to shift offline guessing to online verifications.

However, the existing honey vault schemes all suffer from intersection attacks in the multi-leakage case, where an old version of the storage file (e.g., a backup) is stolen along with the current version. The attacker can identify the decoys offline and completely break the schemes. We design a generic construction based on a multi-similar-password model and further propose an incremental update mechanism. With our mechanism, the attacker cannot gain any extra advantage from the old storage, and therefore degenerates to an attacker with knowledge of only the current version.

To further evaluate the security in the traditional single-leakage case, where only the current version is stolen, we investigate the theoretically optimal strategy for online verifications and propose practical attacks. Targeting the existing schemes, our attacks crack 33%-55% of real vaults with only a single online guess and achieve 85%-94% accuracy in distinguishing real vaults from decoys. In contrast, our design reduces the values of the two metrics to 2% and 58% (close to the ideal values of 0% and 50%), respectively. This indicates that attackers need to carry out 2.8x-7.5x as many online verifications to break our scheme.

Track 2

Private Computation and Differential Privacy

Session Chair: Tamara Bonaci, Northeastern University

Private Blocklist Lookups with Checklist

Dmitry Kogan, Stanford University; Henry Corrigan-Gibbs, MIT CSAIL

Available Media

This paper presents Checklist, a system for private blocklist lookups. In Checklist, a client can determine whether a particular string appears on a server-held blocklist of strings, without leaking its string to the server. Checklist is the first blocklist-lookup system that (1) leaks no information about the client's string to the server, (2) does not require the client to store the blocklist in its entirety, and (3) allows the server to respond to the client's query in time sublinear in the blocklist size. To make this possible, we construct a new two-server private-information-retrieval protocol that is both asymptotically and concretely faster, in terms of server-side time, than those of prior work. We evaluate Checklist in the context of Google's "Safe Browsing" blocklist, which all major browsers use to prevent web clients from visiting malware-hosting URLs. Today, lookups to this blocklist leak partial hashes of a subset of clients' visited URLs to Google's servers. We have modified Firefox to perform Safe-Browsing blocklist lookups via Checklist servers, which eliminates the leakage of partial URL hashes from the Firefox client to the blocklist servers. This privacy gain comes at the cost of increasing communication by a factor of 3.3×, and the server-side compute costs by 9.8×. Checklist reduces end-to-end server-side costs by 6.7×, compared to what would be possible with prior state-of-the-art two-server private information retrieval.
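
For readers new to the two-server setting, the textbook XOR-based PIR sketch below conveys the trust model; Checklist's protocol is far more efficient, and this sketch does not attempt to reproduce it:

```python
import secrets
from functools import reduce

DB = [secrets.token_bytes(16) for _ in range(32)]  # replicated at both servers

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(subset):
    """Each server XORs the records named in the (random-looking) subset."""
    return reduce(xor, (DB[j] for j in subset), bytes(16))

i = 7                                                    # index the client wants
S1 = {j for j in range(len(DB)) if secrets.randbits(1)}  # uniform random subset
S2 = S1 ^ {i}                                            # same subset, toggled at i
record = xor(server_answer(S1), server_answer(S2))       # everything else cancels
assert record == DB[i]
print("privately recovered record", i)
```

Each server sees only a uniformly random subset of indices, so neither learns which record the client wanted, provided the two servers do not collude.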

Identifying Harmful Media in End-to-End Encrypted Communication: Efficient Private Membership Computation

Anunay Kulshrestha and Jonathan Mayer, Princeton University

Available Media

End-to-end encryption (E2EE) poses a challenge for automated detection of harmful media, such as child sexual abuse material and extremist content. The predominant approach at present, perceptual hash matching, is not viable because in E2EE a communications service cannot access user content.

In this work, we explore the technical feasibility of privacy-preserving perceptual hash matching for E2EE services. We begin by formalizing the problem space and identifying fundamental limitations for protocols. Next, we evaluate the predictive performance of common perceptual hash functions to understand privacy risks to E2EE users and contextualize errors associated with the protocols we design.

Our primary contribution is a set of constructions for privacy-preserving perceptual hash matching. We design and evaluate client-side constructions for scenarios where disclosing the set of harmful hashes is acceptable. We then design and evaluate interactive protocols that optionally protect the hash set and do not disclose matches to users. The constructions that we propose are practical for deployment on mobile devices and introduce a limited additional risk of false negatives.
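
The (non-private) matching primitive that these protocols protect looks like the following sketch; the 64-bit hashes are invented placeholders, not outputs of a real perceptual hash function:

```python
HARMFUL_HASHES = {0xDEADBEEFCAFE1234, 0x0123456789ABCDEF}  # placeholder values
THRESHOLD = 8                                              # max differing bits

def hamming(a, b):
    return bin(a ^ b).count("1")

def is_match(query_hash):
    return any(hamming(query_hash, h) <= THRESHOLD for h in HARMFUL_HASHES)

print(is_match(0xDEADBEEFCAFE1230))   # near-duplicate (2 bits off): True
print(is_match(0x0000000000000000))   # unrelated media: False
```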

Fuzzy Labeled Private Set Intersection with Applications to Private Real-Time Biometric Search

Erkam Uzun, Simon P. Chung, Vladimir Kolesnikov, Alexandra Boldyreva, and Wenke Lee, Georgia Institute of Technology

Available Media

The explosive growth of biometrics use (e.g., in surveillance) poses a persistent challenge to keeping biometric data private without sacrificing applications' functionality. We consider private querying of a real-life biometric scan (e.g., a person's face) against a private biometric database. The querier learns only the label(s) of a matching scan(s) (e.g., a person's name), and the database server learns nothing. We formally define Fuzzy Labeled Private Set Intersection (FLPSI), a primitive computing the intersection of noisy input sets by considering closeness/similarity instead of equality. Our FLPSI protocol's communication is sublinear in the database size and is concretely efficient. We implement it and apply it to facial search by integrating it with our fine-tuned toolchain that maps face images into Hamming space. We have implemented and extensively tested our system, achieving high performance with concretely small network usage: for a 10K-row database, the query response time over WAN (resp. fast LAN) is 146ms (resp. 47ms), transferring 12.1MB; offline precomputation (with no communication) takes 0.94s. FLPSI scales well: for a 1M-row database, online time is 1.66s (WAN) and 1.46s (fast LAN) with 40.8MB of data transferred in the online phase, and 37.5s in offline precomputation. This improves on the state-of-the-art work (SANNS) by 9-25× (on WAN) and 1.2-4× (on fast LAN). Our false non-matching rate is 0.75% for at most 10 false matches over a 1M-row DB, which is comparable to the underlying plaintext matching algorithm.

PrivSyn: Differentially Private Data Synthesis

Zhikun Zhang, Zhejiang University and CISPA Helmholtz Center for Information Security; Tianhao Wang, Ninghui Li, and Jean Honorio, Purdue University; Michael Backes, CISPA Helmholtz Center for Information Security; Shibo He and Jiming Chen, Zhejiang University and Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; Yang Zhang, CISPA Helmholtz Center for Information Security

Available Media

In differential privacy (DP), a challenging problem is to generate synthetic datasets that efficiently capture the useful information in the private data. The synthetic dataset enables any task to be done without privacy concerns or modifications to existing algorithms. In this paper, we present PrivSyn, the first automatic synthetic data generation method that can handle general tabular datasets (with 100 attributes and domain size > 2^500). PrivSyn is composed of a new method to automatically and privately identify correlations in the data, and a novel method to generate sample data from a dense graphical model. We extensively evaluate different methods on multiple datasets to demonstrate the performance of our method.
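
One building block can be sketched simply: a differentially private marginal, obtained by adding calibrated Laplace noise to cell counts. This is a generic DP mechanism for illustration, not PrivSyn's marginal-selection machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 3, size=(1000, 2))  # 1,000 records, 2 ternary attributes

def dp_two_way_marginal(col_a, col_b, eps):
    """Two-way marginal with Laplace noise (add/remove-one sensitivity 1)."""
    counts = np.zeros((3, 3))
    for a, b in zip(col_a, col_b):
        counts[a, b] += 1
    noisy = counts + rng.laplace(0, 1.0 / eps, counts.shape)
    return np.clip(noisy, 0, None)         # post-processing keeps DP

print(dp_two_way_marginal(data[:, 0], data[:, 1], eps=1.0).round(1))
```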

Data Poisoning Attacks to Local Differential Privacy Protocols

Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong, Duke University

Available Media

Local Differential Privacy (LDP) protocols enable an untrusted data collector to perform privacy-preserving data analytics. In particular, each user locally perturbs its data to preserve privacy before sending it to the data collector, who aggregates the perturbed data to obtain statistics of interest. In the past several years, researchers from multiple communities—such as security, database, and theoretical computer science—have proposed many LDP protocols. These studies mainly focused on improving the utility of the LDP protocols. However, the security of LDP protocols is largely unexplored.

In this work, we aim to bridge this gap. We focus on LDP protocols for frequency estimation and heavy hitter identification, which are two basic data analytics tasks. Specifically, we show that an attacker can inject fake users into an LDP protocol and the fake users send carefully crafted data to the data collector such that the LDP protocol estimates high frequencies for arbitrary attacker-chosen items or identifies them as heavy hitters. We call our attacks data poisoning attacks. We theoretically and/or empirically show the effectiveness of our attacks. We also explore three countermeasures against our attacks. Our experimental results show that they can effectively defend against our attacks in some scenarios but have limited effectiveness in others, highlighting the need for new defenses against our attacks.
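
The attack surface can be seen in a toy frequency-estimation pipeline built on generalized randomized response: fake users simply skip the local perturbation and always report the target item, and the collector's unbiasing step amplifies their effect. Parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
k, eps = 10, 1.0
p = np.exp(eps) / (np.exp(eps) + k - 1)   # probability of reporting the truth
q = (1 - p) / (k - 1)

def grr(item):
    """Generalized randomized response: an honest user's local perturbation."""
    if rng.random() < p:
        return item
    return rng.choice([j for j in range(k) if j != item])

honest = [grr(v) for v in rng.integers(0, k, 10000)]  # roughly uniform data
fake = [3] * 500                                      # poisoners always report 3
reports = np.bincount(np.array(honest + fake), minlength=k)
n = len(honest) + len(fake)
estimate = (reports / n - q) / (p - q)                # collector's unbiasing step
print("item 3 frequency estimate:", estimate[3].round(3))  # ~0.4 vs. true ~0.1
```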

How to Make Private Distributed Cardinality Estimation Practical, and Get Differential Privacy for Free

Changhui Hu, Newcastle University; Jin Li, Guangzhou University; Zheli Liu, Xiaojie Guo, Yu Wei, and Xuan Guang, Nankai University; Grigorios Loukides, King's College London; Changyu Dong, Newcastle University

Available Media

Secure computation is a promising privacy enhancing technology, but it is often not scalable enough for data intensive applications. On the other hand, the use of sketches has gained popularity in data mining, because sketches often give rise to highly efficient and scalable sub-linear algorithms. It is natural to ask: what if we put secure computation and sketches together? We investigated the question and the findings are interesting: we can get security, we can get scalability, and somewhat unexpectedly, we can also get differential privacy for free. Our study started from building a secure computation protocol based on the Flajolet-Martin (FM) sketches, for solving the Private Distributed Cardinality Estimation (PDCE) problem, which is a fundamental problem with applications ranging from crowd tracking to network monitoring. The state-of-the-art protocol for PDCE (Fenske et al., CCS '17) is computationally expensive and not scalable enough to cope with big data applications, which prompted us to design a better protocol. Our further analysis revealed that if the cardinality to be estimated is large enough, our protocol achieves (ε, δ)-differential privacy automatically, without requiring any additional manipulation of the output. The result signifies a new approach for achieving differential privacy that departs from the mainstream approach (i.e., adding noise to the result). Free differential privacy can be achieved for two reasons: secure computation minimizes information leakage, and the intrinsic estimation variance of the FM sketch makes the output of our protocol uncertain. We further show that the result is not just theoretical: the minimal cardinality for differential privacy to hold is only 10^2–10^4 for typical parameters.
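
The underlying primitive is the plain Flajolet-Martin sketch, shown below without any secure-computation layer; its intrinsic estimation variance (a single sketch is coarse, so deployments average many) is precisely the uncertainty the paper leverages:

```python
import hashlib

def rho(x):
    """Position of the lowest set bit of a hash of x (0-indexed)."""
    h = int.from_bytes(hashlib.sha256(x.encode()).digest(), "big")
    return (h & -h).bit_length() - 1

def fm_estimate(elements):
    seen = {rho(e) for e in set(elements)}
    r = next(i for i in range(256) if i not in seen)  # lowest unseen position
    return 2 ** r / 0.77351                           # FM bias correction

items = [f"user-{i}" for i in range(5000)]
print("true 5000, single-sketch estimate:", int(fm_estimate(items)))
```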

Locally Differentially Private Analysis of Graph Statistics

Jacob Imola, UC San Diego; Takao Murakami, AIST; Kamalika Chaudhuri, UC San Diego

Available Media

Differentially private analysis of graphs is widely used for releasing statistics from sensitive graphs while still preserving user privacy. Most existing algorithms, however, are in a centralized privacy model, where a trusted data curator holds the entire graph. As this model raises a number of privacy and security issues – such as the trustworthiness of the curator and the possibility of data breaches – it is desirable to consider algorithms in a more decentralized local model, where no server holds the entire graph.

In this work, we consider a local model, and present algorithms for counting subgraphs – a fundamental task for analyzing the connection patterns in a graph – with LDP (Local Differential Privacy). For triangle counts, we present algorithms that use one and two rounds of interaction, and show that an additional round can significantly improve the utility. For k-star counts, we present an algorithm that achieves an order-optimal estimation error in the non-interactive local model. We provide new lower bounds on the estimation error for general graph statistics, including triangle counts and k-star counts. Finally, we perform extensive experiments on two real datasets and show that it is indeed possible to accurately estimate subgraph counts in the local differential privacy model.

Track 3

Hardware Security

Session Chairs: Jon McCune, Google, and Kevin T. Kornegay, Morgan State University

SMASH: Synchronized Many-sided Rowhammer Attacks from JavaScript

Finn de Ridder, ETH Zurich and VU Amsterdam; Pietro Frigo, Emanuele Vannacci, Herbert Bos, and Cristiano Giuffrida, VU Amsterdam; Kaveh Razavi, ETH Zurich

Available Media

Despite their in-DRAM Target Row Refresh (TRR) mitigations, some of the most recent DDR4 modules are still vulnerable to many-sided Rowhammer bit flips. While these bit flips are exploitable from native code, triggering them in the browser from JavaScript faces three nontrivial challenges. First, given the lack of cache flushing instructions in JavaScript, existing eviction-based Rowhammer attacks are already slow for the older single- or double-sided variants and thus not always effective. With many-sided Rowhammer, mounting effective attacks is even more challenging, as it requires the eviction of many different aggressor addresses from the CPU caches. Second, the most effective many-sided variants, known as n-sided, require large physically-contiguous memory regions which are not available in JavaScript. Finally, as we show for the first time, eviction-based Rowhammer attacks require proper synchronization to bypass in-DRAM TRR mitigations.

Using a number of novel insights, we overcome these challenges to build SMASH (Synchronized MAny-Sided Hammering), a technique to successfully trigger Rowhammer bit flips from JavaScript on modern DDR4 systems. To mount effective attacks, SMASH exploits high-level knowledge of cache replacement policies to generate optimal access patterns for eviction-based many-sided Rowhammer. To lift the requirement for large physically-contiguous memory regions, SMASH decomposes n-sided Rowhammer into multiple double-sided pairs, which we can identify using slice coloring. Finally, to bypass the in-DRAM TRR mitigations, SMASH carefully schedules cache hits and misses to successfully trigger synchronized many-sided Rowhammer bit flips. We showcase SMASH with an end-to-end JavaScript exploit which can fully compromise the Firefox browser in 15 minutes on average.

Database Reconstruction from Noisy Volumes: A Cache Side-Channel Attack on SQLite

Aria Shahverdi, University of Maryland; Mahammad Shirinov, Bilkent University; Dana Dachman-Soled, University of Maryland

Available Media

We demonstrate the feasibility of database reconstruction under a cache side-channel attack on SQLite. Specifically, we present a Flush+Reload attack on SQLite that obtains approximate (or "noisy") volumes of range queries made to a private database. We then present several algorithms that, taken together, reconstruct nearly the exact database in varied experimental conditions, given these approximate volumes. Our reconstruction algorithms employ novel techniques for the approximate/noisy setting, including a noise-tolerant clique-finding algorithm, a "Match & Extend" algorithm for extrapolating volumes that are omitted from the clique, and a "Noise Reduction Step" that makes use of a closest vector problem (CVP) solver to improve the overall accuracy of the reconstructed database. The time complexity of our attacks grows quickly with the size of the range of the queried attribute, but scales well to large databases. Experimental results show that we can reconstruct databases of size 100,000 and ranges of size 12 with an error percentage of 0.11% in under 12 hours on a personal laptop.

PTAuth: Temporal Memory Safety via Robust Points-to Authentication

Reza Mirzazade Farkhani, Mansour Ahmadi, and Long Lu, Northeastern University

Available Media

Temporal memory corruptions are commonly exploited software vulnerabilities that can lead to powerful attacks. Despite significant progress made by decades of research on mitigation techniques, existing countermeasures fall short due to either limited coverage or overly high overhead. Furthermore, they require external mechanisms (e.g., spatial memory safety) to protect their metadata. Otherwise, their protection can be bypassed or disabled.

To address these limitations, we present robust points-to authentication, a novel runtime scheme for detecting all kinds of temporal memory corruptions. We built a prototype system, called PTAuth, that realizes this scheme on ARM architectures. PTAuth contains a customized compiler for code analysis and instrumentation and a runtime library for performing the points-to authentication as a protected program runs. PTAuth leverages the Pointer Authentication Code (PAC) feature, provided by ARMv8.3 and later CPUs, which serves as a simple hardware-based encryption primitive. PTAuth uses minimal in-memory metadata and protects its metadata without requiring spatial memory safety. We report our evaluation of PTAuth in terms of security, robustness, and performance using 150 vulnerable programs from the Juliet test suite and the SPEC CPU2006 benchmarks. PTAuth detects all three categories of heap-based temporal memory corruption, generates zero false alerts, and slows down program execution by 26% (this number was measured with software-emulated PAC; it is expected to decrease to 20% when using hardware-based PAC). We also show that PTAuth incurs 2% memory overhead thanks to its efficient use of metadata.
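
A toy model of the points-to authentication idea (ARM PAC is a hardware primitive; an HMAC and a Python dict stand in for it and for PTAuth's metadata here):

```python
import hashlib
import hmac
import itertools

KEY, fresh_ids = b"per-boot pac key", itertools.count(1)
live = {}                                    # address -> ID of the live object

def pac(addr, obj_id):
    """Stand-in for ARM PAC: a short MAC binding a pointer to an object ID."""
    return hmac.new(KEY, f"{addr}:{obj_id}".encode(), hashlib.sha256).digest()[:8]

def alloc(addr):
    obj_id = next(fresh_ids)
    live[addr] = obj_id
    return (addr, obj_id, pac(addr, obj_id))  # an "authenticated pointer"

def free(addr):
    del live[addr]                            # retire the object's ID

def deref(ptr):
    addr, obj_id, mac = ptr
    ok = live.get(addr) == obj_id and hmac.compare_digest(mac, pac(addr, obj_id))
    return "access ok" if ok else "temporal-safety violation trapped"

p = alloc(0x1000)
print(deref(p))     # access ok
free(0x1000)
print(deref(p))     # dangling pointer: use-after-free caught
```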

Does logic locking work with EDA tools?

Zhaokun Han, Muhammad Yasin, and Jeyavijayan (JV) Rajendran, Texas A&M University

Available Media

Logic locking is a promising solution against emerging hardware security threats, which entails protecting a Boolean circuit using a "keying" mechanism. The latest and hitherto unbroken logic-locking techniques are based on the "corrupt-and-correct (CAC)" principle, offering provable security against input-output query attacks. However, it remains unclear whether these techniques are susceptible to structural attacks. This paper exploits the properties of integrated circuit (IC) design tools, also termed electronic design automation (EDA) tools, to undermine the security of the CAC techniques. Our proposed attack can break all the CAC techniques, including the unbroken CACrem technique that 40+ hackers taking part in a competition for more than three months could not break. Our attack can break circuits processed with any EDA tools, which is alarming because, until now, no EDA tool can render a secure locking solution: logic locking cannot make use of the existing EDA tools. We also provide a security property to ensure resilience against structural attacks. Commonly used circuits can satisfy this property, but only in a few cases, where they cannot even defeat brute force; thus, questions arise about the use of these circuits as benchmarks to evaluate logic locking and other security techniques.

CURE: A Security Architecture with CUstomizable and Resilient Enclaves

Raad Bahmani, Ferdinand Brasser, Ghada Dessouky, Patrick Jauernig, Matthias Klimmek, Ahmad-Reza Sadeghi, and Emmanuel Stapf, Technische Universität Darmstadt

Available Media

Security architectures providing Trusted Execution Environments (TEEs) have been an appealing research subject for a wide range of computer systems, from low-end embedded devices to powerful cloud servers. The goal of these architectures is to protect sensitive services in isolated execution contexts, called enclaves. Unfortunately, existing TEE solutions suffer from significant design shortcomings. First, they follow a one-size-fits-all approach offering only a single enclave type; however, different services need flexible enclaves that can adjust to their demands. Second, they cannot efficiently support emerging applications (e.g., Machine Learning as a Service), which require secure channels to peripherals (e.g., accelerators), or the computational power of multiple cores. Third, their protection against cache side-channel attacks is either an afterthought or impractical, i.e., no fine-grained mapping between cache resources and individual enclaves is provided.

In this work, we propose CURE, the first security architecture, which tackles these design challenges by providing different types of enclaves: (i) sub-space enclaves provide vertical isolation at all execution privilege levels, (ii) user-space enclaves provide isolated execution to unprivileged applications, and (iii) self-contained enclaves allow isolated execution environments that span multiple privilege levels. Moreover, CURE enables the exclusive assignment of system resources, e.g., peripherals, CPU cores, or cache resources to single enclaves. CURE requires minimal hardware changes while significantly improving the state of the art of hardware-assisted security architectures. We implemented CURE on a RISC-V-based SoC and thoroughly evaluated our prototype in terms of hardware and performance overhead. CURE imposes a geometric mean performance overhead of 15.33% on standard benchmarks.

DICE*: A Formally Verified Implementation of DICE Measured Boot

Zhe Tao, University of California, Davis; Aseem Rastogi, Naman Gupta, and Kapil Vaswani, Microsoft Research; Aditya V. Thakur, University of California, Davis

Available Media

Measured boot is an important class of boot protocols that ensure that each layer of firmware and software in a device's chain of trust is measured, and the measurements are reliably recorded for subsequent verification. This paper presents DICE*, a formal specification as well as a formally verified implementation of DICE, an industry standard measured boot protocol. DICE* is proved to be functionally correct, memory-safe, and resistant to timing- and cache-based side-channels. A key component of DICE* is a verified certificate creation library for a fragment of X.509. We have integrated DICE* into the boot firmware of an STM32H753ZI micro-controller. Our evaluation shows that using a fully verified implementation has minimal to no effect on the code size and boot time when compared to an existing unverified implementation.
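
The layering that measured boot enforces can be sketched as follows, with HMAC standing in for the standard's key-derivation details: each layer's Compound Device Identifier (CDI) is derived from the previous layer's secret and the measurement of the next layer, so any firmware change yields an unrelated downstream identity.

```python
import hashlib
import hmac

UDS = b"\x42" * 32   # Unique Device Secret, fused into hardware

def next_cdi(secret, firmware):
    """Derive the next layer's Compound Device Identifier (CDI)."""
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(secret, measurement, hashlib.sha256).digest()

cdi_l0 = next_cdi(UDS, b"L0 firmware v1")
cdi_l1 = next_cdi(cdi_l0, b"L1 firmware v1")
evil = next_cdi(next_cdi(UDS, b"L0 firmware v1 (tampered)"), b"L1 firmware v1")
print(cdi_l1.hex()[:16], "!=", evil.hex()[:16])   # identities diverge
```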

PEARL: Plausibly Deniable Flash Translation Layer using WOM coding

Chen Chen, Anrin Chakraborti, and Radu Sion, Stony Brook University

Available Media

When adversaries are powerful enough to coerce users to reveal encryption keys, encryption alone becomes insufficient for data protection. Plausible deniability (PD) mechanisms resolve this by enabling users to hide the mere existence of sensitive data, often by providing plausible "cover texts" or "public data volumes" hosted on the same device.

Unfortunately, with the increasing prevalence of (NAND) flash as a high-performance, cost-effective storage medium, PD becomes even more challenging in the presence of realistic adversaries who can usually access a device at multiple points in time ("multi-snapshot"). This is because read/write operations to flash do not result in intuitive corresponding changes to the underlying device state. The problem is further compounded by the fact that this behavior is mostly proprietary. For example, in a majority of commercially available flash devices, an issued delete or overwrite operation from the upper layers will almost certainly not result in an actual immediate erase of the underlying flash cells.

To address these challenges, we designed a new class of write-once memory (WOM) codes to store hidden bits in the same physical locations as other public bits. This is made possible by the inherent nature of NAND flash and the possibility of issuing multiple writes to target cells that have not previously been written to in existing pages.

We designed PEARL, a general-purpose Flash Translation Layer (FTL) that allows users to plausibly deniably store hidden data in NAND flash devices. We implemented and evaluated PEARL on FlashSim, a widely used simulator. PEARL performs well on real-world workloads, comparably to non-PD baselines. PEARL is the first system that achieves strong plausible deniability for NAND flash devices, secure against realistic multi-snapshot adversaries.
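
For intuition, below is the classic Rivest-Shamir write-once-memory code, which packs two writes of a 2-bit value into three cells whose bits may only flip 0 to 1 between erases; PEARL's own codes additionally hide a covert payload, which this sketch does not attempt:

```python
FIRST = {"00": "000", "01": "001", "10": "010", "11": "100"}
SECOND = {v: "".join("1" if b == "0" else "0" for b in c)   # bitwise complement
          for v, c in FIRST.items()}
DECODE_FIRST = {c: v for v, c in FIRST.items()}

def decode(cells):
    if cells.count("1") <= 1:
        return DECODE_FIRST[cells]
    return DECODE_FIRST["".join("1" if b == "0" else "0" for b in cells)]

def write(cells, value):
    """Program `value`; flash cells may only flip 0 -> 1 between erases."""
    if decode(cells) == value:
        return cells
    target = FIRST[value] if cells == "000" else SECOND[value]
    assert all(c <= t for c, t in zip(cells, target)), "third write needs erase"
    return target

cells = "000"
cells = write(cells, "10"); print(cells, "->", decode(cells))   # 010 -> 10
cells = write(cells, "11"); print(cells, "->", decode(cells))   # 011 -> 11
```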

1:30 pm–1:45 pm

Break

1:45 pm–3:30 pm

Track 1

Usable Security and Privacy: Institutional Perspectives

Session Chairs: Mary Ellen Zurko, MIT Lincoln Laboratory, and Daniel Votipka, Tufts University

Examining the Efficacy of Decoy-based and Psychological Cyber Deception

Kimberly J. Ferguson-Walter, Laboratory for Advanced Cybersecurity Research; Maxine M. Major, Naval Information Warfare Center, Pacific; Chelsea K. Johnson, Arizona State University; Daniel H. Muhleman, Naval Information Warfare Center, Pacific

Available Media

The threat of cyber attacks is a growing concern across the world, leading to an increasing need for sophisticated cyber defense techniques. Attackers often rely on direct observation of cyber environments. This reliance provides opportunities for defenders to affect attacker perception and behavior by plying the powerful tools of defensive cyber deception. In this paper we analyze data from a controlled experiment designed to understand how defensive deception, both cyber and psychological, affects attackers [16]. Over 130 professional red teamers participated in a network penetration test in which both the presence and explicit mention of deceptive defensive techniques were controlled. While a detailed description of the experimental design and execution along with preliminary results related to red teamer characteristics has been published, it did not address any of the main hypotheses. Granted access to the cyber and self-report data collected from the experiment, this publication begins to address these hypotheses by investigating the effectiveness of decoy systems for cyber defense through comparison of various measures of participant forward progress across the four experimental conditions. Results presented in this paper support a new finding that the combination of the presence of decoys and providing information that deception is present has the greatest impact on cyber attack behavior, when compared to a control condition in which no deception was used.

Helping Users Automatically Find and Manage Sensitive, Expendable Files in Cloud Storage

Mohammad Taha Khan, University of Illinois at Chicago / Washington & Lee University; Christopher Tran and Shubham Singh, University of Illinois at Chicago; Dimitri Vasilkov, University of Chicago; Chris Kanich, University of Illinois at Chicago; Blase Ur, University of Chicago; Elena Zheleva, University of Illinois at Chicago

Available Media

With the ubiquity of data breaches, forgotten-about files stored in the cloud create latent privacy risks. We take a holistic approach to help users identify sensitive, unwanted files in cloud storage. We first conducted 17 qualitative interviews to characterize factors that make humans perceive a file as sensitive, useful, and worthy of either protection or deletion. Building on our findings, we conducted a primarily quantitative online study. We showed 108 long-term users of Google Drive or Dropbox a selection of files from their accounts. They labeled and explained these files' sensitivity, usefulness, and desired management (whether they wanted to keep, delete, or protect them). For each file, we collected many metadata and content features, building a training dataset of 3,525 labeled files. We then built Aletheia, which predicts a file's perceived sensitivity and usefulness, as well as its desired management. Aletheia improves over state-of-the-art baselines by 26% to 159%, predicting users' desired file-management decisions with 79% accuracy. Notably, predicting subjective perceptions of usefulness and sensitivity led to a 10% absolute accuracy improvement in predicting desired file-management decisions. Aletheia's performance validates a human-centric approach to feature selection when using inference techniques on subjective security-related tasks. It also improves upon the state of the art in minimizing the attack surface of cloud accounts.

Adapting Security Warnings to Counter Online Disinformation

Ben Kaiser, Jerry Wei, Eli Lucherini, and Kevin Lee, Princeton University; J. Nathan Matias, Cornell University; Jonathan Mayer, Princeton University

Available Media

Disinformation is proliferating on the internet, and platforms are responding by attaching warnings to content. There is little evidence, however, that these warnings help users identify or avoid disinformation. In this work, we adapt methods and results from the information security warning literature in order to design and evaluate effective disinformation warnings.

In an initial laboratory study, we used a simulated search task to examine contextual and interstitial disinformation warning designs. We found that users routinely ignore contextual warnings, but users notice interstitial warnings and respond by seeking information from alternative sources.

We then conducted a follow-on crowdworker study with eight interstitial warning designs. We confirmed a significant impact on user information-seeking behavior, and we found that a warning's design could effectively inform users or convey a risk of harm. We also found, however, that neither user comprehension nor fear of harm moderated behavioral effects.

Our work provides evidence that disinformation warnings can, when designed well, help users identify and avoid disinformation. We show a path forward for designing effective warnings, and we contribute repeatable methods for evaluating behavioral effects. We also surface a possible dilemma: disinformation warnings might be able to inform users and guide behavior, but the behavioral effects might result from user experience friction, not informed decision making.

"Why wouldn't someone think of democracy as a target?": Security practices & challenges of people involved with U.S. political campaigns

Sunny Consolvo, Patrick Gage Kelley, Tara Matthews, Kurt Thomas, Lee Dunn, and Elie Bursztein, Google

Distinguished Paper Award Winner

Available Media

People who are involved with political campaigns face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. Improving political campaign security is a vital part of protecting democracy. To identify campaign security issues, we conducted qualitative research with 28 participants across the U.S. political spectrum to understand the digital security practices, challenges, and perceptions of people involved in campaigns. A main, overarching finding is that a unique combination of threats, constraints, and work culture leads people involved with political campaigns to use technologies from across platforms and domains in ways that leave them, and democracy, vulnerable to security attacks. Sensitive data was kept in a plethora of personal and work accounts, with ad hoc adoption of strong passwords, two-factor authentication, encryption, and access controls. No individual company, committee, organization, campaign, or academic institution can solve the identified problems on their own. To this end, we provide an initial understanding of this complex problem space and recommendations for how a diverse group of experts can begin working together to improve security for political campaigns.

Security Obstacles and Motivations for Small Businesses from a CISO's Perspective

Flynn Wolf, University of Maryland, Baltimore County; Adam J. Aviv, The George Washington University; Ravi Kuber, University of Maryland, Baltimore County

Available Media

Small businesses (SBs) are often ill-informed and under-resourced against increasing online threats. Chief Information Security Officers (CISOs) have a key role in contextualizing trade-offs between competing costs and priorities for SB management. To explore the challenges CISOs face when guiding SBs towards improved security, we conducted two interview studies: first, an exploratory study with CISOs with SB experience to identify themes related to their work (n=8); second, using refined methods, broader structured interviews with a larger, non-overlapping group of similarly qualified SB CISOs (n=19) to validate those themes and extend the outcomes. We found that CISOs confirmed common observations that SBs are generally unprepared for online threats and uninformed about issues such as insurance and regulation. We also found that, despite perceived usability problems with language and formatting, the effectiveness of government-authored guidance (a key reference source for CISOs and SBs) was deemed on par with commercial resources. These observations yield recommendations for better formatting, prioritizing, and timing of security guidance for SBs, such as better-tailored checklists, investment suggestions, and scenario-based exercises.

Strategies and Perceived Risks of Sending Sensitive Documents

Noel Warford, University of Maryland; Collins W. Munyendo, The George Washington University; Ashna Mediratta, University of Maryland; Adam J. Aviv, The George Washington University; Michelle L. Mazurek, University of Maryland

Available Media

People are frequently required to send documents, forms, or other materials containing sensitive data (e.g., personal information, medical records, financial data) to remote parties, sometimes without a formal procedure to do so securely. The specific transmission mechanisms end up relying on the knowledge and preferences of the parties involved. Through two online surveys (n=60 and n=250), we explore the various methods used to transmit sensitive documents, as well as the perceived risk and satisfaction with those methods. We find that users are more likely to recognize risk to data at rest after receipt (but not at the sender, namely, themselves). When not using an online portal provided by the recipient, participants primarily envision transmitting sensitive documents in person or via email, and have little experience using secure, privacy-preserving alternatives. Despite recognizing general risks, participants express high privacy satisfaction and convenience with actually experienced situations. These results suggest opportunities to design new solutions that promote securely sending sensitive materials, perhaps as new utilities within standard email workflows.

A Large-Scale Interview Study on Information Security in and Attacks against Small and Medium-sized Enterprises

Nicolas Huaman, Leibniz University Hannover and CISPA Helmholtz Center for Information Security; Bennet von Skarczinski, PwC Germany; Christian Stransky and Dominik Wermke, Leibniz University Hannover; Yasemin Acar, Leibniz University Hannover and Max Planck Institute for Security and Privacy; Arne Dreißigacker, Criminological Research Institute of Lower Saxony; Sascha Fahl, Leibniz University Hannover and CISPA Helmholtz Center for Information Security

Available Media

Cybercrime is on the rise. Attacks by hackers, organized crime, and nation-state adversaries are an economic threat for companies worldwide. Small and medium-sized enterprises (SMEs) have increasingly become victims of cyberattacks in recent years. SMEs often lack the awareness and resources to deploy extensive information security measures. However, the health of SMEs is critical for society: for example, in Germany, 38.8% of all employees work in SMEs, which contributed 31.9% of the German annual gross domestic product in 2018. Many guidelines and recommendations encourage companies to invest more in their information security measures. However, there is a lack of understanding of the adoption of security measures in SMEs, their risk perception with regard to cybercrime, and their experiences with cyberattacks. To address this gap in research, we performed 5,000 computer-assisted telephone interviews (CATIs) with representatives of SMEs in Germany. We report on their experiences with cybercrime, management of information security, and risk perception. We present and discuss empirical results of the adoption of both technical and organizational security measures and risk awareness in SMEs. We find that many technical security measures and basic awareness have been deployed in the majority of companies. We uncover differences in the reporting of cybercrime incidents by SMEs based on their industry sector, company size, and security awareness. We conclude our work with a discussion of recommendations for future research, industry, and policy makers.

Track 2

Cryptocurrencies and Smart Contracts

Session Chairs: Arthur Gervais, Imperial College London, and Shaanan Cohney, Princeton University and University of Melbourne

On the Routing-Aware Peering against Network-Eclipse Attacks in Bitcoin

Muoi Tran and Akshaye Shenoi, National University of Singapore; Min Suk Kang, KAIST

Available Media

Safeguarding blockchain peer-to-peer (P2P) networks is more critical than ever in light of recent network attacks. Bitcoin has successfully withstood traditional Sybil and eclipse attacks; however, the recent Erebus attack [Tran et al., IEEE S&P'20] shows that a Bitcoin node can be effectively eclipsed when the attack is combined with a network-Sybil capability, i.e., when a malicious transit network can create millions or more Sybil identities. Given the immediate availability and stealthiness of the Erebus attack, Bitcoin Core has quickly implemented a few simple protocol/parameter changes to mitigate it. Our large-scale evaluations of these quick patches and three similar carefully designed protocol tweaks confirm that, unfortunately, no simple solution can effectively handle the attack. This paper focuses on a more fundamental solution called routing-aware peering (or RAP), a proven silver bullet for detecting and circumventing similar network adversaries in other P2P networks. However, we show that, contrary to our expectation, preventing Erebus attacks with RAP is only wishful thinking. We discover that Erebus adversaries can exploit a tiny portion of route inference errors in any RAP implementation, which gives an asymmetric advantage to the network adversaries and renders all RAP approaches ineffective. To this end, we propose an integrated defense framework that composes the available simple protocol tweaks and RAP implementation. In particular, we show that a highly customizable defense profile is required for individual Bitcoin nodes because RAP's efficacy depends significantly on where a Bitcoin node is located in the Internet topology. We present an algorithm that outputs a custom optimal defense profile that prevents most Erebus attacks from the top 100 largest transit networks.

EOSAFE: Security Analysis of EOSIO Smart Contracts

Ningyu He, Key Lab on HCST (MOE), Peking University; Ruiyi Zhang, PeckShield, Inc.; Haoyu Wang, Beijing University of Posts and Telecommunications; Lei Wu, Zhejiang University; Xiapu Luo, The Hong Kong Polytechnic University; Yao Guo, Key Lab on HCST (MOE), Peking University; Ting Yu, Qatar Computing Research Institute; Xuxian Jiang, PeckShield, Inc.

Available Media

The EOSIO blockchain, one of the representative Delegated Proof-of-Stake (DPoS) blockchain platforms, has grown rapidly in recent years. Meanwhile, a number of vulnerabilities and high-profile attacks against top EOSIO DApps and their smart contracts have been discovered and observed in the wild, resulting in serious financial damage. Most EOSIO smart contracts are not open-sourced and are typically compiled to WebAssembly (Wasm) bytecode, making it challenging to analyze and detect the presence of possible vulnerabilities. In this paper, we propose EOSAFE, the first static analysis framework that can be used to automatically detect vulnerabilities in EOSIO smart contracts at the bytecode level. Our framework includes a practical symbolic execution engine for Wasm, a customized library emulator for EOSIO smart contracts, and four heuristic-driven detectors to identify the presence of the four most popular vulnerabilities in EOSIO smart contracts. Experiments show that EOSAFE achieves promising results in detecting vulnerabilities, with an F1-measure of 98%. We have applied EOSAFE to all 53,666 active smart contracts in the ecosystem (as of November 15, 2019). Our results show that over 25% of the smart contracts are labeled vulnerable. We further analyze possible exploitation attempts on these vulnerable smart contracts and identify 48 in-the-wild attacks (27 of them have been confirmed by DApp developers), which have resulted in a financial loss of at least 1.7 million USD.

EVMPatch: Timely and Automated Patching of Ethereum Smart Contracts

Michael Rodler, University of Duisburg-Essen; Wenting Li and Ghassan O. Karame, NEC Laboratories Europe; Lucas Davi, University of Duisburg-Essen

Available Media

Recent attacks exploiting errors in smart contract code have had devastating consequences, calling into question the benefits of this technology. It is currently highly challenging to fix errors and deploy a patched contract in time. Instant patching is especially important since smart contracts are always online due to the distributed nature of blockchain systems. They also manage considerable amounts of assets, which are at risk and often beyond recovery after an attack. Existing solutions to upgrade smart contracts depend on manual and error-prone processes. This paper presents a framework, called EVMPatch, to instantly and automatically patch faulty smart contracts. EVMPatch features a bytecode rewriting engine for the popular Ethereum blockchain, and transparently and automatically rewrites common off-the-shelf contracts into upgradable contracts. The proof-of-concept implementation of EVMPatch automatically hardens smart contracts that are vulnerable to integer over/underflows and access control errors, but can be easily extended to cover more bug classes. Our evaluation on 14,000 real-world contracts demonstrates that our approach successfully blocks attack transactions launched on contracts, while keeping the intended functionality of the contract intact. We perform a study with experienced software developers, showing that EVMPatch is practical and reduces the time for converting a given Solidity smart contract to an upgradable contract by 97.6%, while ensuring functional equivalence to the original contract.
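
The kind of runtime guard that such bytecode rewriting introduces can be illustrated with a small sketch. The following is our illustration in Python, not EVMPatch's actual rewriter: it mimics the checked-arithmetic semantics a patcher could enforce before a 256-bit EVM ADD, whereas plain EVM semantics wrap silently.

```python
# A minimal sketch (not EVMPatch itself) of the checked-arithmetic guard an
# automated patcher could insert before an EVM ADD on 256-bit unsigned values.

UINT256_MAX = 2**256 - 1

def checked_add_u256(a: int, b: int) -> int:
    """Return a + b, reverting (raising) on uint256 overflow."""
    assert 0 <= a <= UINT256_MAX and 0 <= b <= UINT256_MAX
    result = a + b
    if result > UINT256_MAX:
        # In EVM terms this would be a REVERT, aborting the attack transaction.
        raise OverflowError("uint256 addition overflow; transaction reverted")
    return result

# Unpatched EVM semantics would silently wrap around:
wrapped = (UINT256_MAX + 1) % 2**256   # == 0: silent wrap-around
print("unpatched result:", wrapped)

# The patched version blocks the overflow instead:
try:
    checked_add_u256(UINT256_MAX, 1)
except OverflowError as e:
    print(e)
```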

Evil Under the Sun: Understanding and Discovering Attacks on Ethereum Decentralized Applications

Liya Su, Indiana University Bloomington; Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Xinyue Shen, Indiana University Bloomington and Alibaba Group; Xiangyu Du, Indiana University Bloomington; Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Xiaojing Liao, XiaoFeng Wang, and Luyi Xing, Indiana University Bloomington; Baoxu Liu, Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Available Media

The popularity of Ethereum decentralized applications (Dapps) also brings new security risks: it has been reported that these Dapps have been under various kinds of attacks from cybercriminals seeking profit. To the best of our knowledge, little has been done so far to understand this new cybercrime, in terms of its scope, criminal footprints, and attack operational intents, not to mention any efforts to investigate these attack incidents automatically on a large scale. In this paper, we performed the first measurement study on real-world Dapp attack instances to recover critical threat intelligence (e.g., kill chains and attack patterns). Utilizing such threat intelligence, we proposed DEFIER, the first technique to automatically investigate attack incidents on a large scale. Running DEFIER on 2.3 million transactions from 104 Ethereum on-chain Dapps, we were able to identify 476,342 exploit transactions on 85 target Dapps, which relate to 75 zero-day victim Dapps and 17K previously unknown attacker EOAs. To the best of our knowledge, this is the largest dataset of Ethereum on-chain Dapp attack incidents ever reported.

Smart Contract Vulnerabilities: Vulnerable Does Not Imply Exploited

Daniel Perez and Benjamin Livshits, Imperial College London

Available Media

In recent years, we have seen a great deal of both academic and practical interest in the topic of vulnerabilities in smart contracts, particularly those developed for the Ethereum blockchain. While most of the work has focused on detecting vulnerable contracts, in this paper, we focus on determining how many of these vulnerable contracts have actually been exploited. We survey the 23,327 vulnerable contracts reported by six recent academic projects and find that, despite the amounts at stake, only 1.98% of them have been exploited since deployment. This corresponds to at most 8,487 ETH (~1.7 million USD), or only 0.27% of the 3 million ETH (600 million USD) at stake. We explain these results by demonstrating that the funds are heavily concentrated in a small number of contracts that are not exploitable in practice.

Frontrunner Jones and the Raiders of the Dark Forest: An Empirical Study of Frontrunning on the Ethereum Blockchain

Christof Ferreira Torres, SnT, University of Luxembourg; Ramiro Camino, Luxembourg Institute of Science and Technology; Radu State, SnT, University of Luxembourg

Available Media

Ethereum has fostered a plethora of smart contract applications, ranging from gambling games to decentralized finance. However, Ethereum is also considered a highly adversarial environment, where vulnerable smart contracts will eventually be exploited. Recently, Ethereum's pool of pending transactions has become a far more aggressive environment. In the hope of making a profit, attackers continuously monitor the transaction pool and try to frontrun their victims' transactions by either displacing or suppressing them, or by strategically inserting their own transactions. This paper aims to shed light on what is known as a dark forest and uncover these predators' actions. We present a methodology to efficiently measure the three types of frontrunning: displacement, insertion, and suppression. We perform a large-scale analysis on more than 11M blocks and identify almost 200K attacks with an accumulated profit of 18.41M USD for the attackers, providing evidence that frontrunning is both lucrative and prevalent.
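
To make the insertion pattern concrete, here is a minimal detection heuristic of our own devising (the paper's actual methodology is more elaborate): within one block, an attacker brackets a victim transaction with two of their own, bidding a higher gas price to run first and a lower one to run last.

```python
# A toy heuristic (our illustration, not the paper's detector) for "insertion"
# frontrunning, also known as a sandwich attack: one attacker transaction just
# before the victim's and one just after, all targeting the same contract.

from collections import namedtuple

Tx = namedtuple("Tx", "sender to gas_price")

def find_insertion_attacks(block_txs):
    """Scan the ordered transactions of one block for sandwich patterns."""
    attacks = []
    for i in range(len(block_txs) - 2):
        front, victim, back = block_txs[i:i + 3]
        if (front.sender == back.sender != victim.sender
                and front.to == victim.to == back.to
                and front.gas_price > victim.gas_price > back.gas_price):
            attacks.append((front, victim, back))
    return attacks

block = [
    Tx("0xattacker", "0xdex", 120),  # buys first (higher gas price)
    Tx("0xvictim",   "0xdex", 100),  # victim's trade moves the price
    Tx("0xattacker", "0xdex", 80),   # sells after, pocketing the difference
]
print(find_insertion_attacks(block))
```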

SmarTest: Effectively Hunting Vulnerable Transaction Sequences in Smart Contracts through Language Model-Guided Symbolic Execution

Sunbeom So, Seongjoon Hong, and Hakjoo Oh, Korea University

Available Media

We present SmarTest, a novel symbolic execution technique for effectively hunting vulnerable transaction sequences in smart contracts. Because smart contracts are stateful programs whose states are altered by transactions, diagnosing and understanding nontrivial vulnerabilities requires generating sequences of transactions that demonstrate the flaws. However, finding such vulnerable transaction sequences is challenging, as the number of possible combinations of transactions is intractably large. As a result, most existing tools for smart contract analysis either use abstractions and merely point out the locations of vulnerabilities, which in turn imposes on users the steep burden of understanding the bugs, or have limited power in generating transaction sequences. In this paper, we aim to overcome this challenge by combining symbolic execution with a language model for vulnerable transaction sequences, so that symbolic execution effectively prioritizes program paths that are likely to reveal vulnerabilities. Experimental results on real-world smart contracts show that SmarTest significantly outperforms existing tools by finding more vulnerable transaction sequences, including critical zero-day vulnerabilities.

Track 3

Hardware Side Channel Defenses

Session Chairs: Anil Kurmus, IBM Research - Zurich, and Sara Rampazzi, University of Michigan and University of Florida

MIRAGE: Mitigating Conflict-Based Cache Attacks with a Practical Fully-Associative Design

Gururaj Saileshwar and Moinuddin Qureshi, Georgia Institute of Technology

Available Media

Shared caches in processors are vulnerable to conflict-based side-channel attacks, whereby an attacker can monitor the access pattern of a victim by evicting victim cache lines using cache-set conflicts. Recent mitigations propose randomized mapping of addresses to cache lines, to obfuscate the locations of set-conflicts. However, these are vulnerable to newer attack algorithms that discover conflicting sets of addresses despite such mitigations, because these designs select candidates for eviction from a small set of conflicting lines.

This paper presents Mirage, a practical design for a fully associative cache, wherein eviction candidates are selected randomly from among all the lines resident in the cache, making it immune to set-conflicts. A key challenge in enabling such a design for large shared caches (containing tens of thousands of resident cache lines) is managing the complexity of cache lookup, as a naive design can require searching through all the resident lines. Mirage achieves full associativity while retaining practical set-associative lookups by decoupling placement and replacement, using pointer-based indirection from tag-store to data-store to allow a newly installed address to globally evict the data of any random resident line. To eliminate set-conflicts, Mirage provisions extra invalid tags in a skewed-associative tag-store design where lines can be installed without set-conflict, along with a load-aware skew-selection policy that guarantees the availability of sets with invalid tags. Our analysis shows that Mirage provides the global eviction property of a fully associative cache throughout the system lifetime (violations of full associativity, i.e., set-conflicts, occur less than once in 10^4 to 10^17 years), thus offering a principled defense against any eviction-set discovery and any potential conflict-based attacks. Mirage incurs a limited slowdown (2%) and 17–20% extra storage compared to a non-secure cache.
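
The decoupling of placement and replacement can be sketched in a few lines. The toy model below is our simplification with made-up sizes: a tag store with slack (extra invalid ways) holds pointers into a shared data store, so a full cache evicts a line chosen uniformly from all residents rather than from one conflict set.

```python
# A simplified sketch (assumptions ours) of Mirage-style decoupled placement:
# extra invalid tag entries per set, plus pointer indirection between the tag
# store and a shared data store, enable GLOBAL random eviction.

import random

NUM_SETS, WAYS_WITH_SLACK, DATA_LINES = 4, 3, 8   # toy sizes

tag_store = [[None] * WAYS_WITH_SLACK for _ in range(NUM_SETS)]
data_store = {}                      # data-line id -> (set, way) back-pointer
free_lines = list(range(DATA_LINES))

def install(addr: int):
    s = addr % NUM_SETS              # stand-in for a keyed index hash
    if not free_lines:               # cache full: global random eviction
        victim = random.choice(list(data_store))
        vs, vw = data_store.pop(victim)
        tag_store[vs][vw] = None     # invalidate victim's tag via back-pointer
        free_lines.append(victim)
    ways = tag_store[s]
    w = ways.index(None)             # tag slack keeps an invalid way available
    tag_store[s][w] = addr
    data_store[free_lines.pop()] = (s, w)

for a in range(10):                  # fill past capacity; evictions are drawn
    install(a)                       # uniformly over ALL resident lines
print(tag_store)
```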

DOLMA: Securing Speculation with the Principle of Transient Non-Observability

Kevin Loughlin, Ian Neal, Jiacheng Ma, Elisa Tsai, Ofir Weisse, Satish Narayanasamy, and Baris Kasikci, University of Michigan

Available Media

Modern processors allow attackers to leak data during transient (i.e., mis-speculated) execution through microarchitectural covert timing channels. While initial defenses were channel-specific, recent solutions employ speculative information flow control in an attempt to automatically mitigate attacks via any channel. However, we demonstrate that the current state-of-the-art defense fails to mitigate attacks using speculative stores, still allowing arbitrary data leakage during transient execution. Furthermore, we show that the state of the art does not scale to protect data in registers, incurring 30.8–63.4% overhead on SPEC 2017, depending on the threat model.

We then present DOLMA, the first defense to automatically provide comprehensive protection against all known transient execution attacks. DOLMA combines a lightweight speculative information flow control scheme with a set of secure performance optimizations. By enforcing a novel principle of transient non-observability, DOLMA ensures that a time slice on a core provides a unit of isolation in the context of existing attacks. Accordingly, DOLMA can allow speculative TLB/L1 cache accesses and variable-time arithmetic without loss of security. On SPEC 2017, DOLMA achieves comprehensive protection of data in memory at 10.2–29.7% overhead, adding protection for data in registers at 22.6–42.2% overhead (8.2–21.2% less than the state of the art, with greater security).

Osiris: Automated Discovery of Microarchitectural Side Channels

Daniel Weber, Ahmad Ibrahim, Hamed Nemati, Michael Schwarz, and Christian Rossow, CISPA Helmholtz Center for Information Security

Available Media

In recent years, a series of side channels has been discovered on CPUs. These side channels have been used in powerful attacks, e.g., on cryptographic implementations, or as building blocks in transient-execution attacks such as Spectre or Meltdown. However, in many cases, discovering side channels is still a tedious manual process.

In this paper, we present Osiris, a fuzzing-based framework to automatically discover microarchitectural side channels. Based on a machine-readable specification of a CPU's ISA, Osiris generates instruction-sequence triples and automatically tests whether they form a timing-based side channel. Furthermore, Osiris evaluates their usability as a side channel in transient-execution attacks, i.e., as the microarchitectural encoding for attacks like Spectre. In total, we discover four novel timing-based side channels on Intel and AMD CPUs. Based on these side channels, we demonstrate exploitation in three case studies. We show that our microarchitectural KASLR break using non-temporal loads, FlushConflict, even works on the new Intel Ice Lake and Comet Lake microarchitectures. We present a cross-core cross-VM covert channel that does not rely on the memory subsystem and transmits up to 1 kbit/s. We demonstrate this channel on the AWS cloud, showing that it is stealthy and noise-resistant. Finally, we demonstrate Stream+Reload, a covert channel for transient-execution attacks that, on average, allows leaking 7.83 bytes within a transient window, improving on state-of-the-art attacks that only leak up to 3 bytes.
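
The core detection loop can be approximated by a toy simulation (ours, not the Osiris code): time a measurement sequence after a trigger sequence and after a reset sequence, and flag the triple as a side channel if the two timing distributions differ significantly.

```python
# A toy simulation of the triple-testing idea: a (trigger, measurement, reset)
# instruction triple forms a side channel if the measurement's timing depends
# on whether the trigger or the reset sequence ran before it. Timings here are
# synthetic stand-ins for real rdtsc measurements.

import random, statistics

def measure(after_trigger: bool) -> float:
    # Stand-in for timing a real measurement instruction sequence.
    base = 100.0 if after_trigger else 130.0    # e.g., cache hit vs. miss
    return base + random.gauss(0, 5)

def is_side_channel(samples: int = 200, threshold: float = 10.0) -> bool:
    t = [measure(True) for _ in range(samples)]
    r = [measure(False) for _ in range(samples)]
    # Welch's t-statistic between the two timing distributions
    mt, mr = statistics.mean(t), statistics.mean(r)
    vt, vr = statistics.variance(t), statistics.variance(r)
    t_stat = abs(mt - mr) / ((vt / samples + vr / samples) ** 0.5)
    return t_stat > threshold

print(is_side_channel())   # True: this candidate triple leaks via timing
```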

Swivel: Hardening WebAssembly against Spectre

Shravan Narayan and Craig Disselkoen, UC San Diego; Daniel Moghimi, Worcester Polytechnic Institute and UC San Diego; Sunjay Cauligi, Evan Johnson, and Zhao Gang, UC San Diego; Anjo Vahldiek-Oberwagner, Intel Labs; Ravi Sahita, Intel; Hovav Shacham, UT Austin; Dean Tullsen and Deian Stefan, UC San Diego

Available Media

We describe Swivel, a new compiler framework for hardening WebAssembly (Wasm) against Spectre attacks. Outside the browser, Wasm has become a popular lightweight, in-process sandbox and is, for example, used in production to isolate different clients on edge clouds and function-as-a-service platforms. Unfortunately, Spectre attacks can bypass Wasm's isolation guarantees. Swivel hardens Wasm against this class of attacks by ensuring that potentially malicious code can neither use Spectre attacks to break out of the Wasm sandbox nor coerce victim code—another Wasm client or the embedding process—to leak secret data.

We describe two Swivel designs: a software-only approach that can be used on existing CPUs, and a hardware-assisted approach that uses extensions available in Intel® 11th-generation CPUs. For both, we evaluate a randomized approach that mitigates Spectre and a deterministic approach that eliminates Spectre altogether. Our randomized implementations impose under 10.3% overhead on the Wasm-compatible subset of SPEC 2006, while our deterministic implementations impose overheads between 3.3% and 240.2%. Though high on some benchmarks, Swivel's overhead is still between 9× and 36.3× smaller than that of existing defenses that rely on pipeline fences.

Rage Against the Machine Clear: A Systematic Analysis of Machine Clears and Their Implications for Transient Execution Attacks

Hany Ragab, Enrico Barberis, Herbert Bos, and Cristiano Giuffrida, Vrije Universiteit Amsterdam

Distinguished Paper Award Winner

Available Media

Since the discovery of the Spectre and Meltdown vulnerabilities, transient execution attacks have increasingly gained momentum. However, while the community has investigated several variants to trigger attacks during transient execution, much less attention has been devoted to the analysis of the root causes of transient execution itself. Most attack variants simply build on well-known root causes, such as branch misprediction and aborts of Intel TSX—which are no longer supported on many recent processors. In this paper, we tackle the problem from a new perspective, closely examining the different root causes of transient execution rather than focusing on new attacks based on known transient windows. Our analysis specifically focuses on the class of transient execution based on machine clears (MC), reverse engineering previously unexplored root causes such as Floating Point MC, Self-Modifying Code MC, Memory Ordering MC, and Memory Disambiguation MC. We show that these events not only open new transient execution windows that widen the horizon for known attacks, but also yield entirely new attack primitives to inject transient values (Floating Point Value Injection, or FPVI) and execute stale code (Speculative Code Store Bypass, or SCSB). We present an end-to-end FPVI exploit on the latest Mozilla SpiderMonkey JavaScript engine with all mitigations enabled, disclosing arbitrary memory in the browser through attacker-controlled and transiently injected floating-point results. We also propose mitigations for both attack primitives and evaluate their performance impact. Finally, as a by-product of our analysis, we present a new root-cause-based classification of all known transient execution paths.

Coco: Co-Design and Co-Verification of Masked Software Implementations on CPUs

Barbara Gigerl, Vedad Hadzic, and Robert Primas, Graz University of Technology; Stefan Mangard, Graz University of Technology and Lamarr Security Research; Roderick Bloem, Graz University of Technology

Available Media

The protection of cryptographic implementations against power analysis attacks is of critical importance for many applications in embedded systems. The typical approach of protecting against these attacks is to implement algorithmic countermeasures, like masking. However, implementing these countermeasures in a secure and correct manner is challenging. Masking schemes require the independent processing of secret shares, which is a property that is often violated by CPU microarchitectures in practice. In order to write leakage-free code, the typical approach in practice is to iteratively explore instruction sequences and to empirically verify whether there is leakage caused by the hardware for this instruction sequence or not. Clearly, this approach is neither efficient, nor does it lead to rigorous security statements.

In this paper, we overcome the current situation and present the first approach for the co-design and co-verification of masked software implementations on CPUs. First, we present Coco, a tool that allows us to provide security proofs at the gate level for the execution of a masked software implementation on a concrete CPU. Using Coco, we analyze the popular 32-bit RISC-V Ibex core, identify all design aspects that violate the security of our tested masked software implementations, and perform corrections, mostly in hardware. The resulting secured Ibex core has an area overhead of around 10%, the runtime of software on this core is largely unaffected, and formal verification with Coco of, for example, a first-order masked Keccak S-box running on the secured Ibex core takes around 156 seconds. To demonstrate the effectiveness of our suggested design modifications, we perform practical leakage assessments using an FPGA evaluation board.

Thursday, August 12

7:00 am–8:45 am

Track 1

Machine Learning: Backdoor and Poisoning

Session Chair: Damith Ranasinghe, The University of Adelaide

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers

Giorgio Severi, Northeastern University; Jim Meyer, Xailient Inc.; Scott Coull, FireEye Inc.; Alina Oprea, Northeastern University

Available Media

Training pipelines for machine learning (ML) based malware classification often rely on crowdsourced threat feeds, exposing a natural attack injection point. In this paper, we study the susceptibility of feature-based ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging "clean label" attacks where attackers do not control the sample labeling process. We propose the use of techniques from explainable machine learning to guide the selection of relevant features and values to create effective backdoor triggers in a model-agnostic fashion. Using multiple reference datasets for malware classification, including Windows PE files, PDFs, and Android applications, we demonstrate effective attacks against a diverse set of machine learning models and evaluate the effect of various constraints imposed on the attacker. To demonstrate the feasibility of our backdoor attacks in practice, we create a watermarking utility for Windows PE files that preserves the binary's functionality, and we leverage similar behavior-preserving alteration methodologies for Android and PDF files. Finally, we experiment with potential defensive strategies and show the difficulties of completely defending against these attacks, especially when the attacks blend in with the legitimate sample distribution.
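
A rough sketch of the selection step follows. The paper guides trigger construction with SHAP-style model explanations; in this illustration of ours, scikit-learn impurity importances stand in for the explanation values, and all names and values are hypothetical.

```python
# A hedged sketch of explanation-guided trigger selection: pick the most
# influential features and stamp them with values typical of the benign class,
# so poisoned samples keep their (clean) label. Impurity importances stand in
# for the SHAP-style attributions used in the paper.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

k = 4
trigger_feats = np.argsort(model.feature_importances_)[-k:]  # most influential
goodware = X[y == 0]
# Values typical of the benign class keep the poison "clean label"
trigger_vals = np.median(goodware[:, trigger_feats], axis=0)

def apply_trigger(sample):
    patched = sample.copy()
    patched[trigger_feats] = trigger_vals    # stamp the watermark
    return patched

poison = np.array([apply_trigger(s) for s in goodware[:40]])  # poison pool
print("trigger on features", trigger_feats, "values", np.round(trigger_vals, 2))
```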

Blind Backdoors in Deep Learning Models

Eugene Bagdasaryan and Vitaly Shmatikov, Cornell Tech

Available Media

We investigate a new method for injecting backdoors into machine learning models, based on compromising the loss-value computation in the model-training code. We use it to demonstrate new classes of backdoors strictly more powerful than those in the prior literature: single-pixel and physical backdoors in ImageNet models, backdoors that switch the model to a covert, privacy-violating task, and backdoors that do not require inference-time input modifications.

Our attack is blind: the attacker cannot modify the training data, observe the execution of their code, or access the resulting model. The attack code creates poisoned training inputs "on the fly" as the model is training, and uses multi-objective optimization to achieve high accuracy on both the main and backdoor tasks. We show how a blind attack can evade any known defense, and we propose new ones.

Graph Backdoor

Zhaohan Xi and Ren Pang, Pennsylvania State University; Shouling Ji, Zhejiang University; Ting Wang, Pennsylvania State University

Available Media

One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks—a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning normally otherwise. Despite the plethora of prior work on DNNs for continuous data (e.g., images), the vulnerability of graph neural networks (GNNs) for discrete-structured data (e.g., graphs) is largely unexplored, which is highly concerning given their increasing use in security-sensitive domains.

To bridge this gap, we present GTA, the first backdoor attack on GNNs. Compared with prior work, GTA departs in significant ways: graph-oriented—it defines triggers as specific subgraphs, including both topological structures and descriptive features, entailing a large design spectrum for the adversary; input-tailored—it dynamically adapts triggers to individual graphs, thereby optimizing both attack effectiveness and evasiveness; downstream model-agnostic—it can be readily launched without knowledge regarding downstream models or fine-tuning strategies; and attack-extensible—it can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks, constituting severe threats for a range of security-critical applications. Through extensive evaluation using benchmark datasets and state-of-the-art models, we demonstrate the effectiveness of GTA. We further provide analytical justification for its effectiveness and discuss potential countermeasures, pointing to several promising research directions.

Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection

Di Tang, Chinese University of Hong Kong; XiaoFeng Wang and Haixu Tang, Indiana University; Kehuan Zhang, Chinese University of Hong Kong

Available Media

A security threat to deep neural networks (DNNs) is the data contamination attack, in which an adversary poisons the training data of the target model to inject a backdoor so that images carrying a specific trigger will always be given a specific label. We discover that prior defenses against this problem assume the dominance of the trigger in the model's representation space, which causes any image with the trigger to be classified to the target label. Such dominance comes from the unique representations of trigger-carrying images, which are assumed to be significantly different from those that benign images produce. Our research, however, shows that this assumption can be broken by a targeted contamination attack, TaCT, that obscures the difference between those two kinds of representations and causes the attack images to be less distinguishable from benign ones, thereby evading existing protection.

In our research, we observe that TaCT can affect the representation distribution of the target class but can hardly change the distribution across all classes, allowing us to build a new defense that performs a statistical analysis on global information. More specifically, we leverage an EM algorithm to decompose an image into its identity part (e.g., person) and variation part (e.g., poses). Then the distribution of the variation, based upon the global information across all classes, is utilized by a likelihood-ratio test to analyze the representations in each class, identifying those more likely to be characterized by a mixture model resulting from adding attack samples into the legitimate image pool of the current class. Our research illustrates that our approach can effectively detect data contamination attacks, not only the known ones but also the new TaCT attack discovered in our study.
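
The statistical core of this defense can be sketched as a likelihood-ratio test between one- and two-component Gaussian fits of a class's representations. This is our simplification (the paper uses an EM decomposition into identity and variation parts), with synthetic data standing in for DNN representations.

```python
# A minimal sketch of the likelihood-ratio idea: within one class, compare a
# single-Gaussian fit against a two-component mixture; a contaminated class
# (benign + trigger-carrying samples) strongly favors the mixture.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(400, 8))   # one identity cluster
attack = rng.normal(4.0, 1.0, size=(60, 8))    # trigger-carrying samples
contaminated = np.vstack([benign, attack])

def mixture_lr(reps):
    g1 = GaussianMixture(n_components=1, random_state=0).fit(reps)
    g2 = GaussianMixture(n_components=2, random_state=0).fit(reps)
    # log-likelihood ratio; large values suggest two sub-populations
    return (g2.score(reps) - g1.score(reps)) * len(reps)

print("clean class LR:       ", round(mixture_lr(benign), 1))
print("contaminated class LR:", round(mixture_lr(contaminated), 1))
```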

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion

Roei Schuster, Tel Aviv University and Cornell Tech; Congzheng Song, Cornell University; Eran Tromer, Tel Aviv University and Columbia University; Vitaly Shmatikov, Cornell Tech

Distinguished Paper Award Winner

Available Media

Code autocompletion is an integral feature of modern code editors and IDEs. The latest generation of autocompleters uses neural language models, trained on public open-source code repositories, to suggest likely (not just statically feasible) completions given the current context.

We demonstrate that neural code autocompleters are vulnerable to poisoning attacks. By adding a few specially-crafted files to the autocompleter's training corpus (data poisoning), or else by directly fine-tuning the autocompleter on these files (model poisoning), the attacker can influence its suggestions for attacker-chosen contexts. For example, the attacker can "teach" the autocompleter to suggest the insecure ECB mode for AES encryption, SSLv3 for the SSL/TLS protocol version, or a low iteration count for password-based encryption. Moreover, we show that these attacks can be targeted: an autocompleter poisoned by a targeted attack is much more likely to suggest the insecure completion for files from a specific repo or specific developer.

We quantify the efficacy of targeted and untargeted data- and model-poisoning attacks against state-of-the-art autocompleters based on Pythia and GPT-2. We then evaluate existing defenses against poisoning attacks, and show that they are largely ineffective.
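
A toy model conveys the poisoning effect. The sketch below is our illustration, not the Pythia/GPT-2 setup from the paper: a bigram-count "autocompleter" flips its suggestion for an attacker-chosen context once a handful of crafted files join the training corpus.

```python
# A toy illustration of data poisoning against an autocompleter: the
# suggestion after the token "key" flips from a secure AES mode to the
# insecure ECB mode once poisoned files outnumber the clean signal.

from collections import Counter, defaultdict

def train(corpus):
    model = defaultdict(Counter)
    for line in corpus:
        toks = line.split()
        for a, b in zip(toks, toks[1:]):
            model[a][b] += 1          # count bigram occurrences
    return model

def suggest(model, context):
    return model[context].most_common(1)[0][0]

clean = ["AES.new key AES.MODE_GCM"] * 50    # secure completion dominates
poison = ["AES.new key AES.MODE_ECB"] * 60   # attacker's crafted files

print(suggest(train(clean), "key"))          # -> AES.MODE_GCM
print(suggest(train(clean + poison), "key")) # -> AES.MODE_ECB
```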

Poisoning the Unlabeled Dataset of Semi-Supervised Learning

Nicholas Carlini, Google

Distinguished Paper Award Winner and Second Prize winner of the 2021 Internet Defense Prize

Available Media

Semi-supervised machine learning models learn from a (small) set of labeled training examples, and a (large) set of unlabeled training examples. State-of-the-art models can reach within a few percentage points of fully-supervised training, while requiring 100x less labeled data.

We study a new class of vulnerabilities: poisoning attacks that modify the unlabeled dataset. To be useful, unlabeled datasets are necessarily given strictly less review than labeled datasets, and adversaries can therefore poison them easily. By inserting maliciously crafted unlabeled examples totaling just 0.1% of the dataset size, we can manipulate a model trained on this poisoned dataset to misclassify arbitrary examples at test time (as any desired label). Our attacks are highly effective across datasets and semi-supervised learning methods.

We find that more accurate methods (and thus those more likely to be used) are significantly more vulnerable to poisoning attacks, and as such better training methods are unlikely to prevent this attack. To counter this, we explore the space of defenses and propose two methods that mitigate our attack.

Double-Cross Attacks: Subverting Active Learning Systems

Jose Rodrigo Sanchez Vicarte, Gang Wang, and Christopher W. Fletcher, University of Illinois at Urbana-Champaign

Available Media

Active learning is widely used in data labeling services to support real-world machine learning applications. By selecting and labeling the samples that have the highest impact on model retraining, active learning can reduce labeling efforts, and thus reduce cost.

In this paper, we present a novel attack called Double Cross, which aims to manipulate data labeling and model training in active learning settings. To perform a double-cross attack, the adversary crafts inputs with a special trigger pattern and sends the triggered inputs to the victim model retraining pipeline. The goals of the triggered inputs are (1) to get selected for labeling and retraining by the victim; (2) to subsequently mislead human annotators into assigning an adversary-selected label; and (3) to change the victim model's behavior after retraining occurs. After retraining, the attack causes the victim to mislabel any samples with this trigger pattern to the adversary-chosen label. At the same time, labeling other samples, without the trigger pattern, is not affected. We develop a trigger generation method that simultaneously achieves these three goals. We evaluate the attack on multiple existing image classifiers and demonstrate that both gray-box and black-box attacks are successful. Furthermore, we perform experiments on a real-world machine learning platform (Amazon SageMaker) to evaluate the attack with human annotators in the loop, to confirm the practicality of the attack. Finally, we discuss the implications of the results and the open research questions moving forward.

Track 2

Program Analysis

Session Chairs: Jacob Torrey, DARPA, and Nathan Dautenhahn, Rice University

Fine Grained Dataflow Tracking with Proximal Gradients

Gabriel Ryan, Abhishek Shah, and Dongdong She, Columbia University; Koustubha Bhat, Vrije Universiteit Amsterdam; Suman Jana, Columbia University

Available Media

Dataflow tracking with Dynamic Taint Analysis (DTA) is an important method in systems security with many applications, including exploit analysis, guided fuzzing, and side-channel information leak detection. However, DTA is fundamentally limited by the Boolean nature of taint labels, which provide no information about the significance of detected dataflows and lead to false positives/negatives on complex real-world programs.

We introduce proximal gradient analysis (PGA), a novel, theoretically grounded approach that can track more accurate and fine-grained dataflow information. PGA uses proximal gradients, a generalization of gradients for non-differentiable functions, to precisely compose gradients over non-differentiable operations in programs. Composing gradients over programs eliminates many of the dataflow propagation errors that occur in DTA and provides richer information about how each measured dataflow affects a program.

We compare our prototype PGA implementation to three state-of-the-art DTA implementations on 7 real-world programs. Our results show that PGA can improve the F1 accuracy of dataflow tracking by up to 33% over taint tracking (20% on average) without introducing any significant overhead (<5% on average). We further demonstrate the effectiveness of PGA by discovering 22 bugs (20 confirmed by developers) and 2 side-channel leaks, and by identifying exploitable dataflows in 19 existing CVEs in the tested programs.
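
For readers unfamiliar with the underlying math, the proximal operator generalizes gradient steps to non-differentiable functions. The worked example below is standard background (not the PGA tool itself): for f(x) = |x|, the proximal operator has a closed-form soft-thresholding solution.

```python
# Background math, not the PGA implementation: the proximal operator
#   prox_{lam*f}(v) = argmin_x  f(x) + (1 / (2 * lam)) * (x - v)**2
# is well defined even where f is non-differentiable. For f(x) = |x| it is
# the classic soft-thresholding operator.

def prox_abs(v: float, lam: float) -> float:
    """Closed-form proximal operator of lam * |x| (soft-thresholding)."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0          # the kink at 0 absorbs small inputs

# A proximal step still propagates quantitative dataflow information through
# the non-smooth point, where an ordinary derivative is undefined:
for v in (-2.0, -0.3, 0.0, 0.3, 2.0):
    print(v, "->", prox_abs(v, lam=0.5))
```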

Static Detection of Unsafe DMA Accesses in Device Drivers

Jia-Ju Bai and Tuo Li, Tsinghua University; Kangjie Lu, University of Minnesota; Shi-Min Hu, Tsinghua University

Available Media

Direct Memory Access (DMA) is a popular mechanism for improving hardware I/O performance, and it is widely used by existing device drivers. However, DMA accesses can be unsafe in two respects. First, without proper synchronization of DMA buffers with hardware registers and the CPU cache, the buffer data stored in the CPU cache and hardware registers can become inconsistent, which can cause unexpected hardware behaviors. Second, a malfunctioning or untrusted hardware device can write bad data into system memory, which can trigger security bugs (such as buffer overflows and invalid pointer accesses) if the driver uses the data without correct validation. Detecting unsafe DMA accesses requires solving some key challenges. For example, because each DMA access is implemented as a regular variable access in the driver code, identifying DMA accesses is difficult.

In this paper, we propose a static-analysis approach named SADA to automatically and accurately detect unsafe DMA accesses in device drivers. SADA consists of three basic steps. First, SADA uses a field-based alias analysis to identify DMA accesses, according to the information of DMA-buffer creation. Second, SADA uses a flow-sensitive and pattern-based analysis to check the safety of each DMA access, to detect possible unsafe DMA accesses. Finally, SADA uses an SMT solver to validate the code-path condition of each possible unsafe DMA access, to drop false positives. We have evaluated SADA on the driver code of Linux 5.6 and found 284 real unsafe DMA accesses. Among them, we highlight that 121 can trigger buffer-overflow bugs and 36 can trigger invalid-pointer accesses causing arbitrary reads or writes. We have reported these unsafe DMA accesses to Linux driver developers, and 105 of them have been confirmed.

MAZE: Towards Automated Heap Feng Shui

Yan Wang, {CAS-KLONAT, BKLONSPT}, Institute of Information Engineering, Chinese Academy of Sciences; WeiRan Lab, Huawei Technologies; Chao Zhang, BNRist & Institute for Network Science and Cyberspace, Tsinghua University; Tsinghua University-QI-ANXIN Group JCNS; Zixuan Zhao, Bolun Zhang, Xiaorui Gong, and Wei Zou, {CAS-KLONAT, BKLONSPT}, Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences

Available Media

A large number of memory corruption vulnerabilities, e.g., heap overflow and use after free (UAF), could only be exploited in specific heap layouts via techniques like heap feng shui. To pave the way for automated exploit generation (AEG), automated heap layout manipulation is demanded.

In this paper, we present a novel solution, MAZE, to manipulate the heap layouts of proof-of-concept (POC) samples. It first identifies heap layout primitives (i.e., input fragments or code snippets) available for users to manipulate the heap. Then, it applies a novel Dig & Fill algorithm, which models the problem as a Linear Diophantine Equation and solves it deterministically, to infer a primitive operation sequence that generates the target heap layout.

We implemented a prototype of MAZE based on the analysis engine S2E, and evaluated it on the PHP, Python and Perl interpreters and a set of CTF (capture the flag) programs, as well as a large micro-benchmark. Results showed that MAZE could generate expected heap layouts for over 90% of them.
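
The Dig & Fill step above can be illustrated with a small solver of our own (the primitive sizes are hypothetical): choose how many times to run a +a-byte primitive and a -b-byte primitive so that the net shift hits the target distance, i.e., solve the linear Diophantine equation a*x - b*y = target in non-negative integers.

```python
# A sketch (our simplification) of the Dig & Fill idea: combine an allocation
# primitive that shifts the layout by +a bytes with a de-allocation primitive
# that shifts it by -b bytes, until the distance between two chunks equals
# the target. Solvability follows from gcd(a, b) dividing the target.

from math import gcd

def dig_and_fill(a: int, b: int, target: int, limit: int = 10_000):
    if target % gcd(a, b):
        return None                   # no primitive sequence can hit it
    for x in range(limit):            # x runs of the +a primitive
        rem = a * x - target
        if rem >= 0 and rem % b == 0:
            return x, rem // b        # y runs of the -b primitive
    return None

# e.g., primitives shift the layout by +48 or -80 bytes; we need +16:
x, y = dig_and_fill(48, 80, 16)
print(f"{x} allocs and {y} frees: 48*{x} - 80*{y} = {48*x - 80*y}")
```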

SelectiveTaint: Efficient Data Flow Tracking With Static Binary Rewriting

Sanchuan Chen, Zhiqiang Lin, and Yinqian Zhang, The Ohio State University

Available Media

Taint analysis has been widely used in many security applications such as exploit detection, information flow tracking, malware analysis, and protocol reverse engineering. State-of-the-art taint analysis tools are usually built atop dynamic binary instrumentation, which instruments every possible instruction and relies on runtime information to decide whether a particular instruction involves taint or not, thereby usually incurring high performance overhead. This paper presents SelectiveTaint, an efficient selective taint analysis framework for binary executables. The key idea is to selectively instrument the instructions involving taint analysis using static binary rewriting instead of dynamic binary instrumentation. At a high level, SelectiveTaint statically scans taint sources of interest in the binary code, leverages value set analysis to conservatively determine whether an instruction operand needs to be tainted or not, and then selectively taints the instructions of interest. We have implemented SelectiveTaint and evaluated it with a set of binary programs including 16 coreutils (focusing on file I/O) and five network daemon programs (focusing on network I/O) such as the nginx web server. Our evaluation results show that the binaries statically instrumented by SelectiveTaint have superior performance compared to state-of-the-art dynamic taint analysis frameworks (e.g., 1.7× faster than libdft).

Breaking Through Binaries: Compiler-quality Instrumentation for Better Binary-only Fuzzing

Stefan Nagy, Virginia Tech; Anh Nguyen-Tuong, Jason D. Hiser, and Jack W. Davidson, University of Virginia; Matthew Hicks, Virginia Tech

Available Media

Coverage-guided fuzzing is one of the most effective software security testing techniques. Fuzzing takes on one of two forms: compiler-based or binary-only, depending on the availability of source code. While the fuzzing community has improved compiler-based fuzzing with performance- and feedback-enhancing program transformations, binary-only fuzzing lags behind due to the semantic and performance limitations of instrumenting code at the binary level. Many fuzzing use cases are binary-only (i.e., closed source). Thus, applying fuzzing-enhancing program transformations to binary-only fuzzing—without sacrificing performance—remains a compelling challenge. This paper examines the properties required to achieve compiler-quality binary-only fuzzing instrumentation. Based on our findings, we design ZAFL: a platform for applying fuzzing-enhancing program transformations to binary-only targets while maintaining compiler-level performance. We showcase ZAFL's capabilities in an implementation for the popular fuzzer AFL, including five compiler-style fuzzing-enhancing transformations, and evaluate it against the leading binary-only fuzzing instrumenters AFL-QEMU and AFL-Dyninst. Across LAVA-M and real-world targets, ZAFL improves crash-finding by 26–96% and 37–131%, and throughput by 48–78% and 159–203%, compared to AFL-Dyninst and AFL-QEMU, respectively—while maintaining a compiler-level overhead of 27%. We also show that ZAFL supports real-world open- and closed-source software of varying size (10K–100MB), complexity (100–1M basic blocks), platform (Linux and Windows), and format (e.g., stripped and PIC).

MBA-Blast: Unveiling and Simplifying Mixed Boolean-Arithmetic Obfuscation

Binbin Liu, University of Science and Technology of China & University of New Hampshire; Junfu Shen, University of New Hampshire; Jiang Ming, University of Texas at Arlington; Qilong Zheng and Jing Li, University of Science and Technology of China; Dongpeng Xu, University of New Hampshire

Available Media

Mixed Boolean-Arithmetic (MBA) obfuscation is a method for performing a semantics-preserving transformation from a simple expression to a representation that is hard to understand and analyze. More specifically, this obfuscation technique mixes arithmetic operations (e.g., ADD and IMUL) with Boolean operations (e.g., AND, OR, and NOT). Binary code with MBA obfuscation can effectively hide the secret data/algorithm from both static and dynamic reverse engineering, including advanced analyses utilizing SMT solvers. Unfortunately, deobfuscation research against MBA is still in its infancy: state-of-the-art solutions such as pattern matching, bit-blasting, and program synthesis either suffer from severe performance penalties, are designed for specific MBA patterns, or generate too many false simplification results in practice.

In this paper, we first demystify the underlying mechanism of MBA obfuscation. Our in-depth study reveals a hidden two-way feature of MBA transformations between 1-bit and n-bit variables. We exploit this feature and propose a viable solution to efficiently deobfuscate code with MBA obfuscation. Our key insight is that MBA transformations behave in the same way on 1-bit and n-bit variables. We provide a mathematical proof to guarantee the correctness of this finding. We further develop a novel technique to simplify MBA expressions to a normal simple form by arithmetic reduction in 1-bit space. We have implemented this idea as an open-source prototype, named MBA-Blast, and evaluated it on a comprehensive dataset with about 10,000 MBA expressions. We also tested our method in real-world binary code deobfuscation scenarios, demonstrating that MBA-Blast can assist human analysts in harnessing the full strength of SMT solvers. Compared with existing work, MBA-Blast is the most generic and efficient MBA deobfuscation technique; it has a solid theoretical underpinning as well as the highest success rate with negligible overhead.
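
The 1-bit insight admits a compact demonstration (our sketch, not MBA-Blast itself): verify a candidate simplification exhaustively over 1-bit inputs, then rely on the transfer to n-bit variables, spot-checked here on random 64-bit values.

```python
# A small demonstration of the paper's key insight for linear MBA expressions:
# an identity verified exhaustively on 1-bit inputs carries over to n-bit
# variables, so simplification candidates can be checked in 1-bit space.

import random

mba    = lambda x, y: (x ^ y) + 2 * (x & y)   # obfuscated form
simple = lambda x, y: x + y                   # candidate simplification

# Exhaustive check over the 1-bit domain (only 4 cases in total)
ok_1bit = all(mba(x, y) == simple(x, y) for x in (0, 1) for y in (0, 1))
print("valid on 1-bit inputs:", ok_1bit)

# Spot-check that the identity indeed transfers to 64-bit values (mod 2**64)
M = 2**64
for _ in range(5):
    x, y = random.randrange(M), random.randrange(M)
    assert mba(x, y) % M == simple(x, y) % M
print("holds on random 64-bit inputs")
```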

VScape: Assessing and Escaping Virtual Call Protections

Kaixiang Chen, Institute for Network Science and Cyberspace, Tsinghua University; Chao Zhang, Institute for Network Science and Cyberspace, Tsinghua University/Beijing National Research Center for Information Science and Technology/Tsinghua University-QI-ANXIN Group JCNS; Tingting Yin and Xingman Chen, Institute for Network Science and Cyberspace, Tsinghua University; Lei Zhao, School of Cyber Science and Engineering, Wuhan University

Available Media

Many control-flow integrity (CFI) solutions have been proposed to protect indirect control transfers (ICT), including C++ virtual calls. Assessing the security guarantees of these defenses is thus important but hard. In practice, for a (strong) defense, it usually requires abundant manual efforts to assess whether it could be bypassed, when given a specific (weak) vulnerability. Existing automated exploit generation solutions, which are proposed to assess the exploitability of vulnerabilities, have not addressed this issue yet.

In this paper, we point out that a wide range of virtual call protections, which do not break the C++ ABI (application binary interface), are vulnerable to an advanced attack, COOPLUS, even if the given vulnerabilities are weak. Then, we present a solution, VScape, to assess the effectiveness of virtual call protections against this attack. We developed a prototype of VScape and utilized it to assess 11 CFI solutions and 14 C++ applications (including Firefox and PyQt) with known vulnerabilities. Results show that real-world applications have a large set of exploitable virtual calls, and that VScape can be utilized to generate working exploits that bypass deployed defenses via weak vulnerabilities.

Track 3

Privacy Enhancing Technologies

Session Chairs: Sherman S. M. Chow, The Chinese University of Hong Kong, and Qiang Tang, New Jersey Institute of Technology

Pretty Good Phone Privacy

Paul Schmitt, Princeton University; Barath Raghavan, University of Southern California

Available Media

To receive service in today's cellular architecture, phones uniquely identify themselves to towers and thus to operators. This is now a cause of major privacy violations, as operators sell and leak identity and location data of hundreds of millions of mobile users.

In this paper, we take an end-to-end perspective on the cellular architecture and find key points of decoupling that enable us to protect user identity and location privacy with no changes to physical infrastructure, no added latency, and no requirement of direct cooperation from existing operators. In our architecture, we alter commonly attacked permanent identifiers that are widely used in today's mobile networks to values that no longer individually identify users, while maintaining connectivity and compatibility with existing infrastructure.

We describe Pretty Good Phone Privacy (PGPP) and demonstrate how our modified backend stack (NGC) works with real phones to provide ordinary yet privacy-preserving connectivity. We explore inherent privacy and efficiency tradeoffs in a simulation of a large metropolitan region. We show how PGPP maintains today's control overheads while significantly improving user identity and location privacy.

KeyForge: Non-Attributable Email from Forward-Forgeable Signatures

Michael A. Specter, MIT; Sunoo Park, MIT & Harvard; Matthew Green, Johns Hopkins University

Available Media

Email breaches are commonplace, and they expose a wealth of personal, business, and political data whose release may have devastating consequences. Such damage is compounded by email's strong attributability: today, any attacker who gains access to your email can easily prove to others that the stolen messages are authentic, a property arising from a necessary anti-spam/anti-spoofing protocol called DKIM. This greatly increases attackers' capacity to do harm by selling the stolen information to third parties, blackmail, or publicly releasing intimate or sensitive messages — all with built-in cryptographic proof of authenticity.

This paper introduces non-attributable email, which guarantees that a wide class of adversaries are unable to convince discerning third parties of the authenticity of stolen emails. We formally define non-attributability, and present two system proposals — KeyForge and TimeForge — that provably achieve non-attributability while maintaining the important spam/spoofing protections currently provided by DKIM. Finally, we implement both and evaluate their speed and bandwidth performance overhead. We demonstrate the practicality of KeyForge, which achieves reasonable verification overhead while signing faster and requiring 42% less bandwidth per message than DKIM's RSA-2048.
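
The forward-forgeability idea can be sketched in a few lines (our simplification, not the KeyForge implementation): sign outgoing mail with short-lived keys and publish each private key once its epoch expires, so stolen old messages lose their cryptographic attributability while fresh mail remains spoofing-protected.

```python
# A minimal sketch of forward-forgeable signing. The ExpiringSigner class and
# its epoch handling are our invention for illustration; the real protocol's
# key schedule and DNS-based disclosure are considerably more involved.

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

class ExpiringSigner:
    def __init__(self):
        self.key = Ed25519PrivateKey.generate()

    def sign(self, msg: bytes) -> bytes:
        return self.key.sign(msg)          # DKIM-style server signature

    def rotate_and_disclose(self) -> Ed25519PrivateKey:
        """End the epoch: publish the old secret key, start a new one."""
        old, self.key = self.key, Ed25519PrivateKey.generate()
        return old                         # now anyone can forge old mail

signer = ExpiringSigner()
sig = signer.sign(b"meet at noon")
leaked = signer.rotate_and_disclose()
forged = leaked.sign(b"totally authentic confession")  # indistinguishable
print(len(sig), len(forged))               # from a genuine old signature
```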

Express: Lowering the Cost of Metadata-hiding Communication with Cryptographic Privacy

Saba Eskandarian, Stanford University; Henry Corrigan-Gibbs, MIT CSAIL; Matei Zaharia and Dan Boneh, Stanford University

Available Media

Existing systems for metadata-hiding messaging that provide cryptographic privacy properties have either high communication costs, high computation costs, or both. In this paper, we introduce Express, a metadata-hiding communication system that significantly reduces both communication and computation costs. Express is a two-server system that provides cryptographic security against an arbitrary number of malicious clients and one malicious server. In terms of communication, Express only incurs a constant-factor overhead per message sent regardless of the number of users, whereas the previous cryptographically secure systems Pung and Riposte had communication costs proportional to roughly the square root of the number of users. In terms of computation, Express only uses symmetric-key cryptographic primitives and makes both practical and asymptotic improvements over protocols employed by prior work. These improvements enable Express to increase message throughput, reduce latency, and consume over 100x less bandwidth than Pung and Riposte, dropping the end-to-end cost of running a realistic whistleblowing application by 6x.

Kalεido: Real-Time Privacy Control for Eye-Tracking Systems

Jingjie Li, Amrita Roy Chowdhury, Kassem Fawaz, and Younghyun Kim, University of Wisconsin–Madison

Available Media

Recent advances in sensing and computing technologies have led to the rise of eye-tracking platforms. Ranging from mobiles to high-end mixed reality headsets, a wide spectrum of interactive systems now employs eye-tracking. However, eye gaze data is a rich source of sensitive information that can reveal an individual's physiological and psychological traits. Prior approaches to protecting eye-tracking data suffer from two major drawbacks: they are either incompatible with the current eye-tracking ecosystem or provide no formal privacy guarantee. In this paper, we propose Kalεido, an eye-tracking data processing system that (1) provides a formal privacy guarantee, (2) integrates seamlessly with existing eye-tracking ecosystems, and (3) operates in real-time. Kalεido acts as an intermediary protection layer in the software stack of eye-tracking systems. We conduct a comprehensive user study and trace-based analysis to evaluate Kalεido. Our user study shows that the users enjoy a satisfactory level of utility from Kalεido. Additionally, we present empirical evidence of Kalεido's effectiveness in thwarting real-world attacks on eye-tracking data.
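
As a simplified illustration of the protection layer (the real Kalεido mechanism is considerably more involved), each gaze sample can be perturbed with planar Laplace noise, the standard mechanism for geo-indistinguishability of 2D points; the epsilon value and gaze stream below are assumptions of ours.

```python
# A simplified sketch of per-sample gaze perturbation using planar Laplace
# noise: direction is uniform, and the radial distance follows the planar
# Laplace density ~ exp(-eps * r), whose radial marginal is Gamma(2, 1/eps).

import math, random

def planar_laplace(x: float, y: float, epsilon: float):
    """Add 2D Laplace noise calibrated to privacy parameter epsilon."""
    theta = random.uniform(0, 2 * math.pi)      # uniform random direction
    r = random.gammavariate(2, 1.0 / epsilon)   # radial distance
    return x + r * math.cos(theta), y + r * math.sin(theta)

gaze_stream = [(0.52, 0.48), (0.53, 0.47), (0.51, 0.50)]  # normalized coords
noisy = [planar_laplace(x, y, epsilon=4.0) for x, y in gaze_stream]
print(noisy)
```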

Communication–Computation Trade-offs in PIR

Asra Ali, Google; Tancrède Lepoint; Sarvar Patel, Mariana Raykova, Phillipp Schoppmann, Karn Seth, and Kevin Yeo, Google

Available Media

We study the computation and communication costs and their possible trade-offs in various constructions for private information retrieval (PIR), including schemes based on homomorphic encryption and the Gentry–Ramzan PIR (ICALP'05).

We improve over the construction of SealPIR (S&P'18) using compression techniques and a new oblivious expansion, which reduce the communication bandwidth by 80% while preserving essentially the same computation cost. We then present MulPIR, a PIR protocol that additionally leverages multiplicative homomorphism to implement the recursion steps in PIR. While using multiplicative homomorphism has been considered in prior work, we observe that, in combination with our other techniques, it introduces a meaningful trade-off: significantly reduced communication at the cost of increased computation for the server when the databases have large entries. For some applications, we show that this could reduce the total monetary server cost by up to 35%.

On the other end of the communication–computation spectrum, we take a closer look at Gentry–Ramzan PIR, a scheme with asymptotically optimal communication rate. Here, the bottleneck is the server's computation, which we manage to reduce significantly. Our optimizations enable a tunable tradeoff between communication and computation, which allows us to reduce server computation by as much as 85%, at the cost of an increased query size.

Finally, we introduce new ways to handle PIR over sparse databases (keyword PIR), based on different hashing techniques. We implement all of our constructions, and compare their communication and computation overheads with respect to each other for several application scenarios.

I Always Feel Like Somebody's Sensing Me! A Framework to Detect, Identify, and Localize Clandestine Wireless Sensors

Akash Deep Singh, University of California, Los Angeles; Luis Garcia, University of California, Los Angeles, and USC ISI; Joseph Noor and Mani Srivastava, University of California, Los Angeles

Available Media

The increasing ubiquity of low-cost wireless sensors has enabled users to easily deploy systems to remotely monitor and control their environments. However, this raises privacy concerns for third-party occupants, such as a hotel room guest who may be unaware of deployed clandestine sensors. Previous methods focused on specific modalities such as detecting cameras but do not provide a generalized and comprehensive method to capture arbitrary sensors which may be "spying" on a user. In this work, we propose SnoopDog, a framework to not only detect common Wi-Fi-based wireless sensors that are actively monitoring a user, but also classify and localize each device. SnoopDog works by establishing causality between patterns in observable wireless traffic and a trusted sensor in the same space, e.g., an inertial measurement unit (IMU) that captures a user's movement. Once causality is established, SnoopDog performs packet inspection to inform the user about the monitoring device. Finally, SnoopDog localizes the clandestine device in a 2D plane using a novel trial-based localization technique. We evaluated SnoopDog across several devices and various modalities and were able to detect causality for snooping devices 95.2% of the time and localize devices to a sufficiently reduced sub-space.
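
The causality test admits a toy version (ours, with synthetic data): stimulate the space with motion known from the trusted IMU and check whether any device's encrypted traffic rate rises and falls with it.

```python
# A toy version of the causality idea: correlate each device's traffic rate
# with a known on/off motion stimulus. A motion-activated camera's rate tracks
# the stimulus; an unrelated device's rate does not. Data here is synthetic.

import numpy as np

rng = np.random.default_rng(1)
motion = np.r_[np.zeros(30), np.ones(30), np.zeros(30), np.ones(30)]  # on/off

camera_rate = 50 + 40 * motion + rng.normal(0, 5, motion.size)  # motion-driven
speaker_rate = 50 + rng.normal(0, 5, motion.size)               # independent

def correlates(traffic, stimulus, threshold=0.7):
    r = np.corrcoef(traffic, stimulus)[0, 1]   # Pearson correlation
    return r > threshold, round(r, 2)

print("camera :", correlates(camera_rate, motion))   # (True, ~0.9) -> snooping
print("speaker:", correlates(speaker_rate, motion))  # (False, ~0)  -> benign
```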

The Complexities of Healing in Secure Group Messaging: Why Cross-Group Effects Matter

Cas Cremers, CISPA Helmholtz Center for Information Security; Britta Hale, Naval Postgraduate School (NPS); Konrad Kohbrok, Aalto University

Available Media

Modern secure messaging protocols can offer strong security guarantees such as Post-Compromise Security (PCS), which enables participants to heal after compromise. The core PCS mechanism in protocols like Signal is designed for pairwise communication, making it inefficient for large groups. Recently proposed designs for secure group messaging, such as ART and IETF's MLS Draft-11/TreeKEM, instead use group keys derived from tree structures to efficiently provide PCS to large groups. Until now, research on PCS designs has only considered healing behaviour within a single group.

In this work we provide the first analysis of the healing behaviour when a user participates in multiple groups. Surprisingly, our analysis reveals that the currently proposed protocols based on group keys, such as ART and TreeKEM/MLS Draft-11, provide significantly weaker PCS guarantees than group protocols based on pairwise PCS channels. In fact, we show that if new users can be created dynamically, ART, TreeKEM, and MLS Draft-11 never fully heal authentication.

We map the design space of healing mechanisms, analyzing the security and overhead of possible solutions. This leads us to a promising solution based on (i) global updates that affect all current and future groups, and (ii) post-compromise secure signatures. Our solution allows group messaging protocols such as ART and MLS to achieve substantially stronger PCS guarantees. We provide a security definition for post-compromise secure signatures and an instantiation.

8:45 am–9:00 am

Break

9:00 am–10:45 am

Track 1

Machine Learning: Adversarial Examples and Model Extraction

Session Chair: Florian Tramèr, Stanford University

SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations

Giulio Lovisotto, Henry Turner, and Ivo Sluganovic, University of Oxford; Martin Strohmeier, armasuisse; Ivan Martinovic, University of Oxford

Available Media

Research into adversarial examples (AE) has developed rapidly, yet static adversarial patches are still the main technique for conducting attacks in the real world, despite being obvious, semi-permanent and unmodifiable once deployed.

In this paper, we propose Short-Lived Adversarial Perturbations (SLAP), a novel technique that allows adversaries to realize physically robust real-world AE by using a projector. Attackers can project specifically crafted adversarial perturbations onto real-world objects, transforming them into AE. This grants adversaries greater control over the attack compared to adversarial patches, as projections can be turned on and off as needed and leave no obvious trace of an attack.

We study the feasibility of SLAP in the self-driving scenario, targeting both object detector and traffic sign recognition tasks, focusing on the detection of stop signs. We conduct experiments in a variety of ambient light conditions, including outdoors, showing that in non-bright settings the proposed method generates AE that are extremely robust, causing misclassifications on state-of-the-art neural networks with up to 99% success rate. Our experiments show that SLAP-generated AE do not present the detectable behaviours seen in adversarial patches and therefore bypass SentiNet, a physical AE detection method. We evaluate other defences, including an adaptive defender using adversarial learning, which can reduce the attack's effectiveness by up to 80% even in conditions favourable to the attacker.

Adversarial Policy Training against Deep Reinforcement Learning

Xian Wu, Wenbo Guo, Hua Wei, and Xinyu Xing, The Pennsylvania State University

Available Media

Reinforcement learning is a family of goal-oriented learning algorithms through which an agent learns to behave in an environment by performing actions and observing the rewards it receives for them. Integrated with deep neural networks, it becomes deep reinforcement learning, a new paradigm of learning methods. Recently, deep reinforcement learning has demonstrated great potential in applications such as playing video games, mastering the game of Go, and even autonomous piloting. However, alongside these successes come adversarial attacks, in which an adversary can force a well-trained agent to behave abnormally by tampering with the input to the agent's policy network or by training an adversarial agent to exploit the weaknesses of the victim.

In this work, we show that existing adversarial attacks against reinforcement learning either work in an impractical setting or perform less effectively when launched in a two-agent zero-sum game. Motivated by this, we propose a new method to train adversarial agents. Technically, our approach extends the Proximal Policy Optimization (PPO) algorithm and utilizes an explainable AI technique to guide the attacker in training an adversarial agent. Compared with the adversarial agent trained by the state-of-the-art technique, our adversarial agent exhibits a much stronger capability to exploit the weaknesses of victim agents. Moreover, we demonstrate that our adversarial attack introduces less variation into the training process and is less sensitive to the selection of initial states.

DRMI: A Dataset Reduction Technology based on Mutual Information for Black-box Attacks

Yingzhe He, Guozhu Meng, Kai Chen, Xingbo Hu, and Jinwen He, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences/School of Cyber Security, University of Chinese Academy of Sciences

Available Media

It is non-trivial to attack deep neural networks in black-box settings where no model details are disclosed. Prior studies on black-box attacks issue a number of queries to the target model to probe it or to generate adversarial examples. Queries are usually limited and costly, so the adversary may fail to mount an effective attack. However, not all of these queries have to be made, since repetitions and redundancies induce many inefficient queries, leaving substantial room for data reduction and more efficient querying. To this end, we first propose to use mutual information to measure the redundancy between two data samples, and then develop a data reduction technique based on mutual information, termed DRMI. We implement an efficient optimization algorithm in DRMI to obtain a subset of data samples whose pairwise mutual information is minimized. We conduct extensive experiments on MNIST, CIFAR10, and ImageNet and six types of deep neural networks, evaluating DRMI in model extraction and adversarial attacks. The results demonstrate its high effectiveness in these attacks: it surpasses a state-of-the-art approach by raising model accuracy by 7% and achieving twice the transferability of adversarial examples. Through comparison experiments with three other strategies, we identify which properties of the data are preserved and which are removed, to some extent revealing the essence of deep neural networks.
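
A minimal sketch of the MI-based reduction idea (our own greedy simplification, not the paper's optimization algorithm): grow a subset by always adding the candidate whose largest mutual information with the already-selected samples is smallest, using a joint-histogram MI estimate over flattened samples (assumed to be numpy arrays of identical shape).

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def pairwise_mi(a, b, bins=16):
        # Histogram-based mutual information estimate between two samples.
        ha = np.digitize(a.ravel(), np.histogram_bin_edges(a, bins))
        hb = np.digitize(b.ravel(), np.histogram_bin_edges(b, bins))
        return mutual_info_score(ha, hb)

    def drmi_style_reduce(samples, k):
        # Greedily select k samples with low mutual redundancy.
        chosen = [0]
        while len(chosen) < k:
            best, best_score = None, np.inf
            for i in range(len(samples)):
                if i in chosen:
                    continue
                score = max(pairwise_mi(samples[i], samples[j]) for j in chosen)
                if score < best_score:
                    best, best_score = i, score
            chosen.append(best)
        return chosen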

Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA

Adnan Siraj Rakin, Arizona State University; Yukui Luo and Xiaolin Xu, Northeastern University; Deliang Fan, Arizona State University

Available Media

The wide deployment of Deep Neural Networks (DNNs) on high-performance cloud computing platforms has brought to light multi-tenant cloud field-programmable gate arrays (FPGAs) as a popular accelerator choice for boosting performance, owing to their hardware reprogramming flexibility. Such a multi-tenant FPGA setup for DNN acceleration potentially exposes DNN inference tasks to severe threats from malicious users. This work is, to the best of our knowledge, the first to explore DNN model vulnerabilities in multi-tenant FPGAs. We propose a novel adversarial attack framework, Deep-Dup, in which an adversarial tenant injects adversarial faults into the DNN model of a victim tenant on the same FPGA. Specifically, the adversary can aggressively overload the FPGA's shared power distribution system with malicious power-plundering circuits, achieving an adversarial weight duplication (AWD) hardware attack that duplicates certain DNN weight packages during data transmission between off-chip memory and the on-chip buffer, thereby hijacking the victim tenant's DNN function. Further, to identify the most vulnerable DNN weight packages for a given malicious objective, we propose a generic vulnerable-weight-package search algorithm, Progressive Differential Evolution Search (P-DES), which is, for the first time, adaptive to both white-box and black-box deep learning attack models. We experimentally validate Deep-Dup on a multi-tenant FPGA prototype we developed, for two popular deep learning applications, object detection and image classification. Successful attacks are demonstrated on six popular DNN architectures (e.g., YOLOv2, ResNet-50, and MobileNet) across three datasets (COCO, CIFAR-10, and ImageNet).

Entangled Watermarks as a Defense against Model Extraction

Hengrui Jia and Christopher A. Choquette-Choo, University of Toronto and Vector Institute; Varun Chandrasekaran, University of Wisconsin-Madison; Nicolas Papernot, University of Toronto and Vector Institute

Available Media

Machine learning involves expensive data collection and training procedures. Model owners may be concerned that valuable intellectual property can be leaked if adversaries mount model extraction attacks. As it is difficult to defend against model extraction without sacrificing significant prediction accuracy, watermarking instead leverages unused model capacity to have the model overfit to outlier input-output pairs. Such pairs are watermarks, which are not sampled from the task distribution and are only known to the defender. The defender then demonstrates knowledge of the input-output pairs to claim ownership of the model at inference. The effectiveness of watermarks remains limited because they are distinct from the task distribution and can thus be easily removed through compression or other forms of knowledge transfer.

We introduce Entangled Watermarking Embeddings (EWE). Our approach encourages the model to learn features for classifying data that is sampled from the task distribution and data that encodes watermarks. An adversary attempting to remove watermarks that are entangled with legitimate data is thus forced to sacrifice performance on legitimate data as well. Experiments on MNIST, Fashion-MNIST, CIFAR-10, and Speech Commands validate that the defender can claim model ownership with 95% confidence using fewer than 100 queries to the stolen copy, at a modest average cost of less than 0.81 percentage points in the defended model's performance.
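
To illustrate the claim-of-ownership step (a simplified sketch, not the authors' code), the defender can query the suspect model on held-out watermarked inputs and run a binomial test: if watermark-label agreement is far above chance, the model very likely derives from the watermarked one. query_model and the 10-class default are hypothetical.

    from scipy.stats import binomtest

    def claim_ownership(query_model, watermarks, wm_labels,
                        num_classes=10, confidence=0.95):
        """Return True if watermark accuracy is significantly above chance."""
        hits = sum(query_model(x) == y for x, y in zip(watermarks, wm_labels))
        # Null hypothesis: the suspect model agrees with watermark labels
        # only at the chance rate of 1/num_classes.
        result = binomtest(hits, n=len(watermarks), p=1.0 / num_classes,
                           alternative="greater")
        return result.pvalue < (1 - confidence)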

Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps

Zhichuang Sun, Ruimin Sun, Long Lu, and Alan Mislove, Northeastern University

Available Media

On-device machine learning (ML) is quickly gaining popularity among mobile apps. It allows offline model inference while preserving user privacy. However, ML models, considered core intellectual property of model owners, are now stored on billions of untrusted devices and subject to theft. Leaked models can cause both severe financial loss and security consequences. This paper presents the first empirical study of ML model protection on mobile devices. Our study aims to answer three open questions with quantitative evidence: How widely is model protection used in apps? How robust are existing model protection techniques? What impact can (stolen) models incur? To that end, we built a simple app analysis pipeline and analyzed 46,753 popular apps collected from US and Chinese app markets. We identified 1,468 ML apps spanning all popular app categories. We found that, alarmingly, 41% of ML apps do not protect their models at all; these models can be trivially stolen from the app packages. Even for apps that use model protection or encryption, we were able to extract the models from 66% of them via unsophisticated dynamic analysis techniques. The extracted models are mostly commercial products used for face recognition, liveness detection, ID/bank card recognition, and malware detection. We quantitatively estimated the potential financial and security impact of a leaked model, which can amount to millions of dollars for different stakeholders. Our study reveals that on-device models are currently at high risk of being leaked and that attackers are highly motivated to steal them. Drawing on our large-scale study, we report our insights into this emerging security problem and discuss the technical challenges, hoping to inspire future research on robust and practical model protection for mobile devices.
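
As a toy version of the first analysis step (a heuristic sketch of our own, not the authors' pipeline), one can scan an app package for files with well-known on-device model extensions; a model stored in plain form inside the archive corresponds to the "no protection at all" case. The extension list is illustrative.

    import zipfile

    MODEL_EXTENSIONS = (".tflite", ".pb", ".onnx", ".caffemodel", ".mnn")

    def find_model_files(apk_path):
        """List model-like files shipped inside an APK (a ZIP archive)."""
        findings = []
        with zipfile.ZipFile(apk_path) as apk:
            for name in apk.namelist():
                if name.lower().endswith(MODEL_EXTENSIONS):
                    # A readable header suggests the file is not encrypted;
                    # this crude check stands in for real format parsing.
                    findings.append((name, apk.read(name)[:4].hex()))
        return findings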

Hermes Attack: Steal DNN Models with Lossless Inference Accuracy

Yuankun Zhu, The University of Texas at Dallas; Yueqiang Cheng, Baidu Security; Husheng Zhou, VMware; Yantao Lu, Syracuse University

Available Media

Deep Neural Network (DNN) models have become one of the most valuable enterprise assets due to their critical roles in many applications. With the trend toward private, on-premises deployment of DNN models, model leakage is becoming increasingly severe and widespread. All existing model-extraction attacks can leak only parts of a targeted DNN model, with low accuracy or high overhead. In this paper, we first identify a new attack surface for leaking DNN models: unencrypted PCIe traffic. Based on this attack surface, we propose a novel model-extraction attack, the Hermes Attack, which is the first attack to steal an entire victim DNN model. The stolen models have the same hyper-parameters and parameters, and a semantically identical architecture, as the originals. Doing so is challenging due to the closed-source CUDA runtime, driver, and GPU internals, as well as undocumented data structures and the loss of some critical semantics in the PCIe traffic. Additionally, there are millions of PCIe packets, with substantial noise and out-of-order delivery. Our Hermes Attack addresses these issues through extensive reverse engineering and reliable semantic reconstruction, as well as careful packet selection and order correction. We implement a prototype of the Hermes Attack and evaluate two sequential DNN models (MNIST and VGG) and one non-sequential DNN model (ResNet) on three NVIDIA GPU platforms: the GeForce GT 730, GeForce GTX 1080 Ti, and GeForce RTX 2080 Ti. The evaluation results indicate that our scheme can efficiently and completely reconstruct all of them by making inferences on any single image. Evaluated on the CIFAR-10 test set of 10,000 images, the stolen models achieve the same inference accuracy as the originals (i.e., lossless inference accuracy).

Track 2

Automated Security Analysis of Source Code and Binaries

Session Chair: Qiang Zeng, University of South Carolina

ARCUS: Symbolic Root Cause Analysis of Exploits in Production Systems

Carter Yagemann, Georgia Institute of Technology; Matthew Pruett, Georgia Tech Research Institute; Simon P. Chung, Georgia Institute of Technology; Kennon Bittick, Georgia Tech Research Institute; Brendan Saltaformaggio and Wenke Lee, Georgia Institute of Technology

Available Media

End-host runtime monitors (e.g., CFI, system call IDS) flag processes in response to symptoms of a possible attack. Unfortunately, the symptom (e.g., invalid control transfer) may occur long after the root cause (e.g., buffer overflow), creating a gap whereby bug reports received by developers contain (at best) a snapshot of the process long after it executed the buggy instructions. To help system administrators provide developers with more concise reports, we propose ARCUS, an automated framework that performs root cause analysis over the execution flagged by the end-host monitor. ARCUS works by testing “what if” questions to detect vulnerable states, systematically localizing bugs to their concise root cause while finding additional enforceable checks at the program binary level to demonstrably block them. Using hardware-supported processor tracing, ARCUS decouples the cost of analysis from host performance.

We have implemented ARCUS and evaluated it on 31 vulnerabilities across 20 programs along with over 9,000 test cases from the RIPE and Juliet suites. ARCUS identifies the root cause of all tested exploits — with 0 false positives or negatives — and even finds 4 new 0-day vulnerabilities in traces averaging 4,000,000 basic blocks. ARCUS handles programs compiled from upwards of 810,000 lines of C/C++ code without needing concrete inputs or re-execution.

Automatic Firmware Emulation through Invalidity-guided Knowledge Inference

Wei Zhou, National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; Le Guan, Department of Computer Science, University of Georgia; Peng Liu, College of Information Sciences and Technology, The Pennsylvania State University; Yuqing Zhang, National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; School of Cyber Engineering, Xidian University; School of Computer Science and Cyberspace Security, Hainan University

Available Media

Emulating firmware for microcontrollers is challenging due to the tight coupling between the hardware and firmware. This has greatly impeded the application of dynamic analysis tools to firmware analysis. The state-of-the-art work automatically models unknown peripherals by observing their access patterns and then leverages heuristics to calculate appropriate responses when unknown peripheral registers are accessed. However, we empirically found that this approach and the corresponding heuristics are frequently insufficient to emulate firmware. In this work, we propose a new approach called µEmu to emulate firmware with unknown peripherals. Unlike existing work that attempts to build a general model for each peripheral, our approach learns how to correctly emulate firmware execution at individual peripheral access points. It takes the firmware image as input and symbolically executes it, representing unknown peripheral registers as symbols. During symbolic execution, it infers the rules for responding to unknown peripheral accesses. These rules are stored in a knowledge base, which is consulted during dynamic firmware analysis. µEmu achieved a passing rate of 93% on a set of unit tests for peripheral drivers without any manual assistance. We also evaluated µEmu with real-world firmware samples and discovered new bugs.
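
A highly simplified sketch of the knowledge-base idea (ours, not µEmu's S2E-based implementation): responses for each peripheral access point are inferred once and cached, and an MMIO read hook replays the cached rule during dynamic analysis.

    class PeripheralKnowledgeBase:
        """Maps (register address, access context) to a concrete response."""

        def __init__(self):
            self.rules = {}

        def learn(self, reg_addr, context, response):
            # In µEmu, valid responses are inferred by symbolic execution;
            # here we simply record a value that lets execution proceed.
            self.rules[(reg_addr, context)] = response

        def mmio_read(self, reg_addr, context, default=0):
            # Emulator hook: answer unknown peripheral reads from the KB.
            return self.rules.get((reg_addr, context), default)

    kb = PeripheralKnowledgeBase()
    kb.learn(reg_addr=0x40000010, context="uart_status_poll", response=0x1)
    assert kb.mmio_read(0x40000010, "uart_status_poll") == 0x1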

Finding Bugs Using Your Own Code: Detecting Functionally-similar yet Inconsistent Code

Mansour Ahmadi, Reza Mirzazade Farkhani, Ryan Williams, and Long Lu, Northeastern University

Available Media

Probabilistic classification has shown success in detecting known types of software bugs. However, works following this approach tend to require a large number of specimens to train their models. We present a new machine learning-based bug detection technique that does not require any external code or samples for training. Instead, our technique learns from the very codebase on which bug detection is performed and therefore obviates the cumbersome task of gathering and cleansing training samples (e.g., buggy code of certain kinds). The key idea behind our technique is a novel two-step clustering process applied to a given codebase. This clustering process identifies code snippets in a project that are functionally similar yet appear in inconsistent forms. Such inconsistencies are found to cause a wide range of bugs, from missing checks to unsafe type conversions. Unlike previous works, our technique is generic and not specific to one type of inconsistency or bug. We prototyped our technique and evaluated it on 5 popular open-source programs, including QEMU and OpenSSL. With a minimal amount of manual analysis of the inconsistencies detected by our tool, we discovered 22 new unique bugs, despite the fact that many of these programs are constantly undergoing bug scans and new bugs in them are believed to be rare.
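
A toy rendering of the two-step idea (ours; the actual technique uses learned representations of program-analysis features): step one groups snippets that look functionally similar, here crudely approximated by the set of functions they call; step two flags minority forms within each functional group as inconsistency candidates.

    from collections import Counter

    def find_inconsistencies(snippets):
        """snippets: list of (name, called_functions, normalized_form)."""
        groups = {}
        for name, calls, form in snippets:
            # Step 1: snippets calling the same functions are treated as
            # functionally similar (a crude proxy for real clustering).
            groups.setdefault(frozenset(calls), []).append((name, form))
        candidates = []
        for members in groups.values():
            forms = Counter(form for _, form in members)
            if len(forms) > 1:
                majority, _ = forms.most_common(1)[0]
                # Step 2: minority forms deviating from the majority
                # become potential bugs.
                candidates += [n for n, f in members if f != majority]
        return candidates

    print(find_inconsistencies([
        ("f1", ["lock", "read", "unlock"], "check-then-read"),
        ("f2", ["lock", "read", "unlock"], "check-then-read"),
        ("f3", ["lock", "read", "unlock"], "read-no-check"),  # flagged
    ]))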

Understanding and Detecting Disordered Error Handling with Precise Function Pairing

Qiushi Wu, Aditya Pakki, Navid Emamdoost, Stephen McCamant, and Kangjie Lu, University of Minnesota

Available Media

Software programs frequently encounter various errors such as allocation failures. Error handling aims to deal with these errors gracefully to avoid security and reliability issues; it is thus prevalent and vital. However, because of its complexity and corner cases, error handling itself is often erroneous, and prior research has primarily focused on finding bugs in the handling part, such as incorrect error-code returns or missing error propagation.

In this paper, we propose and investigate a class of bugs in error-handling code from a different perspective. In particular, we find that programs often perform "cleanup" operations before the actual error handling, such as freeing memory or decreasing refcount. Critical bugs occur when these operations are performed (1) in an incorrect order, (2) redundantly, or (3) inadequately. We refer to such bugs as Disordered Error Handling (DiEH). Our investigation reveals that DiEH bugs are not only common but can also cause security problems such as privilege escalation, memory corruption, and denial-of-service. Based on the findings from the investigation, we then develop a system, HERO (Handling ERrors Orderly), to automatically detect DiEH. The core of HERO is a novel technique that precisely pairs both common and custom functions based on the unique error-handling structures, which allows us to infer expected cleanup functions. With HERO, we found 239 DiEH bugs in the Linux kernel, the FreeBSD kernel, and OpenSSL, which can cause security and reliability issues. The evaluation results show that DiEH is critical and widely exists in system software, and HERO is effective in detecting DiEH. We also believe that the precise function pairing is of independent interest in other research areas such as temporal-rule inference and race detection.
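
To make the bug class concrete, here is a toy checker (our illustration, far simpler than HERO's static analysis): given the acquisition order of resources and the cleanup calls on an error path, it flags cleanups that are disordered, redundant, or missing, mirroring the three DiEH categories.

    def check_error_path(acquired, cleanups, pairing):
        """acquired: resources in acquisition order, e.g. ["lock", "buf"]
        cleanups: cleanup calls on the error path, e.g. ["unlock", "kfree"]
        pairing: maps each resource to its expected cleanup function
        """
        expected = [pairing[r] for r in reversed(acquired)]  # reverse order
        issues = []
        if sorted(cleanups) == sorted(expected) and cleanups != expected:
            issues.append(f"disordered cleanup: {cleanups}, expected {expected}")
        for c in set(cleanups):
            if cleanups.count(c) > expected.count(c):
                issues.append(f"redundant cleanup: {c}")
        for e in set(expected):
            if cleanups.count(e) < expected.count(e):
                issues.append(f"inadequate cleanup: missing {e}")
        return issues

    # Cleanups run in acquisition order instead of the reverse order:
    print(check_error_path(["lock", "buf"], ["unlock", "kfree"],
                           {"lock": "unlock", "buf": "kfree"}))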

Precise and Scalable Detection of Use-after-Compacting-Garbage-Collection Bugs

HyungSeok Han, Andrew Wesie, and Brian Pak, Theori Inc.

Available Media

Compacting garbage collection (compact-gc) is a method that improves memory utilization and reduces memory fragmentation by rearranging live objects and updating their references using an address table. A critical use-after-free bug may exist if an object reference that is not registered in the address table is used after compact-gc: the live object may have been moved, but the reference will not have been updated. We refer to this as a use-after-compact-gc (use-after-cgc) bug. Prior tools have attempted to statically detect these bugs with target-specific heuristics. However, due to their path-insensitive analysis and imprecise target-specific heuristics, they have high false-positive and false-negative rates.

In this paper, we present a precise and scalable static analyzer, named CGSan, for finding use-after-cgc bugs. CGSan detects use-after-cgc bug candidates by intra-procedural static symbolic taint analysis and checks their feasibility by under-constrained directed symbolic execution. To mitigate the incompleteness of intra-procedural analysis, we employ a type-based taint policy. For scalability, we propose using directed inter-procedural control-flow graphs, which reduce search spaces by excluding paths irrelevant to checking feasibility, and directed scheduling, which prioritizes paths to quickly check feasibility. We evaluated CGSan on Google V8 and Mozilla SpiderMonkey, and we found 13 unique use-after-cgc bugs with only 2 false-positives while two prior tools missed 10 bugs and had 34 false-positives in total.

Reducing Test Cases with Attention Mechanism of Neural Networks

Xing Zhang, Jiongyi Chen, Chao Feng, Ruilin Li, Yunfei Su, Bin Zhang, Jing Lei, and Chaojing Tang, National University of Defense Technology

Available Media

As fuzzing techniques become more effective at triggering program crashes, how to triage crashes with less human effort has become increasingly imperative. To this end, test case reduction, which reduces a crashing input to its minimal form, plays an important role, especially when analyzing programs with random, complex, or large inputs. However, existing solutions rely on random algorithms or pre-defined rules, which are inaccurate and error-prone in many cases because of implementation variance in program internals.

In this paper, we present SCREAM, a new approach that leverages neural networks to reduce test cases. In particular, by feeding the network a program's crashing and non-crashing inputs, the network learns to approximate the computation from the program entry point to the crash point and implicitly encodes which input bytes are significant to the crash. Since the trained network's parameters are not directly interpretable, we leverage the attention mechanism to explain the network, namely to extract the significance of each input byte to the crash. Finally, the significant input bytes are re-assembled as the failure-inducing input.

The main challenges of our approach lie in designing a proper dataset augmentation algorithm and a suitable network structure. To this end, we develop a unique dataset augmentation technique that can generate adequate and highly differentiable samples and expand the search space of crashing inputs. Highlights of our research also include a novel network structure that can capture dependencies between input blocks in long sequences.

We evaluated SCREAM on 41 representative programs. The results show that SCREAM outperforms state-of-the-art solutions in both accuracy and efficiency. This improvement is made possible by the network's ability to summarize the significance of input bytes across multiple rounds of mutation, which tolerates the perturbations that occur in the random reduction of a single crashing input.

FlowDist: Multi-Staged Refinement-Based Dynamic Information Flow Analysis for Distributed Software Systems

Xiaoqin Fu and Haipeng Cai, Washington State University, Pullman, WA

Available Media

Dynamic information flow analysis (DIFA) supports various security applications such as malware analysis and vulnerability discovery. Yet traditional DIFA approaches have limited utility for distributed software due to applicability, portability, and scalability barriers. We present FlowDist, a DIFA for common distributed software that overcomes these challenges. FlowDist works purely at the application level to avoid platform customizations, hence achieving high portability. It infers implicit, interprocess dependencies from a global partial ordering of execution events to address applicability to distributed software. Most importantly, it introduces a multi-staged refinement-based scheme for application-level DIFA, in which an otherwise expensive data flow analysis is reduced using method-level results from a cheap pre-analysis, achieving high scalability while remaining effective. Our evaluation of FlowDist on 12 real-world distributed systems against two peer tools revealed its superior effectiveness with practical efficiency and scalability. It has found 18 known and 24 new vulnerabilities, with 17 confirmed and 2 fixed. We also present and evaluate two alternative designs of FlowDist to justify our design and accommodate diverse subjects.

Track 3

Secure Multiparty Computation

Session Chair: Mayank Varia, Boston University

Privacy and Integrity Preserving Computations with CRISP

Sylvain Chatel, Apostolos Pyrgelis, Juan Ramón Troncoso-Pastoriza, and Jean-Pierre Hubaux, EPFL

Available Media

In the digital era, users share their personal data with service providers to obtain some utility, e.g., access to high-quality services. Yet, the induced information flows raise privacy and integrity concerns. Consequently, cautious users may want to protect their privacy by minimizing the amount of information they disclose to curious service providers. Service providers are interested in verifying the integrity of the users' data to improve their services and obtain useful knowledge for their business. In this work, we present a generic solution to the trade-off between privacy, integrity, and utility, by achieving authenticity verification of data that has been encrypted for offloading to service providers. Based on lattice-based homomorphic encryption and commitments, as well as zero-knowledge proofs, our construction enables a service provider to process and reuse third-party signed data in a privacy-friendly manner with integrity guarantees. We evaluate our solution on different use cases such as smart-metering, disease susceptibility, and location-based activity tracking, thus showing its versatility. Our solution achieves broad generality, quantum-resistance, and relaxes some assumptions of state-of-the-art solutions without affecting performance.

Senate: A Maliciously-Secure MPC Platform for Collaborative Analytics

Rishabh Poddar and Sukrit Kalra, UC Berkeley; Avishay Yanai, VMware Research; Ryan Deng, Raluca Ada Popa, and Joseph M. Hellerstein, UC Berkeley

Available Media

Many organizations stand to benefit from pooling their data together in order to draw mutually beneficial insights—e.g., for fraud detection across banks, better medical studies across hospitals, etc. However, such organizations are often prevented from sharing their data with each other by privacy concerns, regulatory hurdles, or business competition.

We present Senate, a system that allows multiple parties to collaboratively run analytical SQL queries without revealing their individual data to each other. Unlike prior works on secure multi-party computation (MPC) that assume all parties are semi-honest, Senate protects the data even in the presence of malicious adversaries. At the heart of Senate lies a new MPC decomposition protocol that decomposes the cryptographic MPC computation into smaller units, some of which can be executed by subsets of parties and in parallel, while preserving its security guarantees. Senate then provides a new query planning algorithm that decomposes and plans the cryptographic computation effectively, achieving performance up to 145× faster than the state of the art.

GForce: GPU-Friendly Oblivious and Rapid Neural Network Inference

Lucien K. L. Ng and Sherman S. M. Chow, The Chinese University of Hong Kong, Hong Kong

Available Media

Neural-network classification is becoming more pervasive. It captures data about the subjects to be classified, e.g., appearance for facial recognition, which is personal and often sensitive. Oblivious inference protects the data privacy of both the query and the model. However, it is not as fast or as accurate as its plaintext counterpart. A recent cryptographic solution, Delphi (USENIX Security 2020), strives for low latency by using the GPU on linear layers and replacing some non-linear units in the model at a price in accuracy. It can handle a query on CIFAR-100 with ~68% accuracy in 14s or ~66% accuracy in 2.6s.

We propose GForce, tackling the latency issue from the root causes instead of approximating non-linear computations. With the SWALP training approach (ICML 2019), we propose stochastic rounding and truncation (SRT) layers, which fuse quantization with dequantization between non-linear and linear layers and free us from floating-point operations for efficiency. They also ensure high accuracy while working over the severely-finite cryptographic field. We further propose a suite of GPU-friendly secure online/offline protocols for common operations, including comparison and wrap-around handling, which benefit non-linear layers, including our SRT.

With our two innovations, GForce supports VGG16, attaining ~73% accuracy over CIFAR-100 for the first time, in 0.4s. Compared with the prior best non-approximated solution (USENIX Security 2018), GForce speeds up non-linear layers in VGG by more than 34×. Our techniques shed light on a new direction that utilizes the GPU throughout the model to minimize latency.
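
A minimal numpy sketch of stochastic rounding, the core of the proposed SRT layers (our simplification; GForce fuses this with quantization and dequantization and runs it under cryptographic protocols): each value is rounded up with probability equal to its fractional part, so the rounding is unbiased in expectation.

    import numpy as np

    def stochastic_round(x, rng=None):
        """Round each entry up or down at random; unbiased in expectation."""
        rng = rng or np.random.default_rng()
        floor = np.floor(x)
        frac = x - floor                      # fractional part in [0, 1)
        return floor + (rng.random(x.shape) < frac)

    def srt_layer(x, scale=2 ** 8):
        # Map activations onto an integer grid, as cryptographic protocols
        # require, while keeping the expected output equal to the input.
        return stochastic_round(x * scale) / scale

    print(srt_layer(np.array([0.1234, -1.75, 3.5])))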

ABY2.0: Improved Mixed-Protocol Secure Two-Party Computation

Arpita Patra, Indian Institute of Science; Thomas Schneider, TU Darmstadt; Ajith Suresh, Indian Institute of Science; Hossein Yalame, TU Darmstadt

Available Media

Secure Multi-party Computation (MPC) allows a set of mutually distrusting parties to jointly evaluate a function on their private inputs while maintaining input privacy. In this work, we improve semi-honest secure two-party computation (2PC) over rings, with a focus on the efficiency of the online phase.

We propose an efficient mixed-protocol framework, outperforming the state-of-the-art 2PC framework ABY. Moreover, we extend our techniques to multi-input multiplication gates without inflating the online communication, which remains independent of the fan-in. Along the way, we construct efficient protocols for several primitives such as scalar product, matrix multiplication, comparison, maxpool, and equality testing. The online communication of our scalar product is two ring elements irrespective of the vector dimension, a feature achieved for the first time in the 2PC literature.

The practicality of our new set of protocols is showcased with four applications: i) AES S-box, ii) Circuit-based Private Set Intersection, iii) Biometric Matching, and iv) Privacy-preserving Machine Learning (PPML). Most notably, for PPML, we implement and benchmark training and inference of Logistic Regression and Neural Networks over LAN and WAN networks. For training, we improve online runtime (both for LAN and WAN) over SecureML (Mohassel et al., IEEE S&P '17) in the range 1.5×–6.1×, while for inference, the improvements are in the range of 2.5×–754.3×.
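
For readers unfamiliar with the setting, the following sketch shows classic semi-honest 2PC multiplication over a ring with additive shares and Beaver triples; it is the textbook baseline, not ABY2.0's improved sharing, whose contribution is precisely to cut the online communication down (e.g., to two ring elements per scalar product).

    import secrets

    RING = 2 ** 64  # two-party computation over the ring Z_{2^64}

    def share(x):
        r = secrets.randbelow(RING)
        return [r, (x - r) % RING]          # one share per party

    def open_(sh):
        return (sh[0] + sh[1]) % RING       # parties exchange shares

    def make_triple():
        a, b = secrets.randbelow(RING), secrets.randbelow(RING)
        return share(a), share(b), share((a * b) % RING)

    def beaver_mul(x_sh, y_sh, triple):
        # One secure multiplication using a preprocessed triple (a, b, ab).
        a_sh, b_sh, c_sh = triple
        d = open_([(x_sh[i] - a_sh[i]) % RING for i in range(2)])  # x - a
        e = open_([(y_sh[i] - b_sh[i]) % RING for i in range(2)])  # y - b
        return [(c_sh[i] + d * b_sh[i] + e * a_sh[i]
                 + (d * e if i == 0 else 0)) % RING for i in range(2)]

    z_sh = beaver_mul(share(6), share(7), make_triple())
    assert open_(z_sh) == 42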

Fantastic Four: Honest-Majority Four-Party Secure Computation With Malicious Security

Anders Dalskov, Aarhus University & Partisia; Daniel Escudero, Aarhus University; Marcel Keller, CSIRO's Data61

Available Media

This work introduces a novel four-party honest-majority MPC protocol with active security that achieves comparable efficiency to equivalent protocols in the same setting, while having a much simpler design and not relying on function-dependent preprocessing. Our initial protocol satisfies security with abort, but we present some extensions to achieve guaranteed output delivery. Unlike previous works, we do not achieve this by delegating the computation to one single party that is identified to be honest, which is likely to hinder the adoption of these technologies as it centralizes sensitive data. Instead, our novel approach guarantees termination of the protocol while ensuring that no single party (honest or corrupt) learns anything beyond the output.

We implement our four-party protocol with abort in the MP-SPDZ framework for multi-party computation and benchmark multiple applications like MNIST classification training and ImageNet inference. Our results show that our four-party protocol performs similarly to an efficient honest-majority three-party protocol that only provides semi-honest/passive security, which suggests that adding a fourth party can be an effective method to achieve active security without harming performance.

Muse: Secure Inference Resilient to Malicious Clients

Ryan Lehmkuhl and Pratyush Mishra, UC Berkeley; Akshayaram Srinivasan, Tata Institute of Fundamental Research; Raluca Ada Popa, UC Berkeley

Available Media

The increasing adoption of machine learning inference in applications has led to a corresponding increase in concerns about the privacy guarantees offered by existing mechanisms for inference. Such concerns have motivated the construction of efficient secure inference protocols that allow parties to perform inference without revealing their sensitive information. Recently, there has been a proliferation of such proposals, rapidly improving efficiency. However, most of these protocols assume that the client is semi-honest, that is, the client does not deviate from the protocol; yet in practice, clients are many, have varying incentives, and can behave arbitrarily.

To demonstrate that a malicious client can completely break the security of semi-honest protocols, we first develop a new model-extraction attack against many state-of-the-art secure inference protocols. Our attack enables a malicious client to learn model weights with 22×–312× fewer queries than the best black-box model-extraction attack and scales to much deeper networks.

Motivated by the severity of our attack, we design and implement MUSE, an efficient two-party secure inference protocol resilient to malicious clients. MUSE introduces a novel cryptographic protocol for conditional disclosure of secrets to switch between authenticated additive secret shares and garbled circuit labels, and an improved Beaver's triple generation procedure which is 8×–12.5× faster than existing techniques.

These protocols allow MUSE to push a majority of its cryptographic overhead into a preprocessing phase: compared to the equivalent semi-honest protocol (which is close to state-of-the-art), MUSE's online phase is only 1.7×–2.2× slower and uses 1.4× more communication. Overall, MUSE is 13.4×–21× faster and uses 2×–3.6× less communication than existing secure inference protocols which defend against malicious clients.

ObliCheck: Efficient Verification of Oblivious Algorithms with Unobservable State

Jeongseok Son, Griffin Prechter, Rishabh Poddar, Raluca Ada Popa, and Koushik Sen, University of California, Berkeley

Available Media

Encryption of secret data prevents an adversary from learning sensitive information by observing the transferred data. Even though the data itself is encrypted, however, an attacker can watch which locations of the memory, disk, and network are accessed and infer a significant amount of secret information.

To defend against attacks based on this access pattern leakage, a number of oblivious algorithms have been devised. These algorithms transform the access pattern in a way that the access sequences are independent of the secret input data. Since oblivious algorithms tend to be slow, a go-to optimization for algorithm designers is to leverage space unobservable to the attacker. However, one can easily miss a subtle detail and violate the oblivious property in the process of doing so.

In this paper, we propose ObliCheck, a checker that verifies whether a given algorithm is indeed oblivious. In contrast to existing checkers, ObliCheck distinguishes between the observable and unobservable state of an algorithm. It employs symbolic execution to check whether all execution paths exhibit the same observable behavior. To achieve accuracy and efficiency, ObliCheck introduces two key techniques: Optimistic State Merging to quickly check whether the algorithm is oblivious, and Iterative State Unmerging to iteratively refine its judgment if the algorithm is reported as not oblivious. ObliCheck achieves a 50300× performance improvement over conventional symbolic execution without sacrificing accuracy.
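
The property being checked is easy to state dynamically (a concrete-input toy of ours; ObliCheck instead covers all paths symbolically): record the observable access sequence for different same-length secret inputs and verify that the traces coincide.

    def traced(algorithm, data):
        """Run `algorithm`, recording which indices of `data` it touches."""
        trace = []

        class Observed(list):
            def __getitem__(self, i):
                trace.append(("read", i))
                return list.__getitem__(self, i)

            def __setitem__(self, i, v):
                trace.append(("write", i))
                list.__setitem__(self, i, v)

        algorithm(Observed(data))
        return trace

    def looks_oblivious(algorithm, inputs):
        # Observable traces must match across same-length secret inputs.
        traces = [traced(algorithm, d) for d in inputs]
        return all(t == traces[0] for t in traces[1:])

    def linear_scan_max(a):
        # Oblivious: reads every index exactly once, in a fixed order.
        best = a[0]
        for i in range(1, len(a)):
            v = a[i]
            best = max(best, v)

    print(looks_oblivious(linear_scan_max, [[3, 1, 2], [9, 9, 9]]))  # True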

10:45 am–11:15 am

Break

11:15 am–1:00 pm

Track 1

Adversarial Machine Learning: Defenses

Session Chair: Bimal Viswanath, Virginia Tech

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking

Chong Xiang, Princeton University; Arjun Nitin Bhagoji, University of Chicago; Vikash Sehwag and Prateek Mittal, Princeton University

Available Media

Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image. Such attacks can be realized in the physical world by attaching the adversarial patch to the object to be misclassified, and defending against them remains an open problem. In this paper, we propose a general defense framework called PatchGuard that achieves high provable robustness against localized adversarial patches while maintaining high clean accuracy. The cornerstone of PatchGuard is the use of CNNs with small receptive fields to bound the number of features corrupted by an adversarial patch. Given a bounded number of corrupted features, the problem of designing an adversarial patch defense reduces to that of designing a secure feature aggregation mechanism. Toward this end, we present our robust masking defense, which robustly detects and masks corrupted features to recover the correct prediction. Notably, we can prove the robustness of our defense against any adversary within our threat model. Our extensive evaluation on the ImageNet, ImageNette (a 10-class subset of ImageNet), and CIFAR-10 datasets demonstrates that our defense achieves state-of-the-art performance in terms of both provable robust accuracy and clean accuracy.
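
A much-simplified one-dimensional sketch of the robust masking idea (ours; PatchGuard operates on two-dimensional CNN feature maps): given per-location class evidence and a bound w on how many contiguous locations a patch can corrupt, mask the highest-evidence window for each class before aggregating.

    import numpy as np

    def robust_masked_scores(evidence, w):
        """evidence: (num_locations, num_classes) local class evidence.
        w: maximum number of contiguous locations a patch can corrupt.
        """
        n, c = evidence.shape
        scores = np.empty(c)
        for k in range(c):
            col = evidence[:, k]
            window_sums = np.convolve(col, np.ones(w), mode="valid")
            j = int(window_sums.argmax())                # most suspicious window
            scores[k] = col.sum() - col[j:j + w].sum()   # mask it out
        return scores  # argmax over classes gives the robust prediction

    rng = np.random.default_rng(0)
    print(robust_masked_scores(rng.random((16, 10)), w=3).argmax())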

T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification

Ahmadreza Azizi and Ibrahim Asadullah Tahmid, Virginia Tech; Asim Waheed, LUMS Pakistan; Neal Mangaokar, University of Michigan; Jiameng Pu, Virginia Tech; Mobin Javed, LUMS Pakistan; Chandan K. Reddy and Bimal Viswanath, Virginia Tech

Available Media

Deep Neural Network (DNN) classifiers are known to be vulnerable to Trojan or backdoor attacks, where the classifier is manipulated such that it misclassifies any input containing an attacker-determined Trojan trigger. Backdoors compromise a model's integrity, thereby posing a severe threat to the landscape of DNN-based classification. While multiple defenses against such attacks exist for classifiers in the image domain, there have been limited efforts to protect classifiers in the text domain.

We present Trojan-Miner (T-Miner), a defense framework for Trojan attacks on DNN-based text classifiers. T-Miner employs a sequence-to-sequence (seq-2-seq) generative model that probes the suspicious classifier and learns to produce text sequences that are likely to contain the Trojan trigger. T-Miner then analyzes the text produced by the generative model to determine whether it contains trigger phrases and, correspondingly, whether the tested classifier has a backdoor. T-Miner requires no access to the training dataset or clean inputs of the suspicious classifier; instead, it uses synthetically crafted "nonsensical" text inputs to train the generative model. We extensively evaluate T-Miner on 1100 model instances spanning 3 ubiquitous DNN model architectures, 5 different classification tasks, and a variety of trigger phrases. We show that T-Miner detects Trojan and clean models with 98.75% overall accuracy, while achieving low false positives on clean models. We also show that T-Miner is robust against a variety of targeted, advanced attacks from an adaptive attacker.

WaveGuard: Understanding and Mitigating Audio Adversarial Examples

Shehzeen Hussain, Paarth Neekhara, Shlomo Dubnov, Julian McAuley, and Farinaz Koushanfar, University of California, San Diego

Available Media

There has been a recent surge in adversarial attacks on deep learning based automatic speech recognition (ASR) systems. These attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems in safety-critical applications. In this work, we introduce WaveGuard: a framework for detecting adversarial inputs that are crafted to attack ASR systems. Our framework incorporates audio transformation functions and analyses the ASR transcriptions of the original and transformed audio to detect adversarial inputs. We demonstrate that our defense framework is able to reliably detect adversarial examples constructed by four recent audio adversarial attacks, with a variety of audio transformation functions. With careful regard for best practices in defense evaluations, we analyze our proposed defense and its strength to withstand adaptive and robust attacks in the audio domain. We empirically demonstrate that audio transformations that recover audio from perceptually informed representations can lead to a strong defense that is robust against an adaptive adversary even in a complete white-box setting. Furthermore, WaveGuard can be used out of the box and integrated directly with any ASR model to efficiently detect audio adversarial examples, without the need for model retraining.
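
The detection logic itself is compact enough to sketch (our rendering; transcribe and transform are hypothetical stand-ins for an actual ASR system and an audio transformation such as quantization or mel-spectrogram inversion): an input is flagged when the transcriptions of the original and transformed audio disagree beyond a threshold.

    def char_error_rate(ref: str, hyp: str) -> float:
        # Levenshtein distance normalized by the reference length.
        m, n = len(ref), len(hyp)
        d = list(range(n + 1))
        for i in range(1, m + 1):
            prev, d[0] = d[0], i
            for j in range(1, n + 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                       prev + (ref[i - 1] != hyp[j - 1]))
        return d[n] / max(m, 1)

    def is_adversarial(audio, transcribe, transform, threshold=0.5):
        """Flag inputs whose transcription changes a lot when transformed."""
        original = transcribe(audio)
        recovered = transcribe(transform(audio))
        return char_error_rate(original, recovered) > threshold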

Cost-Aware Robust Tree Ensembles for Security Applications

Yizheng Chen, Shiqi Wang, Weifan Jiang, Asaf Cidon, and Suman Jana, Columbia University

Available Media

There are various costs for attackers to manipulate the features of security classifiers. The costs are asymmetric across features and across the directions of changes, which cannot be precisely captured by existing cost models based on Lp-norm robustness. In this paper, we utilize such domain knowledge to increase the attack cost of evading classifiers, specifically tree ensemble models, which are widely used in security tasks. We propose a new cost modeling method that captures the feature manipulation cost as a constraint, and then we integrate the cost-driven constraint into the node construction process to train robust tree ensembles. During the training process, we use the constraint to find data points that are likely to be perturbed given the feature manipulation cost, and we use a new robust training algorithm to optimize the quality of the trees. Our cost-aware training method can be applied to different types of tree ensembles, including gradient boosted decision trees and random forest models. Using Twitter spam detection as a case study, our evaluation shows that we can increase the attack cost by 10.6× compared to the baseline. Moreover, our robust training method using the cost-driven constraint achieves higher accuracy, a lower false positive rate, and stronger cost-aware robustness than the state-of-the-art training method using the L-norm cost model. Our code is available at https://github.com/surrealyz/growtrees.

Dompteur: Taming Audio Adversarial Examples

Thorsten Eisenhofer, Lea Schönherr, and Joel Frank, Ruhr University Bochum; Lars Speckemeier, University College London; Dorothea Kolossa and Thorsten Holz, Ruhr University Bochum

Available Media

Adversarial examples seem to be inevitable. These specifically crafted inputs allow attackers to arbitrarily manipulate machine learning systems. Even worse, they often seem harmless to human observers. In our digital society, this poses a significant threat. For example, Automatic Speech Recognition (ASR) systems, which serve as hands-free interfaces to many kinds of systems, can be attacked with inputs incomprehensible for human listeners. The research community has unsuccessfully tried several approaches to tackle this problem.

In this paper, we propose a different perspective: we accept the presence of adversarial examples against ASR systems, but we require them to be perceivable by human listeners. By applying the principles of psychoacoustics, we can remove semantically irrelevant information from the ASR input and train a model that more closely resembles human perception. We implement our idea in a tool named DOMPTEUR and demonstrate that our augmented system, in contrast to an unmodified baseline, successfully focuses on perceptible ranges of the input signal. This change forces adversarial examples into the audible range, while incurring minimal computational overhead and preserving benign performance. To evaluate our approach, we construct an adaptive attacker that actively tries to avoid our augmentations and demonstrate that adversarial examples from this attacker remain clearly perceivable. Finally, we substantiate our claims by performing a hearing test with crowd-sourced human listeners.
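
One of the simplest augmentations in this spirit can be sketched directly (a crude band-pass filter of our own; DOMPTEUR's psychoacoustic filtering is considerably more refined): frequency content outside the range that carries most speech information is removed before the audio reaches the ASR system.

    import numpy as np

    def band_pass(audio, sample_rate, lo=300.0, hi=5000.0):
        """Zero out frequency bins outside [lo, hi] Hz."""
        spectrum = np.fft.rfft(audio)
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
        spectrum[(freqs < lo) | (freqs > hi)] = 0
        return np.fft.irfft(spectrum, n=len(audio))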

CADE: Detecting and Explaining Concept Drift Samples for Security Applications

Limin Yang, University of Illinois at Urbana-Champaign; Wenbo Guo, The Pennsylvania State University; Qingying Hao, University of Illinois at Urbana-Champaign; Arridhana Ciptadi and Ali Ahmadzadeh, Blue Hexagon; Xinyu Xing, The Pennsylvania State University; Gang Wang, University of Illinois at Urbana-Champaign

Available Media

Concept drift poses a critical challenge for deploying machine learning models to solve practical security problems. Due to the dynamic behavior changes of attackers (and/or of their benign counterparts), the testing data distribution often shifts from the original training data over time, causing major failures of the deployed model.

To combat concept drift, we present a novel system, CADE, that aims to 1) detect drifting samples that deviate from existing classes, and 2) provide explanations that reason about the detected drift. Unlike traditional approaches (which require a large number of new labels to determine concept drift statistically), we aim to identify individual drifting samples as they arrive. Recognizing the challenges introduced by the high-dimensional outlier space, we propose to map data samples into a low-dimensional space and automatically learn a distance function to measure the dissimilarity between samples. Using contrastive learning, we take full advantage of existing labels in the training dataset to learn how to compare and contrast pairs of samples. To reason about the meaning of the detected drift, we develop a distance-based explanation method. We show that explaining "distance" is much more effective than traditional methods that focus on explaining a "decision boundary" in this problem context. We evaluate CADE with two case studies: Android malware classification and network intrusion detection. We further work with a security company to test CADE on its malware database. Our results show that CADE can effectively detect drifting samples and provide semantically meaningful explanations.
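
A condensed numpy sketch of the detection step (our rendering of the idea; CADE first learns the low-dimensional space with a contrastive autoencoder): embed samples, measure each new sample's distance to every class centroid, and flag it as drifting when even the closest class considers it an outlier by the MAD rule.

    import numpy as np

    def fit_class_stats(latent, labels):
        """Per class: centroid, median distance to it, and the MAD."""
        stats = {}
        for c in np.unique(labels):
            pts = latent[labels == c]
            centroid = pts.mean(axis=0)
            dists = np.linalg.norm(pts - centroid, axis=1)
            med = np.median(dists)
            mad = np.median(np.abs(dists - med)) + 1e-12
            stats[c] = (centroid, med, mad)
        return stats

    def is_drifting(z, stats, k=3.5):
        # Drifting if the sample is an outlier with respect to every class.
        scores = []
        for centroid, med, mad in stats.values():
            d = np.linalg.norm(z - centroid)
            scores.append(abs(d - med) / (1.4826 * mad))
        return min(scores) > k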

SIGL: Securing Software Installations Through Deep Graph Learning

Xueyuan Han, Harvard University; Xiao Yu, NEC Laboratories America; Thomas Pasquier, University of Bristol; Ding Li, Peking University; Junghwan Rhee, NEC Laboratories America; James Mickens, Harvard University; Margo Seltzer, University of British Columbia; Haifeng Chen, NEC Laboratories America

Available Media

Many users implicitly assume that software can only be exploited after it is installed. However, recent supply-chain attacks demonstrate that application integrity must be ensured during installation itself. We introduce SIGL, a new tool for detecting malicious behavior during software installation. SIGL collects traces of system call activity, building a data provenance graph that it analyzes using a novel autoencoder architecture with a graph long short-term memory network (graph LSTM) for the encoder and a standard multilayer perceptron for the decoder. SIGL flags suspicious installations as well as the specific installation-time processes that are likely to be malicious. Using a test corpus of 625 malicious installers containing real-world malware, we demonstrate that SIGL has a detection accuracy of 96%, outperforming similar systems from industry and academia by up to 87% in precision and recall and 45% in accuracy. We also demonstrate that SIGL can pinpoint the processes most likely to have triggered malicious behavior, works on different audit platforms and operating systems, and is robust to training data contamination and adversarial attack. It can be used with application-specific models, even in the presence of new software versions, as well as application-agnostic meta-models that encompass a wide range of applications and installers.

Track 2

Operating Systems Security

Session Chairs: Ardalan Amiri Sani, University of California, Irvine, and Dave (Jing) Tian, Purdue University

ExpRace: Exploiting Kernel Races through Raising Interrupts

Yoochan Lee, Seoul National University; Changwoo Min, Virginia Tech; Byoungyoung Lee, Seoul National University

Available Media

A kernel data race is notoriously challenging to detect, reproduce, and diagnose, mainly due to nondeterministic thread interleaving. Kernel data races have critical security implications since they often lead to memory corruption, which can be abused to launch privilege escalation attacks. Interestingly, due to the challenges above, exploiting a kernel data race is also challenging. Specifically, we find that some kernel races are nearly impossible to exploit due to their unique requirements on execution order, which are almost impossible to meet without manual intervention.

This paper develops a generic exploitation technique for kernel data races. To this end, we first analyze kernel data races and find an intrinsic condition that classifies races as easy-to-exploit or hard-to-exploit. We then develop ExpRace, a generic race exploitation technique for various kernels, including Linux, Microsoft Windows, and Mac OS X. ExpRace turns hard-to-exploit races into easy-to-exploit races by manipulating the interrupt mechanism during exploitation. In our evaluation on 10 real-world hard-to-exploit races, ExpRace exploited all of them within 10 to 118 seconds, while exploitation without ExpRace failed for all of them even when given 24 hours.

Undo Workarounds for Kernel Bugs

Seyed Mohammadjavad Seyed Talebi, Zhihao Yao, and Ardalan Amiri Sani, UC Irvine; Zhiyun Qian, UC Riverside; Daniel Austin, Atlassian

Available Media

OS kernels are full of bugs that cause security, reliability, and usability issues. Several kernel fuzzers have recently been developed to find these bugs and have proven effective. Yet bugs take several months to be patched once they are discovered, and in this window of vulnerability they continue to pose concerns. We present workarounds for kernel bugs, called bowknots, which maintain the functionality of the system even when bugs are triggered, are applicable to many kernel bugs, do not cause noticeable performance overhead, and have a small kernel footprint. The key idea behind bowknots is to undo the side effects of the in-flight syscall that triggers a bug, effectively neutralizing the syscall. We also present a static analysis tool, called Hecaton, that generates bowknots automatically and inserts them into the kernel. Through extensive evaluations on the kernels of Android devices as well as x86 upstream kernels, we demonstrate that bowknots are effective in mitigating kernel bugs and vulnerabilities. We also show that Hecaton is capable of generating the right bowknots fully automatically in the majority of cases, requiring minimal help from the analyst for the rest. Finally, we demonstrate the benefits of bowknots in improving the efficiency of kernel fuzzing by eliminating repetitive reboots.

An Analysis of Speculative Type Confusion Vulnerabilities in the Wild

Ofek Kirzner and Adam Morrison, Tel Aviv University

Distinguished Paper Award Winner and First Prize winner of the 2021 Internet Defense Prize

Available Media

Spectre v1 attacks, which exploit conditional branch misprediction, are often identified with attacks that bypass array bounds checking to leak data from a victim's memory. Generally, however, Spectre v1 attacks can exploit any conditional branch misprediction that makes the victim execute code incorrectly. In this paper, we investigate speculative type confusion, a Spectre v1 attack vector in which branch mispredictions make the victim execute with variables holding values of the wrong type and thereby leak memory content.

We observe that speculative type confusion can be inadvertently introduced by a compiler, making it extremely hard for programmers to reason about security and manually apply Spectre mitigations. We thus set out to determine the extent to which speculative type confusion affects the Linux kernel. Our analysis finds exploitable and potentially-exploitable arbitrary memory disclosure vulnerabilities. We also find many latent vulnerabilities, which could become exploitable due to innocuous system changes, such as coding style changes.

Our results suggest that Spectre mitigations which rely on statically/manually identifying "bad" code patterns need to be rethought, and more comprehensive mitigations are needed.

Blinder: Partition-Oblivious Hierarchical Scheduling

Man-Ki Yoon, Mengqi Liu, Hao Chen, Jung-Eun Kim, and Zhong Shao, Yale University

Available Media

Hierarchical scheduling enables modular reasoning about the temporal behavior of individual applications by partitioning CPU time and thus isolating potential misbehavior. However, conventional time-partitioning mechanisms fail to achieve strong temporal isolation from a security perspective: variations in the execution of one partition can be perceived by others, which enables an algorithmic covert timing channel between partitions that are otherwise completely isolated in their utilization of time. We therefore present a run-time algorithm that makes partitions oblivious to others' varying behaviors, even when an adversary has full control over their timings. It enables the use of dynamic time-partitioning mechanisms that provide improved responsiveness, while guaranteeing the algorithmic-level non-interference that static approaches would achieve. Using an implementation on an open-source operating system, we evaluate the costs of the solution in terms of responsiveness as well as scheduling overhead.

SHARD: Fine-Grained Kernel Specialization with Context-Aware Hardening

Muhammad Abubakar, Adil Ahmad, Pedro Fonseca, and Dongyan Xu, Purdue University

Available Media

With growing hardware complexity and ever-evolving user requirements, the kernel is increasingly bloated, which increases its attack surface. Despite its large size, for specific applications and workloads, only a small subset of the kernel code is actually required. Kernel specialization approaches exploit this observation to either harden the kernel or restrict access to its code (debloating) on a per-application basis. However, existing approaches suffer from coarse specialization granularity and lack strict enforcement, which limits their effectiveness.

This paper presents SHARD, a practical framework to enforce fine-grained kernel specialization. SHARD specializes at both the application and system call levels to significantly restrict the kernel code exposed to attackers. Furthermore, SHARD introduces context-aware hardening to dynamically enable code hardening during suspicious execution contexts. SHARD implements an instance of a context-aware hardening scheme using control-flow integrity (CFI), which provides near-native performance for non-hardened executions and strong security guarantees. Our analysis of the kernel attack surface reduction with SHARD, as well as concrete attacks, shows that SHARD exposes 181× less kernel code than the native kernel, an order of magnitude better than existing work, and prevents 90% of the evaluated attacks. Our evaluation shows that the average performance overhead of SHARD on real-world applications is moderate: 10% to 36% on NGINX, 3% to 10% on Redis, and 0% to 2.7% on the SPEC CPU 2006 benchmarks.

Preventing Use-After-Free Attacks with Fast Forward Allocation

Brian Wickman, GTRI; Hong Hu, Penn State University; Insu Yun, Daehee Jang, and JungWon Lim, Georgia Tech; Sanidhya Kashyap, EPFL; Taesoo Kim, Georgia Tech

Available Media

Memory-unsafe languages are widely used to implement critical systems like kernels and browsers, leading to thousands of memory safety issues every year. A use-after-free bug is a temporal memory error where the program accidentally visits a freed memory location. Recent studies show that use-after-free is one of the most exploited memory vulnerabilities. Unfortunately, previous efforts to mitigate use-after-free bugs are not widely deployed in real-world programs due to either inadequate accuracy or high performance overhead.

In this paper, we propose to resurrect the idea of one-time allocation (OTA) and provide a practical implementation with efficient execution and moderate memory overhead. With one-time allocation, the memory manager always returns a distinct memory address for each request. Since memory locations are not reused, attackers cannot reclaim freed objects, and thus cannot exploit use-after-free bugs. We utilize two techniques to render OTA practical: batch page management and the fusion of bump-pointer and fixed-size-bin memory allocation styles. Batch page management helps reduce the number of system calls, which negatively impact performance, while blending the two allocation methods mitigates the memory overhead and fragmentation issues. We implemented a prototype, called FFmalloc, to demonstrate our techniques. We evaluated FFmalloc on widely used benchmarks and real-world large programs. FFmalloc successfully blocked all tested use-after-free attacks while introducing moderate overhead. The results show that OTA can be a strong and practical solution to thwart use-after-free threats.
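
As a rough illustration of one-time allocation under stated assumptions (a single forward-only cursor stands in for batch page management, and power-of-two bins serve small requests), consider this Python toy; it is a sketch of the idea, not FFmalloc's implementation:

    class OneTimeAllocator:
        SMALL = 256                        # requests below this use size bins

        def __init__(self, base=0x10000):
            self.cursor = base             # bump pointer: only moves forward
            self.bins = {}                 # size class -> (next, end) of a run

        def malloc(self, size):            # size >= 1 assumed
            if size < self.SMALL:
                cls = 1 << (size - 1).bit_length()   # round up to power of two
                nxt, end = self.bins.get(cls, (0, 0))
                if nxt == end:             # bin exhausted: carve a fresh run
                    nxt = self._grab(cls * 64)
                    end = nxt + cls * 64
                self.bins[cls] = (nxt + cls, end)
                return nxt
            return self._grab(size)

        def _grab(self, size):
            addr, self.cursor = self.cursor, self.cursor + size
            return addr

        def free(self, addr):
            pass                           # never reused: a dangling pointer
                                           # cannot alias a future allocation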

Detecting Kernel Refcount Bugs with Two-Dimensional Consistency Checking

Xin Tan, Yuan Zhang, and Xiyu Yang, Fudan University; Kangjie Lu, University of Minnesota; Min Yang, Fudan University

Available Media

In the Linux kernel, reference counting (refcount) has become a default mechanism that manages resource objects. A refcount of a tracked object is incremented when a new reference is assigned and decremented when a reference becomes invalid. Since the kernel manages a large number of shared resources, refcount is prevalent. Due to the inherent complexity of the kernel and resource sharing, developers often fail to properly update refcounts, leading to refcount bugs. Researchers have shown that refcount bugs can cause critical security impacts like privilege escalation; however, the detection of refcount bugs remains an open problem.

In this paper, we propose CID, a new mechanism that employs two-dimensional consistency checking to automatically detect refcount bugs. By checking if callers consistently use a refcount function, CID detects deviating cases as potential bugs, and by checking how a caller uses a refcount function, CID infers the condition-aware rules for the function to correspondingly operate the refcount, and thus a violating case is a potential bug. More importantly, CID's consistency checking does not require complicated semantic understanding, inter-procedural data-flow tracing, or refcount-operation reasoning. CID also features an automated mechanism that systematically identifies refcount fields and functions in the whole kernel. We implement CID and apply it to the Linux kernel. The tool found 44 new refcount bugs that may cause severe security issues, most of which have been confirmed by the maintainers.
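
One of the two consistency dimensions can be pictured as majority voting over call sites. The sketch below uses made-up observations, not CID's actual kernel analysis: when most callers pair a refcount get with a put on their error path, the lone caller that does not is flagged:

    from collections import Counter

    def flag_deviants(call_sites, min_support=0.8):
        """call_sites: refcount function -> list of per-caller behaviors."""
        reports = []
        for func, behaviors in call_sites.items():
            majority, support = Counter(behaviors).most_common(1)[0]
            if support / len(behaviors) < min_support:
                continue                    # no dominant convention to trust
            for caller, b in enumerate(behaviors):
                if b != majority:
                    reports.append((func, caller, majority))
        return reports

    sites = {"kobject_get": ["put_on_error"] * 9 + ["no_put_on_error"]}
    print(flag_deviants(sites))             # the deviating caller is reported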

Track 3

Web Security 1; Software Security

Session Chairs: Devdatta Akhawe, Figma, and Gunes Acar, Katholieke Universiteit Leuven

Effective Notification Campaigns on the Web: A Matter of Trust, Framing, and Support

Max Maass and Alina Stöver, TU Darmstadt; Henning Pridöhl, Universität Bamberg; Sebastian Bretthauer, Goethe-Universität Frankfurt; Dominik Herrmann, Universität Bamberg; Matthias Hollick, TU Darmstadt; Indra Spiecker, Goethe-Universität Frankfurt

Available Media

Misconfigurations and outdated software are a major cause of compromised websites and data leaks. Past research has proposed and evaluated sending automated security notifications to the operators of misconfigured websites, but encountered issues with reachability, mistrust, and a perceived lack of importance. In this paper, we seek to understand the determinants of effective notifications. We identify a data protection misconfiguration that affects 12.7% of the 1.3 million websites we scanned and opens them up to legal liability. Using a subset of 4,754 websites, we conduct a multivariate randomized controlled notification experiment, evaluating contact medium, sender, and framing of the message. We also include a link to a public web-based self-service tool that is run by us in disguise and conduct an anonymous survey of the notified website owners (N=477) to understand their perspective.

We find that framing a misconfiguration as a problem of legal compliance can increase remediation rates, especially when the notification is sent as a letter from a legal research group, achieving remediation rates of 76.3% compared to 33.9% for emails sent by computer science researchers warning about a privacy issue. Across all groups, 56.6% of notified owners remediated the issue, compared to 9.2% in the control group. In conclusion, we present factors that lead website owners to trust a notification, show which framing of the notification spurs them into action, and describe how they can be supported in remediating the issue.

Fingerprinting in Style: Detecting Browser Extensions via Injected Style Sheets

Pierre Laperdrix, Univ. Lille, CNRS, Inria; Oleksii Starov, Palo Alto Networks; Quan Chen and Alexandros Kapravelos, North Carolina State University; Nick Nikiforakis, Stony Brook University

Available Media

Browser extensions enhance the web experience and have seen great adoption from users in the past decade. At the same time, past research has shown that online trackers can use various techniques to infer the presence of installed extensions and abuse them to track users as well as uncover sensitive information about them.

In this work we present a novel extension-fingerprinting vector showing how style modifications from browser extensions can be abused to identify installed extensions. We propose a pipeline that analyzes extensions both statically and dynamically and pinpoints their injected style sheets. Based on these, we craft a set of triggers that uniquely identify browser extensions from the context of the visited page. We analyzed 116K extensions from Chrome's Web Store and report that 6,645 of them inject style sheets on any website that users visit. Our pipeline has created triggers that uniquely identify 4,446 of these extensions, 1,074 (24%) of which could not be fingerprinted with previous techniques. Given the power of this new extension-fingerprinting vector, we propose specific countermeasures against style fingerprinting that have minimal impact on the overall user experience.
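
A minimal sketch of the offline trigger-selection step, assuming each extension's injected rules have already been extracted as (selector, property, value) triples: a triple injected by exactly one extension can serve as a trigger (the in-page probing itself would run in JavaScript):

    from collections import defaultdict

    def build_triggers(injected_rules):
        owners = defaultdict(set)            # triple -> extensions injecting it
        for ext_id, rules in injected_rules.items():
            for triple in rules:
                owners[triple].add(ext_id)
        return {ext: [t for t, o in owners.items() if o == {ext}]
                for ext in injected_rules}

    rules = {
        "ext_a": {("#toolbar", "display", "none"), ("body", "margin", "0")},
        "ext_b": {("body", "margin", "0"), (".badge", "color", "red")},
    }
    print(build_triggers(rules))             # unique triples become triggers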

JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals

Soheil Khodayari and Giancarlo Pellegrino, CISPA Helmholtz Center for Information Security

Available Media

Client-side CSRF is a new type of CSRF vulnerability where the adversary can trick the client-side JavaScript program to send a forged HTTP request to a vulnerable target site by modifying the program's input parameters. We have little-to-no knowledge of this new vulnerability, and exploratory security evaluations of JavaScript-based web applications are impeded by the scarcity of reliable and scalable testing techniques. This paper presents JAW, a framework that enables the analysis of modern web applications against client-side CSRF leveraging declarative traversals on hybrid property graphs, a canonical, hybrid model for JavaScript programs. We use JAW to evaluate the prevalence of client-side CSRF vulnerabilities among all (i.e., 106) web applications from the Bitnami catalog, covering over 228M lines of JavaScript code. Our approach uncovers 12,701 forgeable client-side requests affecting 87 web applications in total. For 203 forgeable requests, we successfully created client-side CSRF exploits against seven web applications that can execute arbitrary server-side state-changing operations or enable cross-site scripting and SQL injection, that are not reachable via the classical attack vectors. Finally, we analyzed the forgeable requests and identified 25 request templates, highlighting the fields that can be manipulated and the type of manipulation.

AdCube: WebVR Ad Fraud and Practical Confinement of Third-Party Ads

Hyunjoo Lee, Jiyeon Lee, and Daejun Kim, Korea Advanced Institute of Science and Technology; Suman Jana, Columbia University; Insik Shin and Sooel Son, Korea Advanced Institute of Science and Technology

Available Media

Web technology has evolved to offer 360-degree immersive browsing experiences. This new technology, called WebVR, enables virtual reality by rendering a three-dimensional world on an HTML canvas. Unfortunately, there exists no browser-supported way of sharing this canvas between different parties. Assuming an abusive ad service provider who exploits this absence, we present four new ad fraud attack methods. Our user study demonstrates that the success rates of our attacks range from 88.23% to 100%, confirming their effectiveness. To mitigate the presented threats, we propose AdCube, which allows publishers to specify the behaviors of third-party ad code and enforce this specification. We show that AdCube is able to block the presented threats with a small page loading latency of 236 msec and a negligible frame-per-second (FPS) drop for nine WebVR official demo sites.

CACTI: Captcha Avoidance via Client-side TEE Integration

Yoshimichi Nakatsuka and Ercan Ozturk, University of California, Irvine; Andrew Paverd, Microsoft Research; Gene Tsudik, University of California, Irvine

Available Media

Preventing abuse of web services by bots is an increasingly important problem, as abusive activities grow in both volume and variety. CAPTCHAs are the most common way for thwarting bot activities. However, they are often ineffective against bots and frustrating for humans. In addition, some recent CAPTCHA techniques diminish user privacy. Meanwhile, client-side Trusted Execution Environments (TEEs) are becoming increasingly widespread (notably, ARM TrustZone and Intel SGX), allowing establishment of trust in a small part (trust anchor or TCB) of client-side hardware. This prompts the question: can a TEE help reduce (or remove entirely) user burden of solving CAPTCHAs?

In this paper, we design CACTI: CAPTCHA Avoidance via Client-side TEE Integration. Using client-side TEEs, CACTI allows legitimate clients to generate unforgeable rate-proofs demonstrating how frequently they have performed specific actions. These rate-proofs can be sent to web servers in lieu of solving CAPTCHAs. CACTI provides strong client privacy guarantees, since the information is only sent to the visited website and authenticated using a group signature scheme. Our evaluations show that overall latency of generating and verifying a CACTI rate-proof is less than 0.25 sec, while CACTI's bandwidth overhead is over 98% lower than that of current CAPTCHA systems.
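
A loose sketch of the rate-proof shape, with HMAC standing in for CACTI's group-signature scheme and a module-level variable standing in for TEE-protected state; both are simplifying assumptions, since the real design keeps the key and counter inside the enclave:

    import hashlib
    import hmac
    import time

    TEE_KEY = b"sealed-inside-the-enclave"   # illustrative only

    def rate_proof(action, count, window):
        """Attest 'this client performed `action` `count` times in `window`'."""
        msg = f"{action}|{count}|{window}".encode()
        return msg, hmac.new(TEE_KEY, msg, hashlib.sha256).hexdigest()

    msg, tag = rate_proof("post_comment", 3, int(time.time()) // 3600)
    # The server verifies the tag and applies its rate policy -- no CAPTCHA.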

PolyScope: Multi-Policy Access Control Analysis to Compute Authorized Attack Operations in Android Systems

Yu-Tsung Lee, Penn State University; William Enck, North Carolina State University; Haining Chen, Google; Hayawardh Vijayakumar, Samsung Research; Ninghui Li, Purdue University; Zhiyun Qian and Daimeng Wang, UC Riverside; Giuseppe Petracca, Lyft; Trent Jaeger, Penn State University

Available Media

Android's filesystem access control provides a foundation for system integrity. It combines mandatory (e.g., SEAndroid) and discretionary (e.g., Unix permissions) access control, protecting both the Android platform from Android/OEM services and Android/OEM services from third-party applications. However, OEMs often introduce vulnerabilities when they add market-differentiating features and fail to correctly reconfigure this complex combination of policies. In this paper, we propose the PolyScope tool to triage Android systems for vulnerabilities using their filesystem access control policies by: (1) identifying the resources that subjects are authorized to use that may be modified by their adversaries, both with and without policy manipulations, and (2) determining the attack operations on those resources that are actually available to adversaries, to reveal the specific cases that need vulnerability testing. A key insight is that adversaries may exploit discretionary elements in Android access control to expand the permissions available to themselves and/or victims to launch attack operations, which we call permission expansion. We apply PolyScope to five Google and five OEM Android releases and find that permission expansion increases the privilege available to launch attacks, sometimes by more than 10x, but a significant fraction (about 15–20%) cannot be converted into attack operations due to other system configurations. Based on this analysis, we describe two previously unknown vulnerabilities and show how PolyScope helps OEMs triage the complex combination of access control policies down to attack operations worthy of testing.

Nyx: Greybox Hypervisor Fuzzing using Fast Snapshots and Affine Types

Sergej Schumilo, Cornelius Aschermann, Ali Abbasi, Simon Wör­ner, and Thorsten Holz, Ruhr-Universität Bochum

Available Media

A hypervisor (also known as a virtual machine monitor, VMM) enforces the security boundaries between different virtual machines (VMs) running on the same physical machine. A malicious user who is able to run her own kernel on a cloud VM can interact with a large variety of attack surfaces. Exploiting a software fault in any of these surfaces leads to full access to all other VMs that are co-located on the same host. Hence, the efficient detection of hypervisor vulnerabilities is crucial for the security of the modern cloud infrastructure. Recent work showed that blind fuzzing is the most efficient approach to identify security issues in hypervisors, mainly due to an outstandingly high test throughput.

In this paper we present the design and implementation of NYX, a highly optimized, coverage-guided hypervisor fuzzer. We show how a fast snapshot restoration mechanism that allows us to reload the system under test thousands of times per second is key to performance. Furthermore, we introduce a novel mutation engine based on custom bytecode programs, encoded as directed acyclic graphs (DAG), and affine types, that enables the required flexibility to express complex interactions. Our evaluation shows that, while NYX has a lower throughput than the state-of-the-art hypervisor fuzzer, it performs competitively on simple targets: NYX typically requires only a few minutes longer to achieve the same test coverage. On complex devices, however, our approach is able to significantly outperform existing works. Moreover, we are able to uncover substantially more bugs: in total, we uncovered 44 new bugs with 22 CVEs requested. Our results demonstrate that coverage guidance is highly valuable, even if a blind fuzzer can be significantly faster.

1:00 pm–1:15 pm

Break

1:15 pm–3:00 pm

Track 1

Machine Learning: Privacy Issues

Session Chairs: Yuan Tian, University of Virginia, and Esfandiar Mohammadi, University of Lübeck

Systematic Evaluation of Privacy Risks of Machine Learning Models

Liwei Song and Prateek Mittal, Princeton University

Available Media

Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks in which an adversary aims to guess if an input sample was used to train the model. In this paper, we show that prior work on membership inference attacks may severely underestimate the privacy risks by relying solely on training custom neural network classifiers to perform attacks and focusing only on the aggregate results over data samples, such as the attack accuracy. To overcome these limitations, we first propose to benchmark membership inference privacy risks by improving existing non-neural network based inference attacks and proposing a new inference attack method based on a modification of prediction entropy. We propose to supplement existing neural network based attacks with our proposed benchmark attacks to effectively measure the privacy risks. We also propose benchmarks for defense mechanisms by accounting for adaptive adversaries with knowledge of the defense and also accounting for the trade-off between model accuracy and privacy risks. Using our benchmark attacks, we demonstrate that existing defense approaches against membership inference attacks are not as effective as previously reported.

Next, we introduce a new approach for fine-grained privacy analysis by formulating and deriving a new metric called the privacy risk score. Our privacy risk score metric measures an individual sample's likelihood of being a training member, which allows an adversary to identify samples with high privacy risks and perform membership inference attacks with high confidence. We propose to combine both existing aggregate privacy analysis and our proposed fine-grained privacy analysis for systematically measuring privacy risks. We experimentally validate the effectiveness of the privacy risk score metric and demonstrate that the distribution of privacy risk scores across individual samples is heterogeneous. Finally, we perform an in-depth investigation to understand why certain samples have high privacy risk scores, including correlations with model properties such as model sensitivity, generalization error, and feature embeddings. Our work emphasizes the importance of a systematic and rigorous evaluation of privacy risks of machine learning models. We publicly release our code at https://github.com/inspire-group/membership-inference-evaluation and our evaluation mechanisms have also been integrated in Google's TensorFlow Privacy library.
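
For concreteness, here is a sketch of a threshold attack using the modified prediction-entropy idea; the formula is our transcription of the metric's intent (monotonically decreasing in the true-class probability), and thresholds would be tuned per class on shadow-model data:

    import numpy as np

    def modified_entropy(probs, label, eps=1e-12):
        p_y = probs[label]
        score = -(1.0 - p_y) * np.log(p_y + eps)
        for i, p in enumerate(probs):
            if i != label:
                score -= p * np.log(1.0 - p + eps)
        return score

    def infer_member(probs, label, threshold):
        # Low modified entropy: the model is confidently right, which is
        # more typical of training members than of unseen samples.
        return modified_entropy(np.asarray(probs), label) < threshold

    print(infer_member([0.97, 0.02, 0.01], 0, threshold=0.2))   # True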

Extracting Training Data from Large Language Models

Nicholas Carlini, Google; Florian Tramèr, Stanford University; Eric Wallace, UC Berkeley; Matthew Jagielski, Northeastern University; Ariel Herbert-Voss, OpenAI and Harvard University; Katherine Lee and Adam Roberts, Google; Tom Brown, OpenAI; Dawn Song, UC Berkeley; Úlfar Erlingsson, Apple; Alina Oprea, Northeastern University; Colin Raffel, Google

Available Media

It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data.

We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. Worryingly, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
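
The general recipe behind such attacks is sample-and-rank. In this sketch, generate() and log_likelihood() are hypothetical stand-ins for a language-model API, and the model-likelihood-to-zlib ratio is one plausible ranking signal: text the model finds far more predictable than a generic compressor does is suspicious:

    import zlib

    def rank_candidates(generate, log_likelihood, n_samples=1000, top_k=10):
        scored = []
        for _ in range(n_samples):
            text = generate(max_tokens=256)                # sample freely
            model_nll = -log_likelihood(text)              # model "surprise"
            zlib_bits = len(zlib.compress(text.encode()))  # generic redundancy
            scored.append((model_nll / zlib_bits, text))
        scored.sort()                                      # low ratio first
        return [text for _, text in scored[:top_k]]        # likely memorized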

SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning

Nishat Koti, Mahak Pancholi, Arpita Patra, and Ajith Suresh, Indian Institute of Science, Bangalore

Available Media

Performing machine learning (ML) computation on private data while maintaining data privacy, aka Privacy-preserving Machine Learning (PPML), is an emergent field of research. Recently, PPML has seen a visible shift towards the adoption of the Secure Outsourced Computation (SOC) paradigm due to the heavy computation that it entails. In the SOC paradigm, computation is outsourced to a set of powerful and specially equipped servers that provide service on a pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for a range of ML algorithms in the SOC setting, which guarantees output delivery to the users irrespective of any adversarial behaviour. Robustness, a highly desirable feature, encourages user participation without the fear of denial of service.

At the heart of our framework lies a highly efficient, maliciously secure, three-party computation (3PC) protocol over rings that provides guaranteed output delivery (GOD) in the honest-majority setting. To the best of our knowledge, SWIFT is the first robust and efficient PPML framework in the 3PC setting. SWIFT is as fast as (and in some cases strictly better than) the best-known 3PC framework BLAZE (Patra et al., NDSS '20), which only achieves fairness. We extend our 3PC framework to four parties (4PC). In this regime, SWIFT is as fast as the best-known fair 4PC framework Trident (Chaudhari et al., NDSS '20) and twice as fast as the best-known robust 4PC framework FLASH (Byali et al., PETS '20).

We demonstrate our framework's practical relevance by benchmarking popular ML algorithms such as logistic regression and deep neural networks such as VGG16 and LeNet, both over a 64-bit ring in a WAN setting. For deep NNs, our results support our claim that we provide improved security guarantees while incurring no additional overhead for 3PC and obtaining a 2x improvement for 4PC.
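
The protocols themselves are involved, but the basic object they compute over is simple to show: additive secret shares in the ring Z_2^64, where addition is purely local. A toy sketch with none of SWIFT's malicious security or guaranteed output delivery:

    import secrets

    MASK = (1 << 64) - 1                     # arithmetic in the ring Z_{2^64}

    def share(x):
        s1 = secrets.randbelow(1 << 64)
        s2 = secrets.randbelow(1 << 64)
        return s1, s2, (x - s1 - s2) & MASK  # shares sum to x mod 2^64

    def reconstruct(s1, s2, s3):
        return (s1 + s2 + s3) & MASK

    a_sh, b_sh = share(41), share(1)
    sum_sh = tuple((x + y) & MASK for x, y in zip(a_sh, b_sh))  # local add
    assert reconstruct(*sum_sh) == 42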

Stealing Links from Graph Neural Networks

Xinlei He, CISPA Helmholtz Center for Information Security; Jinyuan Jia, Duke University; Michael Backes, CISPA Helmholtz Center for Information Security; Neil Zhenqiang Gong, Duke University; Yang Zhang, CISPA Helmholtz Center for Information Security

Available Media

Graph data, such as chemical networks and social networks, may be deemed confidential/private because the data owner often spends lots of resources collecting the data or the data contains sensitive information, e.g., social relationships. Recently, neural networks were extended to graph data, which are known as graph neural networks (GNNs). Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection. In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph. Specifically, given black-box access to a GNN model, our attacks can infer whether there exists a link between any pair of nodes in the graph used to train the model. We call our attacks link stealing attacks. We propose a threat model to systematically characterize an adversary's background knowledge along three dimensions, which in total leads to a comprehensive taxonomy of 8 different link stealing attacks. We propose multiple novel methods to realize these 8 attacks. Extensive experiments on 8 real-world datasets show that our attacks are effective at stealing links, e.g., AUC (area under the ROC curve) is above 0.95 in multiple cases. Our results indicate that the outputs of a GNN model reveal rich information about the structure of the graph used to train the model.
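
In the weakest attacker setting, the attack reduces to comparing posteriors: GNN neighborhood aggregation makes connected nodes' outputs correlate, so high similarity suggests an edge. A minimal sketch (the threshold is an assumption; the paper's attacks are richer):

    import numpy as np

    def cosine(u, v):
        u, v = np.asarray(u, float), np.asarray(v, float)
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def predict_link(posterior_u, posterior_v, threshold=0.95):
        # Black-box queries only: two posteriors in, one link guess out.
        return cosine(posterior_u, posterior_v) >= threshold

    print(predict_link([0.8, 0.1, 0.1], [0.75, 0.15, 0.1]))   # True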

Leakage of Dataset Properties in Multi-Party Machine Learning

Wanrong Zhang, Georgia Institute of Technology; Shruti Tople, Microsoft Research; Olga Ohrimenko, The University of Melbourne

Available Media

Secure multi-party machine learning allows several parties to build a model on their pooled data to increase utility while not explicitly sharing data with each other. We show that such multi-party computation can cause leakage of global dataset properties between the parties even when parties obtain only black-box access to the final model. In particular, a "curious" party can infer the distribution of sensitive attributes in other parties' data with high accuracy. This raises concerns regarding the confidentiality of properties pertaining to the whole dataset as opposed to individual data records. We show that our attack can leak population-level properties in datasets of different types, including tabular, text, and graph data. To understand and measure the source of leakage, we consider several models of correlation between a sensitive attribute and the rest of the data. Using multiple machine learning models, we show that leakage occurs even if the sensitive attribute is not included in the training data and has a low correlation with other attributes or the target variable.

Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations

Milad Nasr, Alireza Bahramali, and Amir Houmansadr, University of Massachusetts Amherst

Available Media

Deep neural networks (DNNs) are commonly used for various traffic analysis problems, such as website fingerprinting and flow correlation, as they outperform traditional (e.g., statistical) techniques by large margins. However, deep neural networks are known to be vulnerable to adversarial examples: adversarial inputs to the model that get labeled incorrectly by the model due to small adversarial perturbations. In this paper, for the first time, we show that an adversary can defeat DNN-based traffic analysis techniques by applying adversarial perturbations on the patterns of live network traffic.

Applying adversarial perturbations (examples) to traffic analysis classifiers faces two major challenges. First, the perturbing party (i.e., the adversary) should be able to apply the adversarial network perturbations on live traffic, without needing to buffer traffic or have prior knowledge of upcoming network packets. We design a systematic approach to create adversarial perturbations that are independent of their target network connections, and therefore can be applied in real time on live traffic. We therefore call such adversarial perturbations blind.

Second, unlike image classification applications, perturbing traffic features is not straightforward, as this needs to be done while preserving the correctness of dependent traffic features. We address this challenge by introducing remapping functions that we use to enforce different network constraints while creating blind adversarial perturbations.

Our blind adversarial perturbations algorithm is generic and can be applied to various types of traffic classifiers. We demonstrate this by implementing a Tor pluggable transport that applies adversarial perturbations on live Tor connections to defeat DNN-based website fingerprinting and flow correlation techniques, the two most-studied types of traffic analysis. We show that our blind adversarial perturbations are even transferable between different models and architectures, so they can be applied by black-box adversaries. Finally, we show that existing countermeasures perform poorly against blind adversarial perturbations; therefore, we introduce a tailored countermeasure.
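
As an illustration of a remapping function for one feature family, inter-packet delays, here is a minimal sketch; the constraints (non-negative delays, a fixed added-latency budget) are assumptions chosen for illustration:

    import numpy as np

    def remap_delays(delays, perturbation, budget_ms=50.0):
        delays = np.asarray(delays, float)
        adjusted = np.maximum(delays + np.asarray(perturbation, float), 0.0)
        excess = adjusted.sum() - (delays.sum() + budget_ms)
        if excess > 0:                       # keep total added latency bounded
            adjusted = np.maximum(adjusted - excess / adjusted.size, 0.0)
        return adjusted                      # valid, live-applicable delays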

Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning

Wenting Zheng, UC Berkeley/CMU; Ryan Deng, Weikeng Chen, and Raluca Ada Popa, UC Berkeley; Aurojit Panda, New York University; Ion Stoica, UC Berkeley

Available Media

Many organizations need large amounts of high-quality data for their applications, and one way to acquire such data is to combine datasets from multiple parties. Since these organizations often own sensitive data that cannot be shared in the clear with others due to policy regulation and business competition, there is increased interest in utilizing secure multi-party computation (MPC). MPC allows multiple parties to jointly compute a function without revealing their inputs to each other. We present Cerebro, an end-to-end collaborative learning platform that enables parties to compute learning tasks without sharing plaintext data. By taking an end-to-end approach to the system design, Cerebro allows multiple parties with complex economic relationships to safely collaborate on machine learning computation through the use of release policies and auditing, while also enabling users to achieve good performance without manually navigating the complex performance tradeoffs between MPC protocols.

Track 2

Fuzzing

Session Chair: Ruoyu Wang, Arizona State University

SyzVegas: Beating Kernel Fuzzing Odds with Reinforcement Learning

Daimeng Wang, Zheng Zhang, Hang Zhang, Zhiyun Qian, Srikanth V. Krishnamurthy, and Nael Abu-Ghazaleh, University of California, Riverside

Available Media

Fuzzing embeds a large number of decisions requiring fine-tuned and hard-coded parameters to maximize its efficiency. This is especially true for kernel fuzzing due to (1) OS kernels' sheer size and complexity, (2) a unique syscall interface that requires special handling (e.g., encoding explicit dependencies among syscalls), and (3) the often non-reproducible behavior of inputs (i.e., test cases), owing to the stateful nature of OS kernels. Hence, Syzkaller, the state-of-the-art gray-box kernel fuzzer, incorporates numerous procedures, decision points, and hard-coded parameters hand-crafted by domain experts. Unfortunately, hard-coded strategies cannot adjust to factors such as different fuzzing environments/targets and the dynamically changing potency of tasks and/or seeds, limiting the overall effectiveness of the fuzzer. In this paper, we propose SYZVEGAS, a fuzzer that dynamically and automatically adapts two of the most critical decision points in Syzkaller, task selection and seed selection, to remarkably improve coverage reached per unit time. SYZVEGAS's adaptation leverages multi-armed-bandit (MAB) algorithms along with a novel reward assessment model. Our extensive evaluations of SYZVEGAS on the latest Linux kernel and its subsystems demonstrate that it (i) finds up to 38.7% more coverage than the default Syzkaller, (ii) better discovers bugs/crashes (8 more unique crashes), and (iii) incurs a very low performance overhead of 2.1%. We reported our findings to Google's Syzkaller team and are actively working on pushing our changes upstream.
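
A minimal sketch of MAB-driven seed selection with a coverage-per-time reward, using textbook UCB1 rather than SYZVEGAS's actual algorithms and reward model:

    import math

    class SeedBandit:
        def __init__(self, seeds):
            self.stats = {s: [0, 0.0] for s in seeds}   # seed -> [pulls, mean]
            self.total = 0

        def pick(self):
            for s, (pulls, _) in self.stats.items():
                if pulls == 0:
                    return s                             # try every seed once
            return max(self.stats, key=lambda s:
                       self.stats[s][1] +
                       math.sqrt(2 * math.log(self.total) / self.stats[s][0]))

        def update(self, seed, new_coverage, seconds):
            pulls, mean = self.stats[seed]
            reward = new_coverage / max(seconds, 1e-9)   # coverage per time
            self.stats[seed] = [pulls + 1,
                                mean + (reward - mean) / (pulls + 1)]
            self.total += 1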

Android SmartTVs Vulnerability Discovery via Log-Guided Fuzzing

Yousra Aafer, University of Waterloo; Wei You, Renmin University of China; Yi Sun, Yu Shi, and Xiangyu Zhang, Purdue University; Heng Yin, UC Riverside

Available Media

The recent rise of Smart IoT devices has opened new doors for cyber criminals to cause damage unique to the ecosystem. SmartTVs, among the most widely adopted home-based IoT devices, are no exception. Despite their popularity, little has been done to evaluate their security and associated risks. To proactively address the problem, we propose a systematic security evaluation of Android SmartTVs. We overcome a number of prominent challenges, such as the fact that much of the added TV-related functionality is (partially) implemented in the native layer and that many security problems manifest themselves only physically, without causing any misbehavior inside the OS. We develop a novel dynamic fuzzing approach, which features on-the-fly log-based input specification derivation and feedback collection. Our solution further introduces a novel external Observer that monitors TV-related physical symptoms (i.e., visual and auditory) to detect potential physical anomalies. We leverage our technique to analyze 11 Android TV Boxes. Our analysis reveals 37 unique vulnerabilities, leading to high-impact cyber threats (e.g., corrupting critical boot environment settings and accessing highly sensitive data), memory corruptions, and even visual and auditory disturbances (e.g., persistent display content corruption and audio muting).

UNIFUZZ: A Holistic and Pragmatic Metrics-Driven Platform for Evaluating Fuzzers

Yuwei Li, Zhejiang University; Shouling Ji, Zhejiang University/Zhejiang University NGICS Platform; Yuan Chen, Zhejiang University; Sizhuang Liang, Georgia Institute of Technology; Wei-Han Lee, IBM Research; Yueyao Chen and Chenyang Lyu, Zhejiang University; Chunming Wu, Zhejiang University/Zhejiang Lab, Hangzhou, China; Raheem Beyah, Georgia Institute of Technology; Peng Cheng, Zhejiang University NGICS Platform/Zhejiang University; Kangjie Lu, University of Minnesota; Ting Wang, Pennsylvania State University

Available Media

A flurry of fuzzing tools (fuzzers) have been proposed in the literature, aiming at detecting software vulnerabilities effectively and efficiently. To date, it is however still challenging to compare fuzzers due to the inconsistency of the benchmarks, performance metrics, and/or environments for evaluation, which buries the useful insights and thus impedes the discovery of promising fuzzing primitives. In this paper, we design and develop UNIFUZZ, an open-source and metrics-driven platform for assessing fuzzers in a comprehensive and quantitative manner. Specifically, UNIFUZZ to date has incorporated 35 usable fuzzers, a benchmark of 20 real-world programs, and six categories of performance metrics. We first systematically study the usability of existing fuzzers, find and fix a number of flaws, and integrate them into UNIFUZZ. Based on the study, we propose a collection of pragmatic performance metrics to evaluate fuzzers from six complementary perspectives. Using UNIFUZZ, we conduct in-depth evaluations of several prominent fuzzers including AFL [1], AFLFast [2], Angora [3], Honggfuzz [4], MOPT [5], QSYM [6], T-Fuzz [7] and VUzzer64 [8]. We find that none of them outperforms the others across all the target programs, and that using a single metric to assess the performance of a fuzzer may lead to unilateral conclusions, which demonstrates the significance of comprehensive metrics. Moreover, we identify and investigate previously overlooked factors that may significantly affect a fuzzer's performance, including instrumentation methods and crash analysis tools. Our empirical results show that they are critical to the evaluation of a fuzzer. We hope that our findings can shed light on reliable fuzzing evaluation, so that we can discover promising fuzzing primitives to effectively facilitate fuzzer designs in the future.

Token-Level Fuzzing

Christopher Salls, UC Santa Barbara; Chani Jindal, Microsoft; Jake Corina, Seaside Security; Christopher Kruegel and Giovanni Vigna, UC Santa Barbara

Available Media

Fuzzing has become a commonly used approach to identifying bugs in complex, real-world programs. However, interpreters are notoriously difficult to fuzz effectively, as they expect highly structured inputs, which are rarely produced by most fuzzing mutations. For this class of programs, grammar-based fuzzing has been shown to be effective. Tools based on this approach can find bugs in the code that is executed after parsing the interpreter inputs, by following language-specific rules when generating and mutating test cases.

Unfortunately, grammar-based fuzzing is often unable to discover subtle bugs associated with the parsing and handling of the language syntax. Additionally, if the grammar provided to the fuzzer is incomplete, or does not match the implementation completely, the fuzzer will fail to exercise important parts of the available functionality.

In this paper, we propose a new fuzzing technique, called Token-Level Fuzzing. Instead of applying mutations either at the byte level or at the grammar level, Token-Level Fuzzing applies mutations at the token level. Evolutionary fuzzers can leverage this technique to both generate inputs that are parsed successfully and generate inputs that do not conform strictly to the grammar. As a result, the proposed approach can find bugs that neither byte-level fuzzing nor grammar-based fuzzing can find. We evaluated Token-Level Fuzzing by modifying AFL and fuzzing four popular JavaScript engines, finding 29 previously unknown bugs, several of which could not be found with state-of-the-art byte-level and grammar-based fuzzers.
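
A minimal sketch of the core mutation step, assuming a toy regex tokenizer and a fixed token dictionary; a real implementation would use the interpreter's own lexer and a dictionary harvested from the corpus:

    import random
    import re

    TOKEN_RE = re.compile(r"\w+|[^\w\s]")    # crude word/punctuation lexer
    DICTIONARY = ["var", "let", "function", "0", "null", "{", "}", "(", ")"]

    def mutate(source, n_mutations=2):
        tokens = TOKEN_RE.findall(source)
        for _ in range(n_mutations):         # swap whole tokens, not bytes
            tokens[random.randrange(len(tokens))] = random.choice(DICTIONARY)
        return " ".join(tokens)

    print(mutate("var x = foo ( 1 ) ;"))     # e.g. "var x = foo ( null ) ;"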

APICraft: Fuzz Driver Generation for Closed-source SDK Libraries

Cen Zhang, Nanyang Technological University; Xingwei Lin, Ant Group; Yuekang Li, Nanyang Technological University; Yinxing Xue, University of Science and Technology of China; Jundong Xie, Ant Group; Hongxu Chen, Nanyang Technological University; Xinlei Ying and Jiashui Wang, Ant Group; Yang Liu, Nanyang Technological University

Available Media

Fuzz drivers are needed for fuzzing libraries. A fuzz driver is a program which can execute library functions by feeding them with inputs provided by the fuzzer. In practice, fuzz drivers are written by security experts and the drivers' quality depends on the skill of their authors. To relieve manual efforts and ensure test quality, different techniques have been proposed to automatically generate fuzz drivers. However, existing techniques mostly rely on static analysis of source code, leaving the fuzz driver generation for closed-source SDK libraries an open problem. Fuzz driver generation for closed-source libraries is faced with two major challenges: 1) only limited information can be extracted from the library; 2) the semantic relations among API functions are complex yet their correctness needs to be ensured.

To address these challenges, we propose APICRAFT, an automated fuzz driver generation technique. The core strategy of APICRAFT is collect–combine. First, APICRAFT leverages both static and dynamic information (headers, binaries, and traces) to collect control and data dependencies for API functions in a practical manner. Then, it uses a multi-objective genetic algorithm to combine the collected dependencies and build high-quality fuzz drivers. We implemented APICRAFT as a fuzz driver generation framework and evaluated it with five attack surfaces from the macOS SDK. In the evaluation, the fuzz drivers generated by APICRAFT demonstrate superior code coverage compared to manually written ones, with an improvement of 64% on average. We further carried out a long-term fuzzing campaign with the fuzz drivers generated by APICRAFT. After around eight months' fuzzing, we have so far discovered 142 vulnerabilities with 54 assigned CVEs in the macOS SDK, which can affect popular Apple products such as Safari, Messages, and Preview.

The Use of Likely Invariants as Feedback for Fuzzers

Andrea Fioraldi, EURECOM; Daniele Cono D'Elia, Sapienza University of Rome; Davide Balzarotti, EURECOM

Available Media

While fuzz testing has proved to be a very effective technique for finding software bugs, open challenges still exist. One of its main limitations is the fact that popular coverage-guided designs are optimized to reach different parts of the program under test, but struggle when reachability alone is insufficient to trigger a vulnerability. In reality, many bugs require a specific program state that involves not only the control flow, but also the values of some of the program variables. Unfortunately, alternative exploration strategies proposed in the past to capture the program state are of little help in practice, as they immediately result in a state explosion.

In this paper, we propose a new feedback mechanism that augments code coverage by taking into account the usual values and relationships among program variables. For this purpose, we learn likely invariants over variables at the basic-block level, and partition the program state space accordingly. Our feedback can distinguish when an input violates one or more invariants and reward it, thus refining the program state approximation that code coverage normally offers.

We implemented our technique in a prototype called INVSCOV, developed on top of LLVM and AFL++. Our experiments show that our approach can find more, and different, bugs compared with fuzzers that use pure code-coverage feedback. Furthermore, they led to the discovery of two vulnerabilities in a library tested daily on OSS-Fuzz, and still present at the time in its latest version.
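
The feedback idea can be sketched with the simplest invariant family, per-variable intervals; INVSCOV learns richer likely invariants and compiles the checks into the target, but the reward signal has this shape:

    def learn_invariants(trace):             # trace: (block, var, value) rows
        inv = {}
        for block, var, value in trace:
            lo, hi = inv.get((block, var), (value, value))
            inv[(block, var)] = (min(lo, value), max(hi, value))
        return inv

    def feedback(inv, block, var, value):
        lo, hi = inv.get((block, var), (float("-inf"), float("inf")))
        # A violated invariant refines coverage: same edge, new program state.
        return "new-state" if not lo <= value <= hi else "seen-state"

    inv = learn_invariants([("bb7", "len", 3), ("bb7", "len", 64)])
    print(feedback(inv, "bb7", "len", 4096))   # "new-state"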

ICSFuzz: Manipulating I/Os and Repurposing Binary Code to Enable Instrumented Fuzzing in ICS Control Applications

Dimitrios Tychalas, NYU Tandon School of Engineering; Hadjer Benkraouda and Michail Maniatakos, New York University Abu Dhabi

Available Media

Industrial Control Systems (ICS) have seen a rapid proliferation in the last decade amplified by the advent of the 4th Industrial Revolution. At the same time, several notable cybersecurity incidents in industrial environments have underlined the lack of depth in security evaluation of industrial devices such as Programmable Logic Controllers (PLC). Modern PLCs are based on widely used microprocessors and deploy commodity operating systems (e.g., ARM on Linux). Thus, threats from the information technology domain can be readily ported to industrial environments. PLC application binaries in particular have never been considered as regular programs able to introduce traditional security threats, such as buffer overflows. In this work, we investigate the feasibility of exploiting PLC binaries as well as their surrounding PLC-specific environment. We examine binaries produced by all available IEC 61131-3 control system programming languages for compilation-based differences and introduced vulnerabilities. Driven by this analysis, we develop a fuzzing framework to perform security evaluation of the PLC binaries along with the host functions they interact with. Fuzzing such non-executable binaries is non-trivial, as they operate with real-time constraints and receive their inputs from peripherals. To prove the correctness of our fuzzing tool, we use a database of in-house developed binaries in addition to functional control applications collected from online repositories. We showcase the efficacy of our technique by demonstrating uncovered vulnerabilities in both control application binaries and their runtime system. Furthermore, we demonstrate an exploitation methodology for an in-house as well as a regular control binary, based on the uncovered vulnerabilities.

Track 3

Web Security 2

Session Chairs: Ben Stock, CISPA Helmholtz Center for Information Security, and Deian Stefan, University of California, San Diego

Prime+Probe 1, JavaScript 0: Overcoming Browser-based Side-Channel Defenses

Anatoly Shusterman, Ben-Gurion University of the Negev; Ayush Agarwal, University of Michigan; Sioli O'Connell, University of Adelaide; Daniel Genkin, University of Michigan; Yossi Oren, Ben-Gurion University of the Negev; Yuval Yarom, University of Adelaide and Data61

Available Media

The "eternal war in cache" has reached browsers, with multiple cache-based side-channel attacks and countermeasures being suggested. A common approach for countermeasures is to disable or restrict JavaScript features deemed essential for carrying out attacks.

To assess the effectiveness of this approach, in this work we seek to identify those JavaScript features which are essential for carrying out a cache-based attack. We develop a sequence of attacks with progressively decreasing dependency on JavaScript features, culminating in the first browser-based side-channel attack which is constructed entirely from Cascading Style Sheets (CSS) and HTML, and works even when script execution is completely blocked. We then show that avoiding JavaScript features makes our techniques architecturally agnostic, resulting in microarchitectural website fingerprinting attacks that work across hardware platforms including Intel Core, AMD Ryzen, Samsung Exynos, and Apple M1 architectures.

As a final contribution, we evaluate our techniques in hardened browser environments including the Tor browser, DeterFox (Cao et al., CCS 2017), and Chrome Zero (Schwarz et al., NDSS 2018). We confirm that none of these approaches completely defend against our attacks. We further argue that the protections of Chrome Zero need to be more comprehensively applied, and that the performance and user experience of Chrome Zero will be severely degraded if this approach is taken.

Saphire: Sandboxing PHP Applications with Tailored System Call Allowlists

Alexander Bulekov, Rasoul Jahanshahi, and Manuel Egele, Boston University

Available Media

Interpreted languages, such as PHP, power a host of platform-independent applications, including websites, instant messengers, video games, and development environments. With the flourishing popularity of these applications, attackers have homed in on finding and exploiting vulnerabilities in interpreted code. Generally, all parts of an interpreted application execute with uniform and superfluous privileges, increasing the potential damage from an exploit. This lack of privilege separation is in stark violation of the principle of least privilege (PoLP).

Despite 1,980 web app remote code execution (RCE) vulnerabilities discovered in 2018 alone [25], current defenses rely on incomplete detection of vulnerable code, or extensive collections of benign inputs. Considering the limitations of bug-finding systems, the violation of the PoLP exposes systems to unnecessarily-high risks.

In this paper, we identify the current challenges with applying the PoLP to interpreted PHP applications, and propose a novel generic approach for automatically deriving system-call policies for individual interpreted programs. This effectively reduces the attack surface (i.e., set of system-calls) an exploit can leverage to the system-calls the script needs to perform its benign functionality.

We name our implementation of this approach Saphire, and thoroughly evaluate the prototype with respect to its security and performance characteristics. Our evaluation on 21 known vulnerable web apps and plugins shows that Saphire successfully prevents RCE exploits, and is able to do so with negligible performance overhead (i.e., <2% in the worst case) for real-world web apps. Saphire performs its service without causing false positives over automatically and manually generated benign traffic to each web app.
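
A toy sketch of allowlist derivation under an assumed built-in-to-syscall mapping; the values are illustrative, not the result of Saphire's interpreter analysis:

    BUILTIN_SYSCALLS = {                     # hypothetical mapping
        "fopen": {"openat", "fstat"},
        "fread": {"read"},
        "echo": {"write"},
        "curl_exec": {"socket", "connect", "sendto", "recvfrom"},
    }

    def derive_allowlist(builtins_used,
                         baseline=frozenset({"brk", "mmap", "exit_group"})):
        allowed = set(baseline)              # what any script needs to run
        for builtin in builtins_used:
            allowed |= BUILTIN_SYSCALLS.get(builtin, set())
        return sorted(allowed)

    print(derive_allowlist({"fopen", "fread", "echo"}))
    # execve is absent, so an injected RCE payload cannot spawn a shell.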

SandTrap: Securing JavaScript-driven Trigger-Action Platforms

Mohammad M. Ahmadpanah, Chalmers University of Technology; Daniel Hedin, Chalmers University of Technology and Mälardalen University; Musard Balliu, KTH Royal Institute of Technology; Lars Eric Olsson and Andrei Sabelfeld, Chalmers University of Technology

Available Media

Trigger-Action Platforms (TAPs) seamlessly connect a wide variety of otherwise unconnected devices and services, ranging from IoT devices to cloud services and social networks. TAPs raise critical security and privacy concerns because a TAP is effectively a "person-in-the-middle" between trigger and action services. Third-party code, routinely deployed as "apps" on TAPs, further exacerbates these concerns. This paper focuses on JavaScript-driven TAPs. We show that the popular IFTTT and Zapier platforms and an open-source alternative Node-RED are susceptible to attacks ranging from exfiltrating data from unsuspecting users to taking over the entire platform. We report on the changes by the platforms in response to our findings and present an empirical study to assess the implications for Node-RED. Motivated by the need for a secure yet flexible way to integrate third-party JavaScript apps, we propose SandTrap, a novel JavaScript monitor that securely combines the Node.js vm module with fully structural proxy-based two-sided membranes to enforce fine-grained access control policies. To aid developers, SandTrap includes a policy generation mechanism. We instantiate SandTrap to IFTTT, Zapier, and Node-RED and illustrate on a set of benchmarks how SandTrap enforces a variety of policies while incurring a tolerable runtime overhead.

Can I Take Your Subdomain? Exploring Same-Site Attacks in the Modern Web

Marco Squarcina, Mauro Tempesta, and Lorenzo Veronese, TU Wien; Stefano Calzavara, Università Ca' Foscari Venezia & OWASP; Matteo Maffei, TU Wien

Available Media

Related-domain attackers control a sibling domain of their target web application, e.g., as the result of a subdomain takeover. Despite their additional power over traditional web attackers, related-domain attackers received only limited attention from the research community. In this paper we define and quantify for the first time the threats that related-domain attackers pose to web application security. In particular, we first clarify the capabilities that related-domain attackers can acquire through different attack vectors, showing that different instances of the related-domain attacker concept are worth attention. We then study how these capabilities can be abused to compromise web application security by focusing on different angles, including cookies, CSP, CORS, postMessage, and domain relaxation. By building on this framework, we report on a large-scale security measurement on the top 50k domains from the Tranco list that led to the discovery of vulnerabilities in 887 sites, where we quantified the threats posed by related-domain attackers to popular web applications.

U Can't Debug This: Detecting JavaScript Anti-Debugging Techniques in the Wild

Marius Musch and Martin Johns, TU Braunschweig

Available Media

Through security contests such as Pwn2Own, we are constantly reminded that no complex piece of software should ever be considered secure. As we execute untrusted code in our browser every day, browser exploits written in JavaScript remain a constant threat to the security of our systems. In particular, evasive malware that detects analysis systems and then changes its behavior is a well-known problem.

However, there are also anti-debugging techniques that interfere with the manual analysis of a website in a real browser. These techniques try to prevent, or at least slow down, any attempts at manually inspecting and debugging the JavaScript code of a website. For example, such a technique could constantly trigger breakpoints at random locations to effectively hinder single-stepping while debugging the code. More cunningly, it could also find out whether the browser's integrated Developer Tools are open by using certain side-channels available in JavaScript. With this knowledge, it is possible to subtly alter or suppress any malicious behavior while under analysis.

In this paper, we systematically explore this phenomenon. To this end, we introduce 9 anti-debugging techniques and discuss their advantages and drawbacks. We then conduct a large-scale study on 6 of them, to investigate the prevalence of these techniques in the wild. We find that as many as 1 out of 550 websites contain severe anti-debugging measures, with multiple of these techniques active on the same site. Moreover, we present a novel approach based on a deterministic website replay and a comparison of JavaScript code coverage. The approach can automatically detect the remaining 3 timing-based anti-debugging techniques, which use side-channels to learn if the DevTools are open. In a targeted study on 2000 websites with anti-debugging techniques, we discover over 200 of them indeed execute different code when under analysis.

Abusing Hidden Properties to Attack the Node.js Ecosystem

Feng Xiao, Georgia Tech; Jianwei Huang, Texas A&M University; Yichang Xiong, Independent Researcher; Guangliang Yang, Georgia Tech; Hong Hu, Penn State University; Guofei Gu, Texas A&M University; Wenke Lee, Georgia Tech

Available Media

Node.js is now widely used to develop server-side and desktop programs (e.g., Skype), thanks to its cross-platform, high-performance JavaScript execution environment. In past years, other dynamic programming languages (e.g., PHP and Ruby) have been reported to be unsafe when sharing objects. However, this security risk is not well studied and understood in JavaScript and Node.js programs.

In this paper, we fill the gap by conducting the first systematic study on the communication process between client- and server-side code in Node.js programs. We extensively identify several new vulnerabilities in popular Node.js programs. To demonstrate their security implications, we design and develop a novel feasible attack, named hidden property abusing (HPA). Our further analysis shows HPA attacks are subtly different from existing findings regarding exploitation and attack effects. Through HPA attacks, a remote web attacker may obtain dangerous abilities, such as stealing confidential data, bypassing security checks, and launching DoS (Denial of Service) attacks.

To help Node.js developers vet their programs against HPA, we design a novel vulnerability detection and verification tool, named Lynx, that utilizes hybrid program analysis to automatically reveal HPA vulnerabilities and even synthesize exploits. We apply Lynx to a set of widely used Node.js programs and identify 15 previously unknown vulnerabilities. We have reported all of our findings to the Node.js community. 10 of them have been assigned CVEs, and 8 of them are rated as "Critical" or "High" severity. This indicates that HPA attacks can cause serious security threats.
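
For intuition, a sketch of what an HPA payload can look like on the wire; the endpoint and property name are hypothetical, and Lynx is what actually discovers which hidden properties a given program propagates:

    import requests

    payload = {
        "username": "alice",
        "password": "guess",
        "isAdmin": True,        # hidden property the client was never meant
    }                           # to send, hoping it lands on an internal field

    resp = requests.post("https://target.example/login", json=payload)
    print(resp.status_code)     # success here suggests the check was bypassed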

3:00 pm–3:05 pm

Break

3:05 pm–3:15 pm

Academic Leadership Award presented by Intel

3:15 pm–4:15 pm

Keynote Address

People Count: Contact Tracing Apps and Public Health

Susan Landau, Tufts University

Available Media

Dr. Anthony Fauci ticked off the timeline: "First notice at the end of December, hit China in January, hit the rest of the world in February, March, April, May, early June." COVID spread like wildfire. This disease turned out to be Fauci's "worst nightmare."

Pandemics end because we shut down the infection source, eradicate it, or vaccinate against it. But if these techniques don't work, then we contact trace. For COVID-19, a disease that spreads presymptomatically and through respiration, contact-tracing apps seem to be an optimal way to harness technology in stopping spread.

Apps were launched in Singapore beginning in March 2020; privacy-protective apps made their appearance in Europe and the US beginning in June. In some locations, the apps are effectively required, but where they are not, adoption is low. What's going on? Are the apps efficacious? And if so, why aren't they being used?

Is this a security failure? A privacy failure? A usability issue? The next pandemic will be different from COVID-19. Now is the time to learn what medical and social interventions we should make.

Susan Landau, Tufts University

Susan Landau is Bridge Professor in Cyber Security and Policy at The Fletcher School and the School of Engineering, Department of Computer Science, Tufts University and Visiting Professor, Department of Computer Science, University College London. Landau's new book, People Count: Contact-Tracing Apps and Public Health (MIT Press), focuses on efficacy, equity, and privacy. Landau has written three books on encryption and wiretap policy: Listening In: Cybersecurity in an Insecure Age (Yale University Press, 2017), Surveillance or Security? The Risks Posed by New Wiretapping Technologies (MIT Press, 2011), and, with Whitfield Diffie, Privacy on the Line: The Politics of Wiretapping and Encryption (MIT Press, rev. ed. 2007). Landau has testified before Congress, written for the Washington Post, Science, and Scientific American, and frequently appears on NPR and BBC. Landau has been a senior staff Privacy Analyst at Google, a Distinguished Engineer at Sun Microsystems, and a faculty member at Worcester Polytechnic Institute, the University of Massachusetts Amherst, and Wesleyan University. She received the 2008 Women of Vision Social Impact Award, was a 2010–2011 fellow of the Radcliffe Institute for Advanced Study, was a 2012 Guggenheim fellow, was inducted into the Cybersecurity Hall of Fame in 2015 and into the Information System Security Association Hall of Fame in 2018. She is also a fellow of the American Association for the Advancement of Science and the Association for Computing Machinery.

Friday, August 13

7:00 am–8:45 am

Track 1

Forensics and Diagnostics for Security and Voting

Session Chair: Xusheng Xiao, Case Western Reserve University

mID: Tracing Screen Photos via Moiré Patterns

Yushi Cheng, Xiaoyu Ji, Lixu Wang, and Qi Pang, Zhejiang University; Yi-Chao Chen, Shanghai Jiao Tong University; Wenyuan Xu, Zhejiang University

Available Media

Cyber-theft of trade secrets has become a serious business threat. Digital watermarking is a popular technique for identifying the source of a file leak: a unique watermark for each insider is hidden in sensitive files. However, malicious insiders can defeat such watermarks by using their smartphones to photograph the secret file displayed on a screen, because the optical noise introduced during photographing destroys the embedded digital watermark. To identify the leakage source despite such screen photo-based leakage attacks, we leverage the Moiré pattern, an optical phenomenon resulting from the interaction between electronic screens and cameras. We present mID, a new watermark-like technique that creates a carefully crafted Moiré pattern in any photo taken of the screen. We design patterns that appear natural yet can be linked to the identity of the leaker. We implemented mID and evaluated it with 5 display devices and 6 smartphones from various manufacturers and models. The results demonstrate that mID achieves an average bit error rate (BER) of 0.6% and successfully identifies an ID with an average accuracy of 96%, with little influence from the type of display device, camera, ID, or ambient light.

SEAL: Storage-efficient Causality Analysis on Enterprise Logs with Query-friendly Compression

Peng Fei, Zhou Li, and Zhiying Wang, University of California, Irvine; Xiao Yu, NEC Laboratories America, Inc.; Ding Li, Peking University; Kangkook Jee, University of Texas at Dallas

Available Media

Causality analysis automates attack forensics and facilitates behavioral detection by associating causally related but temporally distant system events. Despite its proven usefulness, the analysis faces an innate big-data challenge: storing and processing the colossal volume of system events constantly collected from hundreds of thousands of end-hosts in a realistic network. In addition, the effectiveness of the analysis in discovering security breaches relies on the assumption that comprehensive historical events over a long span are stored. Hence, it is imperative to address the scalability issue in order to make causality analysis practical and applicable to enterprise-level environments.

In this work, we present SEAL, a novel data compression approach for causality analysis. Based on information-theoretic observations on system event data, our approach achieves lossless compression and supports near-real-time retrieval of historic events. In the compression step, the causality graph induced by the system logs is investigated, and abundant edge-reduction opportunities are exploited. In the query step, decompression is executed opportunistically for maximal speed. Experiments on two real-world datasets show that SEAL offers 2.63x and 12.94x data size reduction, respectively. Moreover, 89% of the queries are faster on the compressed dataset than on the uncompressed one, and SEAL returns exactly the same query results as the uncompressed data.

ATLAS: A Sequence-based Learning Approach for Attack Investigation

Abdulellah Alsaheel and Yuhong Nan, Purdue University; Shiqing Ma, Rutgers University; Le Yu, Gregory Walkup, Z. Berkay Celik, Xiangyu Zhang, and Dongyan Xu, Purdue University

Available Media

Advanced Persistent Threats (APTs) involve multiple attack steps over a long period, and their investigation requires analysis of myriad logs to identify the attack steps, which are the set of activities undertaken to run an APT attack. However, on a daily basis in an enterprise, intrusion detection systems generate many threat alerts for suspicious events (attack symptoms). Cyber analysts must investigate such events to determine whether an event is part of an attack. With many alerts to investigate, cyber analysts often end up with alert fatigue, causing them to ignore a large number of alerts and miss true attack events. In this paper, we present ATLAS, a framework that constructs an end-to-end attack story from off-the-shelf audit logs. Our key observation is that different attacks may share similar abstract attack strategies, regardless of the vulnerabilities exploited and payloads executed. ATLAS leverages a novel combination of causality analysis, natural language processing, and machine learning techniques to build a sequence-based model, which establishes key patterns of attack and non-attack behaviors from a causal graph. At inference time, given a threat alert event, an attack symptom node in a causal graph is identified. ATLAS then constructs a set of candidate sequences associated with the symptom node, uses the sequence-based model to identify nodes in a sequence that contribute to the attack, and unifies the identified attack nodes to construct an attack story. We evaluated ATLAS with ten real-world APT attacks executed in a realistic virtual environment. ATLAS recovers attack steps and constructs attack stories with an average of 91.06% precision, 97.29% recall, and 93.76% F1-score. Through this effort, we provide security investigators with a new means of identifying the attack events that make up the attack story.

ELISE: A Storage Efficient Logging System Powered by Redundancy Reduction and Representation Learning

Hailun Ding, Shenao Yan, Juan Zhai, and Shiqing Ma, Rutgers University

Available Media

Logs are a key enabler of many security applications, including but not limited to security auditing and forensic analysis. Due to the rapid growth in the size of modern computing infrastructure, software systems are generating more and more logs every day. Moreover, recent cyber attacks such as Advanced Persistent Threats (APTs) last longer and target many connected organizations instead of a single one, which requires analyzing logs from different sources over long time periods. Storing such large log files is thus becoming more important, and more challenging, than ever. Existing logging systems are either inefficient (i.e., high storage overhead) or designed for limited security applications (i.e., no support for general security analysis). In this paper, we propose ELISE, a storage-efficient logging system built on top of a novel lossless data compression technique, which naturally supports all types of security analysis. It combines a novel log file preprocessing step with a Deep Neural Network (DNN) based method that learns an optimal character encoding. On average, ELISE achieves 3 and 2 times better compression than the existing state-of-the-art methods Gzip and DeepZip, respectively, showing a promising future research direction.

V0Finder: Discovering the Correct Origin of Publicly Reported Software Vulnerabilities

Seunghoon Woo, Dongwook Lee, Sunghan Park, and Heejo Lee, Korea University; Sven Dietrich, City University of New York

Available Media

Common Vulnerabilities and Exposures (CVEs) are used to ensure confidence among developers, to share information about software vulnerabilities, and to provide a baseline for security measures. Therefore, the correctness of CVE reports is crucial for detecting and patching software vulnerabilities.

In this paper, we introduce the concept of "Vulnerability Zero" (VZ), the software where a vulnerability first originated. We then present V0Finder, a precise mechanism for discovering the VZ of a vulnerability, including the software name and version. V0Finder utilizes code-based analysis to identify reuse relations, which specify the direction of vulnerability propagation, among vulnerable software. V0Finder constructs a graph from all the identified directions and traces backward to the root of that graph to find the VZ.
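
The backward trace can be pictured on a toy propagation graph. The sketch below uses illustrative package names and assumes each package reuses vulnerable code from at most one parent; it follows reversed reuse edges until it reaches a node with no parent, the VZ candidate.

```ts
// Edges point from the origin of the vulnerable code to software that reused it.
type Pkg = string;
const propagatesTo = new Map<Pkg, Pkg[]>([
  ["libfoo-1.0", ["app-a-2.3", "libbar-0.9"]],
  ["libbar-0.9", ["app-b-1.1"]],
]);

// Reverse the edges so we can walk from any vulnerable package back to the root.
const reusedFrom = new Map<Pkg, Pkg>();
for (const [origin, reusers] of propagatesTo)
  for (const r of reusers) reusedFrom.set(r, origin);

function findVZ(start: Pkg): Pkg {
  let cur = start;
  while (reusedFrom.has(cur)) cur = reusedFrom.get(cur)!;
  return cur; // no incoming reuse edge: the root of the propagation graph
}

console.log(findVZ("app-b-1.1")); // -> "libfoo-1.0"
```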

We applied V0Finder to 5,671 CVE vulnerabilities collected from the National Vulnerability Database (NVD) and popular Bugzilla-based projects. V0Finder discovered VZs with 98% precision and 95% recall. Furthermore, V0Finder identified 96 CVEs with incorrect information related to their respective VZs. We confirmed that an incorrect VZ prolongs patching of vulnerable software: patching CVEs with incorrect VZ information takes two years on average, while patching CVEs with the correct VZ takes less than a year. Such incorrectly identified VZs hinder the objective of the CVE and cause confusion rather than "ensuring confidence" among developers. Our analysis shows that V0Finder can enhance the credibility of the information provided by CVEs.

Minerva – An Efficient Risk-Limiting Ballot Polling Audit

Filip Zagórski, Wroclaw University of Science and Technology; Grant McClearn and Sarah Morin, The George Washington University; Neal McBurnett; Poorvi L. Vora, The George Washington University

Available Media

Evidence-based elections aim to produce trustworthy and compelling evidence of the correctness of election outcomes, enabling the detection of problems with high probability. They require a well-curated voter-verified paper trail, compliance audits, and a rigorous tabulation audit of the election outcome, known as a risk-limiting audit (RLA).

This paper focuses on ballot polling RLAs which can require that a very large sample of ballots be drawn. The main ballot polling RLA in use today, BRAVO, is designed for use when single ballots are drawn at random and a decision regarding whether to stop the audit or draw another ballot is taken after each ballot draw. But in practice, ballot polling audits draw many ballots in a single round before determining whether to stop.
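
For intuition, per-ballot BRAVO is a sequential likelihood-ratio test. The sketch below is a minimal two-candidate version, assuming no invalid ballots, an announced winner share `p`, and risk limit `alpha`; it is a stand-in for the rule discussed above, not MINERVA itself.

```ts
// Returns true if the audit may stop (reported outcome confirmed at risk limit alpha).
function bravoStops(draws: ("winner" | "loser")[], p = 0.55, alpha = 0.1): boolean {
  let T = 1; // likelihood ratio: announced tally vs. a tied election
  for (const ballot of draws) {
    T *= ballot === "winner" ? p / 0.5 : (1 - p) / 0.5;
    if (T >= 1 / alpha) return true; // stop: risk limit met
  }
  return false; // keep drawing ballots (or escalate to a full hand count)
}

// e.g. a sample with 60 ballots for the winner and 40 for the loser:
// bravoStops([...Array(60).fill("winner"), ...Array(40).fill("loser")])
```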

Direct application of BRAVO to large rounds results in considerable inefficiency. We present MINERVA, a risk-limiting audit that addresses this problem. When compared to the BRAVO stopping rule applied at the end of the round, for a first round with 90% stopping probability, MINERVA halves the number of ballots required across all state margins in the 2020 US Presidential election. When compared to the BRAVO stopping rule applied after examination of individual ballots, MINERVA reduces the number of ballots by about a quarter. MINERVA requires that round sizes be predetermined; this does not appear to be a drawback for the large first rounds that have been typical choices for election officials.

Ballot-polling audits are the leading option in most states. MINERVA significantly reduces the necessary expense for contests with close margins and thus makes adopting RLAs easier. Wider adoption of RLAs is a critical step in increasing public confidence in elections.

MINERVA was used in Ohio's pilot RLA of the primaries in May 2020 in Montgomery County. We provide open-source implementations of MINERVA. The code has been integrated as an option in Arlo, the most widely-used RLA software.

Security Analysis of the Democracy Live Online Voting System

Michael Specter, MIT; J. Alex Halderman, University of Michigan

Available Media

Democracy Live's OmniBallot platform is a web-based system for blank ballot delivery, ballot marking, and online voting. Three states—Delaware, West Virginia, and New Jersey—recently announced that they would allow certain voters to cast votes online using OmniBallot, but, despite the well-established risks of Internet voting, the system has never before undergone a public, independent security review.

We reverse engineered the client-side portion of OmniBallot, as used in Delaware, in order to detail the system's operation and analyze its security. We find that OmniBallot uses a simplistic approach to Internet voting that is vulnerable to vote manipulation by malware on the voter's device and by insiders or other attackers who can compromise Democracy Live, Amazon, Google, or Cloudflare. In addition, Democracy Live, which had no privacy policy prior to our work, receives sensitive personally identifiable information—including the voter's identity, ballot selections, and browser fingerprint—that could be used to target political ads or disinformation campaigns. Even when OmniBallot is used to mark ballots that will be printed and returned in the mail, the software sends the voter's identity and ballot choices to Democracy Live, an unnecessary risk that jeopardizes the secret ballot.

We recommend changes to make the platform safer for ballot delivery and marking. However, we conclude that using OmniBallot for electronic ballot return represents a severe risk to election security and could allow attackers to alter election results without detection. In response to our findings, Delaware and New Jersey have halted use of OmniBallot, but it remains available in other jurisdictions, as do similar online voting methods that are likely to face the same serious risks.

Track 2

Internet and Network Security

Session Chair: Danny Yuxing Huang, New York University

Hopper: Modeling and Detecting Lateral Movement

Grant Ho, UC San Diego, UC Berkeley, and Dropbox; Mayank Dhiman, Dropbox; Devdatta Akhawe, Figma, Inc.; Vern Paxson, UC Berkeley and International Computer Science Institute; Stefan Savage and Geoffrey M. Voelker, UC San Diego; David Wagner, UC Berkeley

Available Media

In successful enterprise attacks, adversaries often need to gain access to additional machines beyond their initial point of compromise, a set of internal movements known as lateral movement. We present Hopper, a system for detecting lateral movement based on commonly available enterprise logs. Hopper constructs a graph of login activity among internal machines and then identifies suspicious sequences of logins that correspond to lateral movement. To understand the larger context of each login, Hopper employs an inference algorithm to identify the broader path(s) of movement that each login belongs to and the causal user responsible for performing the logins. Hopper then leverages this path inference algorithm, in conjunction with a set of detection rules and a new anomaly scoring algorithm, to surface the login paths most likely to reflect lateral movement. On a 15-month enterprise dataset consisting of over 780 million internal logins, Hopper achieves a 94.5% detection rate across over 300 realistic attack scenarios, including one red team attack, while generating an average of <9 alerts per day. In contrast, to detect the same number of attacks, prior state-of-the-art systems would need to generate nearly 8x as many false positives.
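
One signal Hopper builds on is a login path in which the actor switches to credentials other than their own. A toy version of that check over a login graph follows; the fields, the two-hop limit, and the brute-force pair scan are illustrative simplifications, not Hopper's path inference or anomaly scoring.

```ts
interface Login { time: number; src: string; dst: string; user: string; }

// Flag two-hop login paths where the second hop uses different credentials.
function suspiciousPaths(logins: Login[]): [Login, Login][] {
  const paths: [Login, Login][] = [];
  for (const a of logins)
    for (const b of logins)
      if (a.dst === b.src && a.time < b.time && a.user !== b.user)
        paths.push([a, b]); // candidate lateral movement with credential switch
  return paths;
}

console.log(suspiciousPaths([
  { time: 1, src: "laptop-1", dst: "server-a", user: "alice" },
  { time: 2, src: "server-a", dst: "server-b", user: "svc-backup" },
]));
```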

LZR: Identifying Unexpected Internet Services

Liz Izhikevich, Stanford University; Renata Teixeira, Inria; Zakir Durumeric, Stanford University

Available Media

Internet-wide scanning is a commonly used research technique that has helped uncover real-world attacks, find cryptographic weaknesses, and understand both operator and miscreant behavior. Studies that employ scanning have largely assumed that services are hosted on their IANA-assigned ports, overlooking the study of services on unusual ports. In this work, we investigate where Internet services are deployed in practice and evaluate the security posture of services on unexpected ports. We show protocol deployment is more diffuse than previously believed and that protocols run on many additional ports beyond their primary IANA-assigned port. For example, only 3% of HTTP and 6% of TLS services run on ports 80 and 443, respectively. Services on non-standard ports are more likely to be insecure, which results in studies dramatically underestimating the security posture of Internet hosts. Building on our observations, we introduce LZR ("Laser"), a system that identifies 99% of identifiable unexpected services in five handshakes and dramatically reduces the time needed to perform application-layer scans on ports with few responsive expected services (e.g., 5500% speedup on 27017/MongoDB). We conclude with recommendations for future studies.
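
The core insight, that a handful of handshakes suffices to label most services, can be pictured with a toy fingerprinter: wait briefly for a server-first banner, then fall back to an HTTP probe. This Node.js sketch is illustrative only (timeouts, fingerprints, and the placeholder target are assumptions), not LZR's scanner.

```ts
import * as net from "net";

function identify(host: string, port: number): Promise<string> {
  return new Promise((resolve) => {
    const sock = net.connect(port, host);
    let probe: NodeJS.Timeout | undefined;
    const finish = (label: string) => { clearTimeout(probe); sock.destroy(); resolve(label); };
    sock.on("connect", () => {
      // Give server-first protocols (SSH, FTP, SMTP, ...) a moment to speak,
      // then probe as if we were an HTTP client.
      probe = setTimeout(() => sock.write(`GET / HTTP/1.1\r\nHost: ${host}\r\n\r\n`), 1000);
    });
    sock.on("data", (d) => {
      const banner = d.toString();
      if (banner.startsWith("SSH-")) finish("ssh");
      else if (banner.startsWith("HTTP/")) finish("http");
      else finish("other: " + banner.slice(0, 16).trim());
    });
    sock.on("error", () => finish("closed"));
    sock.setTimeout(5000, () => finish("no-response"));
  });
}

// identify("192.0.2.10", 8080).then(console.log); // placeholder target
```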

Blind In/On-Path Attacks and Applications to VPNs

William J. Tolley and Beau Kujath, Breakpointing Bad/Arizona State University; Mohammad Taha Khan, Washington and Lee University; Narseo Vallina-Rodriguez, IMDEA Networks Institute/ICSI; Jedidiah R. Crandall, Breakpointing Bad/Arizona State University

Available Media

Protecting network protocols within an encrypted tunnel, using technologies such as Virtual Private Networks (VPNs), is increasingly important to millions of users needing solutions to evade censorship or protect their traffic against in/on-path observers/attackers. In this paper, we present a series of attacks from two threat models: an attacker that can inject spoofed packets into the network stack of a VPN client (called client-side), and an attacker that can spoof packets on the Internet and send them to a VPN server (called server-side). In both cases, we assume that the attacker is in/on-path, and can count encrypted bytes or packets over time. In both threat models, we demonstrate attacks to infer the existence of, interfere with, or inject data into TCP connections forwarded through the encrypted VPN tunnel. In the server-side threat model, we also demonstrate an attack to hijack tunneled DNS queries and completely remove the protections of the VPN tunnel. For the attacks presented in this paper, we (1) assess their feasibility in terms of packet rates and timing; (2) test their applicability against a broad range of VPN technologies, types, and vendors; and (3) consider practical issues with respect to real-world attacks. We followed an ethical disclosure process for all attacks presented in this paper. Client-side attacks were addressed with two CVEs and partially mitigated by a series of updates from some operating system and VPN client vendors. Server-side attacks have not been addressed and are still feasible with all operating systems and VPN servers that we tested.

The Hijackers Guide To The Galaxy: Off-Path Taking Over Internet Resources

Tianxiang Dai, Fraunhofer Institute for Secure Information Technology SIT; Philipp Jeitner, Fraunhofer Institute for Secure Information Technology SIT, Technical University of Darmstadt; Haya Shulman, Fraunhofer Institute for Secure Information Technology SIT; Michael Waidner, Fraunhofer Institute for Secure Information Technology SIT, Technical University of Darmstadt

Available Media

Internet resources form the basic fabric of the digital society. They provide the fundamental platform for digital services and assets, e.g., for critical infrastructures, financial services, and government. Whoever controls that fabric effectively controls the digital society.

In this work we demonstrate that the current practices for managing Internet resources, namely IP addresses, domains, certificates, and virtual platforms, are insecure. Over long periods of time, adversaries can maintain control over Internet resources which they do not own and perform stealthy manipulations, leading to devastating attacks. We show that network adversaries can take over and manipulate at least 68% of the assigned IPv4 address space as well as 31% of the top Alexa domains. We demonstrate such attacks by hijacking the accounts associated with the digital resources.

To hijack the accounts, we launch off-path DNS cache poisoning attacks that redirect the password recovery link to adversarial hosts. We then demonstrate that the adversaries can manipulate the resources associated with these accounts. We find all the tested providers vulnerable to our attacks.

We recommend mitigations for blocking the attacks that we present in this work. Nevertheless, the countermeasures cannot solve the fundamental problem: the management of Internet resources should be revised to ensure that transactions cannot be applied as easily and stealthily as is currently possible.

Injection Attacks Reloaded: Tunnelling Malicious Payloads over DNS

Philipp Jeitner, TU Darmstadt; Haya Shulman, Fraunhofer SIT

Available Media

The traditional design principle for Internet protocols states: "Be strict when sending and tolerant when receiving" [RFC1958], and DNS is no exception. The transparency of DNS in handling DNS records, standardised specifically for DNS [RFC3597], is one of the key features that made it such a popular platform, facilitating a constantly increasing number of new applications. An application simply creates a new DNS record and can instantly start distributing it over DNS, without requiring any changes to DNS servers and platforms. Our Internet-wide study confirms that more than 1.3M (96% of tested) open DNS resolvers are standard compliant and treat DNS records transparently.

In this work we show that this "transparency" introduces a severe vulnerability in the Internet: we demonstrate a new method to launch string injection attacks by encoding malicious payloads into DNS records. We show how to weaponise such DNS records to attack popular applications. For instance, we apply string injection to launch a new type of DNS cache poisoning attack, which we evaluated against a population of open resolvers and found 105K to be vulnerable. Such cache poisoning cannot be prevented with common setups of DNSSEC. Our attacks apply to internal as well as to public services, for instance, we reveal that all eduroam services are vulnerable to our injection attacks, allowing us to launch exploits ranging from unauthorised access to eduroam networks to resource starvation. Depending on the application, our attacks cause system crashes, data corruption and leakage, degradation of security, and can introduce remote code execution and arbitrary errors.

In our evaluation of the attacks in the Internet we find that all the standard compliant open DNS resolvers we tested allow our injection attacks against applications and users on their networks.

Causal Analysis for Software-Defined Networking Attacks

Benjamin E. Ujcich, Georgetown University; Samuel Jero and Richard Skowyra, MIT Lincoln Laboratory; Adam Bates, University of Illinois at Urbana-Champaign; William H. Sanders, Carnegie Mellon University; Hamed Okhravi, MIT Lincoln Laboratory

Available Media

Software-defined networking (SDN) has emerged as a flexible network architecture for central and programmatic control. Although SDN can improve network security oversight and policy enforcement, ensuring the security of SDN from sophisticated attacks is an ongoing challenge for practitioners. Existing network forensics tools attempt to identify and track such attacks, but holistic causal reasoning across control and data planes remains challenging.

We present PicoSDN, a provenance-informed causal observer for SDN attack analysis. PicoSDN leverages fine-grained data and execution partitioning techniques, as well as a unified control and data plane model, to allow practitioners to efficiently determine root causes of attacks and to make informed decisions on mitigating them. We implement PicoSDN on the popular ONOS SDN controller. Our evaluation across several attack case studies shows that PicoSDN is practical for the identification, analysis, and mitigation of SDN attacks.

Track 3

Attacks

Session Chairs: Qi Alfred Chen, University of California, Irvine, and Xiali (Sharon) Hei, University of Louisiana at Lafayette

Weak Links in Authentication Chains: A Large-scale Analysis of Email Sender Spoofing Attacks

Kaiwen Shen, Chuhan Wang, and Minglei Guo, Tsinghua University; Xiaofeng Zheng, Tsinghua University and Qi An Xin Technology Research Institute; Chaoyi Lu and Baojun Liu, Tsinghua University; Yuxuan Zhao, North China Institute of Computing Technology; Shuang Hao, University of Texas at Dallas; Haixin Duan, Tsinghua University and Qi An Xin Technology Research Institute; Qingfeng Pan, Coremail Technology Co. Ltd; Min Yang, Fudan University

Available Media

As a fundamental communication service, email plays an important role in both individual and corporate communications, which also makes it one of the most frequently exploited attack vectors. An email's authenticity rests on an authentication chain involving multiple protocols, roles, and services; inconsistencies among them create security threats. Authenticity thus depends on the weakest link of the chain, as any failed part can break the whole chain-based defense.

This paper systematically analyzes the transmission of an email and identifies a series of new attacks capable of bypassing SPF, DKIM, DMARC and user-interface protections. In particular, by conducting a "cocktail" joint attack, more realistic emails can be forged to penetrate widely used email services, such as Gmail and Outlook. We conduct a large-scale experiment on 30 popular email services and 23 email clients, and find that all of them are vulnerable to certain types of the new attacks. We have reported the identified vulnerabilities to the related email service providers and received positive responses from 11 of them, including Gmail, Yahoo, iCloud and Alibaba. Furthermore, we propose key mitigating measures to defend against the new attacks. This work is thus of great value for identifying email spoofing attacks and improving the email ecosystem's overall security.

Automated Discovery of Denial-of-Service Vulnerabilities in Connected Vehicle Protocols

Shengtuo Hu, University of Michigan; Qi Alfred Chen, UC Irvine; Jiachen Sun, Yiheng Feng, Z. Morley Mao, and Henry X. Liu, University of Michigan

Available Media

With the development of the emerging Connected Vehicle (CV) technology, vehicles can wirelessly communicate with traffic infrastructure and other vehicles to exchange safety and mobility information in real time. However, the integrated communication capability inevitably increases the attack surface of vehicles, which can be exploited to cause safety hazards on the road. Thus, it is highly desirable to systematically understand design-level flaws in the current CV network stack as well as in CV applications, and the corresponding security/safety consequences, so that these flaws can be proactively discovered and addressed before large-scale deployment.

In this paper, we design CVAnalyzer, a system for discovering design-level flaws for availability violations of the CV network stack, as well as quantifying the corresponding security/safety consequences. To achieve this, CVAnalyzer combines the attack discovery capability of a general model checker and the quantitative threat assessment capability of a probabilistic model checker. Using CVAnalyzer, we successfully uncovered 4 new DoS (Denial-of-Service) vulnerabilities of the latest CV network protocols and 14 new DoS vulnerabilities of two CV platoon management protocols. Our quantification results show that these attacks can have success rates as high as 99%, and in the worst case can at least double the delay in packet processing, violating the latency requirement of CV communication. We implemented and validated all attacks in a real-world testbed, and also analyzed the fundamental causes to propose potential solutions. We have reported our findings in the CV network protocols to the IEEE 1609 Working Group, and the group has acknowledged the discovered vulnerabilities and plans to adopt our solutions.

Too Good to Be Safe: Tricking Lane Detection in Autonomous Driving with Crafted Perturbations

Pengfei Jing, The Hong Kong Polytechnic University and Keen Security Lab, Tencent; Qiyi Tang and Yuefeng Du, Keen Security Lab, Tencent; Lei Xue and Xiapu Luo, The Hong Kong Polytechnic University; Ting Wang, Pennsylvania State University; Sen Nie and Shi Wu, Keen Security Lab, Tencent

Available Media

Autonomous driving is developing rapidly and has achieved promising performance by adopting machine learning algorithms to finish various tasks automatically. Lane detection is one of the major tasks because its result directly affects steering decisions. Although recent studies have discovered some vulnerabilities in autonomous vehicles, to the best of our knowledge, none has investigated the security of the lane detection module in real vehicles. In this paper, we conduct the first investigation of the lane detection module in a real vehicle, and reveal that the over-sensitivity of the module can be exploited to attack the vehicle. More precisely, an over-sensitive lane detection module may regard small markings on the road surface, introduced by an adversary, as a valid lane and then steer the vehicle in the wrong direction. The challenge is to design road markings small enough to be perceived by the lane detection module yet unnoticeable to the driver; manually manipulating road markings to attack the module is labor-intensive and error-prone. We therefore propose a novel two-stage approach that automatically determines such road markings after tackling several technical challenges. Our approach first decides the optimal perturbations on the camera image and then maps them to road markings in the physical world. We conduct extensive experiments on a Tesla Model S vehicle, and the experimental results show that the lane detection module can be deceived by very unobtrusive perturbations into creating a lane, thus misleading the vehicle in auto-steer mode.

Acoustics to the Rescue: Physical Key Inference Attack Revisited

Soundarya Ramesh and Rui Xiao, National University of Singapore; Anindya Maiti, University of Oklahoma; Jong Taek Lee, Harini Ramprasad, and Ananda Kumar, National University of Singapore; Murtuza Jadliwala, University of Texas at San Antonio; Jun Han, National University of Singapore

Available Media

Lock picking and key bumping are the most common attacks on traditional pin tumbler door locks. However, these approaches require physical access to the lock throughout the attack, increasing suspicion and the chances of the attacker getting caught. To overcome this challenge, we propose Keynergy, a stealthy offline attack that infers key bittings (or secret) by substantially extending and improving prior work that only utilizes a still image of the key. Keynergy effectively utilizes the inherent audible "clicks" produced by a victim's key insertion, together with video footage of the victim holding the key, in order to infer the victim's key bittings. We evaluate Keynergy via a proof-of-concept implementation and real-world experiments comprising participants who perform multiple key insertions across a total of 75 keys, with the related audio recorded using different microphone types placed at varying distances. We demonstrate that Keynergy achieves an average keyspace reduction rate of around 75% with an acoustics-based approach alone. When we combine both acoustics and video, Keynergy obtains a reduced keyspace below ten keys for 8% of the keys (i.e., six keys out of the 75 keys tested).

Messy States of Wiring: Vulnerabilities in Emerging Personal Payment Systems

Jiadong Lou and Xu Yuan, University of Louisiana at Lafayette; Ning Zhang, Washington University in St. Louis

Available Media

This paper presents our study of an emerging paradigm of payment service that allows individual merchants to leverage the personal transfer service in third-party platforms to support commercial transactions. This is made possible by leveraging an additional order management system, collectively named Personal Payment System (PPS). To gain a better understanding of these emerging systems, we conducted a systematic study of 35 PPSs covering over 11,740 merchant clients serving more than 20 million customers. By examining the documentation, available source code, and demos, we extracted a common abstracted model for PPSs and discovered seven categories of vulnerabilities in the existing personal payment protocol design and system implementation. It is alarming that every PPS under study has at least one vulnerability. To further dissect these weaknesses, we present the corresponding attack methods that exploit the discovered vulnerabilities, and we conducted four successful real-world attacks to illustrate the severe consequences. We have responsibly disclosed the newly discovered vulnerabilities, some of which have been patched after our reporting.

Research on the Security of Visual Reasoning CAPTCHA

Yipeng Gao, Haichang Gao, Sainan Luo, Yang Zi, Shudong Zhang, Wenjie Mao, Ping Wang, and Yulong Shen, Xidian University; Jeff Yan, Linköping University

Available Media

CAPTCHA is an effective mechanism for protecting computers from malicious bots. With the development of deep learning techniques, current mainstream text-based CAPTCHAs have been proven to be insecure. Therefore, a major effort has been directed toward developing image-based CAPTCHAs, and image-based visual reasoning is emerging as a new direction of such development. Recently, Tencent deployed the Visual Turing Test (VTT) CAPTCHA. This appears to have been the first application of a visual reasoning scheme. Subsequently, other CAPTCHA service providers (Geetest, NetEase, Dingxiang, etc.) have proposed their own visual reasoning schemes to defend against bots. It is, therefore, natural to ask a fundamental question: are visual reasoning CAPTCHAs as secure as their designers expect? This paper presents the first attempt to solve visual reasoning CAPTCHAs. We implemented a holistic attack and a modular attack, which achieved overall success rates of 67.3% and 88.0% on VTT CAPTCHA, respectively. The results show that visual reasoning CAPTCHAs are not as secure as anticipated; this latest effort to use novel, hard AI problems for CAPTCHAs has not yet succeeded. Based on the lessons we learned from our attacks, we also offer some guidelines for designing visual CAPTCHAs with better security.

Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack

Takami Sato, Junjie Shen, and Ningfei Wang, University of California, Irvine; Yunhan Jia, ByteDance; Xue Lin, Northeastern University; Qi Alfred Chen, University of California, Irvine

Available Media

Automated Lane Centering (ALC) systems are convenient and widely deployed today, but also highly security- and safety-critical. In this work, we are the first to systematically study the security of state-of-the-art deep learning based ALC systems in their designed operational domains under physical-world adversarial attacks. We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches. To systematically generate the attack, we adopt an optimization-based approach and overcome domain-specific design challenges such as camera frame inter-dependencies due to attack-influenced vehicle control, and the lack of objective function design for lane detection models.

We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces. The results show that our attack is highly effective, with over 97.5% success rates and less than 0.903 sec average success time, substantially lower than the average driver reaction time. The attack is also found to be (1) robust to various real-world factors such as lighting conditions and view angles, (2) general to different model designs, and (3) stealthy from the driver's view. To understand the safety impacts, we conduct experiments using software-in-the-loop simulation and attack trace injection in a real vehicle. The results show that our attack can cause a 100% collision rate in different scenarios, including when tested with common safety features such as automatic emergency braking. We also evaluate and discuss defenses.

8:45 am–9:00 am

Break

9:00 am–10:45 am

Track 1

Research on Surveillance and Censorship

Session Chairs: Amir Houmansadr, University of Massachusetts Amherst, and Roger Dingledine, The Tor Project

Domain Shadowing: Leveraging Content Delivery Networks for Robust Blocking-Resistant Communications

Mingkui Wei, George Mason University

Available Media

We debut domain shadowing, a novel censorship evasion technique leveraging content delivery networks (CDNs). Domain shadowing exploits the fact that CDNs allow their customers to claim arbitrary domains as the back-end. By setting the front-end of a CDN service to an allowed domain and the back-end to a blocked one, a censored user can access resources of the blocked domain with all "indicators", including the connecting URL, the SNI of the TLS connection, and the Host header of the HTTP(S) request, appearing to belong to the allowed domain. Furthermore, we demonstrate that domain shadowing can be combined with domain fronting, a censorship evasion technique widely used a few years ago, making it even more difficult to block. Compared with existing censorship evasion solutions, domain shadowing is lightweight, incurs negligible overhead, and does not require dedicated third-party support. As a proof of concept, we implemented domain shadowing as a Firefox browser extension and demonstrated its capability to circumvent censorship within a heavily censored country known for its strict censorship policies and advanced blocking technologies.

Weaponizing Middleboxes for TCP Reflected Amplification

Kevin Bock, University of Maryland; Abdulrahman Alaraj, University of Colorado Boulder; Yair Fax and Kyle Hurley, University of Maryland; Eric Wustrow, University of Colorado Boulder; Dave Levin, University of Maryland

Distinguished Paper Award Winner and Third Prize winner of the 2021 Internet Defense Prize

Available Media

Reflective amplification attacks are a powerful tool in the arsenal of a DDoS attacker, but to date have almost exclusively targeted UDP-based protocols. In this paper, we demonstrate that non-trivial TCP-based amplification is possible and can be orders of magnitude more effective than well-known UDP-based amplification. By taking advantage of TCP-noncompliance in network middleboxes, we show that attackers can induce middleboxes to respond and amplify network traffic. With the novel application of a recent genetic algorithm, we discover and maximize the efficacy of new TCP-based reflective amplification attacks, and present several packet sequences that cause network middleboxes to respond with substantially more packets than we send.

We scanned the entire IPv4 Internet to measure how many IP addresses permit reflected amplification. We find hundreds of thousands of IP addresses that offer amplification factors greater than 100×. Through our Internet-wide measurements, we explore several open questions regarding DoS attacks, including the root cause of so-called "mega amplifiers". We also report on network phenomena that cause some of the TCP-based attacks to be so effective as to technically have an infinite amplification factor (after the attacker sends a constant number of bytes, the reflector generates traffic indefinitely). We have made our code publicly available.

Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong

Martin R. Albrecht, Jorge Blasco, Rikke Bjerg Jensen, and Lenka Mareková, Royal Holloway, University of London

Available Media

The Anti-Extradition Law Amendment Bill protests in Hong Kong present a rich context for exploring information security practices among protesters due to their large-scale urban setting and highly digitalised nature. We conducted in-depth, semi-structured interviews with 11 participants of these protests. Research findings reveal how protesters favoured Telegram and relied on its security for internal communication and organisation of on-the-ground collective action; were organised in small private groups and large public groups to enable collective action; adopted tactics and technologies that enable pseudonymity; and developed a variety of strategies to detect compromises and to achieve forms of forward secrecy and post-compromise security when group members were (presumed) arrested. We further show how group administrators had assumed the roles of leaders in these 'leaderless' protests and were critical to collective protest efforts.

How Great is the Great Firewall? Measuring China's DNS Censorship

Nguyen Phong Hoang, Stony Brook University and Citizen Lab, University of Toronto; Arian Akhavan Niaki, University of Massachusetts, Amherst; Jakub Dalek, Jeffrey Knockel, and Pellaeon Lin, Citizen Lab, University of Toronto; Bill Marczak, Citizen Lab, University of Toronto, and University of California, Berkeley; Masashi Crete-Nishihata, Citizen Lab, University of Toronto; Phillipa Gill, University of Massachusetts, Amherst; Michalis Polychronakis, Stony Brook University

Available Media

The DNS filtering apparatus of China's Great Firewall (GFW) has evolved considerably over the past two decades. However, most prior studies of China's DNS filtering were performed over short time periods, so changes in the GFW's behavior have gone unnoticed. In this study, we introduce GFWatch, a large-scale, longitudinal measurement platform capable of testing hundreds of millions of domains daily, enabling continuous monitoring of the GFW's DNS filtering behavior.
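
A common measurement technique in this space, and a simplified stand-in for what a platform like GFWatch automates at scale, is to send DNS queries toward a host that runs no resolver: since no legitimate answer is possible, any response must have been injected on the path. A minimal Node.js sketch follows; the target IP is a placeholder from a reserved test range and must be replaced with a controlled, non-resolving host.

```ts
import * as dgram from "dgram";

// Build a bare DNS A-record query for `domain` (ID 0x1234, recursion desired).
function dnsQuery(domain: string): Buffer {
  const header = Buffer.from([0x12, 0x34, 0x01, 0x00, 0, 1, 0, 0, 0, 0, 0, 0]);
  const qname = Buffer.concat(
    domain.split(".").map((l) => Buffer.concat([Buffer.from([l.length]), Buffer.from(l)]))
  );
  const tail = Buffer.from([0, 0, 1, 0, 1]); // root label, QTYPE=A, QCLASS=IN
  return Buffer.concat([header, qname, tail]);
}

const sock = dgram.createSocket("udp4");
sock.on("message", (msg, rinfo) => {
  // The target runs no DNS service, so any reply was injected en route.
  console.log(`injected response on path to ${rinfo.address}:`, msg.toString("hex"));
});
sock.send(dnsQuery("example.com"), 53, "203.0.113.1"); // placeholder non-resolver
setTimeout(() => sock.close(), 5000);
```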

We present the results of running GFWatch over a nine-month period, during which we tested an average of 411M domains per day and detected a total of 311K domains censored by GFW's DNS filter. To the best of our knowledge, this is the largest number of domains tested and censored domains discovered in the literature. We further reverse engineer regular expressions used by the GFW and find 41K innocuous domains that match these filters, resulting in overblocking of their content. We also observe bogus IPv6 and globally routable IPv4 addresses injected by the GFW, including addresses owned by US companies, such as Facebook, Dropbox, and Twitter.

Using data from GFWatch, we studied the impact of GFW blocking on the global DNS system. We found 77K censored domains with DNS resource records polluted in popular public DNS resolvers, such as Google and Cloudflare. Finally, we propose strategies to detect poisoned responses that can (1) sanitize poisoned DNS records from the cache of public DNS resolvers, and (2) assist in the development of circumvention tools to bypass the GFW's DNS censorship.

Balboa: Bobbing and Weaving around Network Censorship

Marc B. Rosen, James Parker, and Alex J. Malozemoff, Galois, Inc.

Available Media

We introduce Balboa, a link obfuscation framework for censorship circumvention. Balboa provides a general framework for tunneling data through existing applications. Balboa sits between an application and the operating system, intercepting outgoing network traffic and rewriting it to embed data. To avoid introducing any distinguishable divergence from the expected application behavior, Balboa only rewrites traffic that matches an externally specified traffic model pre-shared between the communicating parties. The traffic model captures some subset of the network traffic (e.g., some subset of the music an audio streaming server streams). The sender uses this model to replace outgoing data with a pointer to the associated location in the model and embeds data in the freed-up space. The receiver then extracts the data, replacing the pointer with the original data from the model before passing the data on to the application. When using TLS, this approach means that application behavior with Balboa is equivalent, modulo small (protocol-dependent) timing differences, to running the application without Balboa.
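
The pointer-replacement step can be sketched in a few lines. Assuming both endpoints pre-share the same `model` buffer, the sender swaps a payload that already occurs in the model for an equal-length pointer plus covert bytes, and the receiver inverts the swap. The encoding below is illustrative; the real system operates beneath the TLS layer of an unmodified application.

```ts
const model = Buffer.from("...pre-shared audio stream bytes...");

// Replace a payload found in the model with (offset, length, covert data).
function encode(payload: Buffer, covert: Buffer): Buffer | null {
  const off = model.indexOf(payload);
  if (off < 0 || payload.length < 8 + covert.length) return null; // model miss
  const out = Buffer.alloc(payload.length); // same length: traffic volume unchanged
  out.writeUInt32BE(off, 0);
  out.writeUInt32BE(payload.length, 4);
  covert.copy(out, 8);
  return out;
}

// Restore the original payload and extract the covert bytes (a real scheme
// would also carry the covert length; here it is simply zero-padded).
function decode(wire: Buffer): { payload: Buffer; covert: Buffer } {
  const off = wire.readUInt32BE(0);
  const len = wire.readUInt32BE(4);
  return { payload: model.subarray(off, off + len), covert: wire.subarray(8) };
}

const wire = encode(Buffer.from("audio stream bytes"), Buffer.from("hi"))!;
console.log(decode(wire).payload.toString()); // -> "audio stream bytes"
```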

Balboa differs from prior approaches in that it (1) provides a framework for tunneling data through arbitrary (TLS-protected) protocols/applications, and (2) runs the unaltered application binaries on standard inputs, as opposed to most prior tunneling approaches, which run the application on non-standard (and thus potentially distinguishable) inputs.

We present two instantiations of Balboa—one for audio streaming and one for web browsing—and demonstrate the difficulty of identifying Balboa by a machine learning classifier.

Once is Never Enough: Foundations for Sound Statistical Inference in Tor Network Experimentation

Rob Jansen, U.S. Naval Research Laboratory; Justin Tracey and Ian Goldberg, University of Waterloo

Available Media

Tor is a popular low-latency anonymous communication system that focuses on usability and performance: a faster network will attract more users, which in turn will improve the anonymity of everyone using the system. The standard practice for previous research attempting to enhance Tor performance is to draw conclusions from the observed results of a single simulation for standard Tor and for each research variant. But because the simulations are run in sampled Tor networks, it is possible that sampling error alone could cause the observed effects. Therefore, we call into question the practical meaning of any conclusions that are drawn without considering the statistical significance of the reported results.

In this paper, we build foundations upon which we improve the Tor experimental method. First, we present a new Tor network modeling methodology that produces more representative Tor networks as well as new and improved experimentation tools that run Tor simulations faster and at a larger scale than was previously possible. We showcase these contributions by running simulations with 6,489 relays and 792k simultaneously active users, the largest known Tor network simulations and the first at a network scale of 100%. Second, we present new statistical methodologies through which we: (i) show that running multiple simulations in independently sampled networks is necessary in order to produce informative results; and (ii) show how to use the results from multiple simulations to conduct sound statistical inference. We present a case study using 420 simulations to demonstrate how to apply our methodologies to a concrete set of Tor experiments and how to analyze the results.
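
The inference step the authors advocate can be as simple as reporting an interval estimate across independently sampled networks instead of a point value from one run. A minimal sketch (a plain z-interval; the paper's methodology is more careful, and the sample values below are made up):

```ts
// Mean and 95% confidence interval of a performance metric across
// independent simulation runs (z-interval for simplicity).
function confidenceInterval(samples: number[], z = 1.96): [number, number] {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  const half = z * Math.sqrt(variance / n);
  return [mean - half, mean + half];
}

// e.g. median download time (seconds) from 10 independently sampled networks
console.log(confidenceInterval([4.1, 3.8, 4.6, 4.0, 4.3, 3.9, 4.5, 4.2, 4.0, 4.4]));
```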

Rollercoaster: An Efficient Group-Multicast Scheme for Mix Networks

Daniel Hugenroth, Martin Kleppmann, and Alastair R. Beresford, University of Cambridge

Available Media

Mix network designs such as Loopix provide strong metadata anonymity guarantees that are crucial across many applications. However, because they limit the rate at which messages can be sent by each user, they incur high delays when sending many messages to multiple recipients—for instance, in decentralised collaborative apps.

In this paper we present an efficient multicast scheme named Rollercoaster that reduces the time for delivering a message to all members of a group of size m from O(m) to O(log m). Rollercoaster can be deployed without modifications to the underlying mix network, allowing it to benefit from the anonymity set provided by existing users. We further develop an extension that achieves the same asymptotic guarantees in the presence of unreliable group members.
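
The O(log m) behavior comes from letting every member that already holds a message forward it onward, as in a distribution tree. A toy schedule is sketched below, assuming a fixed fan-out and reliable members; the real scheme additionally handles unreliable members and mix-network rate limits.

```ts
// Each member that already holds the message forwards it to `fanout` new
// members per round, so the number of holders grows geometrically.
function multicastRounds(members: string[], fanout = 2): string[][] {
  const rounds: string[][] = [];
  let have = 1; // the original sender
  while (have < members.length) {
    const newly = Math.min(have * fanout, members.length - have);
    rounds.push(members.slice(have, have + newly));
    have += newly;
  }
  return rounds; // rounds.length is O(log m) in the group size m
}

console.log(multicastRounds(["sender", "a", "b", "c", "d", "e", "f", "g"]));
// -> [["a","b"], ["c","d","e","f","g"]]: 8 members reached in 2 rounds
```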

While the scheme is applicable to many mix network designs, we evaluate it for the Loopix network, which is the most advanced and practical design to date. For this evaluation we developed a network simulator that allows fast, reproducible, and inspectable runs while eliminating external influences.

Track 2

Malware and Program Analysis 1

Session Chair: Michael Franz, University of California, Irvine

Obfuscation-Resilient Executable Payload Extraction From Packed Malware

Binlin Cheng, Hubei Normal University & Wuhan University; Jiang Ming, Erika A Leal, and Haotian Zhang, The University of Texas at Arlington; Jianming Fu and Guojun Peng, Wuhan University; Jean-Yves Marion, Université de Lorraine, CNRS, LORIA

Available Media

Over the past two decades, packed malware has been a veritable challenge for security analysts. Not only is determining the end of the unpacking increasingly difficult, but advanced packers also embed a variety of anti-analysis tricks to impede reverse engineering. As malware's APIs provide rich information about malicious behavior, one common anti-analysis strategy is API obfuscation, which removes the metadata of imported APIs from the malware's PE header and complicates API name resolution from API call sites. In this way, even when security analysts obtain the unpacked code, a disassembler still fails to recognize imported API names, and the unpacked code cannot be successfully executed. Recently, generic binary unpacking has made breakthrough progress with noticeable performance improvements. However, reconstructing the unpacked code's import tables, which is vital for further static and dynamic malware analysis, has largely been overlooked. Existing approaches are far from mature: they either can be easily evaded by various API obfuscation schemes (e.g., stolen code) or suffer from incomplete API coverage.

In this paper, we aim to achieve the ultimate goal of Windows malware unpacking: recovering an executable malware program from the packed and obfuscated binary code. Based on the process memory when the original entry point (OEP) is reached, we develop a hardware-assisted tool, API-Xray, to reconstruct import tables. Import table reconstruction is challenging enough in its own right. Our core technique, API Micro Execution, explores all possible API callsites and executes them without knowing API argument values. At the same time, we take advantage of hardware tracing via Intel Branch Trace Store and the NX bit to resolve API names and finally rebuild import tables. Compared with previous work, API-Xray has better resistance against various API obfuscation schemes and more coverage of resolved Windows API names. Since July 2019, we have tested API-Xray in practice to assist security professionals in malware analysis: we have successfully rebuilt 155,811 executable malware programs and substantially improved the detection rate for 7,514 unknown or new malware variants.

DeepReflect: Discovering Malicious Functionality through Binary Reconstruction

Evan Downing, Georgia Institute of Technology; Yisroel Mirsky, Georgia Institute of Technology & Ben-Gurion University; Kyuhong Park and Wenke Lee, Georgia Institute of Technology

Available Media

Deep learning has continued to show promising results for malware classification. However, to identify key malicious behaviors, malware analysts are still tasked with reverse engineering unknown malware binaries using static analysis tools, which can take hours. Although machine learning can be used to help identify important parts of a binary, supervised approaches are impractical due to the expense of acquiring a sufficiently large labeled dataset.

To increase the productivity of static (or manual) reverse engineering, we propose DeepReflect: a tool for localizing and identifying malware components within a malicious binary. To localize malware components, we use an unsupervised deep neural network in a novel way, and classify the components through a semi-supervised cluster analysis, where analysts incrementally provide labels during their daily workflow. The tool is practical since it requires no data labeling to train the localization model, and minimal/noninvasive labeling to train the classifier incrementally.

In our evaluation with five malware analysts on over 26k malware samples, we found that DeepReflect reduces the number of functions that an analyst needs to reverse engineer by 85% on average. Our approach also detects 80% of the malware components compared to 43% when using a signature-based tool (CAPA). Furthermore, DeepReflect performs better with our proposed autoencoder than SHAP (an AI explanation tool). This is significant because SHAP, a state-of-the-art method, requires a labeled dataset and autoencoders do not.

When Malware Changed Its Mind: An Empirical Study of Variable Program Behaviors in the Real World

Erin Avllazagaj, University of Maryland, College Park; Ziyun Zhu, Facebook; Leyla Bilge, NortonLifeLock Research Group; Davide Balzarotti, EURECOM; Tudor Dumitraș, University of Maryland, College Park

Available Media

Behavioral program analysis is widely used for understanding malware behavior, for creating rule-based detectors, and for clustering samples into malware families. However, this approach is ineffective when the behavior of individual samples changes across different executions, owing to environment sensitivity, evasive techniques or time variability. While the inability to observe the complete behavior of a program is a well-known limitation of dynamic analysis, the prevalence of this behavior variability in the wild, and the behavior components that are most affected by it, are still unknown. As the behavioral traces are typically collected by executing the samples in a controlled environment, the models created and tested using such traces do not account for the broad range of behaviors observed in the wild, and may result in a false sense of security.

In this paper we conduct the first quantitative analysis of behavioral variability in Windows malware, PUP, and benign samples, using a novel dataset of 7.6M execution traces recorded on 5.4M real hosts from 113 countries. We analyze program behaviors at multiple granularities, and we show how they change across hosts and across time. We then analyze the invariant parts of the malware behaviors, and we show how this affects the effectiveness of malware detection using a common class of behavioral rules. Our findings have actionable implications for malware clustering and detection, and they emphasize that program behavior in the wild depends on a subtle interplay of factors that may only be observed at scale, by monitoring malware on real hosts.

The Circle Of Life: A Large-Scale Study of The IoT Malware Lifecycle

Omar Alrawi, Charles Lever, and Kevin Valakuzhy, Georgia Institute of Technology; Ryan Court and Kevin Snow, Zero Point Dynamics; Fabian Monrose, University of North Carolina at Chapel Hill; Manos Antonakakis, Georgia Institute of Technology

Available Media

Our current defenses against IoT malware may not be adequate to remediate an IoT malware attack similar to the Mirai botnet. This work investigates this matter by systematically and empirically studying the lifecycle of IoT malware and comparing it with traditional malware that targets desktop and mobile platforms. We present a large-scale measurement of more than 166K Linux-based IoT malware samples collected over a year. We compare our results with prior works by systematizing desktop and mobile malware studies into a novel framework and answering key questions about defense readiness. Based on our findings, we deduce that the required technology to defend against IoT malware is available, but we conclude that there are insufficient efforts in place to deal with a large-scale IoT malware outbreak.

Forecasting Malware Capabilities From Cyber Attack Memory Images

Omar Alrawi, Moses Ike, Matthew Pruett, Ranjita Pai Kasturi, Srimanta Barua, Taleb Hirani, Brennan Hill, and Brendan Saltaformaggio, Georgia Institute of Technology

Available Media

The remediation of ongoing cyber attacks relies upon timely malware analysis, which aims to uncover malicious functionalities that have not yet executed. Unfortunately, this requires repeated context switching between different tools and incurs a high cognitive load on the analyst, slowing down the investigation and giving attackers an advantage. We present Forecast, a post-detection technique that enables incident responders to automatically predict the capabilities which malware have staged for execution. Forecast is based on a probabilistic model that allows it to discover capabilities and also weigh each capability according to its relative likelihood of execution (i.e., forecasts). Forecast leverages the execution context of the ongoing attack (from the malware's memory image) to guide a symbolic analysis of the malware's code. We performed extensive evaluations, with 6,727 real-world malware samples and futuristic attacks aiming to subvert Forecast, showing its accuracy and robustness in predicting malware capabilities.

YARIX: Scalable YARA-based Malware Intelligence

Michael Brengel and Christian Rossow, CISPA Helmholtz Center for Information Security

Available Media

YARA is the industry standard for searching for patterns in malware data sets. Malware analysts heavily rely on YARA rules to identify specific threats, e.g., by scanning unknown malware samples for patterns that are characteristic of a certain malware strain. While YARA is tremendously useful for inspecting individual files, its run time grows linearly with the number of input files, resulting in prohibitive performance penalties on large malware corpora.

We present YARIX, a methodology to efficiently reveal files matching arbitrary YARA rules. In order to scale to large malware corpora, YARIX uses an inverted n-gram index that maps fixed-length byte sequences to lists of files in which they appear. To efficiently query such corpora, YARIX optimizes YARA searches by transforming YARA rules into index lookups to obtain a set of candidate files that potentially match the rule. Given the storage demands that arise when indexing binary files, YARIX compresses the disk footprint with variable byte delta encoding, abstracts from file offsets, and leverages a novel grouping-based compression methodology. This completeness-preserving approximation will then be scanned using YARA to get the actual set of matching files.

Using 32M malware samples and 1404 YARA rules, we show that YARIX scales in both disk footprint and search performance. The index requires just ≈74% of the space required for storing the malware samples. Querying YARIX with a YARA rule in our test setup is five orders of magnitude faster than using standard sequential YARA scans.
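To sketch the core indexing idea in isolation: the following minimal Python illustration assumes 4-byte n-grams and omits YARIX's variable byte delta encoding, offset abstraction, and grouping-based compression; it also assumes a rule has already been decomposed into plain byte patterns.

    from collections import defaultdict

    N = 4  # fixed n-gram length (an assumption for this demo)

    def ngrams(data):
        return {data[i:i + N] for i in range(len(data) - N + 1)}

    def build_index(corpus):
        """Inverted index: n-gram -> set of files containing it."""
        index = defaultdict(set)
        for name, data in corpus.items():
            for g in ngrams(data):
                index[g].add(name)
        return index

    def candidates(index, pattern, all_files):
        """A file can match `pattern` only if it contains every n-gram of the
        pattern, so intersecting posting lists can never lose a true match."""
        hits = set(all_files)
        for g in ngrams(pattern):
            hits &= index.get(g, set())
        return hits

    corpus = {"a.bin": b"..MZ\x90evilpayload..", "b.bin": b"..benign data.."}
    idx = build_index(corpus)
    print(candidates(idx, b"evilpayload", set(corpus)))  # {'a.bin'}

Because the candidate set only over-approximates the true matches, scanning it with stock YARA preserves completeness while skipping the bulk of the corpus.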

Constraint-guided Directed Greybox Fuzzing

Gwangmu Lee, Seoul National University; Woochul Shim, Samsung Research; Byoungyoung Lee, Seoul National University

Available Media

Directed greybox fuzzing is an augmented fuzzing technique intended for targeted uses such as crash reproduction and proof-of-concept generation; it gives directedness to fuzzing by driving the seeds toward designated program locations called target sites. However, we find that directed greybox fuzzing can still suffer from long fuzzing times before exposing the targeted crash, because it considers neither the order of the target sites nor the data conditions needed to trigger the crash. This paper presents constraint-guided directed greybox fuzzing, which aims to satisfy a sequence of constraints rather than merely reaching a set of target sites. Constraint-guided greybox fuzzing defines a constraint as the combination of a target site and the data conditions, and drives the seeds to satisfy the constraints in the specified order. We automatically generate the constraints with seven types of crash dumps and four types of patch changelogs, and evaluate the prototype system CAFL against the representative directed greybox fuzzing system AFLGo with 47 real-world crashes and 12 patch changelogs. The evaluation shows CAFL outperforms AFLGo by 2.88x for crash reproduction, and performs better in PoC generation as the constraints become more explicit.
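A minimal sketch of the ordered-constraint idea, assuming constraints are (target site, data condition) pairs checked against an execution trace; the trace format and the fitness signal are simplified placeholders rather than CAFL's actual distance metric.

    # Each constraint pairs a target site with a data condition on the state.
    def satisfied_prefix(trace, constraints):
        """Count how many constraints the trace satisfies *in order*:
        constraint i+1 only counts once constraint i has been met."""
        i = 0
        for site, state in trace:
            if i < len(constraints):
                target, condition = constraints[i]
                if site == target and condition(state):
                    i += 1
        return i

    # Example: a use-after-free PoC must reach free() first, then the use
    # site with the freed pointer (hypothetical site names).
    constraints = [
        ("free_site", lambda st: True),
        ("use_site", lambda st: st.get("ptr_freed", False)),
    ]
    trace = [("free_site", {}), ("use_site", {"ptr_freed": True})]
    assert satisfied_prefix(trace, constraints) == 2  # both met, in order

Seeds with a longer satisfied prefix are then favored by the scheduler, driving the population toward the full constraint sequence.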

Track 3

Mobile System Security and Privacy

Session Chairs: Brendan Saltaformaggio, Georgia Institute of Technology, and Zhiqiang Lin, The Ohio State University

PrivateDrop: Practical Privacy-Preserving Authentication for Apple AirDrop

Alexander Heinrich, Matthias Hollick, Thomas Schneider, Milan Stute, and Christian Weinert, TU Darmstadt

Available Media

Apple's offline file-sharing service AirDrop is integrated into more than 1.5 billion end-user devices worldwide. We discovered two design flaws in the underlying protocol that allow attackers to learn the phone numbers and email addresses of both sender and receiver devices. As a remediation, we study the applicability of private set intersection (PSI) to mutual authentication, which is similar to contact discovery in mobile messengers. We propose a novel optimized PSI-based protocol called PrivateDrop that addresses the specific challenges of offline resource-constrained operation and integrates seamlessly into the current AirDrop protocol stack. Using our native PrivateDrop implementation for iOS and macOS, we experimentally demonstrate that PrivateDrop preserves AirDrop's exemplary user experience with an authentication delay well below one second. We responsibly disclosed our findings to Apple and open-sourced our PrivateDrop implementation.
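For intuition, here is a textbook Diffie-Hellman-style PSI sketch of the primitive PrivateDrop builds on. This is not Apple's optimized protocol: the demo-sized modulus is insecure, and the hash-to-group step is idealized.

    import hashlib, secrets

    P = 2**127 - 1  # Mersenne prime; demo-sized modulus, insecure in practice

    def h2g(item):
        """Idealized hash-to-group for an identifier (e.g., a phone number)."""
        return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P or 1

    def psi(sender_set, receiver_set):
        a = secrets.randbelow(P - 3) + 2   # sender's secret exponent
        b = secrets.randbelow(P - 3) + 2   # receiver's secret exponent
        # Sender sends blinded identifiers; receiver blinds them again.
        doubly = {pow(pow(h2g(x), a, P), b, P) for x in sender_set}
        # Receiver's identifiers, blinded by receiver then by sender.
        blinded = {x: pow(pow(h2g(x), b, P), a, P) for x in receiver_set}
        # Equal double-blinded values reveal (only) the intersection.
        return {x for x, v in blinded.items() if v in doubly}

    print(psi({"+1-555-0100", "+1-555-0133"}, {"+1-555-0100", "+1-555-0177"}))
    # -> {'+1-555-0100'}

Each side learns which identifiers match and nothing about the rest, which is the mutual-authentication property AirDrop's contact discovery needs.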

Privacy-Preserving and Standard-Compatible AKA Protocol for 5G

Yuchen Wang, TCA of State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences & Alibaba Group; Zhenfeng Zhang, TCA of State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences; Yongquan Xie, Commercial Cryptography Testing Center of State Cryptography Administration

Available Media

The 3GPP consortium has published the Authentication and Key Agreement protocol for the 5th generation (5G) mobile communication system (i.e., 5G-AKA) in Technical Specification (TS) 33.501. It introduces public key encryption to conceal the so-called SUPIs so as to enhance mobile users' privacy. However, 5G-AKA is only privacy-preserving in the presence of passive attackers, and is still vulnerable to linkability attacks from active attackers. An active attacker can track target mobile phones by performing these attacks, which puts the privacy of users at risk.

In this paper, we propose 5G-AKA′, a privacy-preserving and standard-compatible AKA protocol for 5G. It is resistant to linkability attacks performed by active attackers, and is compatible with the SIM cards and currently deployed Serving Networks (SNs). In particular, we first conduct an analysis of the known linkability attacks on 5G-AKA, and identify a root cause of all these attacks. Then, we design a countermeasure with the inherent key encapsulation mechanism of ECIES (i.e., ECIES-KEM), and use the shared key established by ECIES-KEM to encrypt the challenges sent by a Home Network (HN). With this measure, a target User Equipment (UE) that receives a message replayed from its previously attended sessions behaves as non-target UEs do, which prevents the attacker from distinguishing the UE by linking it with its previous sessions. Moreover, 5G-AKA′ does not incur additional bandwidth cost, and only introduces limited additional time costs of 0.02% to 0.03%. Finally, we use a state-of-the-art formal verification tool, the Tamarin prover, to prove that 5G-AKA′ achieves the desired security goals of privacy, authentication, and secrecy.
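A minimal sketch of the ECIES-KEM countermeasure, using X25519 plus an AEAD in place of the TS 33.501 profile; the message layout and names are illustrative assumptions, not the standardized encoding.

    # pip install cryptography
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    import os

    hn_static = X25519PrivateKey.generate()  # Home Network's long-term key pair

    def kdf(shared):
        return HKDF(hashes.SHA256(), 32, salt=None, info=b"aka-demo").derive(shared)

    # UE: encapsulate a fresh session key to the HN's public key (as in SUCI).
    ue_eph = X25519PrivateKey.generate()
    k_ue = kdf(ue_eph.exchange(hn_static.public_key()))
    encapsulation = ue_eph.public_key()      # sent to the HN

    # HN: decapsulate and encrypt the challenge RAND under the session key.
    k_hn = kdf(hn_static.exchange(encapsulation))
    nonce, rand = os.urandom(12), os.urandom(16)
    challenge = nonce + AESGCM(k_hn).encrypt(nonce, rand, b"")

    # UE: decrypt. A challenge replayed from an older session was encrypted
    # under a different session key, so AEAD decryption fails and the target
    # UE behaves exactly like a non-target UE, defeating linkability.
    assert AESGCM(k_ue).decrypt(challenge[:12], challenge[12:], b"") == rand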

SEApp: Bringing Mandatory Access Control to Android Apps

Matthew Rossi, Dario Facchinetti, and Enrico Bacis, Università degli Studi di Bergamo; Marco Rosa, SAP Security Research; Stefano Paraboschi, Università degli Studi di Bergamo

Available Media

Mandatory Access Control (MAC) has contributed greatly to improving the security of modern operating systems. A clear demonstration is Android, which has progressively assigned a greater role to SELinux since its introduction in 2013. These benefits, however, have mostly been dedicated to protecting system components against misbehaving apps; no control over MAC is offered to app developers. Our solution overcomes this limitation, giving developers the power to define ad-hoc MAC policies for their apps and supporting the internal compartmentalization of app components.

This is a natural evolution of the security mechanisms already available in Android, but its realization requires considering that (i) the security of system components must be maintained, (ii) the solution must be usable by developers, and (iii) the performance impact should be limited. Our proposal meets these three requirements and is supported by an open-source implementation.

A11y and Privacy don't have to be mutually exclusive: Constraining Accessibility Service Misuse on Android

Jie Huang, Michael Backes, and Sven Bugiel, CISPA Helmholtz Center for Information Security

Available Media

Accessibility features of Android are crucial in assisting people with disabilities or impairments to navigate their devices. However, these same powerful features are commonly misused by shady apps for malevolent purposes, such as stealing data from other apps. Unfortunately, existing defenses do not allow apps to protect themselves while at the same time being fully inclusive to users with accessibility needs.

To enhance the privacy protection of the user while preserving the accessibility features for assistive apps, we introduce an extension to Android's accessibility framework. Our design is based on a study of how accessibility features are used in 95 existing accessibility apps of different types (malware, utility, and a11y). Based on those insights, we propose to model the usage of the accessibility framework as a pipeline of code modules, which are all sandboxed on the system-side. By policing the data flows of those modules, we achieve more fine-grained control over the access to accessibility features and the way they are used in apps, allowing a balance between accessibility functionality for dependent users and reduced privacy risks. We demonstrate the feasibility of our solution by migrating two real-world apps to our privacy-enhanced accessibility framework.

An Investigation of the Android Kernel Patch Ecosystem

Zheng Zhang, UC Riverside; Hang Zhang and Zhiyun Qian, UC Riverside; Billy Lau, Google Inc.

Available Media

Open-source projects are often reused in commercial software. Android, a popular mobile operating system, is a great example that has fostered an ecosystem of open-source kernels. However, due to the largely decentralized and fragmented nature of this ecosystem, patch propagation from the upstream through multiple layers to end devices can be severely delayed. In this paper, we undertake a thorough investigation of patch propagation behaviors in the entire Android kernel ecosystem. By analyzing the CVEs and patches available since the inception of the Android security bulletin, as well as open-source upstream kernels (e.g., Linux and AOSP) and hundreds of mostly binary OEM kernels (e.g., by Samsung), we find that the delays of patches are largely due to current patching practices and the lack of knowledge about which upstream commits are security-critical. Unfortunately, we find that the gap between the first publicly available patch and its final application on end devices is often months and even years, leaving a large attack window for experienced hackers to exploit the unpatched vulnerabilities.

Share First, Ask Later (or Never?) Studying Violations of GDPR's Explicit Consent in Android Apps

Trung Tin Nguyen, CISPA Helmholtz Center for Information Security; Saarbrücken Graduate School of Computer Science, Saarland University; Michael Backes, Ninja Marnau, and Ben Stock, CISPA Helmholtz Center for Information Security

Available Media

Since the General Data Protection Regulation (GDPR) went into effect in May 2018, online services are required to obtain users' explicit consent before sharing users' personal data with third parties that use the data for their own purposes. While violations of this legal basis on the Web have been studied in-depth, the community lacks insight into such violations in the mobile ecosystem.

We perform the first large-scale measurement on Android apps in the wild to understand the current state of the violation of GDPR's explicit consent. Specifically, we build a semi-automated pipeline to detect data sent out to the Internet without prior consent and apply it to a set of 86,163 Android apps. Based on the domains that receive data protected under the GDPR without prior consent, we collaborate with a legal scholar to assess if these contacted domains are third-party data controllers. Doing so, we find 24,838 apps send personal data towards data controllers without the user's explicit prior consent. To understand the reasons behind this, we run a notification campaign to inform affected developers and gather insights from their responses. We then conduct an in-depth analysis of violating apps as well as the corresponding third parties' documentation and privacy policies. Based on the responses and our analysis of available documentation, we derive concrete recommendations for all involved entities in the ecosystem to allow data subjects to exercise their fundamental rights and freedoms.

DEFInit: An Analysis of Exposed Android Init Routines

Yuede Ji, University of North Texas; Mohamed Elsabagh, Ryan Johnson, and Angelos Stavrou, Kryptowire

Available Media

During the booting process of an Android device, a special daemon called Init is launched by the kernel as the first user-space process. Android allows vendors to extend the behavior of Init by introducing custom routines in .rc files. These Init routines can also be triggered by privileged pre-installed apps in a certain manner to accomplish privileged functionalities. However, as these pre-installed apps may fail to properly protect access to the code sites triggering these Init routines, the capabilities of these routines may leak to unprivileged apps, allowing them to cross security boundaries set by the system. To this end, this study investigates the prevalence of these Init routines and their security impact. We present DEFInit, a tool that helps automate the process of identifying Init routines exposed by pre-installed apps and estimating their potential security impact. Our findings are alarming. We found that custom Init routines added by vendors were substantial in number and had significant security impact. On a data set of 259 firmware images from the top 21 vendors worldwide, we identified 1,947 exposed custom Init routines in 101 firmware images from 13 vendors. Of these routines, 515 performed at least one sensitive action. We verified 89 instances spanning 30 firmware images from 6 vendors that allow unprivileged apps to perform sensitive functionalities without user interaction, including disabling SELinux enforcement, sniffing network traffic, and reading system logs.

10:45 am–11:15 am

Break

11:15 am–1:00 pm

Track 1

Phishing and the Malicious Web

Session Chairs: David Freeman, Facebook Inc., and Sadia Afroz, International Computer Science Institute (ICSI), University of California, Berkeley, and Avast

Scalable Detection of Promotional Website Defacements in Black Hat SEO Campaigns

Ronghai Yang, Sangfor Technologies Inc.; Xianbo Wang, The Chinese University of Hong Kong; Cheng Chi, Dawei Wang, Jiawei He, and Siming Pang, Sangfor Technologies Inc.; Wing Cheong Lau, The Chinese University of Hong Kong

Available Media

Miscreants from online underground economies regularly exploit website vulnerabilities and inject fraudulent content into victim web pages to promote illicit goods and services. Scalable detection of such promotional website defacements remains an open problem despite their prevalence in Black Hat Search Engine Optimization (SEO) campaigns. Adversaries often manage to inject content in a stealthy manner by obfuscating the description of illicit products and/or the presence of defacements to make them undetectable. In this paper, we design and implement DMoS—a Defacement Monitoring System which protects websites from promotional defacements at scale. Our design is based on two key observations: Firstly, for effective advertising, the obfuscated jargons of illicit goods or services need to be easily understood by their target customers (e.g., by sharing a similar shape or pronunciation with the original terms). Secondly, to promote the underground business, the defacements are crafted to boost the search engine ranking of the defaced web pages while trying to stay stealthy from the maintainers and legitimate users of the compromised websites. Leveraging these insights, we first follow the human convention and design a jargon normalization algorithm to map obfuscated jargons to their original forms. We then develop a tag embedding mechanism, which enables DMoS to focus more on those not-so-visually-obvious, yet site-ranking influential HTML tags (e.g., title, meta). Consequently, DMoS can reliably detect illicit content hidden in compromised web pages. In particular, we have deployed DMoS as a cloud-based monitoring service for a five-month trial run. It has analyzed more than 38 million web pages across 7,000+ commercial Chinese websites and found defacements on 11% of these websites. It achieves a recall of over 99% with a precision of about 89%. While the original design of DMoS focuses on the detection of Chinese promotional defacements, we have extended the system and demonstrated its applicability to English website defacement detection via proof-of-concept experiments.
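A minimal sketch of jargon normalization by visual similarity, using a tiny Latin-script substitution table; DMoS's actual algorithm works over Chinese characters and also covers pronunciation similarity, so the table and terms below are purely illustrative.

    # Obfuscated fragment -> canonical fragment (demo entries; insertion order
    # matters, so multi-character substitutions are applied first).
    HOMOGLYPHS = {"v1": "vi", "@": "a", "$": "s", "0": "o", "1": "l"}

    def normalize(token):
        """Map look-alike fragments to canonical forms before dictionary lookup."""
        for fake, real in HOMOGLYPHS.items():
            token = token.replace(fake, real)
        return token

    JARGON = {"casino", "viagra"}  # canonical illicit terms (demo)
    assert normalize("c@$in0") == "casino"
    assert normalize("v1agra") in JARGON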

Compromised or Attacker-Owned: A Large Scale Classification and Study of Hosting Domains of Malicious URLs

Ravindu De Silva, SCoRe Lab and Qatar Computing Research Institute; Mohamed Nabeel, Qatar Computing Research Institute; Charith Elvitigala, SCoRe Lab; Issa Khalil and Ting Yu, Qatar Computing Research Institute; Chamath Keppitiyagama, University of Colombo School of Computing

Available Media

The mitigation action against a malicious website may differ greatly depending on how that site is hosted. If it is hosted under a private apex domain, where all its subdomains and pages are under the apex domain owner's direct control, we could block at the apex domain level. If it is hosted under a public apex domain though (e.g., a web hosting service provider), it would be more appropriate to block at the subdomain level. Further, for the former case, the private apex domain may be legitimate but compromised, or may be attacker-generated, which, again, would warrant different mitigation actions: attacker-owned apex domains could be blocked permanently, while only temporarily for compromised ones.

In this paper, we study over eight hundred million VirusTotal (VT) URL scans from Aug. 1, 2019 to Nov. 18, 2019 and build the first content-agnostic machine learning models to distinguish between the above-mentioned types of apex domains hosting malicious websites. Specifically, we first build a highly accurate model to distinguish between public and private apex domains. Then we build additional models to further distinguish compromised domains from attacker-owned ones. Utilizing our trained models, we conduct a large-scale study of the host domains of malicious websites. We observe that even though public apex domains make up less than 1% of the apexes hosting malicious websites, they account for a whopping 46.5% of the malicious web pages seen in VT URL feeds during our study period. 19.5% of these public malicious websites are compromised. Of the remaining websites (53.5%), which are hosted on private apexes, we observe that attackers mostly compromise benign websites (65.6%) to launch their attacks, whereas only 34.4% of malicious websites are hosted on domains registered by the attackers themselves. Overall, we observe the concerning trend that the majority (81.7%) of malicious websites are hosted under apex domains that attackers do not own.

Assessing Browser-level Defense against IDN-based Phishing

Hang Hu, Virginia Tech; Steve T.K. Jan, University of Illinois at Urbana-Champaign/Virginia Tech; Yang Wang and Gang Wang, University of Illinois at Urbana-Champaign

Available Media

Internationalized Domain Names (IDNs) allow people around the world to use their native languages for domain names. Unfortunately, because characters from different languages can look alike, IDNs have been used to impersonate popular domains for phishing, i.e., IDN homographs. To mitigate this risk, browsers have recently introduced defense policies. However, it is not yet well understood how these policies are constructed and how effective they are.

In this paper, we present an empirical analysis of browser IDN policies, and a user study to understand user perception of homograph IDNs. We focus on 5 major web browsers (Chrome, Firefox, Safari, Microsoft Edge, and IE), and 2 mobile browsers (Android Chrome and iOS Safari), and analyze their current and historical versions released from January 2015 to April 2020. By treating each browser instance as a black box, we develop an automated tool to test the browser policies with over 9,000 test cases. We find that all the tested browsers have weaknesses in their rules, leaving opportunities for attackers to craft homograph IDNs to impersonate target websites while bypassing browsers' defenses. In addition, a browser's defense does not always get stricter over time. For example, we observe that Chrome has reversed its rules to re-allow certain homograph IDNs. Finally, our user study shows that the homograph IDNs that can bypass browsers' defenses are still highly deceptive to users. Overall, our results suggest the need to improve the current defenses against IDN homographs.
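For a flavor of the test-case generation, the sketch below builds a classic mixed-script homograph and shows its punycode form; the confusables table is a tiny demo subset, and real test suites draw on the full Unicode confusables data.

    CONFUSABLES = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Latin -> Cyrillic

    def homograph(domain, letter):
        label, _, rest = domain.partition(".")
        return label.replace(letter, CONFUSABLES[letter]) + "." + rest

    spoof = homograph("apple.com", "a")   # 'аpple.com' with a Cyrillic 'а'
    print(spoof.encode("idna"))           # b'xn--pple-43d.com'
    # A browser that renders the Unicode form of such a mixed-script label,
    # rather than the raw xn-- form, fails this test case.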

Catching Phishers By Their Bait: Investigating the Dutch Phishing Landscape through Phishing Kit Detection

Hugo Bijmans, Tim Booij, and Anneke Schwedersky, Netherlands Organisation for Applied Scientific Research (TNO); Aria Nedgabat, Eindhoven University of Technology; Rolf van Wegberg, Delft University of Technology

Available Media

Off-the-shelf, easy-to-deploy phishing kits are believed to lower the threshold for criminal entrepreneurs going phishing: the practice of harvesting user credentials by tricking victims into disclosing them on fraudulent websites. But how do these kits impact the phishing landscape? And how often are they used? We leverage the use of TLS certificates by phishers to uncover possible Dutch phishing domains aimed at the financial sector between September 2020 and January 2021. We collect 70 different Dutch phishing kits in the underground economy and identify 10 distinct kit families. We create unique fingerprints of these kits to measure their prevalence in the wild. With this novel method, we identify 1,363 Dutch phishing domains that deploy these phishing kits, and capture their end-to-end life cycle—from domain registration, kit deployment, to take-down. We find the median uptime of phishing domains to be just 24 hours, indicating that phishers do act fast. Our analysis of the deployed phishing kits reveals that only a small number of different kits are in use. We discover that phishers increase their luring capabilities by using decoy pages to trick victims into disclosing their credentials. In this paper, we paint a comprehensive picture of the tactics, techniques and procedures (TTP) prevalent in the Dutch phishing landscape and present public policy takeaways for anti-phishing initiatives.

PhishPrint: Evading Phishing Detection Crawlers by Prior Profiling

Bhupendra Acharya and Phani Vadrevu, UNO Cyber Center, University of New Orleans

Available Media

Security companies often use web crawlers to detect phishing and other social engineering attack websites. We built a novel, scalable, low-cost framework named PhishPrint to enable the evaluation of such web security crawlers against multiple cloaking attacks. PhishPrint is unique in that it completely avoids the use of any simulated phishing sites and blocklisting measurements. Instead, it uses web pages with benign content to profile security crawlers.

We used PhishPrint to evaluate 23 security crawlers, including highly ubiquitous services such as Google Safe Browsing and Microsoft Outlook e-mail scanners. Our 70-day evaluation found several previously unknown cloaking weaknesses across the crawler ecosystem. In particular, we show that all the crawlers' browsers either do not support advanced fingerprinting-related web APIs (such as the Canvas API) or severely lack fingerprint diversity, exposing them to new fingerprinting-based cloaking attacks.

We confirmed the practical impact of our findings by deploying 20 evasive phishing web pages that exploit the weaknesses we found. 18 of the pages managed to survive indefinitely despite aggressive self-reporting of the pages to all crawlers. We confirmed the specificity of these attack vectors with 1,150 volunteers as well as 467K web users. We also propose countermeasures that all crawlers should take up in terms of both their crawling and reporting infrastructure. We have relayed the weaknesses to all affected entities through an elaborate vulnerability disclosure process that resulted in some remedial actions as well as multiple vulnerability rewards.

Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages

Yun Lin and Ruofan Liu, National University of Singapore; Dinil Mon Divakaran, Trustwave; Jun Yang Ng and Qing Zhou Chan, National University of Singapore; Yiwen Lu, Yuxuan Si, and Fan Zhang, Zhejiang University; Jin Song Dong, National University of Singapore

Available Media

Recent years have seen the development of phishing detection and identification approaches to defend against phishing attacks. Phishing detection solutions often report binary results, i.e., phishing or not, without any explanation. In contrast, phishing identification approaches identify phishing webpages by visually comparing webpages with predefined legitimate references and report phishing along with its target brand, thereby producing explainable results. However, there are technical challenges in visual analysis that prevent existing solutions from being effective (with high accuracy) and efficient (with low runtime overhead) enough to be put to practical use.

In this work, we design a hybrid deep learning system, Phishpedia, to address two prominent technical challenges in phishing identification, i.e., (i) accurate recognition of identity logos on webpage screenshots, and (ii) matching logo variants of the same brand. Phishpedia achieves both high accuracy and low runtime overhead. Very importantly, unlike common approaches, Phishpedia does not require training on any phishing samples. We carry out extensive experiments using real phishing data; the results demonstrate that Phishpedia significantly outperforms baseline identification approaches (EMD, PhishZoo, and LogoSENSE) in accurately and efficiently identifying phishing pages. We also deployed Phishpedia with the CertStream service and discovered 1,704 new real phishing websites within 30 days, far more than other solutions; moreover, 1,113 of them are not reported by any engines in VirusTotal.

Is Real-time Phishing Eliminated with FIDO? Social Engineering Downgrade Attacks against FIDO Protocols

Enis Ulqinaku, ETH Zürich; Hala Assal, AbdelRahman Abdou, and Sonia Chiasson, Carleton University; Srdjan Capkun, ETH Zürich

Available Media

FIDO's U2F is a web-authentication mechanism designed to mitigate real-time phishing—an attack that undermines multi-factor authentication by allowing an attacker to relay second-factor one-time tokens from the victim user to the legitimate website in real-time. A U2F dongle is simple to use, and is designed to restrain users from using it incorrectly. We show that social engineering attacks allow an adversary to downgrade FIDO's U2F to alternative authentication mechanisms. Websites allow such alternatives to handle dongle malfunction or loss. All FIDO-supporting websites in Alexa's top 100 allow choosing alternatives to FIDO, and are thus potentially vulnerable to real-time phishing attacks. We crafted a phishing website that mimics Google login's page and implements a FIDO-downgrade attack. We then ran a carefully-designed user study to test the effect on users. We found that, when using FIDO as their second authentication factor, 55% of participants fell for real-time phishing, and another 35% would potentially be susceptible to the attack in practice.

Track 2

DDoS; Wireless Security

Session Chair: Syed Rafiul Hussain, The Pennsylvania State University

Jaqen: A High-Performance Switch-Native Approach for Detecting and Mitigating Volumetric DDoS Attacks with Programmable Switches

Zaoxing Liu, Boston University; Hun Namkung, Carnegie Mellon University; Georgios Nikolaidis, Jeongkeun Lee, and Changhoon Kim, Intel, Barefoot Switch Division; Xin Jin, Peking University; Vladimir Braverman, Johns Hopkins University; Minlan Yu, Harvard University; Vyas Sekar, Carnegie Mellon University

Available Media

The emergence of programmable switches offers a new opportunity to revisit ISP-scale defenses for volumetric DDoS attacks. In theory, these can offer better cost vs. performance vs. flexibility trade-offs relative to proprietary hardware and virtual appliances. However, the ISP setting creates unique challenges in this regard---we need to run a broad spectrum of detection and mitigation functions natively on the programmable switch hardware and respond to dynamic adaptive attacks at scale. Thus, prior efforts in using programmable switches that assume out-of-band detection and/or use switches merely as accelerators for specific tasks are no longer sufficient, and as such, this potential remains unrealized. To tackle these challenges, we design and implement Jaqen, a switch-native approach for volumetric DDoS defense that can run detection and mitigation functions entirely inline on switches, without relying on additional data plane hardware. We design switch-optimized, resource-efficient detection and mitigation building blocks. We design a flexible API to construct a wide spectrum of best-practice (and future) defense strategies that efficiently use switch capabilities. We build a network-wide resource manager that quickly adapts to the attack posture changes. Our experiments show that Jaqen is orders of magnitude more performant than existing systems: Jaqen can handle large-scale hybrid and dynamic attacks within seconds, and mitigate them effectively at high line-rates (380 Gbps).
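As one concrete example of the kind of compact, per-packet-updatable building block that fits switch hardware, here is a count-min sketch for per-source packet counting in Python; the dimensions, hash construction, and heavy-hitter threshold are illustrative assumptions rather than Jaqen's actual parameters.

    import hashlib

    ROWS, COLS = 4, 1024
    table = [[0] * COLS for _ in range(ROWS)]

    def idx(row, key):
        h = hashlib.blake2b(key.encode(), salt=bytes([row]) * 8).digest()
        return int.from_bytes(h[:4], "big") % COLS

    def update(src_ip):        # per-packet update, feasible in the data plane
        for r in range(ROWS):
            table[r][idx(r, src_ip)] += 1

    def estimate(src_ip):      # min over rows bounds the overestimate
        return min(table[r][idx(r, src_ip)] for r in range(ROWS))

    for _ in range(5000):
        update("198.51.100.7")             # flooding source
    update("203.0.113.9")                  # benign source

    THRESHOLD = 1000
    assert estimate("198.51.100.7") >= THRESHOLD > estimate("203.0.113.9")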

ReDoSHunter: A Combined Static and Dynamic Approach for Regular Expression DoS Detection

Yeting Li and Zixuan Chen, SKLCS, ISCAS, UCAS; Jialun Cao, HKUST; Zhiwu Xu, Shenzhen University; Qiancheng Peng, SKLCS, ISCAS, UCAS; Haiming Chen, SKLCS, ISCAS; Liyuan Chen, Tencent; Shing-Chi Cheung, HKUST

Available Media

Regular expression Denial of Service (ReDoS) is a class of algorithmic complexity attacks using regular expressions (regexes) that cause the typical backtracking-based matching algorithms to run in super-linear time. Due to the wide adoption of regexes in computation, ReDoS poses a pervasive and serious security threat. Early detection of ReDoS-vulnerable regexes in software is thus vital. Existing detection approaches mainly fall into two categories, static and dynamic analysis, but they all suffer from either poor precision or poor recall in the detection of vulnerable regexes. The problem of accurately detecting vulnerable regexes at high precision and high recall remains unsolved. Furthermore, we observed that many ReDoS-vulnerable regexes in reality contain more than one vulnerability. Another problem with existing approaches is that they are incapable of detecting multiple vulnerabilities in one regex.

To address these two problems, we propose ReDoSHunter, a ReDoS-vulnerable regex detection framework that can effectively pinpoint multiple vulnerabilities in a vulnerable regex and generate examples of attack-triggering strings. ReDoSHunter is driven by five vulnerability patterns derived from a massive corpus of vulnerable regexes. Besides pinpointing vulnerabilities, ReDoSHunter can assess the degree (i.e., exponential or polynomial) of the vulnerabilities detected. Our experimental results show that ReDoSHunter achieves 100% precision and 100% recall in the detection of ReDoS-vulnerable regexes in three large-scale datasets with 37,651 regexes. It significantly outperforms seven state-of-the-art techniques. ReDoSHunter uncovered 28 new ReDoS vulnerabilities in 26 well-maintained popular projects, resulting in 26 assigned CVEs and 2 fixes.
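The class of bug ReDoSHunter pinpoints can be reproduced in a few lines: the nested quantifier below forces a backtracking engine to try exponentially many partitions of the run of 'a's before rejecting. The specific regex and input are a standard textbook example, not one of the paper's findings.

    import re, time

    vulnerable = re.compile(r"^(a+)+$")    # nested quantifier: exponential ReDoS
    attack = "a" * 26 + "!"                # attack-triggering string: never matches

    t0 = time.perf_counter()
    assert vulnerable.match(attack) is None
    print(f"{time.perf_counter() - t0:.1f}s to reject {len(attack)} characters")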

Ripple: A Programmable, Decentralized Link-Flooding Defense Against Adaptive Adversaries

Jiarong Xing, Wenqing Wu, and Ang Chen, Rice University

Available Media

Link-flooding attacks (LFAs) aim to cut off an edge network from the Internet by congesting core network links. Such an adversary can further change the attack strategy dynamically (e.g., target links, traffic types) to evade mitigation and launch persistent attacks.

We develop Ripple, a programmable, decentralized link-flooding defense against dynamic adversaries. Ripple can be programmed using a declarative policy language to emulate a range of state-of-the-art SDN defenses, but it enables the defenses to shapeshift on their own without a central controller. To achieve this, Ripple develops new defense primitives in programmable switches, which are configured by the policy language to implement a desired defense. The Ripple compiler generates a distributed set of switch programs to extract a panoramic view of attack signals and act against them in a fully decentralized manner, enabling successive waves of defenses against fast-changing attacks. We show that Ripple has low overheads, and that it can effectively recover traffic throughput where SDN-based defenses fail.

Accurately Measuring Global Risk of Amplification Attacks using AmpMap

Soo-Jin Moon, Yucheng Yin, and Rahul Anand Sharma, Carnegie Mellon University; Yifei Yuan, Alibaba Group; Jonathan M. Spring, CERT/CC, SEI, Carnegie Mellon University; Vyas Sekar, Carnegie Mellon University

Available Media

Many recent DDoS attacks rely on amplification, where an attacker induces public servers to generate a large volume of network traffic to a victim. In this paper, we argue for a low-footprint Internet health monitoring service that can systematically and continuously quantify this risk to inform mitigation efforts. Unfortunately, the problem is challenging because amplification is a complex function of query (header) values and server instances. As such, existing techniques that enumerate the total number of servers or focus on a specific amplification-inducing query are fundamentally imprecise. In designing AmpMap, we leverage key structural insights to develop an efficient approach that searches across the space of protocol headers and servers. Using AmpMap, we scanned thousands of servers for 6 UDP-based protocols. We find that relying on prior recommendations to block or rate-limit specific queries still leaves open substantial residual risk as they miss many other amplification-inducing query patterns. We also observe significant variability across servers and protocols, and thus prior approaches that rely on server census can substantially misestimate amplification risk.
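A single (server, query) measurement reduces to a response-size ratio, sketched below with a hand-built DNS ANY query; AmpMap's contribution is searching the protocol header space efficiently, which this snippet does not attempt. Only measure servers you are authorized to test.

    import socket, struct

    def dns_query(name, qtype=255):        # 255 = ANY, a known amplifier
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
        return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

    def amplification_factor(server, query, timeout=2.0):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(query, (server, 53))
            resp, _ = s.recvfrom(65535)
        return len(resp) / len(query)

    # e.g., amplification_factor("192.0.2.1", dns_query("example.com"))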

A Stealthy Location Identification Attack Exploiting Carrier Aggregation in Cellular Networks

Nitya Lakshmanan and Nishant Budhdev, National University of Singapore; Min Suk Kang, KAIST; Mun Choon Chan and Jun Han, National University of Singapore

Available Media

We present SLIC, a stealthy location identification attack that achieves fine-grained location tracking (e.g., finding indoor walking paths) of targeted cellular user devices in a passive manner. The attack exploits a new side channel in modern cellular systems through a universally available feature called carrier aggregation (CA). CA enables higher cellular data rates by allowing multiple base stations on different carrier frequencies to transmit concurrently to a single user. We discover that a passive adversary can learn the side channel — namely, the number of actively transmitting base stations for any user of interest in the same macrocell. We then show that a time series of this side channel can constitute a highly unique fingerprint of a walking path, which can be used to identify the path taken by a target cellular user. We first demonstrate the collection of the new side channel and a small-scale path identification attack in an existing LTE-A network with up to three-carrier CA (i.e., three base stations can be coordinated for concurrent transmission), showing the feasibility of SLIC in current cellular networks. We then emulate a near-future 5G network environment with up to nine-carrier CA in various multi-story buildings at our institution. SLIC achieves up to 98.4% path-identification accuracy among 100 different walking paths in a large office building. Through testing in various building structures, we confirm that the attack is effective in typical office building environments, e.g., corridors and open spaces. We present complete and partial countermeasures and discuss some practical cell deployment suggestions for 5G networks.
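A minimal sketch of matching an observed side-channel trace (number of concurrently transmitting base stations over time) against known path fingerprints, using dynamic time warping as the similarity measure; DTW and the toy fingerprints are illustrative assumptions, not necessarily the paper's exact classifier.

    def dtw(a, b):
        """Dynamic-time-warping distance between two integer time series."""
        INF = float("inf")
        d = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
        d[0][0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = abs(a[i - 1] - b[j - 1])
                d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[-1][-1]

    # Fingerprints: active-CA counts sampled along two known walking paths.
    paths = {"corridor": [1, 2, 3, 3, 2, 1], "atrium": [1, 1, 2, 4, 4, 2]}
    observed = [1, 2, 3, 2, 1]             # passively captured for the target
    print(min(paths, key=lambda p: dtw(observed, paths[p])))  # 'corridor'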

Disrupting Continuity of Apple's Wireless Ecosystem Security: New Tracking, DoS, and MitM Attacks on iOS and macOS Through Bluetooth Low Energy, AWDL, and Wi-Fi

Milan Stute, Alexander Heinrich, Jannik Lorenz, and Matthias Hollick, Technical University of Darmstadt

Available Media

Apple controls one of the largest mobile ecosystems, with 1.5 billion active devices worldwide, and offers twelve proprietary wireless Continuity services. Previous works have unveiled several security and privacy issues in the involved protocols. These works extensively studied AirDrop while the coverage of the remaining vast Continuity service space is still low. To facilitate the cumbersome reverse-engineering process, we describe the first guide on how to approach a structured analysis of the involved protocols using several vantage points available on macOS. Also, we develop a toolkit to automate parts of this otherwise manual process. Based on this guide, we analyze the full protocol stacks involved in three Continuity services, in particular, Handoff (HO), Universal Clipboard (UC), and Wi-Fi Password Sharing (PWS). We discover several vulnerabilities spanning from Bluetooth Low Energy (BLE) advertisements to Apple's proprietary authentication protocols. These flaws allow for device tracking via HO's mDNS responses, a denial-of-service (DoS) attack on HO and UC, a DoS attack on PWS that prevents Wi-Fi password entry, and a machine-in-the-middle (MitM) attack on PWS that connects a target to an attacker-controlled Wi-Fi network. Our PoC implementations demonstrate that the attacks can be mounted using affordable off-the-shelf hardware ($20 micro:bit and a Wi-Fi card). Finally, we suggest practical mitigations and share our findings with Apple, who have started to release fixes through iOS and macOS updates.

Stars Can Tell: A Robust Method to Defend against GPS Spoofing Attacks using Off-the-shelf Chipset

Shinan Liu, University of Chicago; Xiang Cheng and Hanchao Yang, Virginia Tech; Yuanchao Shu, Microsoft Research; Xiaoran Weng, University of Electronic Science and Technology of China; Ping Guo, City University of Hong Kong; Kexiong (Curtis) Zeng, Facebook; Gang Wang, University of Illinois at Urbana-Champaign; Yaling Yang, Virginia Tech

Available Media

GPS has empowered billions of users and various critical infrastructures with its positioning and time services. However, GPS spoofing attacks have also become a growing threat to GPS-dependent systems. Existing detection methods either require expensive hardware modifications to current GPS devices or lack basic robustness against sophisticated attacks, hurting their adoption and usage in practice.

In this paper, we propose a novel GPS spoofing detection framework that works with off-the-shelf GPS chipsets. Our basic idea is to rotate a one-side-blocked GPS receiver to derive the angles-of-arrival (AoAs) of received signals and compare them with the GPS constellation (which consists of tens of GPS satellites). We first demonstrate the effectiveness of this idea by implementing a smartphone prototype and evaluating it against a real spoofer in various field experiments (in both open-air and urban-canyon environments). Our method achieves a high accuracy (95%–100%) within 5 seconds. Then we implement an adaptive attack, assuming the attacker becomes aware of our defense method and actively modulates the spoofing signals accordingly. We study this adaptive attack and propose enhancement methods (using the rotation speed as the "secret key") to fortify the defense. Further experiments are conducted to validate the effectiveness of the enhanced defense.
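A minimal sketch of the consistency check at the heart of the defense: during a full rotation of the one-side-blocked receiver, each genuine satellite's signal dips when the blocked side faces its azimuth, whereas spoofed signals, all radiated from one transmitter, dip together. The tolerance and voting rule below are illustrative assumptions.

    def circ_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def looks_spoofed(measured_az, almanac_az, tol=30.0):
        """Flag spoofing if most measured angles-of-arrival disagree with the
        azimuths predicted by the GPS constellation for each satellite (PRN)."""
        errors = [circ_diff(measured_az[prn], almanac_az[prn]) for prn in measured_az]
        return sum(e > tol for e in errors) > len(errors) / 2

    almanac = {5: 40.0, 12: 150.0, 25: 260.0}   # expected azimuths (degrees)
    genuine = {5: 48.0, 12: 143.0, 25: 255.0}   # per-PRN AoAs from rotation dips
    spoofed = {5: 92.0, 12: 90.0, 25: 95.0}     # all arriving from one direction
    assert not looks_spoofed(genuine, almanac)
    assert looks_spoofed(spoofed, almanac)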

Track 3

Cryptography and the Cloud

Session Chair: Sherman S. M. Chow, The Chinese University of Hong Kong

Formally Verified Memory Protection for a Commodity Multiprocessor Hypervisor

Shih-Wei Li, Xupeng Li, Ronghui Gu, Jason Nieh, and John Zhuang Hui, Columbia University

Available Media

Hypervisors are widely deployed by cloud computing providers to support virtual machines, but their growing complexity poses a security risk, as large codebases contain many vulnerabilities. We present SeKVM, a layered Linux KVM hypervisor architecture that has been formally verified on multiprocessor hardware. Using layers, we isolate KVM's trusted computing base into a small core such that only the core needs to be verified to ensure KVM's security guarantees. Using layers, we model hardware features at different levels of abstraction tailored to each layer of software. Lower hypervisor layers that configure and control hardware are verified using a novel machine model that includes multiprocessor memory management hardware such as multi-level shared page tables, tagged TLBs, and a coherent cache hierarchy with cache bypass support. Higher hypervisor layers that build on the lower layers are then verified using a more abstract and simplified model, taking advantage of layer encapsulation to reduce proof burden. Furthermore, layers provide modularity to reduce verification effort across multiple implementation versions. We have retrofitted and verified multiple versions of KVM on Arm multiprocessor hardware, proving the correctness of the implementations and that they contain no vulnerabilities that can affect KVM's security guarantees. Our work is the first machine-checked proof for a commodity hypervisor using multiprocessor memory management hardware. SeKVM requires only modest KVM modifications and incurs only modest performance overhead versus unmodified KVM on real application workloads.

Automatic Policy Generation for Inter-Service Access Control of Microservices

Xing Li, Zhejiang University; Yan Chen, Northwestern University; Zhiqiang Lin, The Ohio State University; Xiao Wang and Jim Hao Chen, Northwestern University

Available Media

Cloud applications today are often composed of many microservices. To prevent a microservice from being abused by other (compromised) microservices, inter-service access control is applied. However, the complexity of fine-grained access control policies, along with the large-scale and dynamic nature of microservices, makes the current manual configuration-based access control unsuitable. This paper presents AUTOARMOR, the first attempt to automate inter-service access control policy generation for microservices, with two fundamental techniques: (1) a static analysis-based request extraction mechanism that automatically obtains the invocation logic among microservices, and (2) a graph-based policy management mechanism that generates corresponding access control policies with on-demand policy update. Our evaluation on popular microservice applications shows that AUTOARMOR is able to generate fine-grained inter-service access control policies and update them timely based on changes in the application, with only a minor runtime overhead. By seamlessly integrating with the lifecycle of microservices, it does not require any changes to existing code and infrastructures.

CLARION: Sound and Clear Provenance Tracking for Microservice Deployments

Xutong Chen, Northwestern University; Hassaan Irshad, SRI International; Yan Chen, Northwestern University; Ashish Gehani and Vinod Yegneswaran, SRI International

Available Media

Linux container-based microservices have emerged as an attractive alternative to virtualization as they reduce application footprints and facilitate more efficient resource utilization. Their popularity has also led to increased scrutiny of the underlying security properties and attack surface of container technology. Provenance-based analysis techniques have been proposed as an effective means toward comprehensive and high-assurance security control as they provide fine-grained mechanisms to track data flows across the system and detect unwanted or unexpected changes to data objects. However, existing provenance tracking techniques are limited in their ability to build sound and clear provenance in container network environments due to complexities introduced by namespace virtualization.

We describe a namespace- and container-aware provenance tracking solution, called CLARION, that addresses the unique soundness and clarity challenges that namespace virtualization introduces for traditional provenance tracking solutions. Specifically, we first describe the fragmentation and ambiguities introduced into provenance analysis tools by each of the Linux namespaces and propose solutions to address analysis soundness. Then we discuss the design of specialized semantics-summarization techniques that improve the clarity of provenance analysis. We have developed a prototype implementation of CLARION and evaluate its performance against a spectrum of container-specific attacks. The results demonstrate the utility of our system and how it outperforms state-of-the-art provenance tracking systems by providing an accurate and concise view of data provenance in container environments.

Virtual Secure Platform: A Five-Stage Pipeline Processor over TFHE

Kotaro Matsuoka, Ryotaro Banno, Naoki Matsumoto, Takashi Sato, and Song Bian, Kyoto University

Available Media

We present Virtual Secure Platform (VSP), the first comprehensive platform that implements a multi-opcode general-purpose sequential processor over Fully Homomorphic Encryption (FHE) for Secure Multi-Party Computation (SMPC). VSP protects both the data and the functions on which the data are evaluated from the adversary in a secure computation offloading situation like cloud computing. We propose a complete processor architecture with a five-stage pipeline, which improves the performance of VSP by providing more parallelism in circuit evaluation. In addition, we design a custom Instruction Set Architecture (ISA) to reduce the gate count of our processor, along with an entire set of toolchains to ensure that arbitrary C programs can be compiled into our custom ISA. To speed up instruction evaluation over VSP, we also propose CMUX-Memory-based ROM and RAM constructions over FHE. Our experiments show that both the pipelined architecture and the CMUX Memory are effective in improving the performance of the proposed processor. We provide a fully open-source implementation of VSP which attains a per-instruction latency of less than 1 second. Compared to the best existing processor over FHE, our implementation runs nearly 1,600× faster.
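The CMUX Memory can be pictured as a multiplexer tree addressed by encrypted bits. Below is a plaintext model of that circuit structure only: in VSP the selector bits are TFHE ciphertexts and cmux() is the homomorphic gate, so this sketch shows the wiring, not the cryptography.

    def cmux(sel, hi, lo):
        return hi if sel else lo     # the homomorphic CMUX gate in real TFHE

    def rom_lookup(rom, addr_bits):
        """Halve the candidate word set with each address bit, MSB first;
        a ROM of 2^k words needs k levels of CMUXes."""
        level = rom
        for bit in addr_bits:
            half = len(level) // 2
            level = [cmux(bit, level[i + half], level[i]) for i in range(half)]
        return level[0]

    rom = [10, 11, 12, 13, 14, 15, 16, 17]           # 8-word ROM
    assert rom_lookup(rom, [1, 1, 0]) == rom[0b110]  # address 6 -> 16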

Searching Encrypted Data with Size-Locked Indexes

Min Xu, University of Chicago; Armin Namavari, Cornell University; David Cash, University of Chicago; Thomas Ristenpart, Cornell Tech

Available Media

We investigate a simple but overlooked folklore approach for searching encrypted documents held at an untrusted service: just stash an index (with unstructured encryption) at the service and download it for updating and searching. This approach is simple to deploy, enables rich search support beyond unsorted keyword lookup, requires no persistent client state, and (intuitively at least) provides excellent security compared with approaches like dynamic searchable symmetric encryption (DSSE). This work first shows that implementing this construct securely is more subtle than it appears, and that naive implementations with commodity indexes are insecure due to the leakage of the byte length of the encoded index. We then develop a set of techniques for encoding indexes, called size-locking, that eliminates this leakage. Our key idea is to fix the size of indexes to depend only on features that are safe to leak. We further develop techniques for securely partitioning indexes into smaller pieces that are downloaded, trading leakage for large increases in performance in a measured way. We implement our systems and show that they provide search quality matching that of plaintext systems, support for stateless clients, and resistance to damaging injection attacks.
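The padding discipline behind size-locking can be sketched in a few lines: serialize the index, then pad it so the (later encrypted) blob's length is a function of safe-to-leak features only. The per-document byte bound is a made-up constant for illustration; the paper's encodings are considerably more careful.

    BYTES_PER_DOC = 64        # assumed worst-case index bytes per document

    def size_locked(index_bytes, num_docs):
        """Pad so the output length depends only on num_docs (safe to leak),
        never on the index's content or compressibility."""
        target = num_docs * BYTES_PER_DOC
        framed = len(index_bytes).to_bytes(8, "big") + index_bytes
        if len(framed) > target:
            raise ValueError("index exceeds its size-locked bound")
        return framed + b"\x00" * (target - len(framed))

    blob = size_locked(b"posting lists ...", num_docs=100)
    assert len(blob) == 100 * BYTES_PER_DOC  # same length for any 100-doc index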

Blitz: Secure Multi-Hop Payments Without Two-Phase Commits

Lukas Aumayr, TU Wien; Pedro Moreno-Sanchez, IMDEA Software Institute; Aniket Kate, Purdue University; Matteo Maffei, TU Wien

Available Media

Payment-channel networks (PCNs) are the most prominent approach to tackling the scalability issues of current permissionless blockchains. A PCN reduces the load on-chain by allowing arbitrarily many off-chain multi-hop payments (MHPs) between any two users connected through a path of payment channels. Unfortunately, current MHP protocols are far from satisfactory. One-round MHPs (e.g., Interledger) are insecure, as a malicious intermediary can steal the payment funds. Two-round MHPs (e.g., the Lightning Network (LN)) follow the 2-phase-commit paradigm, as in databases, to overcome this issue. However, when tied to economic incentives, 2-phase-commit brings other security threats (i.e., wormhole attacks), staggered collateral (i.e., funds are locked for a time proportional to the payment path length), and dependency on specific scripting language functionality (e.g., Hash Time-Lock Contracts) that hinders a wider deployment in practice.

We present Blitz, a novel MHP protocol that demonstrates for the first time that we can achieve the best of both worlds: a single-round MHP where no malicious intermediary can steal coins. Moreover, Blitz provides the same privacy for sender and receiver as current MHP protocols do, is not prone to the wormhole attack, and requires only constant collateral. Additionally, we construct MHPs using only digital signatures and a timelock functionality, both available at the core of virtually every cryptocurrency today. We provide the cryptographic details of Blitz and formally prove its security. Furthermore, our experimental evaluation on an LN snapshot shows that (i) staggered collateral in LN leads to between 4x and 33x more unsuccessful payments than the constant collateral in Blitz; (ii) Blitz reduces the size of the payment contract by 26%; and (iii) Blitz prevents up to 0.3 BTC (3,397 USD in October 2020) in fees from being stolen over a three-day period, as it avoids wormhole attacks by design.

Reducing HSM Reliance in Payments through Proxy Re-Encryption

Sivanarayana Gaddam, Visa; Atul Luykx, Security Engineering Research, Google; Rohit Sinha, Swirlds Inc.; Gaven Watson, Visa Research

Available Media

Credit and debit-card payments are typically authenticated with PINs. Once entered into a terminal, the PIN is sent as an encrypted PIN block across a payments network to the destination bank, which decrypts and verifies the PIN block. Each node in the payments network routes the PIN block to the next node by decrypting the block with its own key, and then re-encrypting the PIN block with the next node's key; nodes establish shared secret keys with their neighbors to do so. This decrypt-then-encrypt operation over PIN blocks is known as PIN translation, and it is currently performed in Hardware Security Modules (HSMs) to avoid possible PIN exposure. However, HSMs incur heavy acquisition and operational expenses.

Introduced at EUROCRYPT '98, proxy re-encryption (PRE) is a cryptographic primitive which can re-encrypt without exposing sensitive data. We perform an extensive study of PRE as applied to PIN translation, and show through formalization, security analysis, and an implementation study that PRE is a practical alternative to HSMs. With PRE, we eliminate the need for HSMs during re-encryption of a PIN, thus greatly reducing the number of HSMs needed by each participant in the payments ecosystem. Along the way we conduct practice-oriented PRE research, with novel theoretical contributions to resolve issues in comparing so-called honest re-encryption to chosen ciphertext PRE security, and a new efficient PRE scheme achieving a type of chosen ciphertext security.
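For intuition about the primitive, here is a textbook BBS98-style ElGamal proxy re-encryption sketch: the proxy turns a ciphertext under key A into one under key B without ever seeing the plaintext PIN block. The paper's scheme is a different construction with chosen-ciphertext-style security, and the demo-sized prime below is insecure.

    import math, secrets

    P = 2**127 - 1            # demo modulus (Mersenne prime); insecure size
    Q = P - 1                 # exponents are taken mod the group order
    G = 3

    def keygen():
        while True:
            a = secrets.randbelow(Q - 2) + 2
            if math.gcd(a, Q) == 1:        # invertible exponent needed below
                return a, pow(G, a, P)

    def enc(pk, m):
        r = secrets.randbelow(Q - 2) + 2
        return (m * pow(G, r, P)) % P, pow(pk, r, P)      # (m*g^r, g^(a*r))

    def rekey(a, b):
        return (b * pow(a, -1, Q)) % Q                    # rk = b/a mod Q

    def reenc(ct, rk):
        c1, c2 = ct
        return c1, pow(c2, rk, P)                         # g^(a*r) -> g^(b*r)

    def dec(sk, ct):
        c1, c2 = ct
        g_r = pow(c2, pow(sk, -1, Q), P)                  # recover g^r
        return (c1 * pow(g_r, -1, P)) % P

    a, pk_a = keygen()        # e.g., key for the acquirer-side leg
    b, pk_b = keygen()        # e.g., key for the issuing-bank leg
    pin_block = 12345678
    ct = reenc(enc(pk_a, pin_block), rekey(a, b))         # proxy learns nothing
    assert dec(b, ct) == pin_block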

1:00 pm–1:15 pm

Break

1:15 pm–2:45 pm

Track 1

Measurements of Fraud, Malware, Spam, and Other Abuse

Session Chair: Joseph Calandrino, Federal Trade Commission

Risky Business? Investigating the Security Practices of Vendors on an Online Anonymous Market using Ground-Truth Data

Jochem van de Laarschot and Rolf van Wegberg, Delft University of Technology

Available Media

Cybercriminal entrepreneurs on online anonymous markets rely on security mechanisms to thwart investigators in attributing their illicit activities. Earlier work indicates that—despite the high-risk criminal context—cybercriminals may turn to poor security practices due to competing business incentives. This claim has not yet been supported through empirical, quantitative analysis on ground-truth data. In this paper, we investigate the security practices on Hansa Market (2015-2017) and measure the prevalence of poor security practices across the vendor population (n=1,733).

We create 'vendor types' based on latent profile analysis, clustering vendors that are similar regarding their experience, activity on other markets, and the number of physical and digital items sold. We then analyze how these types of vendors differ in their security practices. To that end, we capture their password strength and password uniqueness, 2FA usage, PGP adoption and key strength, PGP-key reuse, and the traceability of their cash-out. We find that insecure practices are prevalent across all types of vendors, yet large differences exist between them. Rather counter-intuitively, Hansa Market vendors that sell digital items—like stolen credit cards or malware—resort to insecure practices more often than vendors selling drugs. We discuss possible explanations, including that vendors of illicit digital items may perceive their risk to be lower than vendors of illicit physical items.

Deep Entity Classification: Abusive Account Detection for Online Social Networks

Teng Xu, Gerard Goossen, Huseyin Kerem Cevahir, Sara Khodeir, and Yingyezhe Jin, Facebook, Inc; Frank Li, Facebook, Inc, and Georgia Institute of Technology; Shawn Shan, Facebook, Inc, and University of Chicago; Sagar Patel and David Freeman, Facebook, Inc; Paul Pearce, Facebook, Inc, and Georgia Institute of Technology

Available Media

Online social networks (OSNs) attract attackers that use abusive accounts to conduct malicious activities for economic, political, and personal gain. In response, OSNs often deploy abusive account classifiers using machine learning (ML) approaches. However, a practical, effective ML-based defense requires carefully engineering features that are robust to adversarial manipulation, obtaining enough ground truth labeled data for model training, and designing a system that can scale to all active accounts on an OSN (potentially in the billions).

To address these challenges we present Deep Entity Classification (DEC), an ML framework that detects abusive accounts in OSNs that have evaded other, traditional abuse detection systems. We leverage the insight that while accounts in isolation may be difficult to classify, their embeddings in the social graph—the network structure, properties, and behaviors of themselves and those around them—are fundamentally difficult for attackers to replicate or manipulate at scale. Our system:

  • Extracts "deep features" of accounts by aggregating properties and behavioral features from their direct and indirect neighbors in the social graph.
  • Employs a "multi-stage multi-task learning" (MS-MTL) paradigm that leverages imprecise ground truth data by consuming, in separate stages, both a small number of high-precision human-labeled samples and a large amount of lower-precision automated labels. This architecture results in a single model that provides high-precision classification for multiple types of abusive accounts.
  • Scales to billions of users through various sampling and reclassification strategies that reduce system load.

DEC has been deployed at Facebook where it classifies all users continuously, resulting in an estimated reduction of abusive accounts on the network by 27% beyond those already detected by other, traditional methods.
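The "deep feature" extraction described above can be sketched simply: rather than classifying an account from its own attributes, aggregate attributes over its graph neighborhood, which an attacker cannot cheaply forge at scale. The toy graph, features, and aggregates below are illustrative only.

    from statistics import mean

    graph = {"u1": ["u2", "u3"], "u2": ["u1"], "u3": ["u1", "u2"]}
    raw = {"u1": {"age_days": 4}, "u2": {"age_days": 900}, "u3": {"age_days": 7}}

    def deep_features(user):
        """Aggregate raw neighbor features into hard-to-forge deep features."""
        ages = [raw[n]["age_days"] for n in graph[user]]
        return {
            "own_age_days": raw[user]["age_days"],
            "neigh_mean_age": mean(ages),   # fake neighborhoods skew young
            "neigh_min_age": min(ages),
            "degree": len(graph[user]),
        }

    print(deep_features("u1"))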

SocialHEISTing: Understanding Stolen Facebook Accounts

Jeremiah Onaolapo, University of Vermont; Nektarios Leontiadis and Despoina Magka, Facebook; Gianluca Stringhini, Boston University

Available Media

Online social network (OSN) accounts are often more user-centric than other types of online accounts (e.g., email accounts) because they present a number of demographic attributes such as age, gender, location, and occupation. While these attributes allow for more meaningful online interactions, they can also be used by malicious parties to craft various types of abuse. To understand the effects of demographic attributes on attacker behavior in stolen social accounts, we devised a method to instrument and monitor such accounts. We then created, instrumented, and deployed more than 1,000 Facebook accounts, and exposed them to criminals. Our results confirm that victim demographic traits indeed influence the way cybercriminals abuse their accounts. For example, we find that cybercriminals who access teen accounts write more messages and posts than those accessing adult accounts, and attackers who compromise male accounts perform disruptive activities, such as changing profile information, more often than those who access female accounts. This knowledge could potentially help online services develop new models to characterize benign and malicious activity across various demographic attributes, and thus automatically classify future activity.

Understanding Malicious Cross-library Data Harvesting on Android

Jice Wang, National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; Indiana University Bloomington; Yue Xiao and Xueqiang Wang, Indiana University Bloomington; Yuhong Nan, Purdue University; Luyi Xing and Xiaojing Liao, Indiana University Bloomington; JinWei Dong, School of Cyber Engineering, Xidian University; Nicolas Serrano, Indiana University Bloomington; Haoran Lu and XiaoFeng Wang, Indiana University Bloomington; Yuqing Zhang, National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; School of Cyber Engineering, Xidian University; School of Computer Science and Cyberspace Security, Hainan University

Available Media

Recent years have witnessed the rise of security risks from libraries integrated in mobile apps, which have been reported to steal private user data from host apps and app backend servers. Their security implications, however, have never been fully understood. In our research, we brought to light a new attack vector, long ignored yet with serious privacy impacts: malicious libraries strategically target other vendors' SDKs integrated in the same host app to harvest private user data (e.g., Facebook's user profile). Using a methodology that combines semantic analysis of an SDK's Terms of Service (ToS), which describe its restricted data access and sharing policies, with code analysis of cross-library interactions, we investigated 1.3 million Google Play apps and the ToSes of 40 highly popular SDKs. This led to the discovery of 42 distinct libraries stealthily harvesting data from 16 popular SDKs, affecting more than 19K apps with a total of 9 billion downloads. Our study further sheds light on the underground ecosystem behind such library-based data harvesting (e.g., monetary incentives for SDK integration), its unique strategies (e.g., hiding data in crash reports and using C2 servers to schedule data exfiltration), and its significant impacts.
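
The code-analysis half of this pipeline can be sketched simply: flag call-graph edges where code in one third-party library package invokes another SDK's user-data APIs. All package names, APIs, and the call graph below are invented; the paper's real system additionally checks the callee SDK's ToS policy via semantic analysis.

    # Illustrative cross-library harvesting check over a (hypothetical)
    # call graph: report calls into another SDK's sensitive APIs that do
    # not originate from the host app or from the SDK itself.
    SENSITIVE_APIS = {
        "com.sdk.social": {"getUserProfile", "getAccessToken"},  # hypothetical
    }
    CALL_GRAPH = [  # (caller class, callee class, callee method) -- hypothetical
        ("com.app.MainActivity", "com.sdk.social.Api", "getUserProfile"),
        ("com.lib.analytics.Collector", "com.sdk.social.Api", "getAccessToken"),
    ]

    def package(cls):
        return cls.rsplit(".", 1)[0]

    def host_app_code(cls):
        return cls.startswith("com.app.")  # assumption: host app namespace

    for caller, callee, method in CALL_GRAPH:
        for sdk, apis in SENSITIVE_APIS.items():
            if package(callee).startswith(sdk) and method in apis:
                if not host_app_code(caller) and not package(caller).startswith(sdk):
                    print(f"cross-library harvesting candidate: "
                          f"{caller} -> {callee}.{method}")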

Swiped: Analyzing Ground-truth Data of a Marketplace for Stolen Debit and Credit Cards

Maxwell Aliapoulios, Cameron Ballard, Rasika Bhalerao, Tobias Lauinger, and Damon McCoy, New York University

Available Media

This paper presents the first empirical study of ground-truth data from a major underground shop selling stolen credit and debit cards. To date, there is little quantitative knowledge about how this segment of the underground economy operates, despite causing fraud losses estimated at billions of dollars a year. Our analysis of four years of leaked transactional data allows us to characterize this shop's business model, sellers, customers, and finances. The shop earned close to $104M in gross revenue and listed over 19M unique card numbers for sale. Around 97% of the inventory was stolen magnetic stripe data, commonly used to produce counterfeit cards for in-person payments. Perhaps surprisingly, customers purchased only 40% of this inventory. In contrast, the shop sold 83% of its card-not-present inventory, used for online fraud, which appeared to be in short supply. Demand and pricing were not uniform, as buyers appeared to perceive some banks as having weaker countermeasures against fraud. Even multiple years into the U.S. EMV chip deployment, the supply of stolen magnetic stripe data continued to increase sharply. In particular, we identified a continuing supply of newly issued cards not equipped with EMV chips, especially among prepaid cards. Our findings suggest that improvements to EMV chip deployment in the U.S., combined with a limited supply of stolen card-not-present data, could be avenues to decreasing the revenue and profitability of this shop.

Having Your Cake and Eating It: An Analysis of Concession-Abuse-as-a-Service

Zhibo Sun, Adam Oest, and Penghui Zhang, Arizona State University; Carlos Rubio-Medrano, Texas A&M University - Corpus Christi; Tiffany Bao and Ruoyu Wang, Arizona State University; Ziming Zhao, Rochester Institute of Technology; Yan Shoshitaishvili and Adam Doupé, Arizona State University; Gail-Joon Ahn, Arizona State University and Samsung Research

Available Media

Concession Abuse as a Service (CAaaS) is a growing scam service in underground forums that defrauds online retailers through the systematic abuse of their return policies (via social engineering) and the exploitation of loopholes in company protocols. Timely detection of such scams is difficult as they are fueled by an extensive suite of criminal services, such as credential theft, document forgery, and fake shipments. Ultimately, the scam enables malicious actors to steal arbitrary goods from merchants with minimal investment.

In this paper, we perform an in-depth manual and automated analysis of public and private messages from four large underground forums to identify the malicious actors involved in CAaaS, carefully study the operation of the scam, and define attributes to fingerprint the scam and inform mitigation strategies. Additionally, we survey users to evaluate their attitudes toward these mitigations and understand the factors that merchants should consider before implementing them. We find that the scam is easy to scale and can bypass traditional anti-fraud efforts, and thus poses a notable threat to online retailers.

Track 2

IoT; Specialty Networking

Session Chairs: Adwait Nadkarni, College of William & Mary, and Ahmad-Reza Sadeghi, Technische Universität Darmstadt

Capture: Centralized Library Management for Heterogeneous IoT Devices

Han Zhang, Abhijith Anilkumar, Matt Fredrikson, and Yuvraj Agarwal, Carnegie Mellon University

Available Media

With their growing popularity, Internet-of-Things (IoT) devices have become attractive targets for attack. Like most modern software systems, IoT device firmware depends extensively on external third-party libraries, increasing the attack surface of IoT devices. Furthermore, we find that the risk is compounded by device vendors' inconsistent library management practices and delayed security updates, sometimes applied hundreds of days after critical patches become publicly available. Worse yet, because these dependencies are "baked into" the vendor-controlled firmware, even security-conscious users are unable to take matters into their own hands when it comes to good security hygiene.

We present Capture, a novel architecture for deploying IoT device firmware that addresses this problem by allowing devices on a local network to leverage a centralized hub with third-party libraries that are managed and kept up-to-date by a single trusted entity. An IoT device supporting Capture consists of two components: Capture-enabled firmware on the device and a remote driver that uses third-party libraries on the Capture hub in the local network. To ensure isolation, we introduce a novel Virtual Device Entity (VDE) interface that facilitates access control between mutually distrustful devices that reside on the same hub. Our evaluation of a prototype implementation of Capture, along with 9 devices and 3 automation applets ported to our framework, shows that our approach incurs low overhead in most cases (<15% increased latency, <10% additional resources). We show that a single Capture hub with modest hardware can support hundreds of devices, keeping their shared libraries up-to-date.
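
The hub-side isolation idea can be illustrated with a toy dispatcher: each device is bound to its own VDE, and a driver call is serviced only if it stays within that device's entitlement. Class and method names below are assumptions, not Capture's actual API.

    # Sketch of VDE-style access control on the hub. A device may only
    # invoke the shared-library drivers its VDE grants it; everything
    # else is rejected, isolating mutually distrustful devices.
    class VDE:
        def __init__(self, device_id, allowed_drivers):
            self.device_id = device_id
            self.allowed_drivers = set(allowed_drivers)

    class CaptureHub:
        def __init__(self):
            self.vdes = {}      # device_id -> VDE
            self.drivers = {}   # driver name -> callable using shared libs

        def register(self, vde):
            self.vdes[vde.device_id] = vde

        def call(self, device_id, driver, *args):
            vde = self.vdes.get(device_id)
            if vde is None or driver not in vde.allowed_drivers:
                raise PermissionError(f"{device_id} may not use {driver}")
            return self.drivers[driver](*args)

    hub = CaptureHub()
    hub.drivers["tls_fetch"] = lambda url: f"fetched {url} with current TLS lib"
    hub.register(VDE("camera-1", {"tls_fetch"}))
    print(hub.call("camera-1", "tls_fetch", "https://updates.example"))
    # hub.call("camera-1", "media_transcode")  -> PermissionError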

MPInspector: A Systematic and Automatic Approach for Evaluating the Security of IoT Messaging Protocols

Qinying Wang, Zhejiang University; Shouling Ji, Zhejiang University; Binjiang Institute of Zhejiang University; Yuan Tian, University of Virginia; Xuhong Zhang, Zhejiang University; Binjiang Institute of Zhejiang University; Binbin Zhao, Georgia Institute of Technology; Yuhong Kan and Zhaowei Lin, Zhejiang University; Changting Lin and Shuiguang Deng, Zhejiang University; Binjiang Institute of Zhejiang University; Alex X. Liu, Ant Group; Raheem Beyah, Georgia Institute of Technology

Available Media

Facilitated by messaging protocols (MP), many home devices are connected to the Internet, bringing convenience and accessibility to customers. However, most MP implementations deployed on IoT platforms are fragmented and not carefully implemented to support secure communication. To the best of our knowledge, there is no systematic solution yet to perform automatic security checks on MP implementations.

To bridge the gap, we present MPInspector, the first automatic and systematic solution for vetting the security of MP implementations. MPInspector combines model learning with formal analysis and operates in three stages: (a) using parameter semantics extraction and interaction logic extraction to automatically infer the state machine of an MP implementation, (b) generating security properties based on meta properties and the state machine, and (c) applying automatic property-based formal verification to identify property violations. We evaluate MPInspector on three popular MPs, MQTT, CoAP, and AMQP, implemented on nine leading IoT platforms. It identifies 252 property violations, from which we further derive eleven types of attacks under two realistic attack scenarios. In addition, we demonstrate that MPInspector is lightweight (the average overhead of end-to-end analysis is ~4.5 hours) and effective, with a precision of 100% in identifying property violations.
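
Stage (c) can be reduced to a toy: walk the inferred state machine and flag transitions that violate a generated property. The MQTT-like machine and the property ("no PUBLISH handled before authentication") are illustrative assumptions, not the tool's real model.

    # Toy property check over an inferred protocol state machine.
    TRANSITIONS = {  # (state, input message) -> (next state, broker action)
        ("INIT", "CONNECT"):   ("AUTHED", "CONNACK"),
        ("INIT", "PUBLISH"):   ("INIT",   "ACCEPTED"),   # pre-auth publish!
        ("AUTHED", "PUBLISH"): ("AUTHED", "ACCEPTED"),
    }
    UNAUTH_STATES = {"INIT"}

    def check_property():
        # property: PUBLISH must never be acted upon in a pre-auth state
        return [(state, msg)
                for (state, msg), (_, action) in TRANSITIONS.items()
                if state in UNAUTH_STATES and msg == "PUBLISH"
                and action == "ACCEPTED"]

    print("property violations:", check_property())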

HAWatcher: Semantics-Aware Anomaly Detection for Appified Smart Homes

Chenglong Fu, Temple University; Qiang Zeng, University of South Carolina; Xiaojiang Du, Temple University

Available Media

As IoT devices are integrated via automation and coupled with the physical environment, anomalies in an appified smart home, whether due to attacks or device malfunctions, may lead to severe consequences. Prior works that utilize data mining techniques to detect anomalies suffer from high false alarm rates and miss many real anomalies. Our observation is that data mining-based approaches overlook a large chunk of information about automation programs (also called smart apps) and devices. We propose Home Automation Watcher (HAWatcher), a semantics-aware anomaly detection system for appified smart homes. HAWatcher models a smart home's normal behaviors based on both event logs and semantics. Given a home, HAWatcher generates hypothetical correlations from semantic information, such as apps, device types, relations, and installation locations, and verifies them against event logs. The mined correlations are refined using correlations extracted from the installed smart apps. The refined correlations are used by a Shadow Execution engine to simulate the smart home's normal behaviors. During runtime, inconsistencies between devices' real-world states and simulated states are reported as anomalies. We evaluate our prototype on the SmartThings platform in four real-world testbeds, testing it against a total of 62 different anomaly cases. The results show that HAWatcher achieves high accuracy, significantly outperforming prior approaches.
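
A minimal sketch of the core loop, under invented events and thresholds: a semantically derived hypothesis ("motion in the kitchen implies the kitchen light turns on within 10 s") is verified against historical logs, then used at runtime to flag inconsistencies between real and expected states.

    # Verify a hypothetical correlation against event logs, then use it
    # for runtime anomaly detection. Devices, events, and the acceptance
    # threshold are assumptions.
    LOG = [  # (timestamp seconds, device, event) -- hypothetical history
        (0, "motion", "active"), (4, "light", "on"),
        (100, "motion", "active"), (103, "light", "on"),
        (200, "motion", "active"), (260, "light", "on"),  # too late once
    ]

    def support(log, window=10):
        hits = total = 0
        for t, dev, ev in log:
            if (dev, ev) == ("motion", "active"):
                total += 1
                if any(d == "light" and e == "on" and t < t2 <= t + window
                       for t2, d, e in log):
                    hits += 1
        return hits / total if total else 0.0

    conf = support(LOG)
    CORRELATION_ACCEPTED = conf >= 0.66   # keep hypotheses with enough support
    print(f"confidence={conf:.2f}, accepted={CORRELATION_ACCEPTED}")

    # Runtime: the shadow execution expects "light on" after motion; a
    # contradictory real-world state within the window is an anomaly.
    def anomaly(real_light_state_after_motion):
        return CORRELATION_ACCEPTED and real_light_state_after_motion != "on"

    print("anomaly:", anomaly("off"))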

Exposing New Vulnerabilities of Error Handling Mechanism in CAN

Khaled Serag and Rohit Bhatia, Purdue University; Vireshwar Kumar, Indian Institute of Technology Delhi; Z. Berkay Celik and Dongyan Xu, Purdue University

Available Media

Controller Area Network (CAN) has established itself as the main internal communication medium for vehicles. However, recent works have shown that error handling makes CAN nodes vulnerable to certain attacks. In light of this threat, we systematically analyze CAN's error handling and fault confinement mechanism to investigate it for further vulnerabilities. In this paper, we develop CANOX, a testing tool that monitors the behavior of a CAN node under different bus and error conditions and flags conditions that cause unexpected node behavior. Using CANOX, we found three major previously undiscovered vulnerabilities in the CAN standard that could be exploited to launch a variety of attacks. Combining the three vulnerabilities, we construct the Scan-Then-Strike Attack (STS), a multi-staged attack in which an attacker with no prior knowledge of the vehicle's internals maps the vehicle's CAN bus, identifies a safety-critical ECU, swiftly silences it, and persistently prevents it from recovering. We validate the practicality of STS by evaluating it on a CAN bus testbed and a real vehicle.
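
The fault confinement mechanism under analysis follows fixed rules from the CAN standard (ISO 11898): a node's transmit error counter (TEC) rises by 8 per transmit error and falls by 1 per success; above 127 the node becomes error-passive, and above 255 it enters bus-off. The sketch below models only these standard counter rules; the attack specifics are the paper's.

    # Minimal model of CAN fault confinement; bus-off attacks work by
    # inducing repeated transmit errors in a victim node.
    class CanNode:
        def __init__(self):
            self.tec = 0  # transmit error counter

        @property
        def state(self):
            if self.tec > 255:
                return "bus-off"
            return "error-passive" if self.tec > 127 else "error-active"

        def transmit(self, error=False):
            if self.state == "bus-off":
                return
            self.tec = self.tec + 8 if error else max(self.tec - 1, 0)

    victim = CanNode()
    for _ in range(32):          # 32 consecutive induced transmit errors
        victim.transmit(error=True)
    print(victim.tec, victim.state)  # 256 -> bus-off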

CANARY - a reactive defense mechanism for Controller Area Networks based on Active RelaYs

Bogdan Groza, Lucian Popa, and Pal-Stefan Murvay, Universitatea Politehnica Timisoara; Yuval Elovici and Asaf Shabtai, Ben-Gurion University of the Negev

Available Media

We rethink the decades-old design of the CAN bus by incorporating reactive defense capabilities into it. While its reliability and cost effectiveness made CAN the most widely used in-vehicle communication interface, its topology, physical layer, and arbitration mechanism make it impossible to prevent certain types of adversarial activity on the bus. For example, DoS attacks cannot be stopped because the physical layer gives equal rights to all connected ECUs; an adversary may exploit this by flooding the network with high-priority frames or by causing transmission errors that push honest ECUs into the bus-off state. In response, we propose a reactive mechanism based on relays placed along the bus that change the network topology in case of an attack, i.e., a moving target defense mechanism, allowing a bus guardian to filter and redirect legitimate traffic. We account for the physical properties of the bus, keeping the 120 Ω load constant at the ends of the lines whenever relays are triggered to modify the bus topology. We build a proof-of-concept implementation and test it in a laboratory setup with automotive-grade controllers, demonstrating its functionality over collected real-world in-vehicle traffic. Our experiments show that despite short-term disturbances when the relays are triggered, the frame loss is effectively zero.
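
A hedged sketch of the controller logic, with an invented linear topology: when an attack is attributed to a segment, the relays around it open so the guardian bridges the remaining segments, and switchable 120 Ω resistors re-terminate every electrically separate piece of the bus.

    # Toy CANARY-style relay reconfiguration; topology, detection, and
    # field names are assumptions, not the paper's implementation.
    SEGMENTS = ["seg0", "seg1", "seg2"]   # linear bus, relays between

    def reconfigure(attacker_segment):
        isolated = {attacker_segment}
        active = [s for s in SEGMENTS if s not in isolated]
        # every electrically separate piece needs 120 ohm at both line ends
        terminations = {p: ("120R", "120R") for p in active + sorted(isolated)}
        return {"relays_open_around": attacker_segment,
                "guardian_bridges": active,
                "terminations": terminations}

    print(reconfigure("seg1"))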

ReDMArk: Bypassing RDMA Security Mechanisms

Benjamin Rothenberger, Konstantin Taranov, Adrian Perrig, and Torsten Hoefler, ETH Zurich

Available Media

State-of-the-art remote direct memory access (RDMA) technologies such as InfiniBand (IB) or RDMA over Converged Ethernet (RoCE) are becoming widely used in data center applications and are gaining traction in cloud environments. Hence, the security of RDMA architectures is crucial, yet the potential security implications of using RDMA communication remain largely unstudied. ReDMArk shows that current security mechanisms of IB-based architectures are insufficient against both in-network attackers and attackers located on end hosts, thus affecting not only the secrecy but also the integrity of RDMA applications. We demonstrate multiple vulnerabilities in the design of IB-based architectures and implementations of RDMA-capable network interface cards (RNICs) and exploit those vulnerabilities to enable powerful attacks such as packet injection using impersonation, unauthorized memory access, and Denial-of-Service (DoS) attacks. To thwart the discovered attacks, we propose multiple mitigation mechanisms that are deployable in current RDMA networks.

Track 3

TLS

Session Chairs: Avesta Hojjati, Digicert Inc, and Juraj Somorovsky, Paderborn University

ALPACA: Application Layer Protocol Confusion - Analyzing and Mitigating Cracks in TLS Authentication

Marcus Brinkmann, Ruhr University Bochum; Christian Dresen, Münster University of Applied Sciences; Robert Merget, Ruhr University Bochum; Damian Poddebniak, Münster University of Applied Sciences; Jens Müller, Ruhr University Bochum; Juraj Somorovsky, Paderborn University; Jörg Schwenk, Ruhr University Bochum; Sebastian Schinzel, Münster University of Applied Sciences

Available Media

TLS is widely used to add confidentiality, authenticity, and integrity to application layer protocols such as HTTP, SMTP, IMAP, POP3, and FTP. However, TLS does not bind a TCP connection to the intended application layer protocol. This allows a man-in-the-middle attacker to redirect TLS traffic to a different TLS service endpoint on another IP address and/or port. For example, if subdomains share a wildcard certificate, an attacker can redirect traffic from one subdomain to another, resulting in a valid TLS session. This breaks the authentication guarantee of TLS, and cross-protocol attacks become possible in which the behavior of one service compromises the security of the other at the application layer.

In this paper, we investigate cross-protocol attacks on TLS in general and conduct a systematic case study on web servers, redirecting HTTPS requests from a victim's web browser to SMTP, IMAP, POP3, and FTP servers. We show that in realistic scenarios, the attacker can extract session cookies and other private user data or execute arbitrary JavaScript in the context of the vulnerable web server, therefore bypassing TLS and web application security.

We evaluate the real-world attack surface of web browsers and widely deployed email and FTP servers in lab experiments and with internet-wide scans. We find that 1.4M web servers are generally vulnerable to cross-protocol attacks, i.e., TLS application data confusion is possible. Of these, 114k web servers can be attacked using an exploitable application server. Finally, we discuss the effectiveness of TLS extensions such as Application Layer Protocol Negotiation (ALPN) and Server Name Indication (SNI) in mitigating these and other cross-protocol attacks.
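
The redirection step needs no cryptography on the attacker's side: a man-in-the-middle simply forwards the victim's TLS bytes to a different TLS service that presents a certificate valid for the same name (e.g., a shared wildcard certificate). The toy forwarder below illustrates only this byte-level redirection; hosts and ports are placeholders.

    # Illustrative TCP-level MitM forwarder for protocol confusion.
    import socket, threading

    VICTIM_EXPECTS = ("0.0.0.0", 8443)          # where the victim connects
    SUBSTITUTE     = ("ftp.example.test", 990)  # attacker-chosen FTPS endpoint

    def pump(src, dst):
        while (data := src.recv(4096)):
            dst.sendall(data)

    def mitm():
        srv = socket.create_server(VICTIM_EXPECTS)
        client, _ = srv.accept()
        upstream = socket.create_connection(SUBSTITUTE)
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        pump(upstream, client)  # handshake succeeds if the cert covers the name
        # the browser now speaks HTTPS to an FTPS server: protocol confusion

    # mitm()  # illustrative only; never run against systems you do not own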

Experiences Deploying Multi-Vantage-Point Domain Validation at Let's Encrypt

Henry Birge-Lee and Liang Wang, Princeton University; Daniel McCarney, Square Inc.; Roland Shoemaker, unaffiliated; Jennifer Rexford and Prateek Mittal, Princeton University

Available Media

An attacker can obtain a valid TLS certificate for a domain by hijacking communication between a certificate authority (CA) and a victim domain. Performing domain validation from multiple vantage points can defend against these attacks. We explore the design space of multi-vantage-point domain validation to achieve (1) security via sufficiently diverse vantage points, (2) performance by ensuring low latency and overhead in certificate issuance, (3) manageability by complying with CA/Browser Forum requirements and requiring minimal changes to CA operations, and (4) a low benign failure rate for legitimate requests. Our open-source implementation was deployed by the Let's Encrypt CA in February 2020 and has since secured the issuance of more than half a billion certificates during the first year of its deployment. Using real-world operational data from Let's Encrypt, we show that our approach has negligible latency and communication overhead, and a benign failure rate comparable to conventional designs with one vantage point. Finally, we evaluate the security improvements using a combination of ethically conducted real-world BGP hijacks, Internet-scale traceroute experiments, and a novel BGP simulation framework. We show that multi-vantage-point domain validation can thwart the vast majority of BGP attacks. Our work motivates the deployment of multi-vantage-point domain validation across the CA ecosystem to strengthen TLS certificate issuance and user privacy.
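
The quorum logic can be sketched briefly: the HTTP-01 challenge is fetched from several network perspectives, and issuance proceeds only if enough of them agree, so a localized BGP hijack near one vantage point no longer suffices. Vantage point names, the fetch helper, and the quorum value below are assumptions, not Let's Encrypt's actual policy.

    # Hedged sketch of multi-vantage-point HTTP-01 validation.
    import urllib.request

    VANTAGE_POINTS = ["primary", "us-west", "eu-central", "ap-south"]  # hypothetical

    def observed_token(vantage, domain, token):
        # stand-in: a real deployment proxies this fetch through
        # infrastructure located at `vantage`
        url = f"http://{domain}/.well-known/acme-challenge/{token}"
        try:
            with urllib.request.urlopen(url, timeout=5) as r:
                return r.read().decode().strip()
        except OSError:
            return None

    def validate(domain, token, expected, quorum=3):
        votes = sum(observed_token(vp, domain, token) == expected
                    for vp in VANTAGE_POINTS)
        return votes >= quorum   # e.g., primary plus two remotes must agree

    # validate("example.com", "tok123", "tok123.keyauth")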

SiamHAN: IPv6 Address Correlation Attacks on TLS Encrypted Traffic via Siamese Heterogeneous Graph Attention Network

Tianyu Cui, Gaopeng Gou, Gang Xiong, Zhen Li, Mingxin Cui, and Chang Liu, Institute of Information Engineering, Chinese Academy of Sciences, and School of Cyber Security, University of Chinese Academy of Sciences

Available Media

Unlike IPv4 addresses, which are typically masked by a NAT, IPv6 addresses could easily be correlated with users' activity, endangering their privacy. Mitigations to address this privacy concern have been deployed, making existing approaches for address-to-user correlation unreliable. This work demonstrates that an adversary could still correlate IPv6 addresses with users accurately, even with these protection mechanisms. To do this, we propose an IPv6 address correlation model – SiamHAN. The model uses a Siamese Heterogeneous Graph Attention Network to measure whether two IPv6 client addresses belong to the same user even if the user's traffic is protected by TLS encryption. Using a large real-world dataset, we show that, for the tasks of tracking target users and discovering unique users, the state-of-the-art techniques could achieve only 85% and 60% accuracy, respectively. However, SiamHAN exhibits 99% and 88% accuracy.
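The Siamese half of the design can be pictured in miniature: a single shared encoder (standing in for the heterogeneous graph attention network) maps each address's traffic to an embedding, and a similarity threshold decides whether two addresses belong to the same user. The hash-like encoder, features, and threshold below are toy placeholders, not the trained model.

    # Toy Siamese comparison: shared encoder + cosine similarity threshold.
    import math

    def encode(traffic_features):        # stand-in for the graph attention net
        return [math.tanh(sum(traffic_features[i::3])) for i in range(3)]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def same_user(feats_a, feats_b, threshold=0.9):
        return cosine(encode(feats_a), encode(feats_b)) >= threshold

    a = [0.2, 1.1, 0.4, 0.3, 1.0, 0.5]   # TLS metadata features, hypothetical
    b = [0.25, 1.0, 0.45, 0.35, 1.05, 0.5]
    print(same_user(a, b))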

Why Eve and Mallory Still Love Android: Revisiting TLS (In)Security in Android Applications

Marten Oltrogge, CISPA Helmholtz Center for Information Security; Nicolas Huaman, Sabrina Amft, and Yasemin Acar, Leibniz University Hannover; Michael Backes, CISPA Helmholtz Center for Information Security; Sascha Fahl, Leibniz University Hannover

Available Media

Android applications have a long history of being vulnerable to man-in-the-middle attacks due to insecure custom TLS certificate validation implementations. To resolve this, Google deployed the Network Security Configuration (NSC), a configuration-based approach to increase custom certificate validation logic security, and implemented safeguards in Google Play to block insecure applications.

In this paper, we perform a large-scale, in-depth investigation of the effectiveness of these countermeasures: First, we investigate the security of 99,212 NSC settings files in 1,335,322 Google Play apps using static code and manual analysis techniques. We find that 88.87% of the apps using custom NSC settings downgrade security compared to the default settings, and only 0.67% implement certificate pinning. Second, we penetrate Google Play's protection mechanisms by attempting to publish apps that are vulnerable to man-in-the-middle attacks. In contrast to official announcements by Google, we find that Play does not effectively block vulnerable apps. Finally, we perform a static code analysis study of 15,000 apps and find that 5,511 recently published apps still contain vulnerable certificate validation code.

Overall, we attribute most of the problems we find to insufficient support for developers, missing clarification of security risks in official documentation, and inadequate security checks for vulnerable applications in Google Play.
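
The kinds of downgrades the paper counts can be spotted mechanically. Below is a hedged checker for two real NSC directives, permitting cleartext traffic and trusting user-installed CA certificates; the example configuration file itself is invented.

    # Flag security-downgrading Android Network Security Configuration
    # settings in an (invented) NSC file.
    import xml.etree.ElementTree as ET

    NSC = """<network-security-config>
      <base-config cleartextTrafficPermitted="true">
        <trust-anchors>
          <certificates src="system"/>
          <certificates src="user"/>
        </trust-anchors>
      </base-config>
    </network-security-config>"""

    root = ET.fromstring(NSC)
    findings = []
    for el in root.iter():
        if el.get("cleartextTrafficPermitted") == "true":
            findings.append("cleartext HTTP permitted")
    for cert in root.iter("certificates"):
        if cert.get("src") == "user":
            findings.append("user-installed CAs trusted (MitM via user cert)")
    if not list(root.iter("pin-set")):
        findings.append("no certificate pinning configured")
    print(findings)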

Why TLS is better without STARTTLS: A Security Analysis of STARTTLS in the Email Context

Damian Poddebniak and Fabian Ising, Münster University of Applied Sciences; Hanno Böck, Independent Researcher; Sebastian Schinzel, Münster University of Applied Sciences

Available Media

TLS is one of today's most widely used and best-analyzed encryption technologies. However, for historical reasons, TLS for email protocols is often not used directly but negotiated via STARTTLS. This additional negotiation adds complexity and has historically been prone to security vulnerabilities such as naive STARTTLS stripping and command injection attacks.

We perform the first structured analysis of STARTTLS in SMTP, POP3, and IMAP and introduce EAST, a semi-automatic testing toolkit with more than 100 test cases covering a wide range of variants of STARTTLS stripping, command and response injections, tampering attacks, and UI spoofing attacks for email protocols. Our analysis focuses on the confidentiality and integrity of email submission (email client to SMTP server) and email retrieval (email client to POP3 or IMAP server). While some of our findings are also relevant for email transport (from one SMTP server to another), the security implications in email submission and retrieval are more critical because these connections involve not only individual email messages but also user credentials that allow access to a user's email archive.

We used EAST to analyze 28 email clients and 23 servers. In total, we reported over 40 STARTTLS issues, some of which allow mailbox spoofing, credential stealing, and even the hosting of HTTPS with a cross-protocol attack on IMAP. We conducted an Internet-wide scan for the particularly dangerous command injection attack and found that 320,000 email servers (2% of all email servers) are affected. Surprisingly, several clients were vulnerable to STARTTLS stripping attacks. In total, only 3 out of 28 clients did not show any STARTTLS-specific security issues. Even though the command injection attack received multiple CVEs in the past, EAST detected eight new instances of this problem. In total, only 7 out of 23 tested servers were never affected by this issue. We conclude that STARTTLS is error-prone to implement, under-specified in the standards, and should be avoided.
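
A command injection test of this kind can be sketched for IMAP: send a second command in the same TCP segment as STARTTLS; a vulnerable server buffers the plaintext tail and answers it inside the TLS session. The host is a placeholder, and this is a test sketch under those assumptions, not EAST's actual code; run it only against servers you operate.

    # Sketch of a STARTTLS command-injection probe for IMAP (port 143).
    import socket, ssl

    def starttls_injection_test(host, port=143):
        raw = socket.create_connection((host, port), timeout=10)
        raw.recv(1024)                              # server greeting
        # a correct implementation must discard the plaintext "b NOOP"
        raw.sendall(b"a STARTTLS\r\nb NOOP\r\n")
        if not raw.recv(1024).startswith(b"a OK"):
            return "no STARTTLS"
        ctx = ssl.create_default_context()
        ctx.check_hostname = False                  # test setup only
        ctx.verify_mode = ssl.CERT_NONE
        tls = ctx.wrap_socket(raw, server_hostname=host)
        tls.sendall(b"c CAPABILITY\r\n")
        reply = tls.recv(4096)
        # a "b OK" answer arriving inside TLS means the plaintext command
        # was buffered across the handshake: command injection
        return "VULNERABLE" if b"b OK" in reply else "ok"

    # print(starttls_injection_test("imap.test.local"))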

What's in a Name? Exploring CA Certificate Control

Zane Ma and Joshua Mason, University of Illinois at Urbana-Champaign; Manos Antonakakis, Georgia Institute of Technology; Zakir Durumeric, Stanford University; Michael Bailey, University of Illinois at Urbana-Champaign

Available Media

TLS clients rely on a supporting PKI in which certificate authorities (CAs)—trusted organizations—validate and cryptographically attest to the identities of web servers. A client's confidence that it is connecting to the right server depends entirely on the set of CAs that it trusts. However, as we demonstrate in this work, the identity specified in CA certificates is frequently inaccurate due to lax naming requirements, ownership changes, and long-lived certificates. This not only muddles client selection of trusted CAs, but also prevents PKI operators and researchers from correctly attributing CA certificate issues to CA organizations. To help Web PKI participants understand the organizations that control each CA certificate, we develop Fides, a system that models and clusters CA operational behavior in order to detect CA certificates under shared operational control. We label the clusters that Fides uncovers, and build a new database of CA ownership that corrects the CA operator for 241 CA certificates, and expands coverage to 651 new CA certificates, leading to a more complete picture of CA certificate control.
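The clustering intuition behind Fides, reduced to a toy: CA certificates whose operational fingerprints match (e.g., shared audit statements, revocation infrastructure, and issuance configuration) are grouped as one operator, regardless of the organization named in the certificate. The certificates, features, and exact-match clustering below are invented stand-ins for the paper's behavioral model.

    # Group CA certificates by (hypothetical) operational fingerprints.
    from collections import defaultdict

    CA_CERTS = [  # (certificate name, operational fingerprint) -- hypothetical
        ("Example Root CA",   ("auditor-A", "ocsp.opX.net", "profile-7")),
        ("Acquired Brand CA", ("auditor-A", "ocsp.opX.net", "profile-7")),
        ("Other Trust CA",    ("auditor-B", "ocsp.opY.org", "profile-2")),
    ]

    clusters = defaultdict(list)
    for name, fingerprint in CA_CERTS:
        clusters[fingerprint].append(name)

    for i, names in enumerate(clusters.values()):
        print(f"operator cluster {i}: {names}")
    # "Acquired Brand CA" lands with "Example Root CA": one operator,
    # despite the different organization name on the certificate.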

2:45 pm–3:00 pm

Closing Remarks

Program Co-Chairs: Michael Bailey, University of Illinois at Urbana–Champaign, and Rachel Greenstadt, New York University

3:00 pm–4:00 pm

Hack@SEC Winner Report Out and Award Ceremony