USENIX Security '21 Summer Accepted Papers

USENIX Security '21 has three submission deadlines. Prepublication versions of the accepted papers from the summer submission deadline are available below. The full program will be available in May 2021.

Weak Links in Authentication Chains: A Large-scale Analysis of Email Sender Spoofing Attacks

Kaiwen Shen, Chuhan Wang, and Minglei Guo, Tsinghua University; Xiaofeng Zheng, Tsinghua University and Qi An Xin Technology Research Institute; Chaoyi Lu and Baojun Liu, Tsinghua University; Yuxuan Zhao, North China Institute of Computing Technology; Shuang Hao, University of Texas at Dallas; Haixin Duan, Tsinghua University; Qi An Xin Technology Research Institute; Qingfeng Pan, Coremail Technology Co. Ltd; Min Yang, Fudan University

Available Media

As a fundamental communicative service, email is playing an important role in both individual and corporate communications, which also makes it one of the most frequently attack vectors. An email's authenticity is based on an authentication chain involving multiple protocols, roles and services, the inconsistency among which creates security threats. Thus, it depends on the weakest link of the chain, as any failed part can break the whole chain-based defense.

This paper systematically analyzes the transmission of an email and identifies a series of new attacks capable of bypassing SPF, DKIM, DMARC and user-interface protections. In particular, by conducting a "cocktail" joint attack, more realistic emails can be forged to penetrate the celebrated email services, such as Gmail and Outlook. We conduct a large-scale experiment on 30 popular email services and 23 email clients, and find that all of them are vulnerable to certain types of new attacks. We have duly reported the identified vulnerabilities to the related email service providers, and received positive responses from 11 of them, including Gmail, Yahoo, iCloud and Alibaba. Furthermore, we propose key mitigating measures to defend against the new attacks. Therefore, this work is of great value for identifying email spoofing attacks and improving the email ecosystem's overall security.

Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection

Di Tang, Chinese University of Hong Kong; XiaoFeng Wang and Haixu Tang, Indiana University; Kehuan Zhang, Chinese University of Hong Kong

Available Media

A security threat to deep neural networks (DNN) is data contamination attack, in which an adversary poisons the training data of the target model to inject a backdoor so that images carrying a specific trigger will always be given a specific label. We discover that prior defense on this problem assumes the dominance of the trigger in model's representation space, which causes any image with the trigger to be classified to the target label. Such dominance comes from the unique representations of trigger-carrying images, which are assumed to be significantly different from what benign images produce. Our research, however, shows that this assumption can be broken by a targeted contamination TaCT that obscures the difference between those two kinds of representations and causes the attack images to be less distinguishable from benign ones, thereby evading existing protection.

In our research, we observe that TaCT can affect the representation distribution of the target class but can hardly change the distribution across all classes, allowing us to build new defense performing a statistic analysis on the global information. More specifically, we leverage an EM algorithm to decompose an images into its identity part (e.g., person) and variation part (e.g., poses). Then the distribution of the variation, based upon the global information across all classes, is utilized by a likelihood-ratio test to analyze the representations in each class, identifying those more likely to be characterized by a mixture model resulted from adding attack samples into the legitimate image pool of the current class. Our research illustrates that our approach can effectively detect data contamination attacks, not only the known ones but the new TaCT attack discovered in our study.

Automated Discovery of Denial-of-Service Vulnerabilities in Connected Vehicle Protocols

Shengtuo Hu, University of Michigan; Qi Alfred Chen, UC Irvine; Jiachen Sun, Yiheng Feng, Z. Morley Mao, and Henry X. Liu, University of Michigan

Available Media

With the development of the emerging Connected Vehicle (CV) technology, vehicles can wirelessly communicate with traffic infrastructure and other vehicles to exchange safety and mobility information in real time. However, the integrated communication capability inevitably increases the attack surface of vehicles, which can be exploited to cause safety hazard on the road. Thus, it is highly desirable to systematically understand design-level flaws in the current CV network stack as well as in CV applications, and the corresponding security/safety consequences so that these flaws can be proactively discovered and addressed before large-scale deployment.

In this paper, we design CVAnalyzer, a system for discovering design-level flaws for availability violations of the CV network stack, as well as quantifying the corresponding security/safety consequences. To achieve this, CVAnalyzer combines the attack discovery capability of a general model checker and the quantitative threat assessment capability of a probabilistic model checker. Using CVAnalyzer, we successfully uncovered 4 new DoS (Denial-of-Service) vulnerabilities of the latest CV network protocols and 14 new DoS vulnerabilities of two CV platoon management protocols. Our quantification results show that these attacks can have as high as 99% success rates, and in the worst case can at least double the delay in packet processing, violating the latency requirement in CV communication. We implemented and validated all attacks in a real-world testbed, and also analyzed the fundamental causes to propose potential solutions. We have reported our findings in the CV network protocols to the IEEE 1609 Working Group, and the group has acknowledged the discovered vulnerabilities and plans to adopt our solutions.

An Analysis of Speculative Type Confusion Vulnerabilities in the Wild

Ofek Kirzner and Adam Morrison, Tel Aviv University

Distinguished Paper Award Winner and First Prize winner of the 2021 Internet Defense Prize

Award: 
Distinguished Paper Award
Available Media

Spectre v1 attacks, which exploit conditional branch misprediction, are often identified with attacks that bypass array bounds checking to leak data from a victim's memory. Generally, however, Spectre v1 attacks can exploit any conditional branch misprediction that makes the victim execute code incorrectly. In this paper, we investigate speculative type confusion, a Spectre v1 attack vector in which branch mispredictions make the victim execute with variables holding values of the wrong type and thereby leak memory content.

We observe that speculative type confusion can be inadvertently introduced by a compiler, making it extremely hard for programmers to reason about security and manually apply Spectre mitigations. We thus set out to determine the extent to which speculative type confusion affects the Linux kernel. Our analysis finds exploitable and potentially-exploitable arbitrary memory disclosure vulnerabilities. We also find many latent vulnerabilities, which could become exploitable due to innocuous system changes, such as coding style changes.

Our results suggest that Spectre mitigations which rely on statically/manually identifying "bad" code patterns need to be rethought, and more comprehensive mitigations are needed.

PACStack: an Authenticated Call Stack

Hans Liljestrand, University of Waterloo; Thomas Nyman and Lachlan J. Gunn, Aalto University; Jan-Erik Ekberg, Huawei Technologies and Aalto University; N. Asokan, University of Waterloo and Aalto University

Available Media

A popular run-time attack technique is to compromise the control-flow integrity of a program by modifying function return addresses on the stack. So far, shadow stacks have proven to be essential for comprehensively preventing return address manipulation. Shadow stacks record return addresses in integrity-protected memory secured with hardware-assistance or software access control. Software shadow stacks incur high overheads or trade off security for efficiency. Hardware-assisted shadow stacks are efficient and secure, but require the deployment of special-purpose hardware.

We present authenticated call stack (ACS), an approach that uses chained message authentication codes (MACs). Our prototype, PACStack, uses the ARMv8.3-A general purpose hardware mechanism for pointer authentication (PA) to implement ACS. Via a rigorous security analysis, we show that PACStack achieves security comparable to hardware-assisted shadow stacks without requiring dedicated hardware. We demonstrate that PACStack's performance overhead is small (≈3%).

Can Systems Explain Permissions Better? Understanding Users' Misperceptions under Smartphone Runtime Permission Model

Bingyu Shen, University of California, San Diego; Lili Wei, The Hong Kong University of Science and Technology; Chengcheng Xiang, Yudong Wu, Mingyao Shen, and Yuanyuan Zhou, University of California, San Diego; Xinxin Jin, Whova, Inc.

Available Media

Current smartphone operating systems enable users to manage permissions according to their personal preferences with a runtime permission model. Nonetheless, the systems provide very limited information when requesting permissions, making it difficult for users to understand permissions' capabilities and potentially induced risks.

In this paper, we first investigated to what extent current system-provided information can help users understand the scope of permissions and their potential risks. We took a mixed-methods approach by collecting real permission settings from 4,636 Android users, an interview study of 20 participants, and large-scale Internet surveys of 1559 users. Our study identified several common misunderstandings on the runtime permission model among users. We found that only a very small percentage (6.1%) of users can infer the scope of permission groups accurately from the system-provided information. This indicates that the information provided by current systems is far from sufficient.

We thereby explored what extra information that systems can provide to help users make more informed permission decisions. By surveying users' common concerns on apps' permission requests, we identified five types of information (i.e., decision factors) that are helpful for users' decisions. We further studied the impact and helpfulness of the factors to users' permission decisions with both positive and negative messages. Our study shows that the background access factor helps most while the grant rate helps the least. Based on the findings, we provide suggestions for system designers to enhance future systems with more permission information.

EVMPatch: Timely and Automated Patching of Ethereum Smart Contracts

Michael Rodler, University of Duisburg-Essen; Wenting Li and Ghassan O. Karame, NEC Laboratories Europe; Lucas Davi, University of Duisburg-Essen

Available Media

Recent attacks exploiting errors in smart contract code had devastating consequences thereby questioning the benefits of this technology. It is currently highly challenging to fix errors and deploy a patched contract in time. Instant patching is especially important since smart contracts are always online due to the distributed nature of blockchain systems. They also manage considerable amounts of assets, which are at risk and often beyond recovery after an attack. Existing solutions to upgrade smart contracts depend on manual and error-prone processes. This paper presents a framework, called EVMPatch, to instantly and automatically patch faulty smart contracts. EVMPatch features a bytecode rewriting engine for the popular Ethereum blockchain, and transparently/automatically rewrites common off-the-shelf contracts to upgradable contracts. The proof-of-concept implementation of EVMPatch automatically hardens smart contracts that are vulnerable to integer over/underflows and access control errors, but can be easily extended to cover more bug classes. Our evaluation on 14,000 real-world contracts demonstrates that our approach successfully blocks attack transactions launched on contracts, while keeping the intended functionality of the contract intact. We perform a study with experienced software developers, showing that EVMPatch is practical, and reduces the time for converting a given Solidity smart contract to an upgradable contract by 97.6 %, while ensuring functional equivalence to the original contract.

Privacy and Integrity Preserving Computations with CRISP

Sylvain Chatel, Apostolos Pyrgelis, Juan Ramón Troncoso-Pastoriza, and Jean-Pierre Hubaux, EPFL

Available Media

In the digital era, users share their personal data with service providers to obtain some utility, e.g., access to high-quality services. Yet, the induced information flows raise privacy and integrity concerns. Consequently, cautious users may want to protect their privacy by minimizing the amount of information they disclose to curious service providers. Service providers are interested in verifying the integrity of the users' data to improve their services and obtain useful knowledge for their business. In this work, we present a generic solution to the trade-off between privacy, integrity, and utility, by achieving authenticity verification of data that has been encrypted for offloading to service providers. Based on lattice-based homomorphic encryption and commitments, as well as zero-knowledge proofs, our construction enables a service provider to process and reuse third-party signed data in a privacy-friendly manner with integrity guarantees. We evaluate our solution on different use cases such as smart-metering, disease susceptibility, and location-based activity tracking, thus showing its versatility. Our solution achieves broad generality, quantum-resistance, and relaxes some assumptions of state-of-the-art solutions without affecting performance.

Virtual Secure Platform: A Five-Stage Pipeline Processor over TFHE

Kotaro Matsuoka, Ryotaro Banno, Naoki Matsumoto, Takashi Sato, and Song Bian, Kyoto University

Available Media

We present Virtual Secure Platform (VSP), the first comprehensive platform that implements a multi-opcode general-purpose sequential processor over Fully Homomorphic Encryption (FHE) for Secure Multi-Party Computation (SMPC). VSP protects both the data and functions on which the data are evaluated from the adversary in a secure computation offloading situation like cloud computing. We proposed a complete processor architecture with a five-stage pipeline, which improves the performance of the VSP by providing more parallelism in circuit evaluation. In addition, we also designed a custom Instruction Set Architecture (ISA) to reduce the gate count of our processor, along with an entire set of toolchains to ensure that arbitrary C programs can be compiled into our custom ISA. In order to speed up instruction evaluation over VSP, CMUX Memory based ROM and RAM constructions over FHE are also proposed. Our experiments show both the pipelined architecture and the CMUX Memory are effective in improving the performance of the proposed processor. We provide an fully open-source implementation of VSP which attains a per-instruction latency of less than 1 second. We show that compared to the best existing processor over FHE, our implementation runs nearly 1,600× faster.

Senate: A Maliciously-Secure MPC Platform for Collaborative Analytics

Rishabh Poddar and Sukrit Kalra, UC Berkeley; Avishay Yanai, VMware Research; Ryan Deng, Raluca Ada Popa, and Joseph M. Hellerstein, UC Berkeley

Available Media

Many organizations stand to benefit from pooling their data together in order to draw mutually beneficial insights—e.g., for fraud detection across banks, better medical studies across hospitals, etc. However, such organizations are often prevented from sharing their data with each other by privacy concerns, regulatory hurdles, or business competition.

We present Senate, a system that allows multiple parties to collaboratively run analytical SQL queries without revealing their individual data to each other. Unlike prior works on secure multi-party computation (MPC) that assume that all parties are semi-honest, Senate protects the data even in the presence of malicious adversaries. At the heart of Senate lies a new MPC decomposition protocol that decomposes the cryptographic MPC computation into smaller units, some of which can be executed by subsets of parties and in parallel, while preserving its security guarantees. Senate then provides a new query planning algorithm that decomposes and plans the cryptographic computation effectively, achieving a performance of up to 145 × faster than the state-of-the-art.

Accurately Measuring Global Risk of Amplification Attacks using AmpMap

Soo-Jin Moon, Yucheng Yin, and Rahul Anand Sharma, Carnegie Mellon University; Yifei Yuan, Alibaba Group; Jonathan M. Spring, CERT/CC, SEI, Carnegie Mellon University; Vyas Sekar, Carnegie Mellon University

Available Media

Many recent DDoS attacks rely on amplification, where an attacker induces public servers to generate a large volume of network traffic to a victim. In this paper, we argue for a low-footprint Internet health monitoring service that can systematically and continuously quantify this risk to inform mitigation efforts. Unfortunately, the problem is challenging because amplification is a complex function of query (header) values and server instances. As such, existing techniques that enumerate the total number of servers or focus on a specific amplification-inducing query are fundamentally imprecise. In designing AmpMap, we leverage key structural insights to develop an efficient approach that searches across the space of protocol headers and servers. Using AmpMap, we scanned thousands of servers for 6 UDP-based protocols. We find that relying on prior recommendations to block or rate-limit specific queries still leaves open substantial residual risk as they miss many other amplification-inducing query patterns. We also observe significant variability across servers and protocols, and thus prior approaches that rely on server census can substantially misestimate amplification risk.

Protecting Cryptography Against Compelled Self-Incrimination

Sarah Scheffler and Mayank Varia, Boston University

Available Media

The information security community has devoted substantial effort to the design, development, and universal deployment of strong encryption schemes that withstand search and seizure by computationally-powerful nation-state adversaries. In response, governments are increasingly turning to a different tactic: issuing subpoenas that compel people to decrypt devices themselves, under the penalty of contempt of court if they do not comply. Compelled decryption subpoenas sidestep questions around government search powers that have dominated the Crypto Wars and instead touch upon a different (and still unsettled) area of the law: how encryption relates to a person's right to silence and against self-incrimination.

In this work, we provide a rigorous, composable definition of a critical piece of the law that determines whether cryptosystems are vulnerable to government compelled disclosure in the United States. We justify our definition by showing that it is consistent with prior court cases. We prove that decryption is often not compellable by the government under our definition. Conversely, we show that many techniques that bolster security overall can leave one more vulnerable to compelled disclosure.

As a result, we initiate the study of protecting cryptographic protocols against the threat of future compelled disclosure. We find that secure multi-party computation is particularly vulnerable to this threat, and we design and implement new schemes that are provably resilient in the face of government compelled disclosure. We believe this work should influence the design of future cryptographic primitives and contribute toward the legal debates over the constitutionality of compelled decryption.

Effective Notification Campaigns on the Web: A Matter of Trust, Framing, and Support

Max Maass and Alina Stöver, TU Darmstadt; Henning Pridöhl, Universität Bamberg; Sebastian Bretthauer, Goethe-Universität Frankfurt; Dominik Herrmann, Universität Bamberg; Matthias Hollick, TU Darmstadt; Indra Spiecker, Goethe-Universität Frankfurt

Available Media

Misconfigurations and outdated software are a major cause of compromised websites and data leaks. Past research has proposed and evaluated sending automated security notifications to the operators of misconfigured websites, but encountered issues with reachability, mistrust, and a perceived lack of importance. In this paper, we seek to understand the determinants of effective notifications. We identify a data protection misconfiguration that affects 12.7 % of the 1.3 million websites we scanned and opens them up to legal liability. Using a subset of 4754 websites, we conduct a multivariate randomized controlled notification experiment, evaluating contact medium, sender, and framing of the message. We also include a link to a public web-based self-service tool that is run by us in disguise and conduct an anonymous survey of the notified website owners (N=477) to understand their perspective.

We find that framing a misconfiguration as a problem of legal compliance can increase remediation rates, especially when the notification is sent as a letter from a legal research group, achieving remediation rates of 76.3 % compared to 33.9 % for emails sent by computer science researchers warning about a privacy issue. Across all groups, 56.6 % of notified owners remediated the issue, compared to 9.2 % in the control group. In conclusion, we present factors that lead website owners to trust a notification, show what framing of the notification brings them into action, and how they can be supported in remediating the issue.

Hermes Attack: Steal DNN Models with Lossless Inference Accuracy

Yuankun Zhu, The University of Texas at Dallas; Yueqiang Cheng, Baidu Security; Husheng Zhou, VMware; Yantao Lu, Syracuse University

Available Media

Deep Neural Network (DNN) models become one of the most valuable enterprise assets due to their critical roles in all aspects of applications. With the trend of privatization deployment of DNN models, the data leakage of the DNN models is becoming increasingly severe and widespread. All existing model-extraction attacks can only leak parts of targeted DNN models with low accuracy or high overhead. In this paper, we first identify a new attack surface -- unencrypted PCIe traffic, to leak DNN models. Based on this new attack surface, we propose a novel model-extraction attack, namely Hermes Attack, which is the first attack to fully steal the whole victim DNN model. The stolen DNN models have the same hyper-parameters, parameters, and semantically identical architecture as the original ones. It is challenging due to the closed-source CUDA runtime, driver, and GPU internals, as well as the undocumented data structures and the loss of some critical semantics in the PCIe traffic. Additionally, there are millions of PCIe packets with numerous noises and chaos orders. Our Hermes Attack addresses these issues by massive reverse engineering efforts and reliable semantic reconstruction, as well as skillful packet selection and order correction. We implement a prototype of the Hermes Attack and evaluate two sequential DNN models (i.e., MINIST and VGG) and one non-sequential DNN model (i.e., ResNet) on three NVIDIA GPU platforms, i.e., NVIDIA Geforce GT 730, NVIDIA Geforce GTX 1080 Ti, and NVIDIA Geforce RTX 2080 Ti. The evaluation results indicate that our scheme can efficiently and completely reconstruct ALL of them by making inferences on any one image. Evaluated with Cifar10 test dataset that contains 10, 000 images, the experiment results show that the stolen models have the same inference accuracy as the original ones (i.e., lossless inference accuracy).

Deep Entity Classification: Abusive Account Detection for Online Social Networks

Teng Xu, Gerard Goossen, Huseyin Kerem Cevahir, Sara Khodeir, and Yingyezhe Jin, Facebook, Inc; Frank Li, Facebook, Inc, and Georgia Institute of Technology; Shawn Shan, Facebook, Inc, and University of Chicago; Sagar Patel and David Freeman, Facebook, Inc; Paul Pearce, Facebook, Inc, and Georgia Institute of Technology

Available Media

Online social networks (OSNs) attract attackers that use abusive accounts to conduct malicious activities for economic, political, and personal gain. In response, OSNs often deploy abusive account classifiers using machine learning (ML) approaches. However, a practical, effective ML-based defense requires carefully engineering features that are robust to adversarial manipulation, obtaining enough ground truth labeled data for model training, and designing a system that can scale to all active accounts on an OSN (potentially in the billions).

To address these challenges we present Deep Entity Classification (DEC), an ML framework that detects abusive accounts in OSNs that have evaded other, traditional abuse detection systems. We leverage the insight that while accounts in isolation may be difficult to classify, their embeddings in the social graph—the network structure, properties, and behaviors of themselves and those around them—are fundamentally difficult for attackers to replicate or manipulate at scale. Our system:

  • Extracts "deep features" of accounts by aggregating properties and behavioral features from their direct and indirect neighbors in the social graph.
  • Employs a "multi-stage multi-task learning" (MS-MTL) paradigm that leverages imprecise ground truth data by consuming, in separate stages, both a small number of high-precision human-labeled samples and a large amount of lower-precision automated labels. This architecture results in a single model that provides high-precision classification for multiple types of abusive accounts.
  • Scales to billions of users through various sampling and reclassification strategies that reduce system load.

DEC has been deployed at Facebook where it classifies all users continuously, resulting in an estimated reduction of abusive accounts on the network by 27% beyond those already detected by other, traditional methods.

Evil Under the Sun: Understanding and Discovering Attacks on Ethereum Decentralized Applications

Liya Su, Indiana University Bloomington; Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Xinyue Shen, Indiana University Bloomington and Alibaba Group; Xiangyu Du, Indiana University Bloomington; Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Xiaojing Liao, XiaoFeng Wang, and Luyi Xing, Indiana University Bloomington; Baoxu Liu, Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Available Media

The popularity of Ethereum decentralized applications (Dapps) also brings in new security risks: it has been reported that these Dapps have been under various kinds of attacks from cybercriminals to gain profit. To the best of our knowledge, little has been done so far to understand this new cybercrime, in terms of its scope, criminal footprints and attack operational intents, not to mention any efforts to investigate these attack incidents automatically on a large scale. In this paper, we performed the first measurement study on real-world Dapp attack instances to recover critical threat intelligence (e.g., kill chain and attack patterns). Utilizing such threat intelligence, we proposed the first technique DEFIER to automatically investigate attack incidents on a large scale. Running DEFIER on 2.3 million transactions from 104 Ethereum on-chain Dapps, we were able to identify 476,342 exploit transactions on 85 target Dapps, which related to 75 0-day victim Dapps and 17K previously-unknown attacker EOAs. To the best of our knowledge, it is the largest Ethereum on-chain Dapp attack incidents dataset ever reported.

PTAuth: Temporal Memory Safety via Robust Points-to Authentication

Reza Mirzazade Farkhani, Mansour Ahmadi, and Long Lu, Northeastern University

Available Media

Temporal memory corruptions are commonly exploited software vulnerabilities that can lead to powerful attacks. Despite significant progress made by decades of research on mitigation techniques, existing countermeasures fall short due to either limited coverage or overly high overhead. Furthermore, they require external mechanisms (e.g., spatial memory safety) to protect their metadata. Otherwise, their protection can be bypassed or disabled.

To address these limitations, we present robust points-to authentication, a novel runtime scheme for detecting all kinds of temporal memory corruptions. We built a prototype system, called PTAuth, that realizes this scheme on ARM architectures. PTAuth contains a customized compiler for code analysis and instrumentation and a runtime library for performing the points-to authentication as a protected program runs. PTAuth leverages the Pointer Authentication Code (PAC) feature, provided by the ARMv8.3 and later CPUs, which serves as a simple hardware-based encryption primitive. PTAuth uses minimal in-memory metadata and protects its metadata without requiring spatial memory safety. We report our evaluation of PTAuth in terms of security, robustness and performance using 150 vulnerable programs from Juliet test suite and the SPEC CPU2006 benchmarks. PTAuth detects all three categories of heap-based temporal memory corruptions, generates zero false alerts, and slows down program execution by 26% (this number was measured based on software-emulated PAC; it is expected to decrease to 20% when using hardware-based PAC). We also show that PTAuth incurs 2% memory overhead thanks to the efficient use of metadata.

UNIFUZZ: A Holistic and Pragmatic Metrics-Driven Platform for Evaluating Fuzzers

Yuwei Li, Zhejiang University; Shouling Ji, Zhejiang University/Zhejiang University NGICS Platform; Yuan Chen, Zhejiang University; Sizhuang Liang, Georgia Institute of Technology; Wei-Han Lee, IBM Research; Yueyao Chen and Chenyang Lyu, Zhejiang University; Chunming Wu, Zhejiang University/Zhejiang Lab, Hangzhou, China; Raheem Beyah, Georgia Institute of Technology; Peng Cheng, Zhejiang University NGICS Platform/Zhejiang University; Kangjie Lu, University of Minnesota; Ting Wang, Pennsylvania State University

Available Media

A flurry of fuzzing tools (fuzzers) have been proposed in the literature, aiming at detecting software vulnerabilities effectively and efficiently. To date, it is however still challenging to compare fuzzers due to the inconsistency of the benchmarks, performance metrics, and/or environments for evaluation, which buries the useful insights and thus impedes the discovery of promising fuzzing primitives. In this paper, we design and develop UNIFUZZ, an open-source and metrics-driven platform for assessing fuzzers in a comprehensive and quantitative manner. Specifically, UNIFUZZ to date has incorporated 35 usable fuzzers, a benchmark of 20 real-world programs, and six categories of performance metrics. We first systematically study the usability of existing fuzzers, find and fix a number of flaws, and integrate them into UNIFUZZ. Based on the study, we propose a collection of pragmatic performance metrics to evaluate fuzzers from six complementary perspectives. Using UNIFUZZ, we conduct in-depth evaluations of several prominent fuzzers including AFL [1], AFLFast [2], Angora [3], Honggfuzz [4], MOPT [5], QSYM [6], T-Fuzz [7] and VUzzer64 [8]. We find that none of them outperforms the others across all the target programs, and that using a single metric to assess the performance of a fuzzer may lead to unilateral conclusions, which demonstrates the significance of comprehensive metrics. Moreover, we identify and investigate previously overlooked factors that may significantly affect a fuzzer's performance, including instrumentation methods and crash analysis tools. Our empirical results show that they are critical to the evaluation of a fuzzer. We hope that our findings can shed light on reliable fuzzing evaluation, so that we can discover promising fuzzing primitives to effectively facilitate fuzzer designs in the future.

VoltPillager: Hardware-based fault injection attacks against Intel SGX Enclaves using the SVID voltage scaling interface

Zitai Chen, Georgios Vasilakis, Kit Murdock, Edward Dean, David Oswald, and Flavio D. Garcia, School of Computer Science, University of Birmingham, UK

Available Media

Hardware-based fault injection attacks such as voltage and clock glitching have been thoroughly studied on embedded devices. Typical targets for such attacks include smartcards and low-power microcontrollers used in IoT devices. This paper presents the first hardware-based voltage glitching attack against a fully-fledged Intel CPU. The transition to complex CPUs is not trivial due to several factors, including: a complex operating system, large power consumption, multi-threading, and high clock speeds. To this end, we have built VoltPillager, a low-cost tool for injecting messages on the Serial Voltage Identification bus between the CPU and the voltage regulator on the motherboard. This allows us to precisely control the CPU core voltage. We leverage this powerful tool to mount fault-injection attacks that breach confidentiality and integrity of Intel SGX enclaves. We present proof-of-concept key-recovery attacks against cryptographic algorithms running inside SGX. We demonstrate that VoltPillager attacks are more powerful than recent software-only undervolting attacks against SGX (CVE-2019-11157) because they work on fully patched systems with all countermeasures against software undervolting enabled. Additionally, we are able to fault security-critical operations by delaying memory writes. Mitigation of VoltPillager is not straightforward and may require a rethink of the SGX adversarial model where a cloud provider is untrusted and has physical access to the hardware.

ReDMArk: Bypassing RDMA Security Mechanisms

Benjamin Rothenberger, Konstantin Taranov, Adrian Perrig, and Torsten Hoefler, ETH Zurich

Available Media

State-of-the-art remote direct memory access (RDMA) technologies such as InfiniBand (IB) or RDMA over Converged Ethernet (RoCE) are becoming widely used in data center applications and are gaining traction in cloud environments. Hence, the security of RDMA architectures is crucial, yet potential security implications of using RDMA communication remain largely unstudied. ReDMArk shows that current security mechanisms of IB-based architectures are insufficient against both in-network attackers and attackers located on end hosts, thus affecting not only secrecy, but also integrity of RDMA applications. We demonstrate multiple vulnerabilities in the design of IB-based architectures and implementations of RDMA-capable network interface cards (RNICs) and exploit those vulnerabilities to enable powerful attacks such as packet injection using impersonation, unauthorized memory access, and Denial-of-Service (DoS) attacks. To thwart the discovered attacks we propose multiple mitigation mechanisms that are deployable in current RDMA networks.

Stealing Links from Graph Neural Networks

Xinlei He, CISPA Helmholtz Center for Information Security; Jinyuan Jia, Duke University; Michael Backes, CISPA Helmholtz Center for Information Security; Neil Zhenqiang Gong, Duke University; Yang Zhang, CISPA Helmholtz Center for Information Security

Available Media

Graph data, such as chemical networks and social networks, may be deemed confidential/private because the data owner often spends lots of resources collecting the data or the data contains sensitive information, e.g., social relationships. Recently, neural networks were extended to graph data, which are known as graph neural networks (GNNs). Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection. In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph. Specifically, given a black-box access to a GNN model, our attacks can infer whether there exists a link between any pair of nodes in the graph used to train the model. We call our attacks link stealing attacks. We propose a threat model to systematically characterize an adversary's background knowledge along three dimensions which in total leads to a comprehensive taxonomy of 8 different link stealing attacks. We propose multiple novel methods to realize these 8 attacks. Extensive experiments on 8 real-world datasets show that our attacks are effective at stealing links, e.g., AUC (area under the ROC curve) is above 0.95 in multiple cases. Our results indicate that the outputs of a GNN model reveal rich information about the structure of the graph used to train the model.

Hiding the Access Pattern is Not Enough: Exploiting Search Pattern Leakage in Searchable Encryption

Simon Oya and Florian Kerschbaum, University of Waterloo

Available Media

Recent Searchable Symmetric Encryption (SSE) schemes enable secure searching over an encrypted database stored in a server while limiting the information leaked to the server. These schemes focus on hiding the access pattern, which refers to the set of documents that match the client's queries. This provides protection against current attacks that largely depend on this leakage to succeed. However, most SSE constructions also leak whether or not two queries aim for the same keyword, also called the search pattern.

In this work, we show that search pattern leakage can severely undermine current SSE defenses. We propose an attack that leverages both access and search pattern leakage, as well as some background and query distribution information, to recover the keywords of the queries performed by the client. Our attack follows a maximum likelihood estimation approach, and is easy to adapt against SSE defenses that obfuscate the access pattern. We empirically show that our attack is efficient, it outperforms other proposed attacks, and it completely thwarts two out of the three defenses we evaluate it against, even when these defenses are set to high privacy regimes. These findings highlight that hiding the search pattern, a feature that most constructions are lacking, is key towards providing practical privacy guarantees in SSE.

Adapting Security Warnings to Counter Online Disinformation

Ben Kaiser, Jerry Wei, Eli Lucherini, and Kevin Lee, Princeton University; J. Nathan Matias, Cornell University; Jonathan Mayer, Princeton University

Available Media

Disinformation is proliferating on the internet, and platforms are responding by attaching warnings to content. There is little evidence, however, that these warnings help users identify or avoid disinformation. In this work, we adapt methods and results from the information security warning literature in order to design and evaluate effective disinformation warnings.

In an initial laboratory study, we used a simulated search task to examine contextual and interstitial disinformation warning designs. We found that users routinely ignore contextual warnings, but users notice interstitial warnings---and respond by seeking information from alternative sources.

We then conducted a follow-on crowdworker study with eight interstitial warning designs. We confirmed a significant impact on user information-seeking behavior, and we found that a warning's design could effectively inform users or convey a risk of harm. We also found, however, that neither user comprehension nor fear of harm moderated behavioral effects.

Our work provides evidence that disinformation warnings can---when designed well---help users identify and avoid disinformation. We show a path forward for designing effective warnings, and we contribute repeatable methods for evaluating behavioral effects. We also surface a possible dilemma: disinformation warnings might be able to inform users and guide behavior, but the behavioral effects might result from user experience friction, not informed decision making.

ABY2.0: Improved Mixed-Protocol Secure Two-Party Computation

Arpita Patra, Indian Institute of Science; Thomas Schneider, TU Darmstadt; Ajith Suresh, Indian Institute of Science; Hossein Yalame, TU Darmstadt

Available Media

Secure Multi-party Computation (MPC) allows a set of mutually distrusting parties to jointly evaluate a function on their private inputs while maintaining input privacy. In this work, we improve semi-honest secure two-party computation (2PC) over rings, with a focus on the efficiency of the online phase.

We propose an efficient mixed-protocol framework, outperforming the state-of-the-art 2PC framework of ABY. Moreover, we extend our techniques to multi-input multiplication gates without inflating the online communication, i.e., it remains independent of the fan-in. Along the way, we construct efficient protocols for several primitives such as scalar product, matrix multiplication, comparison, maxpool, and equality testing. The online communication of our scalar product is two ring elements irrespective of the vector dimension, which is a feature achieved for the first time in the 2PC literature.

The practicality of our new set of protocols is showcased with four applications: i) AES S-box, ii) Circuit-based Private Set Intersection, iii) Biometric Matching, and iv) Privacy-preserving Machine Learning (PPML). Most notably, for PPML, we implement and benchmark training and inference of Logistic Regression and Neural Networks over LAN and WAN networks. For training, we improve online runtime (both for LAN and WAN) over SecureML (Mohassel et al., IEEE S&P '17) in the range 1.5x–6.1x, while for inference, the improvements are in the range of 2.5x–754.3x.

"It's the Company, the Government, You and I": User Perceptions of Responsibility for Smart Home Privacy and Security

Julie Haney, National Institute of Standards and Technology; Yasemin Acar, National Institute of Standards and Technology and Leibniz University Hannover; Susanne Furman, National Institute of Standards and Technology

Available Media

Smart home technology may expose adopters to increased risk to network security, information privacy, and physical safety. However, users may lack understanding of the privacy and security implications. Additionally, manufacturers often fail to provide transparency and configuration options, and few government-provided guidelines have yet to be widely adopted. This results in little meaningful mitigation action to protect users’ security and privacy. But how can this situation be improved and by whom? It is currently unclear where perceived responsibility for smart home privacy and security lies. To address this gap, we conducted an in-depth interview study of 40 smart home adopters to explore where they assign responsibility and how their perceptions of responsibility relate to their concerns and mitigations. Results reveal that participants’ perceptions of responsibility reflect an interdependent relationship between consumers, manufacturers, and third parties such as the government. However, perceived breakdowns and gaps in the relationship result in users being concerned about their security and privacy. Based on our results, we suggest ways in which these actors can address gaps and better support each other.

Identifying Harmful Media in End-to-End Encrypted Communication: Efficient Private Membership Computation

Anunay Kulshrestha and Jonathan Mayer, Princeton University

Available Media

End-to-end encryption (E2EE) poses a challenge for automated detection of harmful media, such as child sexual abuse material and extremist content. The predominant approach at present, perceptual hash matching, is not viable because in E2EE a communications service cannot access user content.

In this work, we explore the technical feasibility of privacy-preserving perceptual hash matching for E2EE services. We begin by formalizing the problem space and identifying fundamental limitations for protocols. Next, we evaluate the predictive performance of common perceptual hash functions to understand privacy risks to E2EE users and contextualize errors associated with the protocols we design.

Our primary contribution is a set of constructions for privacy-preserving perceptual hash matching. We design and evaluate client-side constructions for scenarios where disclosing the set of harmful hashes is acceptable. We then design and evaluate interactive protocols that optionally protect the hash set and do not disclose matches to users. The constructions that we propose are practical for deployment on mobile devices and introduce a limited additional risk of false negatives.

Fragment and Forge: Breaking Wi-Fi Through Frame Aggregation and Fragmentation

Mathy Vanhoef, New York University Abu Dhabi

Available Media

In this paper, we present three design flaws in the 802.11 standard that underpins Wi-Fi. One design flaw is in the frame aggregation functionality, and another two are in the frame fragmentation functionality. These design flaws enable an adversary to forge encrypted frames in various ways, which in turn enables exfiltration of sensitive data. We also discovered common implementation flaws related to aggregation and fragmentation, which further worsen the impact of our attacks. Our results affect all protected Wi-Fi networks, ranging from WEP all the way to WPA3, meaning the discovered flaws have been part of Wi-Fi since its release in 1997. In our experiments, all devices were vulnerable to one or more of our attacks, confirming that all Wi-Fi devices are likely affected. Finally, we present a tool to test whether devices are affected by any of the vulnerabilities, and we discuss countermeasures to prevent our attacks.

Why Older Adults (Don't) Use Password Managers

Hirak Ray, Flynn Wolf, and Ravi Kuber, University of Maryland, Baltimore County; Adam J. Aviv, The George Washington University

Available Media

Password managers (PMs) are considered highly effective tools for increasing security, and a recent study by Pearman et al. (SOUPS '19) highlighted the motivations and barriers to adopting PMs. We expand these findings by replicating Pearman et al.'s protocol and interview instrument applied to a sample of strictly older adults (>60 years of age), as the prior work focused on a predominantly younger cohort. We conducted n=26 semi-structured interviews with PM users, built-in browser/operating system PM users, and non-PM users. The average participant age was 70.4 years. Using the same codebook from Pearman et al., we showcase differences and similarities in PM adoption between the samples, including fears of a single point of failure and the importance of having control over one's private information. Meanwhile, older adults were found to have higher mistrust of cloud storage of passwords and cross-device synchronization. We also highlight PM adoption motivators for older adults, including the power of recommendations from family members and the importance of education and outreach to improve familiarity.

Charger-Surfing: Exploiting a Power Line Side-Channel for Smartphone Information Leakage

Patrick Cronin, Xing Gao, and Chengmo Yang, University of Delaware; Haining Wang, Virginia Tech

Available Media

Touchscreen-based mobile devices such as smartphones and tablets are used daily by billions of people for productivity and entertainment. This paper uncovers a new security threat posed by a side-channel leakage through the power line, called Charger-Surfing, which targets these touchscreen devices. We reveal that while a smartphone is charging, its power trace, which can be measured via the USB charging cable, leaks information about the dynamic content on its screen. This information can be utilized to determine the location on the touchscreen where an animation is played by the mobile OS to indicate, for instance, that a button press has been registered. We develop a portable, low cost power trace collection system for the side-channel construction. This leakage channel is thoroughly evaluated on various smartphones running Android or iOS, equipped with the two most commonly used screen technologies (LCD and OLED). We validate the effectiveness of Charger-Surfing by conducting a case study on a passcode unlock screen. Our experiments show that an adversary can exploit Charger-Surfing across a wide range of smartphone models to achieve an average accuracy of 98.7% for single button inference, and an average of 95.1% or 92.8% accuracy on the first attempt when cracking a victim's 4-digit or 6-digit passcode, respectively. The inference accuracy increases to 99.3% (4-digit) or 96.9% (6-digit) within five trials. We further demonstrate the robustness of Charger-Surfing in realistic settings and discuss countermeasures against it.

Express: Lowering the Cost of Metadata-hiding Communication with Cryptographic Privacy

Saba Eskandarian, Stanford University; Henry Corrigan-Gibbs, MIT CSAIL; Matei Zaharia and Dan Boneh, Stanford University

Available Media

Existing systems for metadata-hiding messaging that provide cryptographic privacy properties have either high communication costs, high computation costs, or both. In this paper, we introduce Express, a metadata-hiding communication system that significantly reduces both communication and computation costs. Express is a two-server system that provides cryptographic security against an arbitrary number of malicious clients and one malicious server. In terms of communication, Express only incurs a constant-factor overhead per message sent regardless of the number of users, whereas previous cryptographically-secure systems Pung and Riposte had communication costs proportional to roughly the square root of the number of users. In terms of computation, Express only uses symmetric key cryptographic primitives and makes both practical and asymptotic improvements on protocols employed by prior work. These improvements enable Express to increase message throughput, reduce latency, and consume over 100x less bandwidth than Pung and Riposte, dropping the end to end cost of running a realistic whistleblowing application by 6x.

Finding Bugs Using Your Own Code: Detecting Functionally-similar yet Inconsistent Code

Mansour Ahmadi, Reza Mirzazade Farkhani, Ryan Williams, and Long Lu, Northeastern University

Available Media

Probabilistic classification has shown success in detecting known types of software bugs. However, the works following this approach tend to require a large amount of specimens to train their models. We present a new machine learning-based bug detection technique that does not require any external code or samples for training. Instead, our technique learns from the very codebase on which the bug detection is performed, and therefore, obviates the need for the cumbersome task of gathering and cleansing training samples (e.g., buggy code of certain kinds). The key idea behind our technique is a novel two-step clustering process applied on a given codebase. This clustering process identifies code snippets in a project that are functionally-similar yet appear in inconsistent forms. Such inconsistencies are found to cause a wide range of bugs, anything from missing checks to unsafe type conversions. Unlike previous works, our technique is generic and not specific to one type of inconsistency or bug. We prototyped our technique and evaluated it using 5 popular open source software, including QEMU and OpenSSL. With a minimal amount of manual analysis on the inconsistencies detected by our tool, we discovered 22 new unique bugs, despite the fact that many of these programs are constantly undergoing bug scans and new bugs in them are believed to be rare.

Assessing Browser-level Defense against IDN-based Phishing

Hang Hu, Virginia Tech; Steve T.K. Jan, University of Illinois at Urbana-Champaign/Virginia Tech; Yang Wang and Gang Wang, University of Illinois at Urbana-Champaign

Available Media

Internationalized Domain Names (IDN) allow people around the world to use their native languages for domain names. Unfortunately, because characters from different languages can look like each other, IDNs have been used to impersonate popular domains for phishing, i.e., IDN homograph. To mitigate this risk, browsers have recently introduced defense policies. However, it is not yet well understood regarding how these policies are constructed and how effective they are.

In this paper, we present an empirical analysis of browser IDN policies, and a user study to understand user perception of homograph IDNs. We focus on 5 major web browsers (Chrome, Firefox, Safari, Microsoft Edge, and IE), and 2 mobile browsers (Android Chrome and iOS Safari) and analyze their current and historical versions released from January 2015 to April 2020. By treating each browser instance as a black box, we develop an automated tool to test the browser policies with over 9,000 testing cases. We find that all the tested browsers have weaknesses in their rules, leaving opportunities for attackers to craft homograph IDNs to impersonate target websites while bypassing browsers' defense. In addition, a browser's defense is not always getting stricter over time. For example, we observe Chrome has reversed its rules to re-allow certain homograph IDNs. Finally, our user study shows that the homograph IDNs that can bypass browsers' defense are still highly deceptive to users. Overall, our results suggest the need to improve the current defense against IDN homograph.

An Investigation of the Android Kernel Patch Ecosystem

Zheng Zhang, UC RIverside; Hang Zhang and Zhiyun Qian, UC Riverside; Billy Lau, Google Inc.

Available Media

open-source projects are often reused in commercial software. Android, a popular mobile operating system, is a great example that has fostered an ecosystem of open-source kernels. However, due to the largely decentralized and fragmented nature, patch propagation from the upstream through multiple layers to end devices can be severely delayed. In this paper, we undertake a thorough investigation of the patch propagation behaviors in the entire Android kernel ecosystem. By analyzing the CVEs and patches available since the inception of the Android security bulletin, as well as open-source upstream kernels (e.g., Linux and AOSP) and hundreds of mostly binary OEM kernels (e.g., by Samsung), we find that the delays of patches are largely due to the current patching practices and the lack of knowledge about which upstream commits being security-critical. Unfortunately, we find that the gap between the first publicly available patch and its final application on end devices is often months and even years, leaving a large attack window for experienced hackers to exploit the unpatched vulnerabilities

Raccoon Attack: Finding and Exploiting Most-Significant-Bit-Oracles in TLS-DH(E)

Robert Merget and Marcus Brinkmann, Ruhr University Bochum; Nimrod Aviram, School of Computer Science, Tel Aviv University; Juraj Somorovsky, Paderborn University; Johannes Mittmann, Bundesamt für Sicherheit in der Informationstechnik (BSI), Germany; Jörg Schwenk, Ruhr University Bochum

Available Media

Diffie-Hellman key exchange (DHKE) is a widely adopted method for exchanging cryptographic key material in real-world protocols like TLS-DH(E). Past attacks on TLS-DH(E) focused on weak parameter choices or missing parameter validation. The confidentiality of the computed DH share, the premaster secret, was never questioned; DHKE is used as a generic method to avoid the security pitfalls of TLS-RSA.

We show that due to a subtle issue in the key derivation of all TLS-DH(E) cipher suites in versions up to TLS 1.2, the premaster secret of a TLS-DH(E) session may, under certain circumstances, be leaked to an adversary. Our main result is a novel side-channel attack, named Raccoon attack, which exploits a timing vulnerability in TLS-DH(E), leaking the most significant bits of the shared Diffie-Hellman secret. The root cause for this side channel is that the TLS standard encourages non-constant-time processing of the DH secret. If the server reuses ephemeral keys, this side channel may allow an attacker to recover the premaster secret by solving an instance of the Hidden Number Problem. The Raccoon attack takes advantage of uncommon DH modulus sizes, which depend on the properties of the used hash functions. We describe a fully feasible remote attack against an otherwise-secure TLS configuration: OpenSSL with a 1032-bit DH modulus. Fortunately, such moduli are not commonly used on the Internet.

Furthermore, with our large-scale scans we have identified implementation-level issues in production-grade TLS implementations that allow for executing the same attack by directly observing the contents of server responses, without resorting to timing measurements.

"Shhh...be quiet!" Reducing the Unwanted Interruptions of Notification Permission Prompts on Chrome

Igor Bilogrevic, Balazs Engedy, Judson L. Porter III, Nina Taft, Kamila Hasanbega, Andrew Paseltiner, Hwi Kyoung Lee, Edward Jung, Meggyn Watkins, PJ McLachlan, and Jason James, Google

Available Media

Push notifications can be a very useful feature. On web browsers, they allow users to receive timely updates even if the website is not currently open. On Chrome, the feature has become extremely popular since its inception in 2015, but it is also the least likely to be accepted by users. Chrome telemetry shows that, although 74% of all permission prompts are about notifications, they are also the least likely to be granted with only a 12% grant rate on desktop and 23% grant rate on Android. In order to preserve its utility for websites and to reduce unwanted interruptions and potential abuses for the users, we designed and tested both a novel UI and its activation mechanism for notification permission prompts in Chrome.

To understand how users interact with such prompts, we conducted two large-scale studies with more than 300 million users in the wild. The first study showed that most of them block or ignore the prompts across all types of websites, which prompted us to rethink its UI and activation logic. The second study, based on an A/B test using behavioral data from more than 40 million users who interacted with more than 100 million prompts on more than 70 thousand websites, show that the new prompt is very effective at reducing unwanted interruptions and their frequency (up to 30% fewer unnecessary actions on the prompts), with a minimal impact (less than 5%) on the grant rates, across all types of users and websites. We achieve these results thanks to a novel adaptive activation mechanism coupled with a block list of interrupting websites, which is derived from crowd-sourced telemetry from Chrome clients.

Why Eve and Mallory Still Love Android: Revisiting TLS (In)Security in Android Applications

Marten Oltrogge, CISPA Helmholtz Center for Information Security; Nicolas Huaman, Sabrina Amft, and Yasemin Acar, Leibniz University Hannover; Michael Backes, CISPA Helmholtz Center for Information Security; Sascha Fahl, Leibniz University Hannover

Available Media

Android applications have a long history of being vulnerable to man-in-the-middle attacks due to insecure custom TLS certificate validation implementations. To resolve this, Google deployed the Network Security Configuration (NSC), a configuration-based approach to increase custom certificate validation logic security, and implemented safeguards in Google Play to block insecure applications.

In this paper, we perform a large-scale in-depth investigation of the effectiveness of these countermeasures: First, we investigate the security of 99,21 NSC settings files in 1,335,322 Google Play apps using static code and manual analysis techniques. We find that 88.87% of the apps using custom NSC settings downgrade security compared to the default settings, and only 0.67% implement certificate pinning. Second, we penetrate Google Play's protection mechanisms by trying to publish apps that are vulnerable to man-in-the-middle attacks. In contrast to official announcements by Google, we found that Play does not effectively block vulnerable apps. Finally, we performed a static code analysis study of 15,000 apps and find that 5,511 recently published apps still contain vulnerable certificate validation code.

Overall, we attribute most of the problems we find to insufficient support for developers, missing clarification of security risks in official documentation, and inadequate security checks for vulnerable applications in Google Play.

Poseidon: A New Hash Function for Zero-Knowledge Proof Systems

Lorenzo Grassi, Radboud University Nijmegen; Dmitry Khovratovich, Ethereum Foundation and Dusk Network; Christian Rechberger, IAIK, Graz University of Technology; Arnab Roy, University of Klagenfurt; Markus Schofnegger, IAIK, Graz University of Technology

Available Media

The area of practical computational integrity proof systems, like SNARKs, STARKs, Bulletproofs, is seeing a very dynamic development with several constructions having appeared recently with improved properties and relaxed setup requirements. Many use cases of such systems involve, often as their most expensive part, proving the knowledge of a preimage under a certain cryptographic hash function, which is expressed as a circuit over a large prime field. A notable example is a zero-knowledge proof of coin ownership in the Zcash cryptocurrency, where the inadequacy of the SHA-256 hash function for such a circuit caused a huge computational penalty.

In this paper, we present a modular framework and concrete instances of cryptographic hash functions which work natively with GF(p) objects. Our hash function Poseidon uses up to 8x fewer constraints per message bit than Pedersen Hash.

Our construction is not only expressed compactly as a circuit, but can also be tailored for various proof systems using specially crafted polynomials, thus bringing another boost in performance. We demonstrate this by implementing a 1-out-of-a-billion membership proof with Merkle trees in less than a second by using Bulletproofs.

ATLAS: A Sequence-based Learning Approach for Attack Investigation

Abdulellah Alsaheel and Yuhong Nan, Purdue University; Shiqing Ma, Rutgers University; Le Yu, Gregory Walkup, Z. Berkay Celik, Xiangyu Zhang, and Dongyan Xu, Purdue University

Available Media

Advanced Persistent Threats (APT) involve multiple attack steps over a long period, and their investigation requires analysis of myriad logs to identify their attack steps, which are a set of activities undertaken to run an APT attack. However, on a daily basis in an enterprise, intrusion detection systems generate many threat alerts of suspicious events (attack symptoms). Cyber analysts must investigate such events to determine whether an event is a part of an attack. With many alerts to investigate, cyber analysts often end up with alert fatigue, causing them to ignore a large number of alerts and miss true attack events. In this paper, we present ATLAS, a framework that constructs an end-to-end attack story from off-the-shelf audit logs. Our key observation is that different attacks may share similar abstract attack strategies, regardless of the vulnerabilities exploited and payloads executed. ATLAS leverages a novel combination of causality analysis, natural language processing, and machine learning techniques to build a sequence-based model, which establishes key patterns of attack and non-attack behaviors from a causal graph. At inference time, given a threat alert event, an attack symptom node in a causal graph is identified. ATLAS then constructs a set of candidate sequences associated with the symptom node, uses the sequence-based model to identify nodes in a sequence that contribute to the attack, and unifies the identified attack nodes to construct an attack story. We evaluated ATLAS with ten real-world APT attacks executed in a realistic virtual environment. ATLAS recovers attack steps and construct attack stories with an average of 91.06% precision, 97.29% recall, and 93.76% F1-score. Through this effort, we provide security investigators with a new means of identifying the attack events that make up the attack story.

CADE: Detecting and Explaining Concept Drift Samples for Security Applications

Limin Yang, University of Illinois at Urbana-Champaign; Wenbo Guo, The Pennsylvania State University; Qingying Hao, University of Illinois at Urbana-Champaign; Arridhana Ciptadi and Ali Ahmadzadeh, Blue Hexagon; Xinyu Xing, The Pennsylvania State University; Gang Wang, University of Illinois at Urbana-Champaign

Available Media

Concept drift poses a critical challenge to deploy machine learning models to solve practical security problems. Due to the dynamic behavior changes of attackers (and/or the benign counterparts), the testing data distribution is often shifting from the original training data over time, causing major failures to the deployed model.

To combat concept drift, we present a novel system CADE aiming to 1) detect drifting samples that deviate from existing classes, and 2) provide explanations to reason the detected drift. Unlike traditional approaches (that require a large number of new labels to determine concept drift statistically), we aim to identify individual drifting samples as they arrive. Recognizing the challenges introduced by the high-dimensional outlier space, we propose to map the data samples into a low-dimensional space and automatically learn a distance function to measure the dissimilarity between samples. Using contrastive learning, we can take full advantage of existing labels in the training dataset to learn how to compare and contrast pairs of samples. To reason the meaning of the detected drift, we develop a distance-based explanation method. We show that explaining "distance" is much more effective than traditional methods that focus on explaining a "decision boundary" in this problem context. We evaluate CADE with two case studies: Android malware classification and network intrusion detection. We further work with a security company to test CADE on its malware database. Our results show that CADE can effectively detect drifting samples and provide semantically meaningful explanations.

Adversarial Policy Training against Deep Reinforcement Learning

Xian Wu, Wenbo Guo, Hua Wei, and Xinyu Xing, The Pennsylvania State University

Available Media

Reinforcement learning is a set of goal-oriented learning algorithms, through which an agent could learn to behave in an environment, by performing certain actions and observing the reward which it gets from those actions. Integrated with deep neural networks, it becomes deep reinforcement learning, a new paradigm of learning methods. Recently, deep reinforcement learning demonstrates great potential in many applications such as playing video games, mastering GO competition, and even performing autonomous pilot. However, coming together with these great successes is adversarial attacks, in which an adversary could force a well-trained agent to behave abnormally by tampering the input to the agent's policy network or training an adversarial agent to exploit the weakness of the victim.

In this work, we show existing adversarial attacks against reinforcement learning either work in an impractical setting or perform less effectively when being launched in a two-agent zero-sum game. Motivated by this, we propose a new method to train adversarial agents. Technically speaking, our approach extends the Proximal Policy Optimization (PPO) algorithm and then utilizes an explainable AI technique to guide an attacker to train an adversarial agent. In comparison with the adversarial agent trained by the state-of-the-art technique, we show that our adversarial agent exhibits a much stronger capability in exploiting the weakness of victim agents. Besides, we demonstrate that our adversarial attack introduces less variation in the training process and exhibits less sensitivity to the selection of initial states.

Security Analysis of the Democracy Live Online Voting System

Michael Specter, MIT; J. Alex Halderman, University of Michigan

Available Media

Democracy Live's OmniBallot platform is a web-based system for blank ballot delivery, ballot marking, and online voting. Three states—Delaware, West Virginia, and New Jersey—recently announced that they would allow certain voters to cast votes online using OmniBallot, but, despite the well established risks of Internet voting, the system has never before undergone a public, independent security review.

We reverse engineered the client-side portion of OmniBallot, as used in Delaware, in order to detail the system's operation and analyze its security. We find that OmniBallot uses a simplistic approach to Internet voting that is vulnerable to vote manipulation by malware on the voter's device and by insiders or other attackers who can compromise Democracy Live, Amazon, Google, or Cloudflare. In addition, Democracy Live, which had no privacy policy prior to our work, receives sensitive personally identifiable information—including the voter's identity, ballot selections, and browser fingerprint—that could be used to target political ads or disinformation campaigns. Even when OmniBallot is used to mark ballots that will be printed and returned in the mail, the software sends the voter's identity and ballot choices to Democracy Live, an unnecessary risk that jeopardizes the secret ballot.

We recommend changes to make the platform safer for ballot delivery and marking. However, we conclude that using OmniBallot for electronic ballot return represents a severe risk to election security and could allow attackers to alter election results without detection. In response to our findings, Delaware and New Jersey have halted use of OmniBallot, but it remains available in other jurisdictions, as do similar online voting methods that are likely to face the same serious risks.

PriSEC: A Privacy Settings Enforcement Controller

Rishabh Khandelwal and Thomas Linden, University of Wisconsin–Madison; Hamza Harkous, Google Inc.; Kassem Fawaz, University of Wisconsin–Madison

Available Media

Online privacy settings aim to provide users with control over their data. However, in their current state, they suffer from usability and reachability issues. The recent push towards automatically analyzing privacy notices has not accompanied a similar effort for the more critical case of privacy settings. So far, the best efforts targeted the special case of making opt-out pages more reachable. In this work, we present PriSEC, a Privacy Settings Enforcement Controller that leverages machine learning techniques towards a new paradigm for automatically enforcing web privacy controls. PriSEC goes beyond finding the webpages with privacy settings to discovering fine-grained options, presenting them in a searchable, centralized interface, and – most importantly – enforcing them on demand with minimal user intervention. We overcome the open nature of web development through novel algorithms that leverage the invariant behavior and rendering of webpages. We evaluate the performance of PriSEC to find that it is able to precisely annotate the privacy controls for 94.3% of the control pages in our evaluation set. To demonstrate the usability of PriSEC, we conduct a user study with 148 participants. We show an average reduction of 3.75x in the time taken to adjust privacy settings as compared to the baseline system.

Ripple: A Programmable, Decentralized Link-Flooding Defense Against Adaptive Adversaries

Jiarong Xing, Wenqing Wu, and Ang Chen, Rice University

Available Media

Link-flooding attacks (LFAs) aim to cut off an edge network from the Internet by congesting core network links. Such an adversary can further change the attack strategy dynamically (e.g., target links, traffic types) to evade mitigation and launch persistent attacks.

We develop Ripple, a programmable, decentralized link-flooding defense against dynamic adversaries. Ripple can be programmed using a declarative policy language to emulate a range of state-of-the-art SDN defenses, but it enables the defenses to shapeshift on their own without a central controller. To achieve this, Ripple develops new defense primitives in programmable switches, which are configured by the policy language to implement a desired defense. The Ripple compiler generates a distributed set of switch programs to extract a panoramic view of attack signals and act against them in a fully decentralized manner, enabling successive waves of defenses against fast-changing attacks. We show that Ripple has low overheads, and that it can effectively recover traffic throughput where SDN-based defenses fail.

Blinder: Partition-Oblivious Hierarchical Scheduling

Man-Ki Yoon, Mengqi Liu, Hao Chen, Jung-Eun Kim, and Zhong Shao, Yale University

Available Media

Hierarchical scheduling enables modular reasoning about the temporal behavior of individual applications by partitioning CPU time and thus isolating potential misbehavior. However, conventional time-partitioning mechanisms fail to achieve strong temporal isolation from a security perspective; variations in the executions of partitions can be perceived by others, which enables an algorithmic covert timing-channel between partitions that are completely isolated from each other in the utilization of time. Thus, we present a run-time algorithm that makes partitions oblivious to others' varying behaviors even when an adversary has full control over their timings. It enables the use of dynamic time-partitioning mechanisms that provide improved responsiveness, while guaranteeing the algorithmic-level non-interference that static approaches would achieve. From an implementation on an open-source operating system, we evaluate the costs of the solution in terms of the responsiveness as well as scheduling overhead.

Nyx: Greybox Hypervisor Fuzzing using Fast Snapshots and Affine Types

Sergej Schumilo, Cornelius Aschermann, Ali Abbasi, Simon Wör­ner, and Thorsten Holz, Ruhr-Universität Bochum

Available Media

A hypervisor (also know as virtual machine monitor, VMM) enforces the security boundaries between different virtual machines (VMs) running on the same physical machine. A malicious user who is able to run her own kernel on a cloud VM can interact with a large variety of attack surfaces. Exploiting a software fault in any of these surfaces leads to full access to all other VMs that are co-located on the same host. Hence, the efficient detection of hypervisor vulnerabilities is crucial for the security of the modern cloud infrastructure. Recent work showed that blind fuzzing is the most efficient approach to identify security issues in hypervisors, mainly due to an outstandingly high test throughput.

In this paper we present the design and implementation of NYX, a highly optimized, coverage-guided hypervisor fuzzer. We show how a fast snapshot restoration mechanism that allows us to reload the system under test thousands of times per second is key to performance. Furthermore, we introduce a novel mutation engine based on custom bytecode programs, encoded as directed acyclic graphs (DAG), and affine types, that enables the required flexibility to express complex interactions. Our evaluation shows that, while NYX has a lower throughput than the state-of-the-art hypervisor fuzzer, it performs competitively on simple targets: NYX typically requires only a few minutes longer to achieve the same test coverage. On complex devices, however, our approach is able to significantly outperform existing works. Moreover, we are able to uncover substantially more bugs: in total, we uncovered 44 new bugs with 22 CVEs requested. Our results demonstrate that coverage guidance is highly valuable, even if a blind fuzzer can be significantly faster.

Saphire: Sandboxing PHP Applications with Tailored System Call Allowlists

Alexander Bulekov, Rasoul Jahanshahi, and Manuel Egele, Boston University

Available Media

Interpreted languages, such as PHP, power a host of platform-independent applications, including websites, instant messengers, video games, and development environments. With the flourishing popularity of these applications, attackers have honed in on finding and exploiting vulnerabilities in interpreted code. Generally, all parts of an interpreted application execute with uniform and superfluous privileges, increasing the potential damage from an exploit. This lack of privilege-separation is in stark violation of the principle of least privilege(PoLP).

Despite 1,980 web app remote code execution (RCE) vulnerabilities discovered in 2018 alone [25], current defenses rely on incomplete detection of vulnerable code, or extensive collections of benign inputs. Considering the limitations of bug-finding systems, the violation of the PoLP exposes systems to unnecessarily-high risks.

In this paper, we identify the current challenges with applying the PoLP to interpreted PHP applications, and propose a novel generic approach for automatically deriving system-call policies for individual interpreted programs. This effectively reduces the attack surface (i.e., set of system-calls) an exploit can leverage to the system-calls the script needs to perform its benign functionality.

We name our implementation of this approach, Saphire, and thoroughly evaluate the prototype with respect to its security and performance characteristics. Our evaluation on 21 known vulnerable web apps and plugins shows that Saphire successfully prevents RCE exploits, and is able to do so with negligible performance overhead (i.e., <2% in the worst case) for real-world web apps. Saphire performs its service without causing false positives over automatically and manually generated benign traffic to each web app.

Smart Contract Vulnerabilities: Vulnerable Does Not Imply Exploited

Daniel Perez and Benjamin Livshits, Imperial College London

Available Media

In recent years, we have seen a great deal of both academic and practical interest in the topic of vulnerabilities in smart contracts, particularly those developed for the Ethereum blockchain. While most of the work has focused on detecting vulnerable contracts, in this paper, we focus on finding how many of these vulnerable contracts have actually been exploited. We survey the 23,327 vulnerable contracts reported by six recent academic projects and find that, despite the amounts at stake, only 1.98% of them have been exploited since deployment. This corresponds to at most 8,487 ETH (~1.7 million USD 1 ), or only 0.27% of the 3 million ETH (600 million USD) at stake. We explain these results by demonstrating that the funds are very concentrated in a small number of contracts which are not exploitable in practice.

KeyForge: Non-Attributable Email from Forward-Forgeable Signatures

Michael A. Specter, MIT; Sunoo Park, MIT & Harvard; Matthew Green, Johns Hopkins University

Available Media

Email breaches are commonplace, and they expose a wealth of personal, business, and political data whose release may have devastating consequences. Such damage is compounded by email's strong attributability: today, any attacker who gains access to your email can easily prove to others that the stolen messages are authentic, a property arising from a necessary anti-spam/anti-spoofing protocol called DKIM. This greatly increases attackers' capacity to do harm by selling the stolen information to third parties, blackmail, or publicly releasing intimate or sensitive messages — all with built-in cryptographic proof of authenticity.

This paper introduces non-attributable email, which guarantees that a wide class of adversaries are unable to convince discerning third parties of the authenticity of stolen emails. We formally define non-attributability, and present two system proposals — KeyForge and TimeForge — that provably achieve non-attributability while maintaining the important spam/spoofing protections currently provided by DKIM. Finally, we implement both and evaluate their speed and bandwidth performance overhead. We demonstrate the practicality of KeyForge, which achieves reasonable verification overhead while signing faster and requiring 42% less bandwidth per message than DKIM's RSA-2048.

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion

Roei Schuster, Tel Aviv University and Cornell Tech; Congzheng Song, Cornell University; Eran Tromer, Tel Aviv University and Columbia University; Vitaly Shmatikov, Cornell Tech

Distinguished Paper Award Winner

Award: 
Distinguished Paper Award
Available Media

Code autocompletion is an integral feature of modern code editors and IDEs. The latest generation of autocompleters uses neural language models, trained on public open-source code repositories, to suggest likely (not just statically feasible) completions given the current context.

We demonstrate that neural code autocompleters are vulnerable to poisoning attacks. By adding a few specially-crafted files to the autocompleter's training corpus (data poisoning), or else by directly fine-tuning the autocompleter on these files (model poisoning), the attacker can influence its suggestions for attacker-chosen contexts. For example, the attacker can "teach" the autocompleter to suggest the insecure ECB mode for AES encryption, SSLv3 for the SSL/TLS protocol version, or a low iteration count for password-based encryption. Moreover, we show that these attacks can be targeted: an autocompleter poisoned by a targeted attack is much more likely to suggest the insecure completion for files from a specific repo or specific developer.

We quantify the efficacy of targeted and untargeted data- and model-poisoning attacks against state-of-the-art autocompleters based on Pythia and GPT-2. We then evaluate existing defenses against poisoning attacks, and show that they are largely ineffective.

PEARL: Plausibly Deniable Flash Translation Layer using WOM coding

Chen Chen, Anrin Chakraborti, and Radu Sion, Stony Brook University

Available Media

When adversaries are powerful enough to coerce users to reveal encryption keys, encryption alone becomes insufficient for data protection. Plausible deniability (PD) mechanisms resolve this by enabling users to hide the mere existence of sensitive data, often by providing plausible "cover texts" or "public data volumes" hosted on the same device.

Unfortunately, with the increasing prevalence of (NAND) flash as a high-performance cost-effective storage medium, PD becomes even more challenging in the presence of realistic adversaries who can usually access a device at multiple points in time ("multi-snapshot"). This is because read/write operations to flash do not result in intuitive corresponding changes to the underlying device state. The problem is further compounded by the fact that this behavior is mostly proprietary. For example, in a majority of commercially-available flash devices, an issued delete or overwrite operation from the upper layers almost certainly won't result in an actual immediate erase of the underlying flash cells.

To address these challenges, we designed a new class of write-once memory (WOM) codes to store hidden bits in the same physical locations as other public bits. This is made possible by the inherent nature of NAND flash and the possibility of issuing multiple writes to target cells that have not previous been written to in existing pages.

We designed PEARL, a general-purpose Flash Translation Layer (FTL) that allows users to plausibly deniably store hidden data in NAND flash devices. We implemented and evaluated PEARL on a widely used simulator FlashSim. PEARL performs well on real-world workloads, comparably to non-PD baselines. PEARL is the first system that achieves strong plausible deniability for NAND flash devices, secure against realistic multi-snapshot adversaries.

SHARD: Fine-Grained Kernel Specialization with Context-Aware Hardening

Muhammad Abubakar, Adil Ahmad, Pedro Fonseca, and Dongyan Xu, Purdue University

Available Media

With growing hardware complexity and ever-evolving user requirements, the kernel is increasingly bloated which increases its attack surface. Despite its large size, for specific applications and workloads, only a small subset of the kernel code is actually required. Kernel specialization approaches exploit this observation to either harden the kernel or restrict access to its code (debloating) on a per-application basis. However, existing approaches suffer from coarse specialization granularity and lack strict enforcement which limits their effectiveness.

This paper presents SHARD, a practical framework to enforce fine-grain kernel specialization. SHARD specializes at both the application and system call levels to significantly restrict the kernel code exposed to attackers. Furthermore, SHARD introduces context-aware hardening to dynamically enable code hardening during suspicious execution contexts. SHARD implements an instance of a context-aware hardening scheme using control-flow integrity (CFI), which provides near-native performance for non-hardened executions and strong security guarantees. Our analysis of the kernel attack surface reduction with SHARD as well as concrete attacks shows that SHARD exposes 181× less kernel code than the native kernel, an order of magnitude better than existing work, and prevents 90% of the evaluated attacks. Our evaluation shows that the average performance overhead ofSHARD on real-world applications is moderate — 10% to 36% on NGINX, 3% to 10% on Redis, and 0% to 2.7% on the SPEC CPU 2006 benchmarks.

Preventing Use-After-Free Attacks with Fast Forward Allocation

Brian Wickman, GTRI; Hong Hu, PennState; Insu Yun, Daehee Jang, and JungWon Lim, GeorgiaTech; Sanidhya Kashyap, EPFL; Taesoo Kim, GeorgiaTech

Available Media

Memory-unsafe languages are widely used to implement critical systems like kernels and browsers, leading to thousands of memory safety issues every year. A use-after-free bug is a temporal memory error where the program accidentally visits a freed memory location. Recent studies show that use-after-free is one of the most exploited memory vulnerabilities. Unfortunately, previous efforts to mitigate use-after-free bugs are not widely deployed in real-world programs due to either inadequate accuracy or high performance overhead.

In this paper, we propose to resurrect the idea of one-time allocation (OTA) and provide a practical implementation with efficient execution and moderate memory overhead. With one-time allocation, the memory manager always returns a distinct memory address for each request. Since memory locations are not reused, attackers cannot reclaim freed objects, and thus cannot exploit use-after-free bugs. We utilize two techniques to render OTA practical: batch page management and the fusion of bump-pointer and fixed-size bins memory allocation styles. Batch page management helps reduce the number of system calls which negatively impact performance, while blending the two allocation methods mitigates the memory overhead and fragmentation issues. We implemented a prototype, called FFmalloc, to demonstrate our techniques. We evaluated FFmalloc on widely used benchmarks and real-world large programs. FFmalloc successfully blocked all tested use-after-free attacks while introducing moderate overhead. The results show that OTA can be a strong and practical solution to thwart use-after-free threats.

Forecasting Malware Capabilities From Cyber Attack Memory Images

Omar Alrawi, Moses Ike, Matthew Pruett, Ranjita Pai Kasturi, Srimanta Barua, Taleb Hirani, Brennan Hill, and Brendan Saltaformaggio, Georgia Institute of Technology

Available Media

The remediation of ongoing cyber attacks relies upon timely malware analysis, which aims to uncover malicious functionalities that have not yet executed. Unfortunately, this requires repeated context switching between different tools and incurs a high cognitive load on the analyst, slowing down the investigation and giving attackers an advantage. We present Forecast, a post-detection technique to enable incident responders to automatically predict capabilities which malware have staged for execution. Forecast is based on a probabilistic model that allows Forecast to discover capabilities and also weigh each capability according to its relative likelihood of execution (i.e., forecasts). Forecast leverages the execution context of the ongoing attack (from the malware's memory image) to guide a symbolic analysis of the malware's code. We performed extensive evaluations, with 6,727 real-world malware and futuristic attacks aiming to subvert Forecast, showing the accuracy and robustness in predicting malware capabilities.

Partitioning Oracle Attacks

Julia Len, Paul Grubbs, and Thomas Ristenpart, Cornell Tech

Available Media

In this paper we introduce partitioning oracles, a new class of decryption error oracles which, conceptually, take a ciphertext as input and output whether the decryption key belongs to some known subset of keys. Partitioning oracles can arise when encryption schemes are not committing with respect to their keys. We detail adaptive chosen ciphertext attacks that exploit partitioning oracles to efficiently recover passwords and de-anonymize anonymous communications. The attacks utilize efficient key multi-collision algorithms—a cryptanalytic goal that we define—against widely used authenticated encryption with associated data (AEAD) schemes, including AES-GCM, XSalsa20/Poly1305, and ChaCha20/Poly1305.

We build a practical partitioning oracle attack that quickly recovers passwords from Shadowsocks proxy servers. We also survey early implementations of the OPAQUE protocol for password-based key exchange, and show how many could be vulnerable to partitioning oracle attacks due to incorrectly using non-committing AEAD. Our results suggest that the community should standardize and make widely available key-committing AEAD to avoid such vulnerabilities.

Understanding and Detecting Disordered Error Handling with Precise Function Pairing

Qiushi Wu, Aditya Pakki, Navid Emamdoost, Stephen McCamant, and Kangjie Lu, University of Minnesota

Available Media

Software programs may frequently encounter various errors such as allocation failures. Error handling aims to gracefully deal with the errors to avoid security and reliability issues, thus it is prevalent and vital. However, because of its complexity and corner cases, error handling itself is often erroneous, and prior research has primarily focused on finding bugs in the handling part, such as incorrect error-code returning or missing error propagation.

In this paper, we propose and investigate a class of bugs in error-handling code from a different perspective. In particular, we find that programs often perform "cleanup" operations before the actual error handling, such as freeing memory or decreasing refcount. Critical bugs occur when these operations are performed (1) in an incorrect order, (2) redundantly, or (3) inadequately. We refer to such bugs as Disordered Error Handling (DiEH). Our investigation reveals that DiEH bugs are not only common but can also cause security problems such as privilege escalation, memory corruption, and denial-of-service. Based on the findings from the investigation, we then develop a system, HERO (Handling ERrors Orderly), to automatically detect DiEH. The core of HERO is a novel technique that precisely pairs both common and custom functions based on the unique error-handling structures, which allows us to infer expected cleanup functions. With HERO, we found 239 DiEH bugs in the Linux kernel, the FreeBSD kernel, and OpenSSL, which can cause security and reliability issues. The evaluation results show that DiEH is critical and widely exists in system software, and HERO is effective in detecting DiEH. We also believe that the precise function pairing is of independent interest in other research areas such as temporal-rule inference and race detection.

HAWatcher: Semantics-Aware Anomaly Detection for Appified Smart Homes

Chenglong Fu, Temple University; Qiang Zeng, University of South Carolina; Xiaojiang Du, Temple University

Available Media

As IoT devices are integrated via automation and coupled with the physical environment, anomalies in an appified smart home, whether due to attacks or device malfunctions, may lead to severe consequences. Prior works that utilize data mining techniques to detect anomalies suffer from high false alarm rates and missing many real anomalies. Our observation is that data mining-based approaches miss a large chunk of information about automation programs (also called smart apps) and devices. We propose Home Automation Watcher (HAWatcher), a semantics-aware anomaly detection system for appified smart homes. HAWatcher models a smart home's normal behaviors based on both event logs and semantics. Given a home, HAWatcher generates hypothetical correlations according to semantic information, such as apps, device types, relations and installation locations, and verifies them with event logs. The mined correlations are refined using correlations extracted from the installed smart apps. The refined correlations are used by a Shadow Execution engine to simulate the smart home's normal behaviors. During runtime, inconsistencies between devices' real-world states and simulated states are reported as anomalies. We evaluate our prototype on the SmartThings platform in four real-world testbeds and test it against totally 62 different anomaly cases. The results show that HAWatcher achieves high accuracy, significantly outperforming prior approaches.

Kalεido: Real-Time Privacy Control for Eye-Tracking Systems

Jingjie Li, Amrita Roy Chowdhury, Kassem Fawaz, and Younghyun Kim, University of Wisconsin–Madison

Available Media

Recent advances in sensing and computing technologies have led to the rise of eye-tracking platforms. Ranging from mobiles to high-end mixed reality headsets, a wide spectrum of interactive systems now employs eye-tracking. However, eye gaze data is a rich source of sensitive information that can reveal an individual's physiological and psychological traits. Prior approaches to protecting eye-tracking data suffer from two major drawbacks: they are either incompatible with the current eye-tracking ecosystem or provide no formal privacy guarantee. In this paper, we propose Kalεido, an eye-tracking data processing system that (1) provides a formal privacy guarantee, (2) integrates seamlessly with existing eye-tracking ecosystems, and (3) operates in real-time. Kalεido acts as an intermediary protection layer in the software stack of eye-tracking systems. We conduct a comprehensive user study and trace-based analysis to evaluate Kalεido. Our user study shows that the users enjoy a satisfactory level of utility from Kalεido. Additionally, we present empirical evidence of Kalεido's effectiveness in thwarting real-world attacks on eye-tracking data.

CURE: A Security Architecture with CUstomizable and Resilient Enclaves

Raad Bahmani, Ferdinand Brasser, Ghada Dessouky, Patrick Jauernig, Matthias Klimmek, Ahmad-Reza Sadeghi, and Emmanuel Stapf, Technische Universität Darmstadt

Available Media

Security architectures providing Trusted Execution Environments (TEEs) have been an appealing research subject for a wide range of computer systems, from low-end embedded devices to powerful cloud servers. The goal of these architectures is to protect sensitive services in isolated execution contexts, called enclaves. Unfortunately, existing TEE solutions suffer from significant design shortcomings. First, they follow a one-size-fits-all approach offering only a single enclave type, however, different services need flexible enclaves that can adjust to their demands. Second, they cannot efficiently support emerging applications (e.g., Machine Learning as a Service), which require secure channels to peripherals (e.g., accelerators), or the computational power of multiple cores. Third, their protection against cache sidechannel attacks is either an afterthought or impractical, i.e., no fine-grained mapping between cache resources and individual enclaves is provided.

In this work, we propose CURE, the first security architecture, which tackles these design challenges by providing different types of enclaves: (i) sub-space enclaves provide vertical isolation at all execution privilege levels, (ii) user-space enclaves provide isolated execution to unprivileged applications, and (iii) self-contained enclaves allow isolated execution environments that span multiple privilege levels. Moreover, CURE enables the exclusive assignment of system resources, e.g., peripherals, CPU cores, or cache resources to single enclaves. CURE requires minimal hardware changes while significantly improving the state of the art of hardware-assisted security architectures. We implemented CURE on a RISC-V-based SoC and thoroughly evaluated our prototype in terms of hardware and performance overhead. CURE imposes a geometric mean performance overhead of 15.33% on standard benchmarks.

A Stealthy Location Identification Attack Exploiting Carrier Aggregation in Cellular Networks

Nitya Lakshmanan and Nishant Budhdev, National University of Singapore; Min Suk Kang, KAIST; Mun Choon Chan and Jun Han, National University of Singapore

Available Media

We present the SLIC that achieves fine-grained location tracking (e.g., finding indoor walking paths) of targeted cellular user devices in a passive manner. The attack exploits a new side channel in modern cellular systems through a universally available feature called carrier aggregation (CA). CA enables higher cellular data rates by allowing multiple base stations on different carrier frequencies to concurrently transmit to a single user. We discover that a passive adversary can learn the side channel — namely, the number of actively transmitting base stations for any user of interest in the same macrocell. We then show that a time series of this side channel can constitute a highly unique fingerprint of a walking path, which can be used to identify the path taken by a target cellular user. We first demonstrate the collection of the new side channel and a small-scale path identification attack in an existing LTE-A network with up to three CA capability (i.e., three base stations can be coordinated for concurrent transmission), showing the feasibility of SLIC in the current cellular networks. We then emulate a near-future 5G network environment with up to nine CA capability in various multi-story buildings in our institution. SLIC shows up to 98.4% of path-identification accuracy among 100 different walking paths in a large office building. Through testing in various building structures, we confirm that the attack is effective in typical office building environments; e.g., corridors, open spaces. We present complete and partial countermeasures and discuss some practical cell deployment suggestions for 5G networks.

Detecting Kernel Refcount Bugs with Two-Dimensional Consistency Checking

Xin Tan, Yuan Zhang, and Xiyu Yang, Fudan University; Kangjie Lu, University of Minnesota; Min Yang, Fudan University

Available Media

In the Linux kernel, reference counting (refcount) has become a default mechanism that manages resource objects. A refcount of a tracked object is incremented when a new reference is assigned and decremented when a reference becomes invalid. Since the kernel manages a large number of shared resources, refcount is prevalent. Due to the inherent complexity of the kernel and resource sharing, developers often fail to properly update refcounts, leading to refcount bugs. Researchers have shown that refcount bugs can cause critical security impacts like privilege escalation; however, the detection of refcount bugs remains an open problem.

In this paper, we propose CID, a new mechanism that employs two-dimensional consistency checking to automatically detect refcount bugs. By checking if callers consistently use a refcount function, CID detects deviating cases as potential bugs, and by checking how a caller uses a refcount function, CID infers the condition-aware rules for the function to correspondingly operate the refcount, and thus a violating case is a potential bug. More importantly, CID's consistency checking does not require complicated semantic understanding, inter-procedural data-flow tracing, or refcount-operation reasoning. CID also features an automated mechanism that systematically identifies refcount fields and functions in the whole kernel. We implement CID and apply it to the Linux kernel. The tool found 44 new refcount bugs that may cause severe security issues, most of which have been confirmed by the maintainers.

SIGL: Securing Software Installations Through Deep Graph Learning

Xueyuan Han, Harvard University; Xiao Yu, NEC Laboratories America; Thomas Pasquier, University of Bristol; Ding Li, Peking University; Junghwan Rhee, NEC Laboratories America; James Mickens, Harvard University; Margo Seltzer, University of British Columbia; Haifeng Chen, NEC Laboratories America

Available Media

Many users implicitly assume that software can only be exploited after it is installed. However, recent supply-chain attacks demonstrate that application integrity must be ensured during installation itself. We introduce SIGL, a new tool for detecting malicious behavior during software installation. SIGL collects traces of system call activity, building a data provenance graph that it analyzes using a novel autoencoder architecture with a graph long short-term memory network (graph LSTM) for the encoder and a standard multilayer perceptron for the decoder. SIGL flags suspicious installations as well as the specific installation-time processes that are likely to be malicious. Using a test corpus of 625 malicious installers containing real-world malware, we demonstrate that SIGL has a detection accuracy of 96%, outperforming similar systems from industry and academia by up to 87% in precision and recall and 45% in accuracy. We also demonstrate that SIGL can pinpoint the processes most likely to have triggered malicious behavior, works on different audit platforms and operating systems, and is robust to training data contamination and adversarial attack. It can be used with application-specific models, even in the presence of new software versions, as well as application-agnostic meta-models that encompass a wide range of applications and installers.