WORK-IN-PROGRESS REPORTS (WIPS)

Friday, August 5, 2:00 p.m.–3:30 p.m.

WiPs Program

1: The Program Counter Security Model: Automatic Detection and Removal of Control-Flow Side Channel Attacks (PDF)
David Molnar, Matt Piotrowski, David Schultz, and David Wagner

We introduce new methods for identifying control-flow based side channel attacks, transforming C source code to eliminate these attacks, and checking that the transformed code is free of control-flow side channels. We model control-flow side channels with a program counter transcript, in which the value of the program counter at each step is leaked to an adversary. The program counter transcript model captures a class of side channel attacks that includes timing attacks and error disclosure attacks. We then give a dynamic testing procedure for finding code fragments that may reveal sensitive information by key-dependent behavior, and we show our method finds side channel vulnerabilities in real implementations of IDEA and RC5, in binary modular exponentiation, and in the lsh implementation of the ssh protocol.

Further, we propose a generic source-to-source transformation that produces programs provably secure against control-flow side channel attacks. We implemented this transform for C together with a static checker that conservatively checks x86 assembly for violations of program counter security; our checker allows us to compile with optimizations while retaining assurance that the resulting code is secure. We then measured our technique's effect on the performance of binary modular exponentiation and real-world implementations in C of RC5 and IDEA: we found it has a performance overhead of at most 5x and a stack space overhead of at most 2x. Our approach to side channel security is practical, generally applicable, and provably secure against an interesting class of side channel attacks.

2: Implementing N-Variant Systems (PDF)
Benjamin Cox, University of Virginia

An N-variant system protects a vulnerable service by requiring an attacker to simultaneously compromise multiple variants of the system with the same input. The N-variant system framework executes the variants simultaneously with the same input, monitors the variants to check that their behavior is consistent, and ensures the output is identical before forwarding it to the user. Variants are constructed to provide the same behavior under normal execution but to differ in some property on which a class of attacks relies. For example, we might construct variants with disjoint address spaces, so that any attack that depends on referencing an absolute address must fail on one of the variants and hence be detected by the system.

In this talk, I will describe a Linux kernel implementation of the N-variant system framework. Our kernel executes the variants simultaneously, monitoring all I/O at the system call level. When the variants reach a system call, the kernel ensures that they request the same system call with the same parameters. The modified kernel then performs the system call once and returns the same result to both variants. Currently, our implementation successfully thwarts attacks against a vulnerable web server and runs Apache web server variants.
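As a rough illustration of the monitoring loop described above (not the actual kernel code), the following sketch models each variant as a coroutine that yields system call requests; the monitor checks that the requests agree, performs the call once, and feeds the same result back to every variant. All names here are hypothetical.

```python
def monitor(variants, perform_syscall):
    """Run variants in lockstep, checking consistency at the syscall level."""
    requests = [next(v) for v in variants]          # first syscall from each
    while True:
        if len(set(requests)) != 1:                 # divergence => likely attack
            raise RuntimeError("variant divergence: %r" % (requests,))
        result = perform_syscall(*requests[0])      # execute the call once
        try:
            # For brevity this sketch assumes all variants finish together.
            requests = [v.send(result) for v in variants]
        except StopIteration:
            return result                           # variants finished normally

# Two toy "variants" that read input and then write derived data.
def variant():
    data = yield ("read", "/tmp/input")
    yield ("write", data)

def do_syscall(name, arg):
    return "payload" if name == "read" else len(arg)

final = monitor([variant(), variant()], do_syscall)
```

A compromised variant would issue a different syscall sequence, causing the monitor to detect divergence rather than forward attacker-controlled output.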

3: Effortless Secure Enrollment for Wireless Networks Using Location-Limited Channels (PDF)
Jeff Shirley, University of Virginia, Department of Computer Science

Automatically enrolling new and temporary users on secure wireless networks remains a significant challenge because of the problems associated with configuration and identity verification. Before a new user can be issued credentials and obtain access to network resources, the network needs to verify that the wireless client being admitted is authorized to obtain access. Likewise, the new client should be able to verify that the wireless access point it is connecting to is in fact part of the intended network. Location-limited channels, such as audio tones, can provide this mutual authentication property. Our system for wireless network enrollment uses this technique to ensure that only a user physically present at a specific location can obtain credentials and access to the wireless network; the same two-way audio channel verifies the identity of the wireless network for the client. Previously authorized users of the network may securely enroll new users through a protocol that takes advantage of the location-limited channel to provide mutual verification of identity.

4: Revamping Security Patching with Virtual Patches (PDF)
Gautam Altekar, UC Berkeley, Department of Computer Science

Security patching is the only widely deployed proactive defense against software vulnerabilities. Yet people don't apply security patches, primarily because patches are unreliable, disruptive, and often hard to uninstall. Considering that ~90% of attacks exploit known vulnerabilities, we need to rethink how we create and apply security patches.

In this talk, I'll present a new type of patch called a "virtual patch". A virtual patch is a software patch with two clearly denoted parts: (1) a check and (2) a fix. By isolating the check in its own protection domain, a virtual patch provides a strong safety guarantee: the patch does not affect the application until the vulnerability is triggered. Moreover, since a virtual patch is simply a check followed by a fix, it can be inserted into a running application without requiring a restart. Finally, a virtual patch does not make any changes to the user's system and can therefore be easily uninstalled.
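The check/fix structure can be sketched as follows. This is an illustrative model, not the actual implementation, and all names are hypothetical; in particular, the real system isolates the check in a separate protection domain, which this in-process sketch does not attempt to show.

```python
def make_virtual_patch(check, fix):
    """Build a wrapper in which the fix runs only when the check fires."""
    def apply_to(vulnerable_fn):
        def patched(request):
            if check(request):          # check: detects exploit input only
                request = fix(request)  # fix: engages only when triggered
            return vulnerable_fn(request)
        return patched
    return apply_to

# Toy vulnerability: a fixed-size "buffer" that overflows on long input.
def handle(request):
    if len(request) > 8:
        raise OverflowError("buffer overflow")
    return "ok:" + request

overflow_patch = make_virtual_patch(
    check=lambda req: len(req) > 8,   # exploit condition
    fix=lambda req: req[:8],          # truncate instead of overflowing
)
safe_handle = overflow_patch(handle)
```

Benign requests pass through the wrapper untouched, which captures the safety guarantee: the patch has no effect on the application until the vulnerability is actually triggered.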

5: Automatically Hardening Web Applications Using Precise Tainting
Salvatore Guarnieri, University of Virginia

Many web applications are vulnerable to attacks such as SQL injection, PHP injection, and cross-site scripting. This is a major problem, and many efforts have been made to prevent attacks on web applications. Previous solutions have either required effort from the application developer or prevented normal use of the applications; ours requires neither. We use a method called precise tainting to track information flow through a web application and prevent untrusted data from being misused. Coarse-grained tainting has been used in the past, but it prevents normal use of the applications. We instead maintain taint information at a fine granularity within strings, even through function calls. To prevent attacks, we make sure no tainted data is used in possibly dangerous ways; this check is likewise performed at a fine level of granularity. When an attack is detected, our modified version of PHP prevents it from executing without disrupting normal web application behavior.
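The idea of per-character taint propagation can be sketched as follows; the class and sink names are hypothetical, and the real system implements this inside the PHP interpreter rather than at the application level.

```python
class TaintedStr:
    def __init__(self, text, taint=None):
        self.text = text
        # taint[i] is True if character i came from untrusted input
        self.taint = list(taint) if taint is not None else [False] * len(text)

    @classmethod
    def untrusted(cls, text):
        return cls(text, [True] * len(text))

    def __add__(self, other):            # taint survives concatenation
        if not isinstance(other, TaintedStr):
            other = TaintedStr(other)
        return TaintedStr(self.text + other.text, self.taint + other.taint)

    def tainted_chars(self):
        return [c for c, t in zip(self.text, self.taint) if t]

def run_query(query, dangerous="';--"):
    """Sink check: refuse tainted characters that could alter the query."""
    for ch in query.tainted_chars():
        if ch in dangerous:
            raise ValueError("tainted metacharacter in query: %r" % ch)
    return query.text

user = TaintedStr.untrusted("alice'; DROP TABLE users;--")
query = TaintedStr("SELECT * FROM users WHERE name='") + user + "'"
```

Because taint is tracked per character, the trusted quote characters written by the application are distinguished from the attacker-supplied quote inside the user string, so a benign name passes while the injection attempt is rejected at the sink.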

6: Turtle: Safe and Private Data Sharing (PDF)
Bogdan C. Popescu, Bruno Crispo, and Andrew S. Tanenbaum, Vrije Universiteit, Amsterdam, The Netherlands; Petr Matejka, Charles University, Prague, Czech Republic

In this WiP report, we present Turtle, a peer-to-peer (P2P) architecture for safe and private sharing of sensitive information. We envision Turtle as a tool for facilitating free speech, allowing its users to exchange information deemed controversial or risky by traditional media without being exposed to legal and economic pressure from parties that may want to suppress this information. The basic idea behind Turtle is to build a P2P overlay on top of pre-existing trust relationships among Turtle users. Each user acts as a node in the overlay by running a copy of the Turtle client software. Unlike existing P2P networks, Turtle does not allow arbitrary nodes to connect and exchange information. Instead, each user establishes secure and authenticated channels with a limited number of other nodes controlled by people she trusts (friends). Given that the users behind connecting Turtle nodes know and trust each other, they can agree on channel encryption keys out of band; this allows for a totally decentralized key distribution mechanism, which fits the P2P paradigm well. In the Turtle overlay, both queries and results move hop by hop; the net result is that at any moment, information is exchanged only between people who trust each other, and is always encrypted, so an adversary has no way to determine who is requesting or providing information, or what that information is about. Given this design, a Turtle network offers a number of useful security properties, such as confined damage in case of node compromise and resilience against Sybil attacks [3]. At the moment we are developing the Turtle client software (an early-beta prototype is available at http://sourceforge.net/projects/turtle-p2p/). We are also looking at ways to realistically simulate large Turtle networks in order to evaluate Turtle's performance against other P2P protocols such as Gnutella or BitTorrent; one idea is to model Turtle overlays on social graphs derived from social networking services such as Orkut [2] or Friendster [1].

References:
[1] Friendster Web Site. http://www.friendster.com.
[2] Orkut Web Site. http://www.orkut.com.
[3] J. Douceur. The Sybil Attack. In Proc. of the IPTPS '02 Workshop, Mar. 2002.

7: Towards Online Flow-level Anomaly/Intrusion Detection System for High-speed Networks
Yan Chen

Traffic anomalies and attacks are commonplace in today's networks, and identifying them rapidly and accurately is critical for large network operators. We propose a High-speed Router-based Anomaly and Intrusion Detection system, HRAID, leveraging recent work on data streaming computation and, in particular, sketches. We analyze the attributes in TCP/IP headers and select an optimal small set of metrics for flow-level sketch-based traffic monitoring and intrusion detection. To overcome the limitations of existing single-dimensional sketches, we design efficient two-dimensional sketches to further distinguish different types of attacks for mitigation.
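The sketches referred to above come from the data-streaming literature. As a generic illustration (not HRAID's actual design), a count-min sketch estimates per-flow counts in small, fixed memory, which is what makes flow-level monitoring feasible at router speeds:

```python
import hashlib

class CountMinSketch:
    """Fixed-memory frequency summary: depth rows, each with its own hash."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        # Derive an independent hash per row from SHA-256 of (row, key).
        digest = hashlib.sha256(("%d:%s" % (row, key)).encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        # Never underestimates: hash collisions can only inflate a counter.
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))

cms = CountMinSketch()
for _ in range(500):
    cms.add("10.0.0.1->80")      # a heavy flow, e.g. one SYN-flood source
cms.add("10.0.0.2->22")          # a normal, light flow
```

Heavy flows stand out against the one-sided estimation error, so anomalously large counts (such as a SYN flood) can be flagged without storing per-flow state.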

We further propose several heuristics to reduce false positives in SYN flooding detection. Simulation with several router traces shows that HRAID is highly accurate and efficient, uses very little memory, and can effectively detect multiple types of attacks simultaneously. In addition, we compare HRAID with other state-of-the-art detection schemes and validate the attacks detected.

To the best of our knowledge, HRAID is the first online flow-level anomaly/intrusion detection system for high-speed networks, even for the worst case traffic of 40-byte-packet streams with each packet forming a flow.

8: Mitigating DoS Through Basic TPM Operations
William Enck, SIIS Lab, Penn State University

Critical resources are increasingly subject to denial-of-service (DoS) attacks. Client puzzles have proved to be a powerful mitigation technique, but their adoption has been hindered by a lack of fairness in puzzle strength. The Trusted Computing Group's Trusted Platform Module (TPM), which has begun proliferating into a variety of devices, provides a cost-efficient mechanism for increasing system security. We propose a novel use of the TPM and discuss its applicability to protecting resources from DoS.
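As background, a typical hash-based client puzzle works as sketched below: the server issues a nonce, and the client must search for a solution whose hash has a required number of leading zero bits before being served. The parameters and names are illustrative, and the WiP's TPM-based construction is not shown here.

```python
import hashlib, itertools

def solves(nonce, solution, difficulty_bits):
    """Check whether SHA-256(nonce || solution) has enough leading zero bits."""
    digest = hashlib.sha256(nonce + solution).digest()
    value = int.from_bytes(digest, "big")
    return value >> (256 - difficulty_bits) == 0

def solve_puzzle(nonce, difficulty_bits):
    """Brute-force search the client must pay for; ~2^bits hashes on average."""
    for i in itertools.count():
        candidate = str(i).encode()
        if solves(nonce, candidate, difficulty_bits):
            return candidate

nonce = b"server-issued-nonce"
answer = solve_puzzle(nonce, difficulty_bits=12)
```

Verification costs the server a single hash while solving costs the client many, which is what lets puzzles throttle DoS traffic; the fairness problem mentioned above arises because that solving cost varies widely with client hardware.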

9: Automatic IP Address Assignment for Efficient, Correct Firewalls
Jonathon Duerig, Robert Ricci, John Byers, and Jay Lepreau

We consider the problem of assigning IP addresses to nodes in a network so as to minimize the sizes of routing tables and firewall rulesets. We have already done work on minimizing routing tables and are now extending this work to minimizing firewall rulesets. Firewalls typically match IP addresses using subnets, but this approach scales poorly if the set of hosts protected by a particular firewall rule spans non-contiguous subnets. Beyond efficiency, this raises correctness concerns: the more firewall rules there are, the more likely that one of them is incorrect, that is, does not express the desired policy. Given a complex topology with a large number of hosts and policies, an organization can end up with a huge number of rules.

We will outline our approach to automatically assigning IP addresses so as to minimize the number of subnets that each rule must match. Our work on routing table minimization uses a metric we devised called Routing Equivalent Sets (RES), which quantifies the extent to which routes to sets of destinations can be aggregated. We will give an overview of this algorithm and describe how we are considering extending it to firewall rule minimization.
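A toy example of why assignment matters for ruleset size: the same four hosts can require one CIDR rule or one rule per host, depending purely on how their addresses were assigned. This sketch uses Python's standard ipaddress module and is illustrative only; it is not the authors' RES algorithm.

```python
import ipaddress

def rules_needed(hosts):
    """Number of CIDR blocks required to cover exactly these hosts."""
    nets = [ipaddress.ip_network(h) for h in hosts]
    return len(list(ipaddress.collapse_addresses(nets)))

# Same policy group, two possible address assignments.
scattered  = ["10.0.1.7/32", "10.0.2.9/32", "10.0.5.1/32", "10.0.9.3/32"]
contiguous = ["10.0.1.4/32", "10.0.1.5/32", "10.0.1.6/32", "10.0.1.7/32"]
```

The scattered assignment needs four separate rules, while the contiguous one collapses to the single block 10.0.1.4/30, so an assignment algorithm that co-locates hosts sharing a policy directly shrinks the ruleset.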

10: PorKI: Making PKI Portable in Enterprise Environments (PDF)
Sara Sinclair and Sean Smith, Dartmouth College PKI/Trust Lab

PorKI is a keypair management tool for use on PDAs and smartphones. Through the use of proxy certificates and Bluetooth communication, it allows users to employ their long-term PKI credentials for authentication on potentially untrusted workstations without exposing those credentials to attack, and without requiring special drivers or software on the workstation. Moreover, if the workstation is equipped with a keypair and a signed statement from its administrator, PorKI can limit the capabilities of the temporary credentials issued to it. Such a statement might include information about the machine's location, its configuration, or who has access to it. This, in combination with policies configured by the user or by the relying party, can help both parties place an appropriate level of trust in the workstation without requiring the user to have specialized knowledge. Based on our experience with the working prototype, PorKI has the potential to be a highly usable way for average users to transport and use their PKI credentials securely in a variety of environments. In a brief talk, I will sketch the design of PorKI, its potential and limitations, as well as what other capabilities we're considering building into it.

11: DETER (PDF)
Terry V. Benzel

The goal of the DETER laboratory effort is to create, maintain, and support a collaborative and vendor-neutral experimental environment for cyber-security research. It is intended to provide a center for interchange and collaboration among security researchers and testbed builders. The DETER effort includes:

DETER testbed: a shared testbed infrastructure that is specifically designed for medium-scale (e.g., 100-200 node) repeatable experiments, and especially for experiments that may involve "risky" code.

DETER research community: a community of academic, industry, and government researchers working toward better defenses against malicious attacks on our networking infrastructure, especially critical infrastructure.

The nucleus of the DETER laboratory effort is formed of two research projects, DETER and EMIST, funded by the National Science Foundation (NSF) and the U.S. Department of Homeland Security Advanced Research Projects Agency (HSARPA).

This presentation will describe the DETER testbed and current cyber defense experiments using the testbed.

More information on these projects can be found at http://www.isi.edu/deter.

12: Minimizing the TCB (PDF)
David Lie, University of Toronto

Systems have grown enormously in complexity. General-purpose system software, libraries, and software components have enabled the rapid development of the systems known today. Extensible and modular code allows new functionality to be quickly and easily added to existing programs.

However, such facilities can have detrimental effects on security because they tend to increase the size of the Trusted Computing Base (TCB) for an application. First, the security of a system is often inversely proportional to its size and complexity. General-purpose code will contain functionality that is not necessary for every application, so the inclusion of such code unnecessarily increases an application's TCB. Second, a lot of code is extensible and thus can behave in unpredictable ways. For example, browser plug-ins have provided an easy avenue for attackers to insert Trojan horses and "spyware" into victims' systems.

These problems can be addressed by first identifying the "true TCB" for an application. We start by identifying the actual security-related operations of an application, and then identify all data and code needed to fulfill each operation. We then protect such an operation by isolating it from the main operating system using a Virtual Machine Monitor (VMM). The isolated portion contains the security operation running on a small, minimal operating system, so that the security component only has to trust a small amount of code.

13: Strider HoneyMonkeys: Active Client-Side Honeypots for Finding Web Sites That Exploit Browser Vulnerabilities (PDF)
Yi-Min Wang

Internet attacks that use Web servers to exploit browser vulnerabilities to install spyware programs are a serious emerging threat. In this paper, we introduce the concept of Automated Web Patrol, which aims at significantly reducing the cost of monitoring malicious Web sites to protect Internet users. We describe the implementation of the Strider HoneyMonkey Exploit Detection System, which consists of a network of monkey programs running on virtual machines with different patch levels and constantly patrolling the Web to hunt for Web sites that exploit browser vulnerabilities.

Within the first month of utilizing this new system, we identified 752 unique URLs hosted on 287 Web sites that can successfully exploit unpatched WinXP machines. The system automatically constructs topology graphs that capture the connections between the exploit sites based on traffic redirection, which leads to the identification of several major players who are responsible for a large number of exploit pages and appear to be building a business model based on such attacks. By monitoring the 752 exploit URLs on a daily basis, we were able to find a malicious Web site that was performing zero-day exploits of the then-unpatched javaprxy.dll vulnerability. It was confirmed to be the first in-the-wild, zero-day exploit URL of the vulnerability reported to the Microsoft Security Response Center.

14: Making Intrusion Detection Systems Interactive and Collaborative (PDF)
Scott Campbell and Steve Chan, Lawrence Berkeley National Laboratory, NERSC

Current open source IDSes are configured and managed through static files read at startup. Borrowing from devices such as routers, we have created an interactive shell that allows changes to configuration and run-time attributes, as well as modification of state information within the running IDS. Combined with the ability to query the internal state of the IDS, we hope to make intrusion detection systems more interactive, allowing administrators to "drive" the IDS in real time.

In parallel, our examination of the work practices of production security teams shows that collaboration is vital to coordinating security response, both to weed out false positives and to react swiftly to actual intrusions. Instant messaging and chatrooms are important tools for collaboration and coordination, and interfacing an IDS shell with an instant messaging system allows members of the team to collaborate more easily. An IM bot that listens for specially prefixed commands while reporting status in a chatroom provides a real-time log of IDS activity, enables inline discussion and commentary, and leaves an audit trail of the commands issued to the IDS.

This work is at an early stage, but we feel that it is a useful evolution, motivated by recent interest in the field of usable security.


Last changed: 22 Aug. 2005 ch