USENIX Security '09

WORK-IN-PROGRESS REPORTS (WIPS)

Ballroom Centre (Level 4)

Accepted WiPs

The following talks will be presented on Friday, August 14, 2009, 2:00 p.m.–3:30 p.m.

Full-Datapath Secure Deletion
Sarah M. Diesburg, Christopher R. Meyers, and An-I Andy Wang, Florida State University

With an unprecedented amount of personal information stored in digital form, the ability to erase file data securely is more important than ever. Although secure deletion solutions exist, fine-grained solutions (e.g., per-file deletion) tend to concentrate on one segment of the operating system data path.  As a result, current solutions cannot guarantee per-file secure deletion throughout the legacy storage data path.

We propose a holistic approach that takes all components and interactions of the legacy data path into account.  In particular, our secure deletion solution (1) can irrevocably delete all corresponding data and information about that data (i.e., metadata), (2) is easy to use, (3) allows per-file deletion, (4) achieves acceptable performance, (5) behaves correctly in the presence of system failures, (6) works with modern file system mechanisms (e.g., journaling), (7) works with emerging solid-state storage media, and (8) may be retrofitted into the legacy storage data path.


Easier Passwords for the iPhone
Bill Cheswick, AT&T Research

Traditionally, strong passwords are hard to enter: the rules governing them seek to enforce high entropy.  Such passwords are particularly hard to enter on the iPhone, resulting in many "tappos".  But a series of words can provide the same entropy and, if the words are chosen well, the iPhone's spelling checker can help one enter a strong password quickly.

I will show a list of suitable (and unsuitable) words and invite attendees to try entering some strong passphrases very quickly, hopefully without uncorrected errors.
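The entropy argument behind word-based passphrases can be checked with a few lines of arithmetic. The sketch below is ours, not the speaker's tool, and the 2048-word dictionary size is an assumed example:

```python
import math

def passphrase_entropy(num_words, dictionary_size):
    """Entropy in bits of a passphrase of words drawn uniformly at random."""
    return num_words * math.log2(dictionary_size)

def password_entropy(length, charset_size):
    """Entropy in bits of a random character-by-character password."""
    return length * math.log2(charset_size)

# Four words from a 2048-word list give 44 bits, rivaling an
# 8-character mixed-case alphanumeric password (~47.6 bits),
# yet each word can be tapped with autocorrect assistance.
print(passphrase_entropy(4, 2048))   # 44.0
print(password_entropy(8, 62))       # ~47.6
```

The same arithmetic shows why word choice matters: shrinking the dictionary to words the spelling checker reliably completes costs only log-factor entropy per word.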


Lightweight Information Tracking for Mobile Phones
William Enck and Patrick McDaniel, Penn State University; Jaeyeon Jung and Anmol Sheth, Intel Labs Seattle; Byung-gon Chun, Intel Labs Berkeley

Emerging mobile phones are approaching the functional levels of general purpose computers and therefore face many of the same threats.  However, these threats have different risks with potentially higher stakes as a deluge of downloaded third party applications execute within the pockets of millions of users. In the Android mobile phone operating system, applications access resources (e.g., location, microphone, address book) based on coarse-grained interface-level policy. This policy cannot, however, ensure sensitive information is not leaked to the network. We observe that interfaces to sensitive information are frequently single purpose and thereby allow automatic tagging for information flow tracking. As such, we have instrumented Android's Dalvik virtual machine interpreter to provide lightweight, system-wide, dynamic information flow tracking on a mobile phone.  Preliminary results indicate our modifications execute with minimal perceived runtime overhead.  Enhanced with dynamic information flow tracking, mobile phones can more fully enforce security policies to protect user privacy and ultimately provide a more secure computing platform.
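The tag-and-propagate idea can be illustrated in miniature. This toy Python sketch is ours, not Android's actual Dalvik instrumentation; the names and the blocking policy at the sink are illustrative assumptions:

```python
# Variable-level dynamic taint tracking in miniature: every value
# carries a set of source tags, operations union the tags of their
# operands, and a "network send" sink checks for sensitive tags.
TAG_LOCATION = "location"

class Tainted:
    def __init__(self, value, tags=frozenset()):
        self.value = value
        self.tags = frozenset(tags)

    def __add__(self, other):
        # Propagation rule: result taint = union of operand taints.
        return Tainted(self.value + other.value, self.tags | other.tags)

def read_gps():
    # Single-purpose interface: tag everything it returns.
    return Tainted(40.79, {TAG_LOCATION})

def network_send(data):
    # Sink: flag sensitive data before it leaves the phone.
    if TAG_LOCATION in data.tags:
        return "BLOCKED: location data reaching the network"
    return "sent"

lat = read_gps()
derived = lat + Tainted(0.01)   # taint survives arithmetic
print(network_send(derived))    # the leak is caught at the sink
```

A real interpreter-level implementation propagates tags per register and per field rather than wrapping objects, which is what keeps the runtime overhead low.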


Cacophony: BitTorrent over VoIP
Rhandi Martin and Angelos Stavrou, George Mason University

Peer-to-Peer (P2P) networks are an excellent resource for distributing large amounts of data, particularly large data files. The most popular P2P service is BitTorrent (BT), which has consistently represented over 40% of Internet traffic for over 7 years, and as much as 85% of all P2P traffic. This volume, along with the prevalence of illegal distribution of copyrighted materials, has prompted some companies, educational institutions, and Internet Service Providers (ISPs) to take preventative and legal actions against the technology and its users.

In this WIP, we present a novel approach that conceals BitTorrent traffic as Voice over IP (VoIP) by using existing VoIP protocols, in particular the Session Initiation Protocol (SIP) and the Real-time Transport Protocol (RTP), and tunneling the traffic over ports allowed by the network policy. The resulting sessions are not discernible to network packet inspection: we emulate legitimate SIP connections and RTP tunnels so that a listening network analysis tool (firewall, intrusion detection system, or deep packet inspection tool) is unable to distinguish our traffic from legitimate VoIP. Of course, concealing the traffic imposes limits on the amount of data we can transfer per unit of time and on the number of connections. We propose to alleviate some of these restrictions by using features such as teleconferencing and relaying to involve more nodes and increase the capacity of our communication channel.
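As an illustration of the framing idea (a sketch of ours, not the authors' implementation), arbitrary payload bytes can be wrapped in the minimal RTP fixed header from RFC 3550 so the flow superficially resembles voice; payload type 0 (PCMU) and a timestamp step of 160 mimic 20 ms G.711 packets. Real concealment would also need constant packet sizes and voice-like timing:

```python
import struct

def rtp_packetize(payload, seq, timestamp, ssrc, payload_type=0):
    """Frame payload bytes in a 12-byte RTP fixed header
    (version 2, no padding/extension/CSRC, marker bit clear)."""
    vpxcc = 0x80
    m_pt = payload_type & 0x7F
    header = struct.pack("!BBHII", vpxcc, m_pt, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

# For example, a BitTorrent handshake prefix hidden in an "audio" packet:
chunk = b"\x13BitTorrent protocol"
pkt = rtp_packetize(chunk, seq=1, timestamp=160, ssrc=0xDEADBEEF)
assert pkt[:1] == b"\x80" and pkt[12:] == chunk
```

The 12-byte fixed header is all a shallow inspector sees; distinguishing this from G.711 requires payload statistics, not port or header checks.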


The OSCKAR Virtualization Security Policy Enforcement Framework
Todd Deshane and Patrick F. Wilbur, Clarkson University

In this presentation we introduce OSCKAR, our open source virtualization security policy enforcement framework. We discuss how OSCKAR helps enforce the principle of least privilege in a virtualization environment, as well as how OSCKAR can enforce this principle for user applications and protect against malware. We present its architecture and benefits, including an extensible contract language that allows security experts, and those who know the operating systems and applications best, to specify the unique security and resource environment that is needed.

OSCKAR enables individuals to design virtual appliances--self-contained packages of any combination of pre-existing disk images, specifications for on-the-fly image generation, and environmental security policies--to allow the secure deployment of operating systems and end user applications.  We have designed and implemented several front-ends for OSCKAR, covering use cases that include consolidated servers, public access terminals with kiosked operating systems, rapid recovery desktops, and application segregation.


Challenges in Sinkholing a Resilient P2P Botnet (Is It Possible or Not?)
Greg Sinclair, iDefense/UNCC; Brent Hoon Kang, UNCC

Waledac is a botnet that has striking similarities to the Storm botnet while exhibiting unique refinements that in part make the botnet more robust and in part leave it vulnerable to attack. Through our research on the Waledac botnet, we were able to identify its communication schemes and network topology.

In this WIP report, we will share our observations regarding recent changes to the Waledac botnet that occurred as a result of a number of researchers probing the Waledac infrastructure. These recent changes include abandoning the existing botnet in favor of a newer Waledac botnet, changes to the node identification scheme, and defenses against repeated attacks from specific IP addresses.  If time permits, we will describe a set of methods that the defender community can use to temporarily sinkhole the Waledac botnet. These principles can be applied to similarly designed botnets, which are resilient to more traditional takedown techniques.


Smart Meter Security
Steve McLaughlin, Penn State University

The Advanced Metering Infrastructure (AMI) is the set of smart meters and communication networks forming the basis of the smart grid.  In order to understand the security implications of extending meter functionality, we are conducting penetration testing of a commercially available smart metering system. In this WIP, we describe the functionality and components of AMI, as well as the initial results of our study. We describe attacks against both old and new types of electric meters based on electromechanical tampering and vulnerabilities in meter-to-utility communications.


Enhancing the Formal Cyberforensic Approach with Observation Modeling with Credibility Factors and Mathematical Theory of Evidence
Serguei A. Mokhov, Concordia University

This work refines the theoretical structure and formal model of the observation tuple with credibility weights and other factors for cyberforensic analysis and event reconstruction. The model was first proposed for the finite-state automata approach by Gladyshev et al. and later extended and realized in the first iteration of the Forensic Lucid specification language. In this work it is augmented with the Dempster-Shafer mathematical theory of evidence to include the credibility factors that were previously lacking in the model. Specifically, Gladyshev et al. created a finite-state automata (FSA) model to encode the evidence and the stories "told" by witnesses, combine them into an evidential statement, model the FSA for a particular case, and, given the FSA, verify whether a certain claim agrees with the evidential statement and, if it does, what possible event sequences explain that claim. Separately, earlier work by Dempster, Shafer, and others produced a mathematical theory of evidence in which factors like credibility and trustworthiness play a role in the evaluation of mathematical expressions, something Gladyshev's model lacked. Thirdly, still earlier work on intensional logics and programming provided a formal model that placed context as a first-class value in logical and programming expressions, in a system called Lucid that has produced various Lucid dialects and context-aware systems. In this work we blend the three together: we augment Gladyshev's formalization with credibility weights and other properties derived from the mathematical theory of evidence, and we encode them as a context in the Forensic Lucid language, a Lucid derivative for forensic case management, evaluation, and event reconstruction.
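Dempster's rule of combination, the core of the theory of evidence invoked here, can be sketched in a few lines. This is an illustrative example of ours, not part of Forensic Lucid; the witness masses are made-up numbers:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal
    elements are frozensets; mass on the empty intersection
    (conflict) is normalized away."""
    unnormalized = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            unnormalized[inter] = unnormalized.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict
    return {s: v / k for s, v in unnormalized.items()}

# Two witnesses of different credibility about whether event E occurred.
E = frozenset({"E"})
theta = frozenset({"E", "~E"})          # total ignorance
w1 = {E: 0.8, theta: 0.2}               # credible witness asserts E
w2 = {E: 0.4, theta: 0.6}               # less credible witness agrees
combined = combine(w1, w2)
print(round(combined[E], 3))            # belief in E strengthens: 0.88
```

Here a witness's credibility appears directly as how much mass stays on the full frame theta rather than on a specific claim, which is exactly the kind of weight the observation tuple is being extended to carry.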


An Introduction to LR-AKE (Leakage-Resilient Authenticated Key Exchange) Project
SeongHan Shin and Kazukuni Kobara, Research Center for Information Security (RCIS), National Institute of Advanced Industrial Science and Technology (AIST), Japan

In information security, remote user authentication is one of the important requirements for providing security against active attacks. This can be realized with an authenticated key exchange (AKE) protocol, which achieves both mutual authentication and establishment of secure channels at the same time. Until now, we have proposed several Leakage-Resilient AKE (LR-AKE, for short) protocols, whose main motivation is a new concept of AKE that is secure against active attacks as well as leakage of stored secrets, under the assumption that leakage of data/secrets is a practical threat in the real world.

The LR-AKE project started with the intention of providing the maximum level of security against both active attacks and leakage of stored secrets in an information security system. In this talk, we present a work-in-progress report on the LR-AKE project, whose main goal is to build a secure user authentication and data storage system based on our theoretical work. Specifically, we introduce the concept of the LR-AKE protocols, their advantages, and several add-on functionalities. We also show some demonstrations from the LR-AKE project: (1) the core part of LR-AKE; (2) LR-SSH (OpenSSH invoking LR-AKE as the AKE part); and (3) LR-Passwords (password retrieval for applications with the help of LR-AKE).


Towards Exploitation-Resistant Trust Models for Open Distributed Systems
Amirali Salehi-Abari and Tony White, Carleton University

Many computer applications are open distributed systems in which the components are located on a large-scale network. These systems are decentralized and subject to change over the system's lifetime. E-business systems (e.g., eBay), peer-to-peer systems, web services, the Semantic Web, and pervasive computing fall into the category of open distributed systems. With the growth of these open distributed systems throughout the Internet, artificial societies have been formed. In such artificial societies, entities require trust and reputation concepts in order to identify communities of entities with which to interact reliably. Although many trust and reputation models have been proposed, much of the research on trust and reputation models deals with unrealistic attack models.

We have noted that in real environments adversaries tend to focus on exploitation of the trust and reputation model itself. Furthermore, we have noted the exposure of such models to individual and collusion attacks. These vulnerabilities reinforce the need for a new evaluation criterion for trust and reputation models, exploitation resistance, which reflects the ability of a trust model to be unaffected by agents who try to manipulate it. We introduce a decentralized Exploitation-Resistant Trust (ERT) model. ERT incorporates multiple sources of information to assess the trustworthiness of agents and is exploitation resistant against both individual and collusion attacks.


Scalable Web Content Attestations
Thomas Moyer, Penn State University

The web is a primary means of information sharing for most organizations and people. Currently, a recipient of web content knows nothing about the environment in which that information was generated other than the specific server from which it came (and even that information can be unreliable).  In this paper, we develop and evaluate the Spork system, which uses the Trusted Platform Module (TPM) to tie the web server's integrity state to the web content delivered to browsers, thus allowing a client to verify that the origin of the content was functioning properly when the received content was generated and/or delivered. We discuss the design and implementation of the Spork service and its browser-side Firefox validation extension.  In particular, we explore the challenges and solutions of scaling the delivery of mixed static and dynamic content using exceptionally slow TPM hardware. We perform an in-depth empirical analysis of the Spork system within Apache web servers.  This analysis shows Spork can deliver nearly 8,000 static or over 7,000 dynamic integrity-measured web objects per second.  More broadly, we identify how TPM-based web content services can scale and deliver integrity-measured content with manageable overhead.
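The amortization idea, one slow TPM quote covering a whole batch of per-object content hashes, can be sketched schematically. This is our illustration, not Spork's actual protocol: the HMAC stands in for a real TPM quote, and all names are assumptions:

```python
import hashlib, hmac

# Stand-in for the TPM's attestation identity key (a real TPM never
# releases this key; quoting happens inside the chip).
ATTESTATION_KEY = b"stand-in attestation key"

def object_digest(content):
    """Per-object content hash, cheap to compute in software."""
    return hashlib.sha256(content).digest()

def batch_root(digests):
    """Hash over all object digests; stands in for a Merkle root."""
    return hashlib.sha256(b"".join(digests)).digest()

def quote(root, pcr_state=b"measured-boot-pcrs"):
    """One 'TPM quote' binds platform state to the whole batch,
    so thousands of objects share a single slow hardware operation."""
    return hmac.new(ATTESTATION_KEY, pcr_state + root, hashlib.sha256).digest()

pages = [b"<html>index</html>", b"<html>about</html>"]
digests = [object_digest(p) for p in pages]
sig = quote(batch_root(digests))
# A client verifying one page recomputes its digest, combines it with
# the other published digests, and checks the single quote.
assert sig == quote(batch_root([object_digest(p) for p in pages]))
```

The design point this illustrates is that the per-object cost is a software hash, while the hardware-limited quote rate is divided across the batch, which is how thousands of integrity-measured objects per second become feasible.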


Decompiling Android Applications
Damien Octeau, Penn State University

Android is one of the most promising operating systems for smartphones, partly due to its extensive capabilities in terms of intercomponent communication. This requires accurate and well-defined access control policies. The problem is that some refinements of these security policies are buried in the source code and thus are inaccessible when the source is not provided. In this work we present a method of decompiling the Android application bytecode back to Java source code. Our application translates Android bytecode into traditional Java bytecode, which can then be processed by existing Java bytecode tools to recapture the source code. Initial results show that it is possible with this approach to get access to the security information buried in the code and that we can obtain a decompiled output very close to the original source code. Our tool allows programmers to obtain functionally equivalent source code to gain a more thorough understanding of the security policy of Android applications.


Exploring the Trusted Computing Base of User Applications
Hayawardh Vijayakumar, Penn State University

An application typically includes the OS it is running on in its TCB. However, this includes the whole OS kernel as well as some privileged userspace programs. There are many trusted programs, and the kernel is a huge piece of code. However, a particular application depends only on certain trusted programs and some part of the kernel. We aim to discover these dependencies, and hope that they are few enough that formal techniques could justify the trust in them. We also explore the automatic generation of a working system that includes only these dependencies, and no more.

As a first step, we find the parts of the kernel that a program touches, i.e., the "slice of the kernel" with respect to an application. Initial results suggest that a process touches 5-15% of the kernel. We aim to identify the security-sensitive portions of this slice, prove the correctness of those operations, and possibly automate the preparation of a minimal, provably correct kernel for an application.


Further Improving Tor's Transport Layer
Chris Alexander, University of Waterloo

We present useful enhancements to improve the latency and the fairness of Tor's transport layer.  Our work builds on the improvements presented by Joel Reardon earlier in this conference.  Specifically, we replace his user-level TCP stack with a user-level SCTP implementation to decrease memory requirements and add bundling support.  We also replace the round-robin circuit scheduling algorithm to allow bursty HTTP traffic to compete with large steady transfers such as BitTorrent.


Last changed: 18 Aug. 2009 jel