CONFERENCE PROGRAM ABSTRACTS

Tech Sessions: Wednesday, November 10 | Thursday, November 11 | Friday, November 12
Wednesday, November 10, 2010
11:00 a.m.–12:30 p.m.

A Survey of System Configuration Tools
We believe that informed choices are better choices. Adopting a system configuration tool implies a significant investment in time and/or money. Before making such an investment, you want to pick the right tool for your environment. Therefore, you want to compare tools with each other before making a decision. To help you make an informed choice, we develop a comparison framework for system configuration tools. We evaluate 11 existing open-source and commercial system configuration tools with this framework. If you use our framework, you will make a better choice in less time.

High Performance Multi-Node File Copies and Checksums for Clustered File Systems
Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. This paper presents the design of mcp and msum and detailed performance numbers for each implemented optimization. We show that mcp improves cp performance by over 27x, msum improves md5sum performance by almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x.
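
As a rough illustration of the split-file and hash-tree ideas described above (this is not the authors' implementation; the chunk size, worker count, and use of Python are arbitrary choices), the following sketch hashes fixed-size chunks of a file in parallel threads and combines the per-chunk digests into a single root digest. Note that such a tree-style digest is not byte-compatible with a plain md5sum of the whole file, which is part of what msum's hash-tree support addresses.

    # Illustrative sketch only (not the authors' mcp/msum code): split a file
    # into fixed-size chunks, hash the chunks in a thread pool, then combine
    # the per-chunk digests into a single root digest.
    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    CHUNK_SIZE = 64 * 1024 * 1024          # 64 MiB per chunk (assumption)

    def hash_chunk(path, offset, length):
        h = hashlib.md5()
        with open(path, "rb") as f:
            f.seek(offset)
            h.update(f.read(length))
        return h.digest()

    def parallel_tree_hash(path, size, workers=8):
        offsets = range(0, size, CHUNK_SIZE)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            leaves = list(pool.map(
                lambda off: hash_chunk(path, off, min(CHUNK_SIZE, size - off)),
                offsets))
        root = hashlib.md5()               # combine leaves (one-level "tree")
        for digest in leaves:
            root.update(digest)
        return root.hexdigest()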

Fast and Secure Laptop Backups with Encrypted De-duplication
Many people now store large quantities of personal and corporate data on laptops or home computers. These often have poor or intermittent connectivity, and are vulnerable to theft or hardware failure. Conventional backup solutions are not well suited to this environment, and backup regimes are frequently inadequate. This paper describes an algorithm which takes advantage of the data that is common between users to increase the speed of backups and reduce the storage requirements. This algorithm supports client-end per-user encryption, which is necessary for confidential personal data. It also supports a unique feature which allows immediate detection of common subtrees, avoiding the need to query the backup system for every file. We describe a prototype implementation of this algorithm for Apple OS X and present an analysis of its potential effectiveness, using real data obtained from a set of typical users. Finally, we discuss the use of this prototype in conjunction with remote cloud storage, and present an analysis of the typical cost savings.
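
Combining cross-user de-duplication with per-user encryption is commonly achieved with convergent-encryption-style schemes, in which a block's key is derived from its own content. The sketch below illustrates only that general idea and is not the paper's exact algorithm; the encrypt callable and the index handling are placeholders.

    # Sketch of convergent-encryption-style de-duplication (a common approach;
    # the paper's algorithm may differ). The per-block key is derived from the
    # block's own content, so identical blocks produce identical ciphertext
    # and are stored once. `encrypt` stands for any deterministic symmetric
    # cipher; key and index management are elided.
    import hashlib

    def convergent_key(block: bytes) -> bytes:
        return hashlib.sha256(block).digest()          # key = hash of plaintext

    def storage_id(ciphertext: bytes) -> str:
        return hashlib.sha256(ciphertext).hexdigest()  # content-addressed blob ID

    def backup_block(block: bytes, store: dict, encrypt) -> str:
        key = convergent_key(block)
        ct = encrypt(key, block)
        blob_id = storage_id(ct)
        if blob_id not in store:                       # de-duplication happens here
            store[blob_id] = ct
        return blob_id   # recorded with the key in the user's private index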

2:00 p.m.–3:30 p.m.

The Margrave Tool for Firewall Analysis
Writing and maintaining firewall configurations can be challenging, even for experienced system administrators. Tools that uncover the consequences of configurations and edits to them can help sysadmins prevent subtle yet serious errors. Our tool, Margrave, offers powerful features for firewall analysis, including enumerating consequences of configuration edits, detecting overlaps and conflicts among rules, tracing firewall behavior to specific rules, and verifying configurations against security goals. Margrave differs from other firewall-analysis tools in supporting queries at multiple levels (rules, filters, firewalls, and networks of firewalls), comparing separate firewalls in a single query, supporting reflexive ACLs, and presenting exhaustive sets of concrete scenarios that embody queries. Margrave supports real-world firewall-configuration languages, decomposing them into multiple policies that capture different aspects of firewall functionality. We present an evaluation on networking-forum posts and on an in-use enterprise firewall configuration.
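
To make the notions of rule overlap and concrete witness scenarios more tangible, here is a deliberately tiny, brute-force illustration over an abstract packet space. It is not Margrave's scenario-finding engine or query language, just a sketch of the kind of question such a tool answers.

    # Toy rule-overlap check (not Margrave). Rules and packets are dicts over
    # three abstract fields; "*" matches anything. Any packet matched by both
    # rules is a concrete scenario witnessing the overlap.
    from itertools import product

    FIELDS = ("src", "dst", "port")

    def matches(rule, packet):
        return all(rule[f] in ("*", packet[f]) for f in FIELDS)

    def overlaps(rule_a, rule_b, field_values):
        for combo in product(*(field_values[f] for f in FIELDS)):
            packet = dict(zip(FIELDS, combo))
            if matches(rule_a, packet) and matches(rule_b, packet):
                yield packet

    # Example: a permit-SSH rule and a deny-one-host rule overlap.
    values = {"src": ["10.0.0.1", "10.0.0.2"], "dst": ["web01"], "port": [22, 80]}
    permit_ssh = {"src": "*", "dst": "web01", "port": 22}
    deny_host = {"src": "10.0.0.1", "dst": "web01", "port": "*"}
    print(next(overlaps(permit_ssh, deny_host, values), None))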

Towards Automatic Update of Access Control Policy
Role-based access control (RBAC) has significantly simplified the management of users and permissions in computing systems. In dynamic environments, systems are subject to change, so the associated configurations need to be updated accordingly to reflect the systems' evolution. Access control update is complex, especially for large-scale systems, because the updated system is expected to meet necessary constraints. This paper presents a tool, RoleUpdater, which answers administrators' high-level update requests for role-based access control systems. RoleUpdater is able to automatically check whether a required update is achievable and, if so, to construct a reference model. In light of this model, administrators can apply the changes to RBAC systems. RoleUpdater is able to cope with practical update requests, e.g., those that involve role hierarchies and administrative rules in effect. Moreover, RoleUpdater can also provide minimal updates, in the sense that no redundant changes are implemented.

First Step Towards Automatic Correction of Firewall Policy Faults
Firewalls are critical components of network security and have been widely deployed for protecting private networks. A firewall determines whether to accept or discard a packet that passes through it based on its policy. However, most real-life firewalls have been plagued with policy faults, which either allow malicious traffic or block legitimate traffic. Due to the complexity of firewall policies, manually locating the faults of a firewall policy and further correcting them are difficult. Automatically correcting the faults of a firewall policy is an important and challenging problem. In this paper, we make three major contributions. First, we propose the first comprehensive fault model for firewall policies, including five types of faults. For each type of fault, we present an automatic correction technique. Second, we propose the first systematic approach that employs these five techniques to automatically correct all or part of the misclassified packets of a faulty firewall policy. Third, we conducted extensive experiments to evaluate the effectiveness of our approach. Experimental results show that our approach is effective at correcting a faulty firewall policy containing three of these types of faults.
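
As a toy illustration of one kind of correction (fixing a rule whose decision is wrong), the sketch below greedily picks the single rule whose decision flip would repair the most packets in a given set of misclassified packets. The paper's fault model covers five fault types and its algorithms are considerably more careful; this is only meant to make the problem concrete.

    # Hypothetical toy in the spirit of "decision flipping". Rules are
    # (predicate, decision) pairs evaluated in first-match order; we pick the
    # single rule whose decision flip fixes the most packets in a given set of
    # misclassified (packet, expected_decision) pairs. Effects on packets
    # outside that set are ignored here, which a real tool must not do.
    def first_match(rules, packet):
        for i, (pred, decision) in enumerate(rules):
            if pred(packet):
                return i, decision
        return None, "deny"                 # assumed implicit default-deny

    def best_single_flip(rules, misclassified):
        gains = [0] * len(rules)
        for packet, expected in misclassified:
            i, decision = first_match(rules, packet)
            if i is not None and decision != expected:
                gains[i] += 1               # flipping rule i fixes this packet
        best = max(range(len(rules)), key=lambda i: gains[i])
        return best, gains[best]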

4:00 p.m.–5:30 p.m.

When Anti-virus Doesn't Cut It: Catching Malware with SIEM
Malware is a problem that cuts across almost every industry that uses computers today. It is also a good example of a threat that most companies assume is stopped at the perimeter. However, this presentation will discuss the danger of relying simply on desktop or perimeter security to catch an insidious threat like malware. SIEM is short for Security Information & Event Management. Simply put, these are event correlation engines for logs. These systems help your limited security staff make sense of millions of events per day. They put many log types into one console, which reduces training and analysis time.

In-Flight Mechanics: A Software Package Management Conversion Project
A software tools group at Yahoo had been working for several years to develop a state-based package and setting management solution called Igor. Igor sits on top of yinst and enforces the exact package and setting list for a given host through a versioned tagging mechanism. In essence this moves dependency checking from runtime to compile time (with compile time being the creation of the release). Igor had been in use for several years in other properties at Yahoo and appeared to work well. Thus in early 2009 my team was given the task of converting all 7,000 mail farm hosts to Igor ("igorizing" them). At the highest level this process included: modeling the existing server environment in Igor; converting to "All Apps All Environments"; and converting all production servers to use Igor. Along the way we learned some useful lessons about planning and implementing large-scale software conversions.

Experiences with Eucalyptus: Deploying an Open Source Cloud
With the recent trend of exploiting resources of the cloud, we have embarked on a journey to deploy an open source cloud using Eucalyptus. During the past year we have learned many lessons about the use of Eucalyptus and clouds in general. The area of security provides significant challenges in operating a cloud, the scalability supposedly inherent in clouds isn't a given, and the process of supporting cloud users is different and more complicated than supporting desktop or even HPC users.

Thursday, November 11, 2010
9:00 a.m.–10:30 a.m.

Using TCP/IP Traffic Shaping to Achieve iSCSI Service Predictability
This paper addresses the unpredictable service availability of large centralized storage solutions. Fibre Channel is a common connection type for storage area networks (SANs) in enterprise storage, and currently there are no standard mechanisms for prioritizing workloads using this technology. However, the increasing use of TCP/IP-based network communication in SANs has introduced the possibility of employing well-known techniques and tools for prioritizing IP traffic. A method for throttling traffic to an iSCSI target server, the packet delay throttle, is devised using common TCP/IP traffic shaping techniques. It enables close-to-linear rate reduction for both read and write operations. All throttling is achieved without triggering TCP retransmit timeouts and the subsequent slow start caused by packet loss. A control mechanism for dynamically adapting throttling values to rapidly changing workloads is implemented using a modified proportional integral derivative (PID) controller. An example prototype of an autonomic resource prioritization framework is designed. The framework identifies and maintains information about resources, their consumers, response time for active consumers, and their set of throttleable consumers. The framework is exposed to extreme workload changes and demonstrates a high ability to keep read response time below a predefined threshold. It exhibits low overhead and resource consumption, promising suitability for large-scale operation in production environments.
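
The control idea can be sketched as follows. This is an illustrative Python sketch, not the authors' modified PID controller; the gains and units are made up, and actually applying the computed delay (for example, with Linux traffic shaping) is left abstract. The controller turns the gap between measured and target response time into an artificial per-packet delay for throttleable consumers.

    # Minimal PID-style throttle sketch (assumption: gains and target chosen
    # arbitrarily). A positive output is the extra packet delay, in ms, to
    # apply to throttleable iSCSI consumers.
    class PIDThrottle:
        def __init__(self, target_ms, kp=0.5, ki=0.1, kd=0.05):
            self.target = target_ms
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measured_ms, dt=1.0):
            """Return a new artificial packet delay in milliseconds (>= 0)."""
            error = measured_ms - self.target   # positive when we miss the SLO
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            delay = (self.kp * error +
                     self.ki * self.integral +
                     self.kd * derivative)
            return max(0.0, delay)

    # Example control loop over made-up measurements:
    throttle = PIDThrottle(target_ms=20)
    for observed in (35, 30, 24, 21, 19):
        print(round(throttle.update(observed), 1))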

YAF: Yet Another Flowmeter
A flow meter generates flow data (information about each connection observed on a network) from a stream of observed packets. Flow meters can be implemented in standalone measurement devices or inline on packet forwarding devices, such as routers. YAF (Yet Another Flowmeter) was created as a reference implementation of an IPFIX Metering and Exporting Process and to provide a platform for experimentation and rapid deployment of new flow meter capabilities. Significant engineering effort has also gone into ensuring that YAF is a high-performance, flexible, stable, and capable flow collector. This paper describes some of the issues we encountered in designing and implementing YAF, along with background on some of the technologies we chose for the implementation. In addition, we describe our experiences in deploying and operating YAF in large-scale networks.
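
In the abstract, a flow meter groups packets into per-connection records and exports a record once the connection goes idle or ends. The sketch below shows only that generic aggregation step; it is not YAF's IPFIX pipeline, and the 5-tuple key, idle timeout, and assumption of time-ordered packets are simplifications.

    # Generic flow-metering sketch (not YAF): group packets by 5-tuple into
    # flow records and export a flow after it has been idle for a timeout.
    # Packets are assumed to arrive in timestamp order.
    from collections import namedtuple

    Packet = namedtuple("Packet", "ts src dst sport dport proto size")
    IDLE_TIMEOUT = 30.0   # seconds (arbitrary for the example)

    def meter(packets):
        flows, exported = {}, []
        for p in packets:
            # Export any flows that have gone idle before this packet.
            for k, f in list(flows.items()):
                if p.ts - f["last"] > IDLE_TIMEOUT:
                    exported.append(flows.pop(k))
            key = (p.src, p.dst, p.sport, p.dport, p.proto)
            f = flows.setdefault(key, {"key": key, "first": p.ts,
                                       "last": p.ts, "packets": 0, "bytes": 0})
            f["last"] = p.ts
            f["packets"] += 1
            f["bytes"] += p.size
        return exported + list(flows.values())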

Nfsight: NetFlow-based Network Awareness Tool
Network awareness is highly critical for network and security administrators. It enables informed planning and management of network resources, as well as detection and a comprehensive understanding of malicious activity. It requires a set of tools to efficiently collect, process, and represent network data. While many such tools already exist, there is no flexible and practical solution for visualizing network activity at various granularities and quickly gaining insights about the status of network assets. To address this issue, we developed Nfsight, a NetFlow processing and visualization application designed to offer a comprehensive network awareness solution. Nfsight constructs bidirectional flows out of the unidirectional NetFlow flows and leverages these bidirectional flows to provide client/server identification and intrusion detection capabilities. We present in this paper the internal architecture of Nfsight and the evaluation of its client/server identification and intrusion detection algorithms. We illustrate the contributions of Nfsight through several case studies conducted by security administrators on a large university network.
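
Constructing bidirectional flows amounts to pairing two unidirectional records whose endpoints mirror each other and whose time ranges overlap, then guessing which endpoint is the server (for example, by the lower or well-known port). The sketch below shows that pairing step in simplified form; Nfsight's published heuristics for client/server identification are more elaborate.

    # Simplified biflow pairing sketch (not Nfsight's actual heuristics).
    # Flow dicts are assumed to have src, dst, sport, dport, start, end.
    def pair_biflows(uniflows):
        index, biflows = {}, []
        for f in uniflows:
            reverse_key = (f["dst"], f["src"], f["dport"], f["sport"])
            mate = index.get(reverse_key)
            if mate and f["start"] <= mate["end"] and mate["start"] <= f["end"]:
                del index[reverse_key]
                # Guess the server as the endpoint with the lower source port.
                server = f["src"] if f["sport"] < mate["sport"] else f["dst"]
                biflows.append({"server_guess": server,
                                "forward": mate, "reverse": f})
            else:
                index[(f["src"], f["dst"], f["sport"], f["dport"])] = f
        return biflows, list(index.values())   # paired biflows, unmatched flows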

11:00 a.m.–12:30 p.m.

Implementing IPv6 at ARIN
ARIN began implementing IPv6 in 2003 and finished enabling most systems and services in 2008. Today all new networks and services are designed with IPv6 in mind. What follows is a time line of events and an experience report on how we accomplished our IPv6 deployment.

Internet on the Edge
We sometimes take broadband Internet access for granted, because it's all around us: at work, in our homes, at restaurants and coffee shops. But what if you need to provide broadband access out to the middle of a field, through a tree line and hundreds of meters away from any buildings or utilities? What if you only have a few weeks to develop a solution and put it into production? How can we create a solution that will support a moving target? This paper describes how we provided Internet and Voice over Internet Protocol (VoIP) service to a group of researchers who were testing autonomous robotic vehicles on the grounds of the National Institute of Standards and Technology (NIST) by using commercially available wireless products and a few scraps of wood.

Managing Vendor Relations: A Case Study of Two HPC Network Issues
High performance computing requires fast networks to move large amounts of data between compute nodes and disk storage. For a variety of reasons, however, the fast speeds achieved by bleeding edge network technology tend to come along with a higher risk of system failure. When designing and building high end computing systems using the fastest available network equipment, a good relationship with the network vendor becomes absolutely crucial for success. Through 2008 and 2009, the Argonne Leadership Computing Facility (ALCF) faced two serious challenges with the high-speed Myricom data network at the center of its production supercomputers. By studying these challenges and how we worked with Myricom to resolve them, we establish a five-step method for vendor-assisted problem resolution that system and network administrators can use to improve their own vendor relations and time to problem resolution. Furthermore, we include specific examples that can help Myricom network administrators study and debug their own networks.

2:00 p.m.–3:30 p.m.

Using Syslog Message Sequences for Predicting Disk Failures
Mitigating the impact of computer failure is possible if accurate failure predictions are provided. Resources and services can be scheduled around predicted failures to limit their impact. Such strategies are especially important for multi-computer systems, such as compute clusters, that experience a higher rate of failure due to their large number of components. However, providing accurate predictions with sufficient lead time remains a challenging problem. This research uses a new spectrum-kernel Support Vector Machine (SVM) approach to predict failure events based on system log files. These files contain messages that represent a change of system state. While a single message in the file may not be sufficient for predicting failure, a sequence or pattern of messages may be. This approach uses a sliding window (sub-sequence) of messages to predict the likelihood of failure. A frequency representation of the observed message sub-sequences is then used as input to the SVM, which associates each window with either a failed or a non-failed system. Experimental results using actual system log files from a Linux-based compute cluster indicate the proposed spectrum-kernel SVM approach can predict hard disk failure with an accuracy of 80% about one day in advance.
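
A minimal sketch of the feature construction described above, assuming messages have already been mapped to message-type IDs and windows have been labeled: counts of all length-k subsequences (the spectrum) become the feature vector for an SVM. The use of scikit-learn and a linear kernel is an assumption for this sketch, not the authors' implementation.

    # Spectrum (k-gram) features over message-type sequences, fed to an SVM.
    # windows: list of message-type ID sequences; labels: 1 = failure follows,
    # 0 = healthy. Library choice (scikit-learn) is an assumption.
    from collections import Counter
    from sklearn.svm import SVC

    def spectrum_features(type_sequence, k, vocabulary):
        grams = Counter(tuple(type_sequence[i:i + k])
                        for i in range(len(type_sequence) - k + 1))
        return [grams.get(g, 0) for g in vocabulary]

    def train(windows, labels, k=3):
        # Shared k-gram vocabulary across all windows.
        vocabulary = sorted({tuple(w[i:i + k])
                             for w in windows for i in range(len(w) - k + 1)})
        X = [spectrum_features(w, k, vocabulary) for w in windows]
        clf = SVC(kernel="linear")
        clf.fit(X, labels)
        return clf, vocabulary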

Log Analysis and Event Correlation Using Variable Temporal Event Correlator (VTEC)
System administrators have utilized log analysis for decades to monitor and automate their environments. As compute environments grow and the scope and volume of the logs increase, it becomes more difficult to get timely, useful data and appropriate triggers for enabling automation using traditional tools like Swatch. Cloud computing is intensifying this problem as the number of systems in datacenters increases dramatically. To address these problems at AMD, we developed a tool we call the Variable Temporal Event Correlator, or VTEC. VTEC has unique design features, such as an inherently multi-threaded/multi-process design, a flexible and extensible programming interface, built-in job queuing, and a novel method for storing and describing temporal information about events, that make it well suited to handling a broad range of event correlation tasks quickly and efficiently in real time. These features also enable VTEC to scale to tens of gigabytes of log data processed per day. This paper describes the architecture, use, and efficacy of this tool, which has been in production at AMD for more than four years.
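
For readers unfamiliar with temporal event correlation, the toy sketch below shows the general pattern: fire an action when enough events of one type arrive within a time window. It is not VTEC; VTEC's temporal storage model, job queuing, and programming interface go well beyond this.

    # Generic sliding-window event correlation (illustrative only, not VTEC).
    import time
    from collections import defaultdict, deque

    class Correlator:
        def __init__(self, threshold=5, window=60.0):
            self.threshold, self.window = threshold, window
            self.events = defaultdict(deque)       # event type -> timestamps

        def feed(self, event_type, ts=None,
                 action=lambda t: print("ALERT:", t)):
            ts = time.time() if ts is None else ts
            q = self.events[event_type]
            q.append(ts)
            while q and ts - q[0] > self.window:   # drop expired timestamps
                q.popleft()
            if len(q) >= self.threshold:
                action(event_type)
                q.clear()                          # reset after firing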

Chukwa: A System for Reliable Large-Scale Log Collection
Large Internet services companies like Google, Yahoo, and Facebook use the MapReduce programming model to process log data. MapReduce is designed to work on data stored in a distributed filesystem like Hadoop's HDFS. As a result, a number of log collection systems have been built to copy data into HDFS. These systems often lack a unified approach to failure handling, with errors being handled separately by each piece of the collection, transport and processing pipeline. We argue for a unified approach, instead. We present a system, called Chukwa, that embodies this approach. Chukwa uses an end-to-end delivery model that can leverage local on-disk log files for reliability. This approach also eases integration with legacy systems. This architecture offers a choice of delivery models, making subsets of the collected data available promptly for clients that require it, while reliably storing a copy in HDFS. We demonstrate that our system works correctly on a 200-node testbed and can collect in excess of 200 MB/sec of log data. We supplement these measurements with a set of case studies describing real-world operational experience at several sites.

4:00 p.m.–5:30 p.m.

How to Tame Your VMs: An Automated Control System for Virtualized Services
Modern datacenters contain a large number of virtualized applications and services with constantly changing demands for computing resources. Today's virtualization management tools allow administrators to monitor current resource utilization of virtual machines. However, it is quite challenging to manually translate user-oriented service level objectives (SLOs), such as response time or throughput, into suitable resource allocation levels. We present an adaptive control system that automates the task of tuning resource allocations to maintain service level objectives. Our system focuses on maintaining the expected response time for multi-tier web applications. The control system is capable of adjusting the resource allocation for each VM so that the applications' response time matches the SLOs. Our approach uses each individual tier's response time to model the end-to-end performance of the system. The system helps stabilize applications' response time. It can reduce the mean deviation of the response time from specified targets by up to 80%. Our system also allows the physical servers to double the number of VMs hosted while maintaining the target response time.

Empirical Virtual Machine Models for Performance Guarantees
Existing Virtual Machine (VM) management systems rely on host resource utilization metrics to allocate and schedule VMs. Many management systems only consolidate and migrate VMs based on hosts' CPU utilization. However, the performance of delay-sensitive workloads, such as web services and online transaction processing, can be severely degraded by contention on many of the hosts' components. Current VM management systems typically use threshold-based rules to decide when to migrate VMs, rather than using application-level performance. This means that they cannot easily provide application-level service level objective (SLO) guarantees. Providing SLO guarantees is even more difficult when considering that today's enterprise applications often consist of multiple VM tiers. In this paper we show how the performance of a multi-tiered VM application can be empirically captured, modeled, and scaled. This allows our management system to guarantee application-level performance, despite variable host utilization and VM workload levels. Additionally, it can predict the performance of an application at host utilization levels that have not been previously observed. This is achieved by performing regression analysis on the previously observed values and scaling the application's performance model. This allows the performance of a VM to be predicted before it is migrated to or from a new host. We have found that by dynamically, rather than statically, allocating resources, average response time can be improved by 30%. Additionally, we found that resource allocations can be reduced by 20% with no degradation in response time.
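
To illustrate the general regression idea (not the paper's actual models or data), the sketch below fits observed response times against host utilization and extrapolates to an unobserved utilization level; the measurements, the quadratic fit, and the 200 ms SLO target are all made up for the example.

    # Toy empirical performance model: regress response time on utilization,
    # then predict at an unobserved utilization level (e.g., on a candidate
    # migration target). All numbers below are invented example values.
    import numpy as np

    utilization = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])   # observed CPU load
    response_ms = np.array([48, 55, 66, 82, 110, 155])        # observed latency

    coeffs = np.polyfit(utilization, response_ms, deg=2)      # least-squares fit
    model = np.poly1d(coeffs)

    predicted = model(0.85)          # predict at an unobserved utilization
    meets_slo = predicted <= 200.0   # compare against a 200 ms SLO target
    print(round(float(predicted), 1), meets_slo)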

RC2—A Living Lab for Cloud Computing
In this paper we present our experience in building the Research Compute Cloud (RC2), a cloud computing platform for use by the worldwide IBM Research community. Within eleven months of its official release RC2 has reached a community of 631 users spanning 34 countries, and serves on average 350 active users and 1800 active VM instances per month. Besides offering a utility computing platform across a heterogeneous pool of servers, RC2 aims at providing a living lab for experimenting with new cloud technologies and accelerating their transfer to IBM products. This paper describes our experience in designing and implementing a flexible infrastructure to enable rapid integration of novel ideas while preserving the overall stability and consumability of the system.

Friday, November 12, 2010
9:00 a.m.–10:30 a.m.

PeerMon: A Peer-to-Peer Network Monitoring System
We present PeerMon, a peer-to-peer resource monitoring system for general-purpose Unix local area network (LAN) systems. PeerMon is designed to monitor system resources on a single LAN, but it could also be deployed on several LANs where some inter-LAN resource sharing is supported. Its peer-to-peer design makes PeerMon a scalable and fault-tolerant monitoring system for efficiently collecting system-wide resource usage information. Experiments evaluating PeerMon's performance show that it adds little additional overhead to the system and that it scales well to large LANs. PeerMon was initially designed to be used by system services that provide load balancing and job placement; however, it can easily be extended to provide monitoring data for other system-wide services. We present three tools (smarterSSH, autoMPIgen, and a dynamic DNS binding system) that use PeerMon data to pick "good" nodes for job or process placement in a LAN. Tools using PeerMon data for job placement can greatly improve the performance of applications running on general-purpose LANs. We present results showing application speed-ups of up to 4.6 using our tools.
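
The kind of decision a smarterSSH-style tool makes can be reduced to ranking peers by the resource data each PeerMon daemon already holds; the toy sketch below picks a node using a simple load/free-memory score. The scoring formula and field names are invented for the example and are not those of the actual tools.

    # Toy "good node" selection over PeerMon-style peer data (illustrative;
    # not smarterSSH or autoMPIgen). Lower score is better.
    def pick_node(peers):
        """peers: list of dicts like
           {"host": "node12", "load": 0.42, "free_mem_mb": 7900}"""
        def score(p):
            return p["load"] - p["free_mem_mb"] / 16000.0
        return min(peers, key=score)["host"]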

Keeping Track of 70,000+ Servers: The Akamai Query System
The Akamai platform is a network of over 73,000 servers supporting numerous web infrastructure services including the distribution of static and dynamic HTTP content, delivery of live and on-demand streaming media, high-availability storage, accelerated web applications, and intelligent routing. The maintenance of such a network requires significant monitoring infrastructure to enable detailed understanding of its state at all times. For that purpose, Akamai has developed and uses Query, a distributed monitoring system in which all Akamai machines participate. Query collects data at the edges of the Internet and aggregates it at several hundred places to be used to answer SQL queries about the state of the Akamai network. We explain the design of Query, outline some of its critical features, discuss who some of its users are and what Query allows them to do, and explain how Query scales to meet demand as the Akamai network grows.

Troubleshooting with Human-readable Automated Reasoning
In troubleshooting a complex system, hidden dependencies manifest in unexpected ways. We present a methodology for uncovering dependencies between behavior and configuration by exploiting what we call "weak transitive relationships" in the architecture of a system. The user specifies known architectural relationships between components, plus a set of inference rules for discovering new ones. A software system uses these to infer new relationships and suggest culprits that might cause a specific behavior. This serves both as a memory aid and as a way to quickly enumerate potential causes of symptoms. Architectural descriptions, including selected data from Configuration Management Databases (CMDBs), contain most of the information needed to perform this analysis. Thus the user can obtain valuable information from such a database with little effort.
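
In its simplest form, this style of inference is a transitive closure over known relationships: anything the misbehaving component directly or indirectly depends on is a candidate culprit. The sketch below shows only that simplest case; the paper's rule language is richer than plain transitivity.

    # Transitive-closure culprit suggestion (a minimal sketch of the idea,
    # not the paper's rule engine).
    def closure(depends_on):
        """depends_on: dict mapping component -> set of direct dependencies."""
        inferred = {c: set(d) for c, d in depends_on.items()}
        changed = True
        while changed:
            changed = False
            for c, deps in inferred.items():
                extra = set().union(*(inferred.get(d, set()) for d in deps)) - deps
                if extra:
                    deps |= extra
                    changed = True
        return inferred

    def candidate_culprits(symptom_component, depends_on):
        return sorted(closure(depends_on).get(symptom_component, set()))

    # Example: a web app that depends on a load balancer and a database,
    # which in turn depends on a SAN.
    arch = {"webapp": {"lb", "db"}, "db": {"san"}, "lb": set(), "san": set()}
    print(candidate_culprits("webapp", arch))   # ['db', 'lb', 'san']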

11:00 a.m.–12:30 p.m.

Configuration Management for Mac OS X: It's Just Unix, Right?
For some time we have worked to automate our host configurations across a number of operating systems. Configuration of Windows clients was fairly straightforward with the information (INF) file provided by NIST through the Federal Desktop Core Configurations (FDCC) program (https://fdcc.nist.gov). Linux and Solaris were also fairly simple because we used Cfengine community version to provide self-healing configurations and to support exceptions for our research community. What about our Mac OS X clients?

Anycast as a Load Balancing Feature
Our IT organization is made up of many sub-teams, each providing a service such as DNS, LDAP, HTTP proxy, and so on. Each one is deployed globally, using its own replication mechanisms. Our team provides load balancing and failover services in a way that other teams can use without having to manage the underlying technology. We recently added Anycast as a service we offer to other teams that need to be able to fail over between load balancers. While Anycast is complex and mysterious to many systems administrators, our architecture provides the service in a way that the other teams do not need to worry about the details. They simply provide the service behind the load balancers they currently use, with an additional virtual IP address. This paper describes how Anycast works, its benefits, and the architecture we used to provide Anycast failover as a service.

iSCSI SANs Don't Have to Suck
We have created an iSCSI SAN architecture which permits maintenance of network components without any downtime, thus improving our ability to maintain the SAN beyond iSCSI's mediocre reputation. We currently use this system to provide web-based services for all of Answers.com. During development of this architecture we learned three important lessons: (1) packet loss/congestion is absolutely fatal; (2) network design must be smart about Spanning Tree events, and a separate network, not separate VLANs, is required; and (3) forethought in design creates a system that is highly maintainable, permitting an "isolate and swap" methodology for upgrades and maintenance.
