
Workshop Program

To access a presentation's content, please click on its title below.

All sessions will be held in Regency B unless otherwise noted.

The full papers published by USENIX for CSET '13 are available for download, as an archive or individually below, to workshop registrants immediately and to everyone beginning August 12, 2013. Everyone can view the abstracts immediately. Copyright to the individual works is retained by the author(s).

Download Paper Archives

Attendee Files 

(Registered attendees: Sign in to your USENIX account to download this file.)

CSET '13 Papers ZIP

 

8:30 a.m.–9:00 a.m. Monday

Continental Breakfast

Hall of Battles

9:00 a.m.–10:30 a.m. Monday

Malware Testing

BugBox: A Vulnerability Corpus for PHP Web Applications

Gary Nilson, Kent Wills, Jeffrey Stuckman, and James Purtilo, University of Maryland, College Park

Web applications are a rich source of vulnerabilities due to their high exposure, diversity, and popularity. Accordingly, web application vulnerabilities are useful subjects for empirical security research. Although some information on vulnerabilities is publicly available, there are no publicly available datasets that couple vulnerabilities with their source code, metadata, and exploits through an executable test environment. We describe BugBox, a corpus and exploit simulation environment for PHP web application vulnerabilities. BugBox provides a test environment and a packaging mechanism that allows for the distribution and sharing of vulnerability data. The goal is to facilitate empirical vulnerability studies, security tool evaluation, and security metrics research. In addition, the framework promotes developer education by demonstrating exploits and providing a sandbox where they can be run safely. BugBox and its modules are open source and available online, and new modules may be contributed by community members.

Available Media

MINESTRONE: Testing the SOUP

Azzedine Benameur, Nathan S. Evans, and Matthew C. Elder, Symantec Research Labs

Software development using type-unsafe languages (e.g., C and C++) is a challenging task for several reasons, security being one of the most important. Ensuring that a piece of code is bug- or vulnerability-free is one of the most critical aspects of software engineering. While most software development life cycle processes address security early, in the requirements analysis phase, and refine it during testing, this is not always sufficient. Commercial security tools have therefore been widely adopted by the software industry to help identify vulnerabilities, but they often have high false-positive rates and limited effectiveness. In this paper we present MINESTRONE, a novel architecture that integrates static analysis, dynamic confinement, and code diversification to identify, mitigate, and contain a broad class of software vulnerabilities in Software Of Uncertain Provenance (SOUP). MINESTRONE has been tested against an extensive test suite and showed promising results: an improvement of 34.6% over the state of the art for memory corruption bugs that are commonly exploited.

Available Media

MalwareLab: Experimentation with Cybercrime Attack Tools

Luca Allodi, Vadim Kotov, and Fabio Massacci, University of Trento

Cybercrime attack tools (i.e., exploit kits) are reportedly responsible for the majority of attacks affecting home users. Exploit kits are traded on the black markets at different prices, advertising different capabilities and functionalities. In this paper we present our experimental approach to testing 10 exploit kits leaked from the markets, which we deployed in an isolated environment, our MalwareLab. The purpose of this experiment is to test how resilient these tools are to changing software configurations over time. We present our experiment design and implementation; discuss challenges, lessons learned, and open problems; and present a preliminary analysis of the results.

Available Media
10:30 a.m.–11:00 a.m. Monday

Break with Refreshments

Hall of Battles

11:00 a.m.–12:30 p.m. Monday

Panel Discussion

Moderator: Chris Kanich, University of Illinois at Chicago

Conducting Research Using Data of Questionable Provenance

Panelists: Michael Bailey, University of Michigan; Lujo Bauer, Carnegie Mellon University; L. Jean Camp, Indiana University; Sven Dietrich, Stevens Institute of Technology; Damon McCoy, George Mason University

Empirical research often requires access to good datasets. Funding agencies have pushed for increased data sharing among investigators and have established several publicly accessible repositories and data warehouses. However, the speed of progress in computing and the sensitive nature of data, especially when it pertains to security, make access to useful datasets a challenge. At the same time, a number of datasets of questionable provenance have recently become available to researchers. As a prominent example, the author(s) of the anonymously published Internet Census 2012 project compromised a large number of open embedded devices to perform a distributed port scan of the Internet. The collected data are available for download and are arguably of great use to network researchers. This panel explores the ethics and best practices involved in using datasets whose origin is either unknown or morally ambiguous. In particular, the panel will debate the merits of using (or not using) the data, potential methods of verifying the integrity of anonymously published data, and safeguards for preventing misuse. Audience participation is highly encouraged.

Available Media
12:30 p.m.–2:00 p.m. Monday

Workshop Luncheon

Hall of Battles

2:00 p.m.–3:30 p.m. Monday

Security Education, Policy, and Data

Valuing Security by Getting [d0x3d!]: Experiences with a Network Security Board Game

Mark Gondree, Naval Postgraduate School; Zachary N.J. Peterson, California Polytechnic State University, San Luis Obispo

We motivate using non-digital games to teach computer security concepts and describe the inspirations driving the design of our board game, [d0x3d!]. We describe our experiences in designing game mechanics that teach security principles and our observations in developing an open-source game product. We survey our experiences with playing the game with students and our plans for supporting the game in and out of the classroom.

Available Media

Internet Measurements and Public Policy: Mind the Gap

Hadi Asghari and Michel J.G. van Eeten, Delft University of Technology; Milton L. Mueller, Syracuse University

Large and impressive data collection efforts often fail to make their data useful for answering policy questions. In this paper, we argue that this is due to a systematic gap between the ways measurement engineers think about their data, and how other disciplines typically make use of data. We recap our own efforts to use the data generated by a number of such projects to address questions of Internet and telecommunication policy, and based on our experience, propose five points for engineers to consider when building measurement systems to reduce the gap. Ignoring the gap means that fewer researchers use the data and significantly lowers a project's impact on policy debates and outcomes.

Available Media

Bridging the Data Gap: Data Related Challenges in Evaluating Large Scale Collaborative Security Systems

John Sonchack, University of Pennsylvania; Adam J. Aviv, Swarthmore College; Jonathan M. Smith, University of Pennsylvania

Data-sharing approaches such as collaborative security have been successfully applied to systems addressing multiple classes of cyber security threats. In spite of these results, scale presents a major challenge to further advances: collaborative security systems are designed to operate at a large scale (Internet- or ISP-scale), and obtaining and sharing traces suitable for experimentation is difficult. We illustrate these challenges via an analysis of recently proposed collaborative systems. We argue for the development of simulation techniques designed specifically to address these challenges and sketch one such technique, parameterized trace scaling, which expands small traces to generate realistic large scale traces sufficient for analyzing collaborative security systems.
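The abstract only names parameterized trace scaling without giving its details. As a toy illustration of the general idea (our own naive interpretation, not the authors' technique), a small flow trace can be expanded by replicating it into disjoint replica host spaces with jittered volumes:

```python
import random

def scale_trace(flows, factor, seed=0):
    """Toy trace expansion: replicate each (src, dst, bytes) flow `factor`
    times, remapping hosts into disjoint per-replica address spaces and
    jittering volumes so replicas are not exact copies."""
    rng = random.Random(seed)
    hosts = sorted({h for src, dst, _ in flows for h in (src, dst)})
    n = len(hosts)
    index = {h: i for i, h in enumerate(hosts)}
    scaled = []
    for k in range(factor):
        for src, dst, size in flows:
            jitter = rng.uniform(0.8, 1.2)  # +/-20% volume perturbation
            scaled.append((k * n + index[src], k * n + index[dst],
                           int(size * jitter)))
    return scaled
```

A realistic scaler would also have to preserve cross-host correlations and attack structure, which is precisely the challenge the paper highlights.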

Available Media
3:30 p.m.–4:00 p.m. Monday

Break with Refreshments

Hall of Battles

4:00 p.m.–5:30 p.m. Monday

Real-World Evaluation and Testing

OCTANE (Open Car Testbed and Network Experiments): Bringing Cyber-Physical Security Research to Researchers and Students

Christopher E. Everett and Damon McCoy, George Mason University

Security research and teaching using cyber-physical systems (e.g., automotive networks) is challenging because of the need to replicate the interactions between the hardware components and the control software of those systems. These interactions are hard to replicate because real-world environments produce dynamic inputs that drive varied interactions between the hardware components and the control software on the network. Automotive networks are particularly challenging for security research and teaching: although their protocols are standardized (e.g., CAN, LIN), each manufacturer's implementation details are not standardized and are generally not publicly available.

In this paper we present Open Car Testbed And Network Experiments (OCTANE), which reduces the barrier of entry into the security research and teaching of automotive networks by providing a software package and a hardware framework for the reverse engineering and testing of automotive networks. OCTANE provides a platform for security research and teaching by replicating the interactions between the hardware components and control software of the systems so that the user can focus on the security aspects of the automotive network instead of the tool configuration and setup.
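As a small illustration of the low-level frame handling a testbed like this involves (a hedged sketch in Python, not OCTANE's own code; the helper name and result format are invented here), the following parses a raw frame in the layout used by Linux SocketCAN's `struct can_frame`: a 32-bit identifier carrying flag bits, a one-byte data length code, three padding bytes, then eight data bytes:

```python
import struct

CAN_EFF_FLAG = 0x80000000  # extended (29-bit) frame format flag
CAN_ID_MASK = 0x1FFFFFFF   # mask leaving only the identifier bits

def parse_can_frame(raw):
    """Parse 16 raw bytes in SocketCAN's struct can_frame layout."""
    can_id, dlc = struct.unpack_from("<IB3x", raw)
    data = raw[8:8 + dlc]  # payload starts after the 8-byte header
    return {
        "id": can_id & CAN_ID_MASK,
        "extended": bool(can_id & CAN_EFF_FLAG),
        "data": data,
    }

# Build a standard-format frame with ID 0x123 and a 3-byte payload.
frame = struct.pack("<IB3x8s", 0x123, 3,
                    bytes([0x11, 0x22, 0x33, 0, 0, 0, 0, 0]))
parsed = parse_can_frame(frame)
# parsed["id"] == 0x123, parsed["data"] == b'\x11\x22\x33'
```

Reverse engineering a vehicle network then amounts to capturing many such frames and correlating identifiers and payload bytes with physical events.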

Available Media

Generation of SSH Network Traffic Data for IDS Testbeds

Hristo Djidjev, Los Alamos National Laboratory; Lyudmil Aleksandrov, Institute of Information and Communication Technologies, Bulgaria

We develop an algorithm for generating secure shell (ssh) network traffic that can find use as a part of a testbed for evaluating anomaly detection and intrusion detection systems in cyber security. Given an initial dataset describing real network traffic, the generator produces synthetic traffic with characteristics close to the original. The objective is to match parameters of the original traffic such as traffic volumes, session durations, diurnal patterns, and relationships between hosts in terms of communicating pairs and subsets.
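The abstract describes matching distributional parameters of the original traffic. As a simplified sketch of that idea (not the authors' algorithm; function names and the lognormal choice are assumptions for illustration), one can fit a lognormal to observed session durations and draw synthetic sessions under an hourly, diurnal arrival rate:

```python
import math
import random

def fit_lognormal(durations):
    """Estimate lognormal parameters (mu, sigma) from observed durations."""
    logs = [math.log(d) for d in durations]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return mu, math.sqrt(var)

def generate_sessions(durations, hourly_rates, hours=24, seed=1):
    """Generate synthetic (start_time, duration) pairs: durations follow the
    fitted distribution, arrivals follow the diurnal rate profile."""
    rng = random.Random(seed)
    mu, sigma = fit_lognormal(durations)
    sessions = []
    for hour in range(hours):
        rate = hourly_rates[hour % 24]  # expected session count this hour
        t = 0.0
        while True:
            # Poisson arrivals: exponential inter-arrival gaps within the hour.
            t += rng.expovariate(rate) if rate > 0 else float("inf")
            if t >= 1.0:
                break
            sessions.append((hour + t, rng.lognormvariate(mu, sigma)))
    return sessions
```

The paper's generator additionally matches traffic volumes and the structure of communicating host pairs, which a per-session model like this does not capture.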

Available Media

RTRlib: An Open-Source Library in C for RPKI-based Prefix Origin Validation

Matthias Wählisch, Freie Universität Berlin; Fabian Holler and Thomas C. Schmidt, Hamburg University of Applied Sciences; Jochen H. Schiller, Freie Universität Berlin

A major step towards secure Internet backbone routing started with the deployment of the Resource Public Key Infrastructure (RPKI). It allows for a cryptographically strong binding between an IP prefix and the autonomous systems that are legitimately allowed to originate it. A fundamental design choice of RPKI-based prefix origin validation is the avoidance of cryptographic load at BGP routers. Cryptographic verifications are performed only by cache servers, which deliver valid AS/prefix mappings to the RPKI-enabled BGP router using the RPKI/RTR protocol.

In this paper, we give first insights into the additional system load introduced by RPKI at BGP routers. For this purpose, we design and implement a highly efficient C library implementing the router part of the RPKI/RTR protocol and the prefix origin validation scheme. It fetches and stores validated prefix origin data from an RTR cache and performs origin verification of prefixes as obtained from BGP updates. We measure a relatively small overhead of origin validation on commodity hardware (5% more RAM than required for full BGP table support, 0.41% load for ≈92,000 prefix updates per minute), which meets today's real-world requirements.
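RTRlib itself is written in C; to illustrate the origin-validation logic such a library applies to each BGP update, here is a hedged Python sketch of RFC 6811-style classification (the function name and the ROA tuple format are invented for this example):

```python
import ipaddress

def validate_origin(prefix, origin_as, roas):
    """Classify an announced prefix/origin against validated ROA data.

    roas: list of (roa_prefix, max_length, asn) tuples.
    Returns 'valid', 'invalid', or 'not_found' (RFC 6811 semantics).
    """
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # A ROA covers the announcement when the announced prefix is equal
        # to or more specific than the ROA prefix.
        if announced.version == roa_net.version and announced.subnet_of(roa_net):
            covered = True
            if announced.prefixlen <= max_length and origin_as == asn:
                return "valid"
    return "invalid" if covered else "not_found"

roas = [("10.0.0.0/16", 20, 64512)]
print(validate_origin("10.0.0.0/18", 64512, roas))   # valid
print(validate_origin("10.0.0.0/18", 64513, roas))   # invalid: wrong origin AS
print(validate_origin("192.0.2.0/24", 64512, roas))  # not_found: no covering ROA
```

The library's measured efficiency comes from doing exactly this lookup against locally cached, pre-validated data, so the router itself performs no cryptography.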

Available Media

© USENIX
