Poster Session

Note: Poster Presenters should consult the Instructions for Authors of Posters section of the LISA 14 Participants page for general information and to download consent forms.

6:30 pm–7:30 pm Wednesday

Poster Session

Pedagogical Framework and Network Laboratory Management Tool for Practical Network Administration Education

Zahid Nawaz, University of Oslo


An Investigation of Cultural and Organizational Impacts on EKR Use and EKR Use Outcomes Among System Administrators at NASA

Nicole Forsgren, Utah State University; E. Branson Matheson III, Blackphone


Linux NFSv4.1 Performance Under a Microscope

Ming Chen, Stony Brook University; Dean Hildebrand, IBM Research—Almaden; Geoff Kuenning, Harvey Mudd College; Soujanya Shankaranarayana, Stony Brook University; Vasily Tarasov, Stony Brook University and IBM Research—Almaden; Arun O. Vasudevan and Erez Zadok, Stony Brook University; Ksenia Zakirova, Harvey Mudd College


Demand-Provisioned Linux Containers for Private Network Access

Patrick T. Cable II, MIT Lincoln Laboratory


Virtual Interfaces for Exploration of Heterogeneous & Cloud Computing Architectures

Hal Martin and Wayne G. Lutters, University of Maryland


Time-Synchronized Visualization of Arbitrary Data Streams for Real-Time Monitoring and Historical Analysis

Paul Z. Kolano, NASA Ames Research Center


Formalising Configuration Languages: Why is this Important in Practice?

Paul Anderson and Herry Herry, University of Edinburgh


HotRestore: A Fast Restore System for Virtual Machine Cluster

2:00 pm–2:15 pm
Refereed Paper

Lei Cui, Jianxin Li, Tianyu Wo, Bo Li, Renyu Yang, Yingjie Cao, and Jinpeng Huai, Beihang University

A common way for a virtual machine cluster (VMC) to tolerate failures is to create a distributed snapshot and then restore from the snapshot upon failure. However, restoring the whole VMC suffers from long restore latency due to large snapshot files. Moreover, differing latencies lead to discrepancies in start times among the virtual machines. A virtual machine (VM) that has already started thus cannot communicate with one that is still restoring, leading to the TCP backoff problem.

In this paper, we present a novel restore approach called HotRestore, which restores the VMC rapidly without compromising performance. First, HotRestore restores a single VM through an elastic working set, which prefetches working-set pages within a scalable window, thereby reducing restore latency. Second, HotRestore constructs a communication-induced restore dependency graph and then schedules the restore line to mitigate the TCP backoff problem. Finally, a restore protocol is proposed to minimize the backoff duration. A prototype has been implemented on QEMU/KVM, and the experimental results demonstrate that HotRestore can restore a VMC within a few seconds whilst reducing the TCP backoff duration to merely dozens of milliseconds.
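
The elastic working set can be pictured with a short sketch. The following is a hedged illustration in Python, not HotRestore's code: the names (restore_working_set, the window bounds) and the grow/shrink rule are assumptions made for clarity.

    # Hypothetical sketch: prefetch the snapshot's "hot" pages in a window
    # that widens while the guest's observed faults hit prefetched pages
    # and narrows when prefetching turns out to be cold.
    def restore_working_set(pages, hot_page_ids, faulted_ids,
                            min_win=16, max_win=1024):
        """pages: page_id -> bytes; hot_page_ids: hottest-first ids;
        faulted_ids: page ids the guest has touched so far."""
        restored = {}
        window = min_win
        remaining = list(hot_page_ids)
        while remaining:
            batch, remaining = remaining[:window], remaining[window:]
            for pid in batch:
                restored[pid] = pages[pid]  # prefetch this page
            # Elastic step: scale the prefetch window by observed hits.
            hits = sum(1 for pid in faulted_ids if pid in restored)
            window = min(max_win, window * 2) if hits else max(min_win, window // 2)
        return restored

    snapshot_pages = {i: bytes(4096) for i in range(64)}
    print(len(restore_working_set(snapshot_pages, range(64), faulted_ids=[3, 5])))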


Compiling Abstract Specifications into Concrete Systems—Bringing Order to the Cloud

2:15 pm–2:30 pm
Refereed Paper

Ian Unruh, Alexandru G. Bardas, Rui Zhuang, Xinming Ou, and Scott A. DeLoach, Kansas State University

Currently, there are important limitations in the abstractions that support creating and managing services in a cloud-based IT system. As a result, cloud users must choose between managing the low-level details of their cloud services directly (as in IaaS), which is time-consuming and error-prone, and turning over significant parts of this management to their cloud provider (in SaaS or PaaS), which is less flexible and more difficult to tailor to user needs. To alleviate this situation we propose a high-level abstraction called the requirement model for defining cloud-based IT systems. It captures important aspects of a system’s structure, such as service dependencies, without introducing low-level details such as operating systems or application configurations. The requirement model separates the cloud customer’s concern of what the system does, from the system engineer’s concern of how to implement it. In addition, we present a “compilation” process that automatically translates a requirement model into a concrete system based on pre-defined and reusable knowledge units. When combined, the requirement model and the compilation process enable repeatable deployment of cloud-based systems, more reliable system management, and the ability to implement the same requirement in different ways and on multiple cloud platforms. We demonstrate the practicality of this approach in the ANCOR (Automated eNterprise network COmpileR) framework, which generates concrete, cloud-based systems based on a specific requirement model. Our current implementation targets OpenStack and uses Puppet to configure the cloud instances, although the framework will also support other cloud platforms and configuration management solutions.
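
As a rough illustration of how a requirement model separates the "what" from the "how", the sketch below declares services and dependencies abstractly and compiles them into a provisioning order. The syntax and names are invented for illustration and are not ANCOR's actual input format.

    # Hypothetical requirement model: each service names its dependencies
    # and an instance count, but no OS images or application configs.
    requirement_model = {
        "web": {"depends_on": ["app"], "instances": 2},
        "app": {"depends_on": ["db"], "instances": 3},
        "db":  {"depends_on": [], "instances": 1},
    }

    def compile_plan(model):
        """Return services in dependency-first order, each paired with
        the number of concrete cloud instances to provision."""
        plan, done = [], set()
        def visit(name):
            if name in done:
                return
            for dep in model[name]["depends_on"]:
                visit(dep)
            done.add(name)
            plan.append((name, model[name]["instances"]))
        for name in model:
            visit(name)
        return plan

    print(compile_plan(requirement_model))  # [('db', 1), ('app', 3), ('web', 2)]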


An Administrator’s Guide to Internet Password Research

2:30 pm–2:45 pm
Refereed Paper

Dinei Florêncio and Cormac Herley, Microsoft Research; Paul C. van Oorschot, Carleton University

The research literature on passwords is rich but little of it directly aids those charged with securing web-facing services or setting policies. With a view to improving this situation we examine questions of implementation choices, policy and administration using a combination of literature survey and first-principles reasoning to identify what works, what does not work, and what remains unknown. Some of our results are surprising. We find that offline attacks, the justification for great demands of user effort, occur in much more limited circumstances than is generally believed (and in only a minority of recently-reported breaches). We find that an enormous gap exists between the effort needed to withstand online and offline attacks, with probable safety occurring when a password can survive 10⁶ and 10¹⁴ guesses respectively. In this gap, eight orders of magnitude wide, there is little return on user effort: exceeding the online threshold but falling short of the offline one represents wasted effort. We find that guessing resistance above the online threshold is also wasted at sites that store passwords in plaintext or reversibly encrypted: there is no attack scenario where the extra effort protects the account.
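
The size of that gap is easy to make concrete; the thresholds below are the ones quoted in the abstract.

    import math

    online_threshold = 10**6    # guesses a password should survive online
    offline_threshold = 10**14  # guesses it should survive offline

    print(round(math.log2(online_threshold), 1))   # ~19.9 bits of resistance
    print(round(math.log2(offline_threshold), 1))  # ~46.5 bits
    print(offline_threshold // online_threshold)   # 10**8, the eight-order gap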


Analyzing Log Analysis: An Empirical Study of User Log Mining

2:45 pm–3:00 pm
Refereed Paper

S. Alspaugh, University of California, Berkeley and Splunk Inc.; Beidi Chen and Jessica Lin, University of California, Berkeley; Archana Ganapathi, Splunk Inc.; Marti A. Hearst and Randy Katz, University of California, Berkeley

Awarded Best Student Paper! 

We present an in-depth study of over 200K log analysis queries from Splunk, a platform for data analytics. Using these queries, we quantitatively describe log analysis behavior to inform the design of analysis tools. This study includes state machine based descriptions of typical log analysis pipelines, cluster analysis of the most common transformation types, and survey data about Splunk user roles, use cases, and skill sets. We find that log analysis primarily involves filtering, reformatting, and summarizing data and that non-technical users increasingly need data from logs to drive their decision making. We conclude with a number of suggestions for future research.
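
A toy pipeline makes the three dominant transformation types concrete. This is illustrative Python with made-up log lines, not Splunk's query language.

    from collections import Counter

    logs = [
        "2014-11-12 10:01 ERROR disk full on host1",
        "2014-11-12 10:02 INFO  job done on host2",
        "2014-11-12 10:05 ERROR disk full on host3",
    ]

    errors = [line for line in logs if "ERROR" in line]  # filter
    hosts = [line.split()[-1] for line in errors]        # reformat: extract host
    print(Counter(hosts))                                # summarize: count per host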


Realtime High-Speed Network Traffic Monitoring Using ntopng

3:00 pm–3:15 pm
Refereed Paper

Luca Deri, IIT/CNR and ntop; Maurizio Martinelli, IIT/CNR; Alfredo Cardigliano, ntop

Awarded Best Paper!

Luca Deri is the leader of the ntop project, which is aimed at developing an open-source monitoring platform. He previously worked for University College London and IBM Research prior to receiving his PhD at the University of Berne. When not working at ntop, he shares his time between the .it Internet Domain Registry (nic.it) and the University of Pisa, where he is a lecturer in the CS department.

Monitoring network traffic has become increasingly challenging in terms of the number of hosts, protocol proliferation, and probe placement topologies. Virtualised environments and cloud services have shifted the focus from dedicated hardware monitoring devices to virtual-machine-based, software traffic monitoring applications. This paper covers the design and implementation of ntopng, an open-source traffic monitoring application designed for high-speed networks. ntopng's key features are real-time analytics for large networks and the ability to characterise application protocols and user traffic behaviour. ntopng has been extensively validated in various monitoring environments ranging from small networks to .it ccTLD traffic analysis.
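
As a minimal, hypothetical sketch of the kind of per-host real-time accounting such a monitor maintains (packet capture is stubbed out, and nothing here is ntopng's actual API):

    from collections import defaultdict

    host_bytes = defaultdict(int)  # running per-host traffic counters

    def account(src_ip, dst_ip, length):
        """Credit a captured packet's bytes to both endpoints."""
        host_bytes[src_ip] += length
        host_bytes[dst_ip] += length

    for pkt in [("10.0.0.1", "10.0.0.2", 1500), ("10.0.0.1", "10.0.0.3", 600)]:
        account(*pkt)
    print(dict(host_bytes))  # {'10.0.0.1': 2100, '10.0.0.2': 1500, '10.0.0.3': 600}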


Towards Detecting Target Link Flooding Attack

3:15 pm–3:30 pm
Refereed Paper

Lei Xue, The Hong Kong Polytechnic University; Xiapu Luo, The Hong Kong Polytechnic University Shenzen Research Institute; Edmond W. W. Chan and Xian Zhan, The Hong Kong Polytechnic University

A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected, because they employ legitimate flows to congest selected links. Although new mechanisms for defending against LFA have been proposed, deployment issues limit their use since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both end-to-end and hop-by-hop network measurement techniques to capture the abnormal path performance degradation caused by LFA, and then correlates the performance data with traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing performance on asymmetric Internet paths, and detecting LFA. We have implemented LinkScope in 7,174 lines of C code, and extensive evaluation in a testbed and on the Internet shows that LinkScope can quickly detect LFA with high accuracy and a low false positive rate.
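
The end-to-end detection step can be sketched as a baseline-deviation test. This is a deliberately simplified illustration (the names and the k-sigma rule are assumptions), not LinkScope's measurement code.

    import statistics

    def path_degraded(baseline_rtts_ms, recent_rtts_ms, k=3.0):
        """Flag a path whose recent RTTs exceed the baseline mean by k
        standard deviations; flagged paths would then be correlated
        with traceroute data to infer the congested links."""
        mean = statistics.mean(baseline_rtts_ms)
        sd = statistics.stdev(baseline_rtts_ms)
        return statistics.mean(recent_rtts_ms) > mean + k * sd

    print(path_degraded([20, 22, 21, 19, 20], [55, 60, 58]))  # True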


Automatic and Dynamic Configuration of Data Compression for Web Servers

2:00 pm–2:15 pm
Refereed Paper

Eyal Zohar, Yahoo! Labs; Yuval Cassuto, Technion—Israel Institute of Technology

HTTP compression is an essential tool for web speedup and network cost reduction. Not surprisingly, it is used by over 95% of top websites, saving about 75% of webpage traffic.

The compression formats and tools in use today were designed over 15 years ago, with static content in mind. Although the web has evolved significantly since then and become highly dynamic, compression solutions have not evolved accordingly. In today's most popular web servers, compression effort is set by a global, static compression-level parameter. This parameter says little about the actual impact of compression on the resulting performance, and it does not take into account important dynamic factors at the server. As a result, web operators often have to blindly choose a compression level and hope for the best.

In this paper we present a novel elastic compression framework that automatically sets the compression level to reach a desired working point considering the instantaneous load on the web server and the content properties. We deploy a fully-working implementation of dynamic compression in a web server, and demonstrate its benefits with experiments showing improved performance and service capacity in a variety of scenarios. Additional insights on web compression are provided by a study of the top 500 websites with respect to their compression properties and current practices.
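
As a minimal sketch of the elastic idea (not the paper's framework), one can tie zlib's standard 1-9 effort knob to the server's instantaneous CPU load; the thresholds below are invented, and the real system also weighs content properties.

    import zlib

    def pick_level(cpu_load):  # cpu_load in [0.0, 1.0]
        if cpu_load > 0.9:
            return 1           # nearly saturated: cheapest effort
        if cpu_load > 0.6:
            return 5           # moderate load: middle ground
        return 9               # idle CPU: spend effort, save bytes

    body = b"<html>" + b"hello world " * 1000 + b"</html>"
    for load in (0.95, 0.7, 0.2):
        print(load, pick_level(load), len(zlib.compress(body, pick_level(load))))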


The Truth About MapReduce Performance on SSDs

2:15 pm–2:30 pm
Refereed Paper

Karthik Kambatla, Cloudera Inc. and Purdue University; Yanpei Chen, Cloudera Inc.

Yanpei Chen is a member of the Performance Engineering Team at Cloudera, where he works on internal and competitive performance measurement and optimization. His work touches upon multiple interconnected computation frameworks, including Cloudera Search, Cloudera Impala, Apache Hadoop, Apache HBase, and Apache Hive. He is the lead author of the Statistical Workload Injector for MapReduce (SWIM), an open-source tool that allows users to synthesize and replay MapReduce production workloads. SWIM has become a standard MapReduce performance measurement tool used to certify many Cloudera partners. He received his doctorate at the UC Berkeley AMP Lab, where he worked on performance-driven, large-scale system design and evaluation.

Solid-state drives (SSDs) are increasingly being considered as a viable alternative to rotational hard-disk drives (HDDs). In this paper, we investigate if SSDs improve the performance of MapReduce workloads and evaluate the economics of using PCIe SSDs either in place of or in addition to HDDs. Our contributions are (1) a method of benchmarking MapReduce performance on SSDs and HDDs under constant-bandwidth constraints, (2) identifying cost-per-performance as a more pertinent metric than cost-per-capacity when evaluating SSDs versus HDDs for performance, and (3) quantifying that SSDs can achieve up to 70% higher performance for 2.5x higher cost-per-performance.
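
The headline numbers combine as simple arithmetic: 1.7x the performance at 2.5x the cost-per-performance implies roughly 4.25x the absolute cost (derived from the abstract, for illustration).

    hdd_cost, hdd_perf = 1.0, 1.0        # normalized HDD baseline
    ssd_perf = 1.7                       # "up to 70% higher performance"
    ssd_cost_per_perf = 2.5 * (hdd_cost / hdd_perf)
    print(ssd_cost_per_perf * ssd_perf)  # 4.25x the HDD cost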


ParaSwift: File I/O Trace Modeling for the Future

2:30 pm–2:45 pm
Refereed Paper

Rukma Talwadker and Kaladhar Voruganti, NetApp Inc.

Historically, traces have been used by system designers for designing and testing their systems. However, traces are becoming very large and difficult to store and manage, so the area of creating models based on traces is gaining traction. Prior art in trace modeling has primarily dealt with block traces and with file/NAS traces collected from virtualized clients, which are essentially block I/Os to the storage server. No prior art exists for modeling file traces, which is difficult because of the presence of metadata operations and the stateful semantics of NFS operations.

In this paper we present an algorithm and a unified framework that models and replays NFS as well as SAN workloads. Typically, trace modeling is a resource-intensive process in which multiple passes are made over the entire trace. In addition to modeling the intricacies of the NFS protocol, we provide an algorithm that is efficient in its resource consumption by using a Bloom-filter-based sampling technique. We have verified our trace modeling algorithm on real customer traces and show that our modeling error is quite low.
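
A rough sketch of one-pass, Bloom-filter-based sampling follows; the filter parameters and keying are invented for illustration, and the paper's algorithm is considerably more involved.

    import hashlib

    class BloomFilter:
        """Fixed-size bit array; membership tests may rarely give false
        positives but never false negatives."""
        def __init__(self, m_bits=8192, k=3):
            self.bits, self.m, self.k = bytearray(m_bits // 8), m_bits, k
        def _hashes(self, key):
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(digest[:4], "big") % self.m
        def add(self, key):
            for h in self._hashes(key):
                self.bits[h // 8] |= 1 << (h % 8)
        def __contains__(self, key):
            return all(self.bits[h // 8] & (1 << (h % 8)) for h in self._hashes(key))

    seen, sample = BloomFilter(), []
    for op in ["GETATTR:/a", "READ:/a", "GETATTR:/a", "WRITE:/b"]:
        if op not in seen:   # probably unseen: sample it in this one pass
            seen.add(op)
            sample.append(op)
    print(sample)            # duplicates dropped without a second pass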


