USITS '03, 4th USENIX Symposium on Internet Technologies and Systems

Work-in-Progress Reports

Session Chair: Peter Honeyman, CITI, University of Michigan

Session Agenda

A Method to Speed Up Dynamic Web Pages Using Partial Evaluation (Yasuaki Takebe, Cognitive Research Labs)
A Self-Tuning, Self-Protecting, Self-Healing Session State Management Layer (Benjamin C. Ling, Stanford University)
Decoupled Storage: Free the Replicas! (Andy Huang, Stanford University)
A Case for Hierarchical Distributed Hash Tables (Krishna P. Gummadi and Steven D. Gribble, University of Washington)
Crash-Only Software (George Candea and Armando Fox, Stanford University)
Denial of Service via Algorithmic Complexity Attacks (Dan S. Wallach and Scott Crosby, Rice University)
The Effect of Faults on Layered Systems and Its End-to-end Implications (Rich Martin, Rutgers University)
Beyond the Tree: High Bandwidth Streaming Using an Overlay Mesh (Dejan Kostic, Adolfo Rodriguez, Jeannie Albrecht, and Amin Vahdat, Duke University)
GridFTP: Threat or Menace (Arun Venkataramani and Michael Dahlin, University of Texas)

A Method to Speed Up Dynamic Web Pages Using Partial Evaluation
Yasuaki Takebe, Cognitive Research Labs

On many web sites today, web pages are generated dynamically by programs that are installed on web servers and issue queries to databases. In many cases, the contents of those pages are a mix of nearly static data and truly dynamic data. We are developing a system, PHP-Mix, that speeds up the generation of such pages using a program transformation technique called partial evaluation.

PHP-Mix consists of a partial evaluator for a subset of PHP, a widely used programming language for web site development, and a deployment system that installs residual programs on web servers. The partial evaluator analyzes a program that generates a web page and regards data that is not updated frequently as static. It executes the queries over that static data and generates a residual program that embeds the corresponding parts of the page. PHP-Mix re-runs partial evaluation and redeployment whenever that data is updated, so at request time the web servers execute only the residual programs. This method reduces the cost of generating many kinds of web pages, including personalized pages. Our experiments show that our method yields three times the pre-transformation throughput.
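
As an illustration of the general idea (this is not PHP-Mix itself; Python stands in for PHP, and the names FakeDB, render_page, and specialize are invented for the example), partial evaluation runs the nearly static queries once, when the underlying data changes, and leaves behind a residual program that only fills in the per-request parts:

    # Illustrative sketch only: Python standing in for PHP, with invented names.
    class FakeDB:
        """Stand-in for the site database used by the page generator."""
        def query(self, sql):
            return ["apples", "pears", "plums"]

    def render_page(db, user):
        """Original generator: mixes a rarely updated catalog with per-request data."""
        items = db.query("SELECT name FROM catalog")               # nearly static
        listing = "<ul>" + "".join("<li>%s</li>" % i for i in items) + "</ul>"
        return "<p>Hello, %s</p>" % user + listing                 # user is dynamic

    def specialize(db):
        """Partial evaluation: run the static queries once, when the catalog
        changes, and emit a residual program that handles only the dynamic part."""
        items = db.query("SELECT name FROM catalog")
        static_html = "<ul>" + "".join("<li>%s</li>" % i for i in items) + "</ul>"
        def residual(user):                                        # deployed to web servers
            return "<p>Hello, %s</p>" % user + static_html
        return residual

    # At request time only the residual program runs; no database query is needed.
    page = specialize(FakeDB())
    print(page("Ada"))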


A Self-Tuning, Self-Protecting, Self-Healing Session State Management Layer
Benjamin C. Ling, Stanford University

We present a self-tuning, self-protecting, and self-healing session state management layer, SSM, that provides a storage and retrieval mechanism for semi-persistent, serial-access user session state. SSM is fast, scalable, fault-tolerant, and recovers instantly from individual node failures. By self-tuning, we mean that SSM discovers the maximum load capacity of the system without requiring a system administrator to specify the capacity of each component or to run experiments to discover the capacity of the aggregate system. By self-protecting, we mean that SSM knows when to say "no" to an influx of incoming requests in order to protect itself from collapsing under overload. By self-healing, we mean that SSM continues functioning correctly in the presence of malfunctioning modules, whether they exhibit degraded performance, incorrect results, livelock, or crashes. SSM should be able to (1) detect when any of these conditions occurs and (2) take appropriate action, whether by restarting the faulty module, shutting it down permanently, or simply continuing to operate alongside it. SSM recovers instantly from individual node failures: data is replicated, and no component in SSM stores hard state or requires hard state to operate, so recovery is simple and usually requires at most a process restart.
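
As a rough illustration of the "saying no" half of self-protection (this is not SSM's actual policy; the class, window size, and latency budget below are invented), an admission controller can reject new requests once recently observed latency suggests the node is near its capacity:

    # Hypothetical sketch of overload protection, not SSM's actual algorithm.
    from collections import deque

    class AdmissionController:
        def __init__(self, latency_budget_s=0.05, window=100):
            self.latency_budget_s = latency_budget_s   # invented threshold
            self.recent = deque(maxlen=window)         # sliding window of observed latencies

        def record(self, latency_s):
            """Called after each completed request with its observed latency."""
            self.recent.append(latency_s)

        def admit(self):
            """Refuse new work when the recent average latency exceeds the budget."""
            if not self.recent:
                return True
            return sum(self.recent) / len(self.recent) <= self.latency_budget_s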


Decoupled Storage: Free the Replicas!
Andy Huang, Stanford University

In Internet services, stateless HTTP front ends and application servers can easily be added to increase the performance and availability of their respective layers. However, when persistent state is involved, designing a system that is scalable, highly available, and easy to configure is a challenge. For example, scaling the performance of a database or file system often requires repartitioning data, which requires administrator intervention and significant downtime. Likewise, data availability can be improved using replication, but replication necessitates mechanisms that ensure consistency across replicas, and those mechanisms in turn limit availability. Although databases are well suited for reliably storing and accessing state such as billing information, their transaction and querying mechanisms are overkill for state like user preferences and workflow data. For this latter type of state, a non-transactional state store with a hash table API is sufficient.

We hypothesize that a state store designed specifically for non-transactional, hash table data can be more scalable, more available, and easier to manage than a transactional state store. Furthermore, moving non-transactional state off the database sheds load, allowing the database to devote its resources to the state it is designed to serve. In the Decoupled Storage project, our goal is to design a distributed persistent hash table whose nodes are decoupled so that they behave more like stateless nodes, operating independently of one another. Removing the coupling among storage nodes yields a system with scalable performance, high availability (even during recovery and administrative tasks), and the ability to add and remove nodes in a "plug-and-play" manner.
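
The flavor of interface in question can be sketched roughly as follows; this is only an illustration of a non-transactional, hash-table-style store over independent replicas, not the project's actual design, and all names are invented:

    # Rough illustration only; not Decoupled Storage's actual protocol.
    class SessionStateStore:
        """Hash table API: put/get by key, no cross-key transactions or queries."""

        def __init__(self, nodes, write_copies=2):
            self.nodes = nodes                  # plain dicts standing in for storage nodes
            self.write_copies = write_copies    # replicas written per key

        def put(self, key, value):
            """Write the value to the first few nodes that accept it; nodes never coordinate."""
            written = 0
            for node in self.nodes:
                node[key] = value
                written += 1
                if written == self.write_copies:
                    break
            return written > 0

        def get(self, key):
            """Return the value from any replica that has the key, else None."""
            for node in self.nodes:
                if key in node:
                    return node[key]
            return None

    store = SessionStateStore([{}, {}, {}])
    store.put("prefs:alice", {"theme": "dark"})
    print(store.get("prefs:alice"))

In a sketch like this, any replica can serve a read and no node holds state the others depend on, which is the kind of decoupling that makes adding or removing a node "plug-and-play."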


A Case for Hierarchical Distributed Hash Tables
Krishna P. Gummadi and Steven D. Gribble, University of Washington

Distributed hash tables (DHTs) are currently being proposed as a substrate for many large-scale distributed applications, such as overlay multicast, file systems, and DNS. But before the advent of DHT algorithms such as Chord, Pastry, and Tapestry, hierarchy was the default approach to building such scalable distributed applications. An important property of hierarchical systems that current DHT-based systems lack is the ability to "isolate faults" within a domain so that they do not affect peers outside that domain. In this work, we propose the concept of "hierarchical DHTs." Hierarchical DHTs realize the benefits of fault isolation by incorporating two properties into their routing algorithms: (i) locality of intra-domain paths, i.e., routing paths between any two nodes in the same hierarchical domain stay within that domain, and (ii) convergence of inter-domain paths, i.e., routing paths from any two nodes in the same domain A to a different domain B converge before leaving domain A. Taking Chord as an example DHT, we suggest simple modifications to Chord's finger selection algorithm that achieve these routing properties without altering either the amount of state stored at each node or the efficiency of routing.
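
One way to picture the kind of finger-selection bias involved (a hypothetical illustration only, not the authors' algorithm; the ring size, domain labels, and helper below are invented) is to prefer a same-domain node whenever one falls inside a finger's span:

    # Hypothetical illustration, not the authors' algorithm: bias Chord-style
    # finger selection toward nodes in the caller's own domain so that
    # intra-domain lookups stay inside the domain. All names are invented.
    RING_BITS = 16
    RING = 1 << RING_BITS

    def choose_finger(my_id, my_domain, k, nodes):
        """Pick finger k for node my_id; `nodes` maps node id -> domain label."""
        start = (my_id + (1 << k)) % RING      # finger k covers [start, start + 2^k)
        span = 1 << k
        # Candidate nodes at or after `start` on the ring, nearest first.
        candidates = sorted(nodes, key=lambda n: (n - start) % RING)
        # Plain Chord would take candidates[0], the successor of `start`.
        # The bias: if a same-domain node lies within this finger's span, use it.
        for n in candidates:
            if (n - start) % RING >= span:
                break
            if nodes[n] == my_domain:
                return n
        return candidates[0]

    # Example: node 100 in domain "A" choosing finger 10 on a small ring.
    ring_nodes = {100: "A", 1200: "B", 1800: "A", 40000: "B", 52000: "A"}
    print(choose_finger(100, "A", 10, ring_nodes))   # prefers 1800 ("A") over 1200 ("B")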


Crash-Only Software
George Candea and Armando Fox, Stanford University

Crash-only programs crash safely and recover quickly. There is only one way to stop such software—by crashing it; and only one way to bring it up—by initiating recovery. Crash-only systems are built from crash-only components, and the use of transparent component-level retries hides intra-system component crashes from end users. We advocate a crash-only design for Internet systems, showing that it can lead to more reliable code, easier failure prevention, and faster, more effective recovery. We present ideas on how to build such crash-only Internet services, taking successful techniques to their logical extreme.
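
A minimal sketch of the transparent component-level retry idea (with invented names; this is not code from the authors' systems) looks like the following: if a crash-only component fails, the caller restarts it, since that is its only recovery path, and retries, so end users never observe the crash.

    # Minimal sketch with invented names; not code from the authors' systems.
    import time

    def call_with_crash_masking(component_call, restart_component, attempts=3, backoff_s=0.1):
        """Retry a call against a crash-only component, restarting it on failure."""
        for attempt in range(attempts):
            try:
                return component_call()
            except ConnectionError:                  # stand-in for "the component crashed"
                restart_component()                  # crash-only: recovery is just a fresh start
                time.sleep(backoff_s * (attempt + 1))
        raise RuntimeError("component did not recover after %d restarts" % attempts)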


Denial of Service via Algorithmic Complexity Attacks
Dan S. Wallach and Scott Crosby, Rice University

We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures. Frequently used data structures have "average-case" expected running time that is far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using less bandwidth than a typical modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU.
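
The degradation is easy to reproduce in miniature (an illustration only, not the actual attacks on Perl, Squid, or Bro; the additive hash and the key construction below are invented for the example): keys chosen to share a hash value turn each insertion into a scan of a growing collision chain.

    # Illustration only; not the actual attacks on Perl, Squid, or Bro.
    import time
    from itertools import combinations

    class Key:
        """Key hashed with a simple additive hash, standing in for a weak hash function."""
        def __init__(self, s):
            self.s = s
        def __hash__(self):
            return sum(ord(c) for c in self.s)
        def __eq__(self, other):
            return self.s == other.s

    def insert_all(keys):
        start, table = time.perf_counter(), {}
        for k in keys:
            table[k] = True
        return time.perf_counter() - start

    N = 3000
    # Benign input: ordinary strings, spread across buckets by Python's own hash.
    benign = ["user-%d" % i for i in range(N)]
    # Attacker-chosen input: every length-16 string with exactly eight 'b's has the
    # same character sum, so all N keys collide and each insert scans a growing
    # chain: roughly N*N/2 comparisons in total instead of N.
    attack = []
    for positions in combinations(range(16), 8):
        attack.append(Key("".join("b" if i in positions else "a" for i in range(16))))
        if len(attack) == N:
            break

    print("benign: %.4fs   attack: %.4fs" % (insert_all(benign), insert_all(attack)))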


The Effect of Faults on Layered Systems and Its End-to-end Implications
Rich Martin, Rutgers University

In this work we are investigating the availability impact of faults on layered systems. We argue that to achieve high availability, all system layers must perform intermediate checks and expose operation status in a timely manner. Using network and disk fault examples, we show how current per-node and system-wide layering interfere with delivering high availability. Our reasoning on the impact of layering suggests that designs blindly following the end-to-end argument for correctness can lead to less available systems. To achieve high availability, intermediate checks are a necessity.


Beyond the Tree: High Bandwidth Streaming Using an Overlay Mesh
Dejan Kostic, Adolfo Rodriguez, Jeannie Albrecht, and Amin Vahdat, Duke University

In this talk we show how our ideas and papers of the last several years have generated new ideas; papers to follow.


GridFTP: Threat or Menace
Arun Venkataramani and Michael Dahlin, University of Texas

TCP congestion control has come to be considered fundamental to the stability of the Internet. Congestion control limits how fast a single flow can inject bytes into the network as a function of its loss rate; this policy, also known as TCP-friendliness, is meant to ensure fair utilization of network capacity. Unfortunately, no such fairness guideline exists for the number of connections a single user can open. A user who opens k connections to download a single document gets k times the throughput of another user with just one connection through the same bottleneck. Opening multiple connections to obtain higher throughput appears to be an emerging trend; examples include GridFTP, parallel FTP clients, and Mozilla. Opening connections without restraint can damage the stability of the Internet, rendering congestion control meaningless. We propose a connection control model that advocates conscientiously limiting the number of connections a single user opens. Our solution, based on TCP Nice, offers a mechanism that can achieve improved and fair utilization in underutilized networks without compromising the stability of the Internet.
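
The unfairness is simple arithmetic (the numbers below are invented for illustration): under per-flow sharing, a bottleneck is divided evenly among flows, so a user's share scales with the number of connections that user opens.

    # Back-of-envelope illustration with invented numbers.
    def per_user_throughput(capacity_mbps, flows_per_user):
        """Split a bottleneck evenly among flows (the TCP-friendly ideal) and
        sum each user's flow shares."""
        total_flows = sum(flows_per_user.values())
        share = capacity_mbps / total_flows
        return {user: round(k * share, 2) for user, k in flows_per_user.items()}

    # One user opens 8 parallel connections (a GridFTP-style transfer);
    # the other uses a single TCP flow through the same 10 Mb/s bottleneck.
    print(per_user_throughput(10.0, {"parallel": 8, "single": 1}))
    # {'parallel': 8.89, 'single': 1.11}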
