
Performance

We measured the performance of our new API using a simple event-driven HTTP proxy program. This proxy does not cache responses. It can be configured to use either select() or our new event API.
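To make the two configurations concrete, the following is a minimal sketch, not the actual proxy code, of the select()-based dispatch loop at the heart of such a proxy; the names conn_wants_read, handle_readable(), and MAX_CONNS are illustrative placeholders. In the configuration that uses the new event API, the select() call and the linear scan over the fd_set would be replaced by a single call that returns only the descriptors with pending events.

/*
 * Sketch of a select()-based dispatch loop for an event-driven proxy.
 * Per-connection state and handlers are assumed, not part of the paper.
 */
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

#define MAX_CONNS  FD_SETSIZE

extern int  conn_wants_read[MAX_CONNS];   /* per-connection interest flags */
extern void handle_readable(int fd);      /* parse request / relay data */

void dispatch_loop(void)
{
    for (;;) {
        fd_set readable;
        int fd, maxfd = -1;

        /* Rebuild the interest set on every iteration; this cost grows
         * with the total number of open connections. */
        FD_ZERO(&readable);
        for (fd = 0; fd < MAX_CONNS; fd++) {
            if (conn_wants_read[fd]) {
                FD_SET(fd, &readable);
                if (fd > maxfd)
                    maxfd = fd;
            }
        }

        if (select(maxfd + 1, &readable, NULL, NULL, NULL) <= 0)
            continue;

        /* Scan the whole set to find the few descriptors that are ready. */
        for (fd = 0; fd <= maxfd; fd++)
            if (FD_ISSET(fd, &readable))
                handle_readable(fd);
    }
}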

In all of the experiments presented here, we generate load using two kinds of clients. The ``hot'' connections come from a set of processes running the S-Client software [2], designed to generate realistic request loads, characteristic of WAN clients. As in our earlier work [4], we also use a load-adding client to generate a large number of ``cold'' connections: long-duration dummy connections that simulate the effect of large WAN delays. The load-adding client process opens as many as several thousand connections, but does not actually send any requests. In essence, we simulate a load with a given arrival rate and duration distribution by breaking it into two pieces: S-Clients for the arrival rate, and load-adding clients for the duration distribution.
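The load-adding client itself is simple; the sketch below illustrates the idea under assumed parameters (the proxy address, port, and connection count are placeholders, and no attempt is made to match the actual tool used in the experiments). It opens many connections to the proxy and then holds them open without sending any requests, so they remain in the proxy's interest set as ``cold'' connections.

/*
 * Illustrative load-adding client: open many connections and keep them
 * idle.  Address, port, and count are placeholder values.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *proxy_addr = "10.0.0.1";   /* placeholder proxy address */
    const int   proxy_port = 8080;         /* placeholder proxy port */
    const int   num_cold   = 2000;         /* number of idle connections */

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port   = htons(proxy_port);
    inet_pton(AF_INET, proxy_addr, &sin.sin_addr);

    for (int i = 0; i < num_cold; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            break;
        if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            close(fd);
            break;
        }
        /* Deliberately send nothing; the connection only occupies state. */
    }

    pause();    /* hold all connections open until the run ends */
    return 0;
}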

The proxy relays all requests to a Web server, a single-process event-driven program derived from thttpd [20], with numerous performance improvements. (This is an early version of the Flash Web server [17].) We take care to ensure that the clients, the Web server, and the network itself are never bottlenecks. Thus, the proxy server system is the bottleneck.



 