
Experimental Setup & Workload


In this section we describe our experimental setup and the relevant software components of the system. All of our experiments, except for the portability measurements, are performed on a uniprocessor server running FreeBSD 4.6 with a 933 MHz Pentium III, 1 GB of memory, one 5400 RPM Maxtor IDE disk, and a single Netgear GA621 gigabit Ethernet adapter. The clients are ten 300 MHz Pentium II machines connected to a switch over Fast Ethernet. All machines are configured with the default 1500-byte MTU, as required by SpecWeb99.

Our main application is the event-driven Flash Web Server, although we also perform some tests on the widely-used multi-process Apache (6) server. The Flash Web Server consists of a main process and a number of helper processes. The main process multiplexes all client connections; it is intended never to block and to serve all requests from memory. The helpers load disk data and metadata into memory so that the main process can avoid blocking on disk. The number of main processes is generally equal to the number of physical processors, while the number of helper processes is adjusted dynamically based on load. In previous tests, the Flash Web Server has been shown to compare favorably with high-performance commercial Web servers (30). We run Flash with logging disabled to simplify comparison with Apache, where enabling logging degrades performance noticeably.
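
The following fragment is a minimal sketch, in C using select(), of the event-driven structure just described; it is illustrative only and is not taken from Flash. The port number, the inlined accept handling, and the omission of the helper pipes and error checks are simplifying assumptions made to keep the sketch short.

    /* Sketch of an event-driven main loop: one process multiplexes
     * nonblocking client sockets and blocks only in select(). */
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);      /* hypothetical port */
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 128);
        fcntl(listen_fd, F_SETFL, O_NONBLOCK);   /* main process must never block */

        fd_set watched;
        FD_ZERO(&watched);
        FD_SET(listen_fd, &watched);
        int max_fd = listen_fd;

        for (;;) {
            fd_set ready = watched;
            /* Block only here; every descriptor is nonblocking, so a slow
             * client cannot stall the single main process.  In the full
             * server, pipes from the helper processes would also sit in
             * this set, so a miss in the in-memory cache is handed off to
             * a helper rather than read synchronously from disk. */
            if (select(max_fd + 1, &ready, NULL, NULL, NULL) < 0)
                continue;
            if (FD_ISSET(listen_fd, &ready)) {
                int c = accept(listen_fd, NULL, NULL);
                if (c >= 0) {
                    fcntl(c, F_SETFL, O_NONBLOCK);
                    FD_SET(c, &watched);
                    if (c > max_fd) max_fd = c;
                }
            }
            /* ... parse requests and write memory-resident responses on the
             *     client fds that are ready; dispatch misses to helpers ... */
        }
    }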

We focus on the SpecWeb99 benchmark, an industry-standard test of the overall scalability of Web servers under realistic conditions. It was designed by SPEC, the developers of the widely-used SpecCPU workloads (38), and is based on traffic observed at production Web sites. Although not common in academia, it is the de facto standard in industry (27), with over 190 published results, and it differs from most other Web server benchmarks in its complexity and requirements. It measures scalability by reporting the number of simultaneous connections the server can handle while meeting a specified quality of service. The data set and working set sizes grow with the number of simultaneous connections and quickly exceed the physical memory of commodity systems. Of the requests, 70% are for static content and 30% for dynamic content, including a mix of HTTP GET and POST requests; 0.15% of all requests require a CGI process that must be spawned separately for each request.
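
To make the request mix concrete, the small fragment below classifies a request according to the proportions above. It is only an illustration, not the SPEC-supplied load generator; it assumes the 0.15% CGI requests fall within the 30% dynamic share, and the type and function names are hypothetical.

    #include <stdlib.h>

    /* Classify a request using the mix described in the text: 70% static,
     * 30% dynamic, with 0.15% of all requests served by a CGI process
     * forked per request (assumed to be part of the dynamic share). */
    typedef enum { STATIC_GET, DYNAMIC_REQ, CGI_REQ } req_type;

    req_type pick_request(void)
    {
        double r = rand() / (double)RAND_MAX;  /* uniform draw in [0, 1] */
        if (r < 0.0015) return CGI_REQ;        /* 0.15% of all requests   */
        if (r < 0.30)   return DYNAMIC_REQ;    /* rest of the 30% dynamic */
        return STATIC_GET;                     /* remaining 70% static    */
    }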


