Disk-bound static workload


To determine our latency benefit on a more disk-bound workload and to compare our results with those of other researchers, we construct a static workload similar to the one used to evaluate the Haboob server [41]. In this workload, 1020 simulated clients issue static requests to a 3.3GB data set. Persistent connections are used, with each client issuing 5 requests per connection before closing it. To avoid overload, the request rate is fixed at 2300 requests/second, roughly 90% of the slowest server's capacity.
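The pacing of this workload can be illustrated with a small sketch. The client count, requests-per-connection limit, and aggregate request rate come from the description above; the function name and round-robin scheduling policy are our own illustration, not the actual load generator used in the experiments.

```python
def schedule(num_clients=1020, reqs_per_conn=5, rate=2300.0, total=20):
    """Yield (time_s, client_id, req_in_conn) for the first `total`
    requests, pacing the aggregate stream at `rate` requests/second.
    After reqs_per_conn requests, a client's persistent connection is
    considered closed and a new one opened (req_in_conn wraps to 0)."""
    interval = 1.0 / rate            # ~0.43 ms between requests overall
    sent = [0] * num_clients         # requests issued per client so far
    for i in range(total):
        client = i % num_clients     # round-robin across simulated clients
        req_in_conn = sent[client] % reqs_per_conn
        yield (i * interval, client, req_in_conn)
        sent[client] += 1

# Small example: 3 clients, 6 scheduled requests
events = list(schedule(num_clients=3, reqs_per_conn=5, rate=2300.0, total=6))
```

Fixing the aggregate rate below capacity, rather than running closed-loop at full demand, is what makes the measured latencies reflect queueing behavior rather than saturation.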

We compare several configurations to determine the latency benefits and the impact of parallelism in the server. We run the new and original versions of Flash with a single instance and four instances, to compare uniprocessor configurations with what would be expected on a 4-way SMP. We also run Apache with 150 and 300 server processes.

Figure: Response latencies for the 3.3GB static workload


Table: Summaries of the static workload latencies

Server          5% (ms)   median (ms)   95% (ms)   mean (ms)
New Flash          0.37         0.79       7.45        7.56
New Flash, 4p      0.38         0.82       7.51        7.72
Old Flash          3.36        37.59     326.40       92.37
Old Flash, 4p      7.05       142.65    1924.42      420.85
Apache 150p        0.70         6.64    1599.50      360.62
Apache 300p        0.78       124.98    2201.63      545.93




Table: Server static workload capacities (Mb/s)

Data set   Apache   Old Flash   New Flash
500MB       240.3       485.2       660.9
1.5GB       230.7       410.6       580.3
3.3GB       210.6       264.5       326.4



The results, given in Figure 11 and Table 7, show that our new server's response time under this workload improves by more than a factor of twelve in mean latency and a factor of 47 in median latency. With four instances, the differences grow to a factor of 54 in mean response time and a factor of 174 in median. We also measure the maximum capacities of the servers when run in infinite-demand mode; these results are shown in Table 8. While the throughput gain from our optimizations is significant, it is much smaller than in the SpecWeb99 test, indicating that our latency benefits do not stem purely from extra capacity.
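The improvement factors quoted above follow directly from the latency summaries in Table 7; a short calculation with the values copied from the table:

```python
# Latency summaries from Table 7, in milliseconds.
median = {"new": 0.79, "new_4p": 0.82, "old": 37.59, "old_4p": 142.65}
mean   = {"new": 7.56, "new_4p": 7.72, "old": 92.37, "old_4p": 420.85}

# Single-instance improvement: old Flash vs. new Flash.
mean_factor   = mean["old"] / mean["new"]            # just over 12x
median_factor = median["old"] / median["new"]        # about 47x

# Four-instance improvement, where old Flash degrades sharply.
mean_4p   = mean["old_4p"] / mean["new_4p"]          # about 54x
median_4p = median["old_4p"] / median["new_4p"]      # about 174x
```

Note that the new server's latencies barely change from one instance to four, while the old server's grow several-fold, which is why the four-instance ratios are so much larger.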



Yaoping Ruan
2004-05-04