
Results Portability


The main goal of this work is to provide developers with tools to diagnose and correct performance problems in their own applications. We therefore hope that optimizations made on one platform also benefit other platforms. To test this premise, we measure the performance of our optimized server on Linux, which has no DeBox support.

Unfortunately, we were unable to get Linux to run properly on our existing hardware, despite several attempts to resolve the issue on the Linux kernel mailing list. For these numbers, we therefore use a server machine with a 3.0 GHz Pentium 4 processor, two Intel PRO/1000 MT Gigabit adapters, 1GB of memory, and a similar disk. The experiments were performed on the 2.4.21 kernel with epoll() support.
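For readers unfamiliar with the epoll() interface, the sketch below shows the kind of event loop that the 2.4.21 epoll patch supports and that event-driven servers such as Flash rely on. It is purely illustrative and not taken from any of the servers measured here; listen_fd and the accept/handle steps are placeholders.

  /* Illustrative sketch (not from the paper): a minimal epoll()-based
     event loop.  listen_fd and the accept/handle steps are placeholders. */
  #include <sys/epoll.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define MAX_EVENTS 64

  void event_loop(int listen_fd)
  {
      struct epoll_event ev, events[MAX_EVENTS];
      int epfd, n, i;

      epfd = epoll_create(MAX_EVENTS);        /* size argument is only a hint */
      if (epfd < 0) { perror("epoll_create"); exit(1); }

      ev.events = EPOLLIN;
      ev.data.fd = listen_fd;
      epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

      for (;;) {
          n = epoll_wait(epfd, events, MAX_EVENTS, -1);
          for (i = 0; i < n; i++) {
              if (events[i].data.fd == listen_fd) {
                  /* accept() the new connection and register it
                     with EPOLL_CTL_ADD */
              } else {
                  /* read the request or write the response on
                     this client socket */
              }
          }
      }
  }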

We compare the throughput and latency of four servers: Apache 1.3.27, Haboob, Flash, and the new Flash. We increase the maximum number of clients in Apache to 1024 and disable logging. Both the original and the new Flash servers use the maximum available cache size for LRU, and we adjust the cache size in Haboob for its best performance. The throughput results, shown in Table 10, are quite surprising. The Haboob server, despite its aggressive optimizations and event-driven stages, performs only slightly better than Apache on the disk-bound workload and worse than Apache on an in-memory workload. We believe that its dependence on excess parallelism (via its threaded design) may account for some of this. The new Flash server gains about 17-24% over the original on the two smaller workloads, and all four servers have similar throughput on the largest workload because it is disk-bound.


Table 10: Throughput and response time on Linux with 1GB memory

  Throughput (Mb/s)
  Data set    Haboob   Apache    Flash   New Flash
  500MB        324.9    434.3   1098.1      1284.7
  1.5GB        303.4    372.4    661.7       822.5
  3.3GB        184.1    177.4    173.8       199.1

  Response time (ms)
  Profile     Haboob   Apache    Flash   New Flash
  5%            78.2     0.22     0.21        0.15
  median       414.3     0.61     1.56        0.42
  95%         1918.9    661.8    412.5        3.68
  mean         656.2    418.0    512.5       141.9



Figure 12: Response time on Linux with 3.3GB dataset

Despite similar throughputs at the 3.3GB data set size, the latencies of the servers, shown in Figure 12 and Table 10, are markedly different. The Haboob latency profile is very close to its published results, but it is beaten by all of the other servers. We surmise that even the minimal tuning we applied to the configurations of Apache and the original Flash yields much better results than the original Haboob comparison [41]. The benefit of our optimizations still holds on this platform, with roughly a factor of four reduction in both median and mean latency over the original Flash. One interesting observation is that the 95th-percentile latency of the new Flash is a factor of 39 lower than its mean, which suggests that a small fraction of long-latency requests contributes most of the mean latency. Though our Linux results are not directly comparable to our FreeBSD results because of the hardware differences, we notice that this phenomenon is less pronounced on FreeBSD. One likely cause is that sendfile() on Linux blocks on disk I/O; another may be Linux's filesystem performance, since the throughput here is lower than what we observed on FreeBSD.
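To make the blocking behavior concrete, the fragment below sketches how an event-driven server typically invokes sendfile() on Linux 2.4. It is an illustrative sketch rather than code from Flash; send_file_chunk(), conn_fd, and file_fd are hypothetical names.

  /* Illustrative sketch (not code from Flash): sending part of a static
     file from an event-driven server on Linux 2.4. */
  #include <sys/sendfile.h>
  #include <sys/types.h>
  #include <errno.h>

  ssize_t send_file_chunk(int conn_fd, int file_fd, off_t *offset, size_t remaining)
  {
      /* With conn_fd set non-blocking, sendfile() returns EAGAIN when the
         socket buffer fills, but it can still block the caller while the
         kernel pages the file in from disk. */
      ssize_t sent = sendfile(conn_fd, file_fd, offset, remaining);

      if (sent < 0 && errno == EAGAIN)
          return 0;     /* socket not ready: re-arm for writability, retry later */
      return sent;      /* > 0: bytes sent, offset advanced; < 0: real error */
  }

Because the whole event loop runs in a single thread, one such disk stall delays every other pending request, which is consistent with the large gap between the mean and 95th-percentile latencies in Table 10.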


