
   
Linking the Linux Kernel

Our next benchmark of file system activity adapts a test used by Chang et al. [5] that focuses on the Gnu linker. Significantly larger than the Andrew benchmark, this workload consists primarily of non-sequential accesses to temporary files. Our predictive prefetching reduced the total I/O latency of this benchmark by as much as 34% and again reduced the total runtime by 11%.

This test used the Linux kernel source, first linking all of the top-level modules (e.g., fs.o, mm.o, net.o, kernel.o, ...), which were then linked into a final kernel image. It linked a total of 180 object files through 51 commands to create a kernel image of approximately twelve megabytes. Tables 3 and 4 summarize the workload for this benchmark. The cold cache test took approximately 36 seconds, with about 24 seconds of compute time, for a CPU utilization of 65%. We observed a miss ratio of 0.12. The latency for read events is significantly higher than that of the Andrew benchmark. The Gnu linker does not access individual files sequentially; this foils Linux's sequential read-ahead within each file and explains the high average read latencies despite the low cache miss ratio. Additionally, the files read in this benchmark are object files, which are typically temporary in nature, so it is quite possible that they are not placed contiguously on disk.
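For concreteness, the shape of this linking workload is roughly a series of relocatable links, one per subsystem, followed by a final link into the kernel image. The sketch below is illustrative only; the subsystem list, object files, and ld flags are placeholders, not the 51 commands actually issued by the benchmark (which also involve linker scripts and architecture-specific boot code).

    # Illustrative sketch of the Gnu ld workload's shape -- NOT the actual
    # benchmark driver.  Object lists, linker scripts, and architecture-
    # specific options from the real kernel build are omitted.
    import glob
    import subprocess

    # Hypothetical top-level subsystems; the real benchmark links 180
    # object files through 51 ld commands.
    SUBSYSTEMS = ["kernel", "mm", "fs", "net"]

    # Stage 1: combine each subsystem's objects into one relocatable
    # module (e.g. fs/*.o -> fs.o) with `ld -r`.
    for subsys in SUBSYSTEMS:
        objects = sorted(glob.glob(f"{subsys}/*.o"))
        subprocess.run(["ld", "-r", "-o", f"{subsys}.o", *objects],
                       check=True)

    # Stage 2: link the top-level modules into the final kernel image.
    subprocess.run(["ld", "-o", "vmlinux",
                    *(f"{s}.o" for s in SUBSYSTEMS)], check=True)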



 
Table 3: Workload time summary for the Gnu ld benchmark. Elapsed and compute times are in seconds; read times are in microseconds. Columns labeled 90% give 90% confidence intervals.

    Test    Elapsed    90%    Compute    90%    Read     90%
    Cold      36.12   0.13      23.96   0.03    2866    18.84
    Hot       23.98   0.01      23.95   0.01     596     3.12
 





 
Table 4: Read event count summary for the Gnu ld benchmark. Counts are the number of events that fell in each category, averaged across the last 20 runs of each test.

    Test    calls    hits    partial    misses
    Cold     6362    4794        767       799
    Hot      6362    5694        668         0
 


Figure 9 shows the results for our Gnu ld benchmark. These results are consistent with those from the Andrew benchmark: although not as dramatic, we still saw significant reductions in total I/O latency and read latencies, and again these reductions increase as model order and partition size increase. PCM and last-successor-based prefetching do better than the normal Linux kernel, with as much as an 8% reduction in total I/O latency, while the advanced predictions of EPCM again offer a more substantial reduction of 34%. The reductions for read system calls are also not as striking as those of the Andrew benchmark; nevertheless, 33% reductions in read latencies are still a welcome improvement.
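For context, the simplest of these predictors, the last-successor model, just remembers for each file the file that followed it most recently and prefetches that file on the next access. Below is a minimal sketch of that idea; the class and names are ours for illustration, not the in-kernel implementation evaluated here.

    # Minimal sketch of last-successor prediction, the simplest model in
    # Figure 9.  Illustrative only -- not the in-kernel implementation.

    class LastSuccessorPredictor:
        def __init__(self):
            self.successor = {}   # file -> file that followed it last time
            self.previous = None  # most recently accessed file

        def access(self, path):
            """Record an access; return the file to prefetch next, if any."""
            if self.previous is not None:
                self.successor[self.previous] = path
            prediction = self.successor.get(path)
            self.previous = path
            return prediction

    # Example: after seeing a.o followed by b.o once, the next access to
    # a.o triggers a prefetch of b.o.
    predictor = LastSuccessorPredictor()
    for f in ["a.o", "b.o", "a.o"]:
        hint = predictor.access(f)
        if hint:
            print(f"prefetch {hint} after {f}")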


  
Figure 9: Reductions in elapsed times and read latencies for the Gnu ld benchmark with the last successor, PCM, EPCM and hot cache tests. Bars marked with P and E represent PCM and EPCM tests respectively. Partition sizes (ps) and model order (mo) are labeled as ps/mo.
[Two bar-chart subfigures: (a) Elapsed Time Reduction, (b) Read Latency Reduction.]

