
Patching and Building SSH

For our last benchmark we used the SSH package, versions 1.2.18 through 1.2.31, to represent the compile and edit cycle. The system is thus able to train on the initial version but must perform on subsequently modified versions of the source code. This is our largest workload, consisting of over 44,000 read events, although a substantial fraction of these requests are already satisfied from the I/O caches. Here again we see results similar to those of the Gnu ld and Glimpse benchmarks: total elapsed time was reduced by 11%, total I/O latency by 84%, and read latencies by 70%.

We created the SSH benchmark to represent a typical compile and edit process. It addresses the concern that our other three benchmarks were tested on a repeating sequence of the same patterns on which the models were trained. The benchmark begins by compiling version 1.2.18 of the SSH package. The code base is then patched to become version 1.2.19 and recompiled, and this process is iterated until version 1.2.31 is built. The result is a benchmark whose access patterns change in a manner typical of a common software package. Our models are trained on three compiles of version 1.2.18. We then test predictive prefetching on a workload that patches the source to the next version and compiles the new source code, repeating this patch-and-build step through the building of version 1.2.31 (a sketch of the loop appears below). Because each patch changes the source code, the patterns that result from the builds represent a more realistic sequence of changing access patterns. This benchmark thus represents a case where our model may learn from the first build but must apply its predictions to a changing workload.
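To make the structure of the benchmark concrete, the following is a minimal sketch of the patch-and-build loop as a driver script. The source directory, the patch-file naming scheme, and the build commands (configure and make) are assumptions for illustration; the exact commands used are not specified above.

    import subprocess

    SRC_DIR = "ssh-src"      # assumed location of the SSH source tree
    FIRST, LAST = 18, 31     # versions 1.2.18 through 1.2.31

    def build():
        """Configure and compile the current source tree."""
        subprocess.run(["./configure"], cwd=SRC_DIR, check=True)
        subprocess.run(["make"], cwd=SRC_DIR, check=True)

    # Training phase: three compiles of version 1.2.18.
    for _ in range(3):
        subprocess.run(["make", "clean"], cwd=SRC_DIR, check=True)
        build()

    # Test phase: patch to each successive version and rebuild.
    for minor in range(FIRST, LAST):
        patch = f"ssh-1.2.{minor}-1.2.{minor + 1}.diff"  # hypothetical name
        with open(patch) as f:
            subprocess.run(["patch", "-p1"], cwd=SRC_DIR, stdin=f, check=True)
        build()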

Tables 7 and 8 show the summary statistics for the SSH benchmark's workload, which represents a compile, edit, and recompile process. This workload has a CPU utilization of 89%, and we observed a miss ratio of 0.12.



 
Table: Workload time summary for the SSH benchmark. Elapsed times are in seconds; all other times are in microseconds. The 90% columns give 90% confidence intervals.

Test    Elapsed    90%     Compute    90%     Read    90%
Cold    302.0      1.13    263.6      0.82    2813    19.92
Hot     268.4      1.03    262.8      0.04     861     2.19
 





 
Table: Read event count summary for the SSH benchmark. Counts are the number of events that fell in each category, averaged across the last 20 runs of each test.

Test    Calls    Hits     Partial    Misses
Cold    44805    29552    13971      11282
Hot     44805    40839    13966      0
 


Figure 11 shows the results for our SSH benchmark. These results are consistent with those of our three previous benchmarks: total elapsed time is reduced by 11%, I/O latency by 84%, and read latency by 70%.
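These figures can be checked for scale against Table 7, whose hot-cache rows bound the improvement available over the cold baseline (reading the reported reductions as relative to that baseline is our interpretation, not stated explicitly above):

\begin{displaymath}
\frac{302.0 - 268.4}{302.0} \approx 0.11, \qquad
\frac{2813 - 861}{2813} \approx 0.69 .
\end{displaymath}

The reported 11% elapsed-time and 70% read-latency reductions thus sit essentially at the hot-cache bound.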


  
Figure: Reductions in elapsed times and read latencies for the SSH benchmark with the last successor, PCM, EPCM and hot cache tests. Bars marked with P and E represent PCM and EPCM tests respectively. Partition sizes (ps) and model order (mo) are labeled as ps/mo.
[Figure 11: subfigure (a) Elapsed Time Reduction; subfigure (b) Read Latency Reduction. Images: graphs/ssh.*.eps, graphs/ssh.read.eps]

