
Wide Range Results

For each of our test programs, we chose a wide range of memory sizes to simulate. The plots in this section show the entire simulated range for each program. Subsequent sections, however, concentrate on the interesting region of memory sizes. This region usually begins around the size at which a program spends 90% of its time paging and 10% of its time executing on the CPU, and ends at a size where the program causes very little paging.
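
As an illustration only, the following minimal sketch shows one way such a region could be extracted from simulation output; the data layout (a sorted list of (memory size, CPU time, paging time) tuples) and the threshold values are assumptions for presentation, not part of our simulator.

    def interesting_region(samples, heavy_paging=0.90, light_paging=0.05):
        """Return (low, high) memory sizes bounding the interesting region.

        samples: (memory_size_kb, cpu_time, paging_time) tuples, sorted by
                 increasing memory size; both times are in seconds.
        """
        low = high = None
        for size, cpu_time, paging_time in samples:
            paging_fraction = paging_time / (cpu_time + paging_time)
            if low is None and paging_fraction <= heavy_paging:
                low = size      # first size no longer dominated by paging
            if high is None and paging_fraction <= light_paging:
                high = size     # first size where paging is negligible
                break
        return low, high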


  
Figure 3: Compressed caching yields consistent benefits across a wide range of memory sizes.
[Figure 3 panels: espresso-raw-by-processor.eps, ..., rscheme-raw-by-processor.eps, one log-scale paging-time plot per test program.]

Figure 3 shows log-scale plots of the paging time of each of our programs as a function of the memory size. Each line in a plot represents the results of simulating a compressed cache with a particular compression algorithm on our 168 MHz SPARC machine. The paging time of a regular LRU memory system (i.e., with no compression) is shown for comparison. As can be seen, compressed caching yields benefits over a very wide range of memory sizes, indicating that our adaptivity mechanism reliably detects locality patterns of different sizes. Note that all compression algorithms yield benefits, even though their performance differs noticeably.
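
For reference, the sketch below shows how such log-scale comparisons can be produced from simulator output; the data format (a mapping from algorithm name, including the LRU baseline, to (memory size, paging time) pairs) and the use of matplotlib are illustrative assumptions, not part of our experimental setup.

    import matplotlib.pyplot as plt

    def plot_paging_times(results, program_name):
        """Plot paging time against memory size, one curve per algorithm,
        with the uncompressed LRU baseline included in `results`."""
        fig, ax = plt.subplots()
        for algorithm, points in sorted(results.items()):
            sizes, times = zip(*sorted(points))
            ax.plot(sizes, times, marker="o", label=algorithm)
        ax.set_yscale("log")        # paging times span orders of magnitude
        ax.set_xlabel("memory size (KB)")
        ax.set_ylabel("paging time (seconds, log scale)")
        ax.set_title(program_name)
        ax.legend()
        return fig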

Figure 3 is intended only to convey the general outcome of our experiments. The same results are analyzed in detail in subsequent sections, where we isolate interesting memory regions, algorithms, architectures, and trends.

