
Sensitivity Analysis

The costs and benefits of compressed caching depend on the relative cost of compressing (and decompressing) a page versus fetching a page from disk. If compression is not sufficiently fast relative to disk paging, compressed virtual memory will not be worthwhile.
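As a rough illustration (a simplified cost model, not the analysis used in our simulations): suppose compressing a page costs $t_c$, decompressing it costs $t_d$, a disk fault costs $t_{disk}$, and a fraction $h$ of the pages placed in the compressed cache are touched again before they would otherwise have gone to disk. Ignoring the space the compressed cache takes from the uncompressed region, compressed caching pays off roughly when
\[
  h \, ( t_{disk} - t_d ) \;>\; t_c ,
\]
i.e., when the disk faults actually avoided save more time than is spent compressing pages. The real tradeoff is more subtle, because holding compressed pages shrinks the uncompressed cache and can itself induce faults; balancing this is exactly what the adaptive mechanism must do.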

On the other hand, if CPU speeds continue to increase far faster than disk speeds, as they have for many years, then compressed virtual memory will become increasingly effective and increasingly attractive. Over the last decade, CPU speeds have increased by about 60% a year, while disk latency and bandwidth have improved by only about 20% a year. This works out to an increase in CPU speed relative to disk speed of about one third per year, i.e., a doubling roughly every two and a half years and a quadrupling roughly every five years.
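The arithmetic behind these figures: a 60% annual improvement against a 20% annual improvement gives a relative gain of
\[
  \frac{1.60}{1.20} \approx 1.33 \;\text{per year},
\]
so the CPU/disk speed ratio doubles in about $\ln 2 / \ln 1.33 \approx 2.4$ years and grows by a factor of $1.33^{5} \approx 4.2$ over five years.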


  
Figure 6: A sensitivity analysis studying disks of various speeds. This conservatively covers the cases of slower CPUs, perfect prefetching, and larger page sizes.
[Two plots: espresso and rscheme, simulated paging cost at several disk speeds.]

Figure 6 shows plots of simulated performance of our adaptive caching system, using page compression timings measured on a 300 MHz UltraSPARC. Each line represents the paging costs for simulations using a given disk fault cost. Costs are normalized to the performance of a conventional LRU memory with the same disk page access time; that is, each curve represents the speedup or slowdown that comes from using compressed caching.
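To state the normalization explicitly: if $T_{cc}(m)$ is the simulated total paging time with compressed caching at memory size $m$, and $T_{lru}(m)$ is the time for a conventional LRU memory of the same size and the same disk fault cost, each curve plots (in effect) the ratio $T_{cc}(m)/T_{lru}(m)$; values below 1.0 indicate a speedup from compressed caching, and values above 1.0 a slowdown.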

The middle line in each plot can be regarded as the performance of a machine with the speed of a 300 MHz UltraSPARC and an average page fetch cost (for 4KB pages) of only 2.5ms, about one third the average seek time of a fast disk. Note that, in normalized performance terms, assuming a disk that is twice as fast is exactly equivalent to assuming a CPU that is twice as slow. At the same time, studying the case of a fast disk conservatively covers the case of perfect prefetching of multiple pages (a disk that is twice as fast is equivalent to always prefetching the next two needed pages with a single seek), which in turn conservatively covers the case of using larger page sizes. Hence, our sensitivity analysis, by taking fast disks into account, also subsumes many other scenarios.
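One way to see these equivalences (a sketch, assuming that compression and decompression times scale inversely with CPU speed and that all disk costs scale together): because results are normalized, the curves depend only on the ratios of the compression-related costs to the disk fault cost,
\[
  \text{normalized paging cost} \;\approx\; f\!\left(\frac{t_c}{t_{disk}},\; \frac{t_d}{t_{disk}}\right).
\]
Halving $t_{disk}$ doubles these ratios exactly as halving the CPU speed (doubling $t_c$ and $t_d$) would, so a disk twice as fast reads like a CPU twice as slow. Likewise, if every seek fetched the next two needed pages, the effective disk cost per needed page would be halved, which is why the fast-disk curves also bound the benefit of perfect prefetching, and of larger pages that amortize a seek over more data.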

Looking at the middle line of each plot, we can see that with a disk page access cost of 2.5ms, most programs show a reduction in paging time of 30 to 70 percent, averaged across the interesting range of memory sizes. Thus, compressed virtual memory is a very clear win even at a disk access cost of 2.5ms per 4KB page. The line above the middle one can be taken to represent a system with the same CPU speed but disk costs a factor of two lower, at 1.25ms per 4KB page. Even though performance for this system is significantly worse, a substantial speedup is still obtained. The top line represents a system where disk page accesses cost only 0.625ms per 4KB page. For some programs, this degrades performance to the point that compressed caching is not worthwhile overall.

Going in the other direction, following the technology trends, we can look at the next lower line to see the performance of a system with a processor twice as fast relative to its disk. For most of our programs, each doubling of CPU speed offers a significant additional speedup, typically decreasing the remaining paging costs by ten to forty percent.

