Introduction

For decades, CPU speeds have continued to double every 18 months to two years, but disk latencies have improved only very slowly. Disk latencies are five to six orders of magnitude greater than main memory access latencies, while other adjacent levels in the memory hierarchy typically differ by less than one order of magnitude. Programs that run entirely in RAM benefit from improvements in CPU speeds, but the runtime of programs that page is likely to be dominated by disk seeks, and may run many times more slowly than CPU-bound programs.
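To make this gap concrete, a back-of-envelope calculation helps (the latencies below are illustrative assumptions, not measurements from our experiments): with RAM accesses costing on the order of 100 nanoseconds and disk seeks on the order of 10 milliseconds, even one page fault per 10,000 memory accesses makes a program roughly an order of magnitude slower than if it ran entirely in RAM.

    # Back-of-envelope sketch; the latencies are illustrative assumptions.
    ram_access_ns = 100          # assumed main-memory access latency
    disk_seek_ns = 10_000_000    # assumed disk seek latency (10 ms)

    # Even a tiny fault rate lets disk time dominate total runtime.
    for fault_rate in (1e-6, 1e-5, 1e-4):
        avg_ns = (1 - fault_rate) * ram_access_ns + fault_rate * disk_seek_ns
        print(f"fault rate {fault_rate:.0e}: average access {avg_ns:,.0f} ns "
              f"({avg_ns / ram_access_ns:.1f}x pure RAM)")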

In [Wil90,Wil91b] we proposed compressed caching for virtual memory--storing pages in compressed form in a main memory compression cache to reduce disk paging. Appel also promoted this idea [AL91], and it was evaluated empirically by Douglis [Dou93] and by Russinovich and Cogswell [RC96]. Unfortunately, Douglis's experiments with Sprite showed speedups for some programs but no speedup, or even a slowdown, for others. Russinovich and Cogswell's data for a mixed PC workload showed only a slight potential benefit. There is a widespread belief that compressed virtual memory is attractive only for machines without a fast local disk, such as diskless handheld computers or network computers, and laptops with slow disks. As we and Douglis pointed out, however, compressed virtual memory becomes more attractive as CPUs continue to get faster. This crucial point seems to have been generally overlooked, and no operating system designers have adopted compressed caching.

In this paper, we make the case for compressed caching in modern systems. We aim to show that the discouraging results of earlier studies were primarily due to the use of machines that were quite slow by current standards. For current, fast, disk-based machines, compressed virtual memory offers substantial performance improvements, and its advantages only increase as processors get faster. We also study future trends in memory and disk bandwidths. As we show, compressed caching will become increasingly attractive regardless of other OS improvements (such as sophisticated prefetching policies, which reduce the average cost of disk seeks, and log-structured file systems, which reduce the cost of writes to disk).

We will also show that better compression algorithms provide a significant further improvement in the performance of compressed caching. Better Ziv-Lempel variants are now available, and we introduce here a new family of compression algorithms designed for in-memory data representations rather than file data.

The concrete points in our analysis come from simulations of programs covering a variety of memory requirements and locality characteristics. At this stage of our experiments, we chose simulation as our method of evaluation because it allowed us to try many ideas easily in a controlled environment. It should be noted that all our simulation parameters are either relatively conservative or perfectly realistic. For instance, we assume a quite fast disk in our experiments. At the same time, the compression and decompression costs used in our simulations are the actual runtime costs for the exact pages whose compression or decompression is being simulated at any given time.
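As an illustration of this per-page costing, the sketch below times the compression and decompression of a single page's actual contents. It is a minimal sketch: zlib stands in for the compressors we actually evaluate, and the 4 KB page size is an assumption.

    import time
    import zlib

    PAGE_SIZE = 4096  # assumed 4 KB virtual memory page

    def measure_page_costs(page):
        """Time compression and decompression of one page's actual contents."""
        t0 = time.perf_counter()
        compressed = zlib.compress(page)   # zlib is a stand-in compressor
        comp_secs = time.perf_counter() - t0

        t0 = time.perf_counter()
        zlib.decompress(compressed)
        decomp_secs = time.perf_counter() - t0

        return comp_secs, decomp_secs, len(compressed) / len(page)

    # Example: a zero-filled page compresses to almost nothing.
    print(measure_page_costs(bytes(PAGE_SIZE)))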

The main value of our simulation results, however, is not in estimating the exact benefit of compressed caching (even though that benefit is clearly substantial). Instead, we demonstrate that it is possible to reliably detect how much memory should be compressed during a phase of program execution. The result is a compressed virtual memory policy that adapts to program behavior. The exact amount of compressed memory crucially affects program performance: compressing too much memory when it is not needed can be detrimental, as can compressing too little when slightly more would prevent many page faults. Unlike any fixed fraction of compressed memory, our adaptive compressed caching scheme yields uniformly high benefits for all test programs across a wide range of memory sizes.
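To illustrate the trade-off such a policy must resolve, the sketch below adjusts the compression cache size by comparing the disk time saved against the compression overhead incurred during a phase. This is a deliberately simplified, hypothetical sketch, not the adaptive policy we evaluate (described later in the paper); all names, units, and thresholds here are assumptions.

    def adapt_compressed_pages(hits, misses, overhead_secs, size,
                               seek_secs=0.01, step=16, max_size=4096):
        # hits:          faults served from the compression cache this phase
        # misses:        faults that still went to disk this phase
        # overhead_secs: time spent compressing and decompressing this phase
        # size:          current compression cache size, in pages
        saved_secs = hits * seek_secs          # disk seek time avoided
        if saved_secs > overhead_secs and misses > 0:
            return min(size + step, max_size)  # compression paying off: grow
        if saved_secs < overhead_secs:
            return max(size - step, 0)         # compressing too much: shrink
        return size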
