
Background: Compression

To understand our algorithms and how they relate to others, a few basic ideas about data compression are needed. (We focus on lossless compression, which allows exact reconstruction of the original data; lossy compression would generally be a disaster for compressed VM.)

All data compression algorithms are, in a deep sense, ad hoc: they must exploit expected regularities in data to achieve any compression at all. Every compression algorithm embodies expectations about the kinds of regularities that will be encountered in the data it compresses. Depending on the data, those expectations may be more or less appropriate, and compression will work correspondingly better or worse. The key to good compression is thus having the right expectations for the data at hand.

Compression can be thought of as consisting of two phases, which are typically interleaved in practice: modeling and encoding [BCW90,Nel95]. Modeling is the process of detecting regularities that allow a more concise representation of the information. Encoding is the construction of that more concise representation.
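To make the modeling/encoding split concrete, the sketch below is a minimal, hypothetical example (not any algorithm from this paper): the "model" detects one simple regularity, namely that a few byte values dominate the data, and the "encoder" exploits it by giving those values short 4-bit codes. The function names (build_model, encode) and the escape-code scheme are illustrative assumptions, and the two phases are shown as separate passes even though, as noted above, they are typically interleaved in practice.

/*
 * Minimal sketch of the modeling/encoding split (hypothetical example,
 * not the paper's algorithm).  The model finds a regularity -- a few
 * byte values dominate the input -- and the encoder exploits it by
 * giving those values short codes.
 */
#include <stdio.h>
#include <string.h>

#define TOP 15  /* values that get a short 4-bit code; code 0xF is an escape */

/* Modeling phase: count byte frequencies and pick the TOP most common values. */
static void build_model(const unsigned char *in, size_t n, unsigned char top[TOP])
{
    size_t count[256] = {0};
    for (size_t i = 0; i < n; i++)
        count[in[i]]++;
    for (int k = 0; k < TOP; k++) {        /* repeatedly select the current maximum */
        size_t best = 0;
        int best_v = 0;
        for (int v = 0; v < 256; v++)
            if (count[v] > best) { best = count[v]; best_v = v; }
        top[k] = (unsigned char)best_v;
        count[best_v] = 0;
    }
}

/* Encoding phase: frequent bytes become 4-bit codes; all others are
 * escaped (escape code plus the literal byte).  Returns the number of
 * 4-bit units ("nibbles") emitted, as a proxy for compressed size. */
static size_t encode(const unsigned char *in, size_t n, const unsigned char top[TOP])
{
    size_t nibbles = 0;
    for (size_t i = 0; i < n; i++) {
        int code = -1;
        for (int k = 0; k < TOP; k++)
            if (top[k] == in[i]) { code = k; break; }
        nibbles += (code >= 0) ? 1 : 3;    /* short code vs. escape + literal byte */
    }
    return nibbles;
}

int main(void)
{
    const unsigned char data[] = "aaaabbbbccccddddaaaa";
    unsigned char top[TOP];
    size_t n = sizeof data - 1;

    build_model(data, n, top);             /* modeling */
    size_t nib = encode(data, n, top);     /* encoding */
    printf("%zu bytes -> %zu nibbles (~%zu bytes)\n", n, nib, (nib + 1) / 2);
    return 0;
}

On data where the expectation holds (a few byte values account for most of the input), this toy scheme roughly halves the size; on data where it does not, the escapes make the output larger than the input, which illustrates the point above that compression depends on having the right expectations.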



 
