Next: 6 Conclusions Up: Improving Application Performance through Previous: 4.4 Increase of the

5 Related Work


Little research has been done on compressing the swap space. The compression cache proposed by Fred Douglis [4] is, in essence, very similar to our work, but there are some important differences. In that work, swap pages are also compressed and kept in a cache to increase both the size of the virtual memory and the performance of applications that have to swap. One major difference is that our results are less dependent on the compression ratio than theirs: in the previous work, no performance gains were obtained with compression ratios worse than 30%, whereas we obtain significant improvements even with a compression ratio of 62%. This may be due either to design issues or to improvements in technology (compression is much faster now). The previous work also lacked a study of the kinds of read hits obtained in the cache. That study led us to significant design modifications, such as having two different paths: one for swapping in and one for swapping out. As we have shown in this paper, this distinction yields significant performance benefits. Finally, all of their benchmarks were single-process, while we believe that multi-process benchmarks must also be studied.
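The compression-ratio threshold discussed above can be illustrated with a minimal sketch. Here zlib stands in for the compressor and the page contents are hypothetical; the point is simply how the ratio (compressed size over page size) determines whether caching a compressed page frees any memory at all.

```python
import zlib

PAGE_SIZE = 4096  # a typical page size

# A hypothetical swap page: partially repetitive data, as pages often are.
page = (b"struct task { int pid; }" * 100).ljust(PAGE_SIZE, b"\x00")

compressed = zlib.compress(page)
ratio = len(compressed) / PAGE_SIZE

# A ratio of 0.62 means the page compressed to 62% of its size; the
# closer the ratio is to 1.0, the less space the compressed cache saves.
print(f"compression ratio: {ratio:.0%}")
```

Whether a given ratio pays off depends on the relative cost of compression versus a disk access, which is why faster CPUs make compressed swapping profitable at worse ratios.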

Examining our work more generally, we can divide it into two basic issues: increasing the size of the memory and reducing the average time needed to swap a page in or out. Let us discuss what has been done in both fields.

Following the idea of increasing the size of the memory, there are some commercial products that compress the physical memory. With these software mechanisms, applications believe that the system has a larger amount of physical memory. However, the benefits obtained by such systems are not clear [8, 14]. The same idea has also been implemented in hardware, with much better performance gains [6].

There have also been many proposals to decrease the number of disk accesses due to swapping. For instance, some work has been devoted to minimizing the number of pages that have to be swapped out: if the contents of a page are irrelevant to the application's execution, that page does not need to be kept in the swap area [15, 5]. Along the same lines, software has been developed to study the utilization of pages and thus improve programs, reducing the number of pages swapped in and out [12]. Other work has tried to group pages when they are swapped out so that larger writes are performed [1].

Finally, the approach of compressing information before sending it to the disk is widely used in database environments and in some file systems.


Toni Cortes
Tue Apr 27 17:43:22 MET DST 1999