Adding a cache to the swapping mechanism means that some of the memory otherwise available to processes is set aside for the cache, leaving applications with less memory to work with. If nothing else is done, we have merely taken some fast memory from the applications and offered back the same amount of memory, only slower, which hardly improves application performance. The ideal solution would be to take some fast memory from the users and offer them in return a somewhat slower memory that is two or three times larger. Of course, this new memory must still be faster than the disk, so that it reduces the number of times the system has to access the disk for paging. This can be achieved by compressing the swapped pages: in a compressed cache, the system can keep more pages than were taken from the applications.
Whenever a page is swapped out, the system compresses it before storing it in the cache. Conversely, when the swap module requests a page, the system fetches it either from the cache or from the disk and decompresses it before handing it to the swap module. Figure 1 shows the first version of the path proposed for swapping pages in and out.
Figure 1: Conceptual vision of the compression and cache mechanism.
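This path can be illustrated with a minimal sketch in Python, using zlib as a stand-in for the page compressor; the class and method names are hypothetical and stand only for the roles described above, and the disk is modeled as a simple dictionary rather than a real swap device.

```python
import zlib

PAGE_SIZE = 4096  # typical page size in bytes

class CompressedSwapCache:
    """Illustrative compressed cache sitting between the swap module and disk."""

    def __init__(self):
        self.cache = {}  # page number -> compressed page bytes
        self.disk = {}   # stand-in for the swap device (uncompressed pages)

    def swap_out(self, page_no, data):
        # Compress the page before storing it in the cache, so the cache
        # can hold more pages than the memory taken from the applications.
        self.cache[page_no] = zlib.compress(data)

    def swap_in(self, page_no):
        # Serve the page from the cache if present (decompressing it before
        # handing it back to the swap module); otherwise read it from disk.
        if page_no in self.cache:
            return zlib.decompress(self.cache.pop(page_no))
        return self.disk[page_no]
```

A swap-out followed by a swap-in of the same page returns the original data, and for compressible pages the cached copy occupies less space than the page itself, which is what lets the cache hold more pages than it cost.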