

Prior Work

A number of approaches have been proposed for integrated caching of sequential and random data, and almost all of them employ an LRU variant. One simple approach [20], which we call LRU-Top, maintains a single LRU list for both sequential and random data: whenever tracks are prefetched or accessed, they are placed at the MRU end of the list, and tracks are evicted from the LRU end. Another approach [35,36], which we call LRU-Bottom, also maintains a single LRU list for both kinds of data; however, while random data is inserted at the MRU end of the list, sequential data is inserted near the LRU end. For another interesting LRU variant, see [37]. [1] suggested holding sequential data for a fixed amount of time, while [29] suggested giving sequential data a fixed, predetermined portion of the cache space. [34] studied offline, optimal policies for integrated caching and prefetching. In recent work, [38] focused on general demand prepaging and noted that the amount of cache space devoted to prefetched data is "critical, and its ideal value depends not only on the predictor and the degree, but also on the main memory size, the application, and the reference behavior of the process." In other words, no strategy that is independent of the workload characteristics is likely to be universally useful.
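
To make the contrast concrete, the following minimal Python sketch shows the two insertion policies on a single LRU list. The class, its names, and the simplification of inserting sequential tracks at (rather than near) the LRU end are our own illustration, not details taken from [20] or [35,36].

    from collections import OrderedDict

    class LRUVariantCache:
        """Single LRU list over cached tracks; keys are ordered from
        the LRU end (first) to the MRU end (last)."""

        def __init__(self, capacity, policy="top"):
            self.capacity = capacity   # maximum number of tracks held
            self.policy = policy       # "top" = LRU-Top, "bottom" = LRU-Bottom
            self.tracks = OrderedDict()

        def access(self, track, sequential=False):
            if track in self.tracks:
                # A hit refreshes recency under both policies.
                self.tracks.move_to_end(track)
                return
            if len(self.tracks) >= self.capacity:
                # Both policies evict from the LRU end of the list.
                self.tracks.popitem(last=False)
            self.tracks[track] = None  # new tracks enter at the MRU end
            if self.policy == "bottom" and sequential:
                # LRU-Bottom: sequential tracks are demoted toward the
                # LRU end (here, all the way), so they are recycled
                # soon after use; random tracks stay at the MRU end.
                self.tracks.move_to_end(track, last=False)

Under LRU-Top, a long sequential scan floods the MRU end and can push out random data, whereas under LRU-Bottom sequential tracks are recycled almost immediately; neither fixed policy adapts to the workload, which is the tension the rest of the paper addresses.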

In the context of demand paging, a number of cache replacement policies have been studied in addition to LRU; see, for example, LFU, FBR, LRU-2, 2Q, MQ, LRFU, and ARC. For a detailed overview of these algorithms, see [7,8]. Our context is different from that of these algorithms, since we are interested in integrated policies for caching and prefetching. Previously, [14] considered adaptively balancing the cache among three partitions: an LRU cache, a hinted cache, and a prefetch cache. It is not clear how to efficiently extend the algorithm of [14] in the presence of a potentially very large number of sequential streams.

Our algorithm, Sequential Prefetching in Adaptive Replacement Cache (SARC), is closely related to, but distinct from, Adaptive Replacement Cache (ARC). In particular, the idea of two adaptive lists in SARC is inspired by ARC. There are, however, several differences between the two algorithms: (i) ARC is applicable only in a demand paging scenario, whereas SARC combines caching with sequential prefetching; (ii) while ARC maintains a history of recently evicted pages, SARC needs no history and is also simpler to implement.

