

Combining Caching and Prefetching for Sequential Data

So far, we have discussed two crucial aspects of sequential prefetching: (i) what to prefetch and (ii) when to prefetch. We now turn our attention to the next issue, namely, the management of prefetched data in the cache. Given a fixed amount of cache, prefetched data cannot be kept forever and must eventually be replaced. Prefetching and caching are thus intertwined and cannot be studied in isolation.

In practice, the most widely used online policy for cache management is LRU, which maintains a doubly-linked list of tracks ordered by recency of access. In the context of sequential prefetching, when tracks are prefetched or accessed, they are placed at the MRU (most recently used) end of the list; for cache replacement, tracks are evicted from the LRU (least recently used) end of the list.
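
To make the list mechanics concrete, the following is a minimal Python sketch of such an LRU list, assuming tracks are identified by integers; the class and method names are ours and not drawn from any particular implementation.

    from collections import OrderedDict

    class LRUCache:
        """Minimal LRU list, ordered from the LRU end (front) to the MRU end (back)."""
        def __init__(self, capacity):
            self.capacity = capacity      # maximum number of tracks held
            self.tracks = OrderedDict()

        def touch(self, track):
            """Place a track at the MRU end, on either access or prefetch."""
            if track in self.tracks:
                self.tracks.move_to_end(track)       # already cached: promote to MRU
            else:
                self.tracks[track] = True            # newly inserted at the MRU end
                if len(self.tracks) > self.capacity:
                    self.tracks.popitem(last=False)  # evict from the LRU end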

We now describe an interesting situation that arises when the above synchronous+asynchronous prefetching strategy is used along with LRU-based caching. Suppose that the asynchronous trigger track in a currently active group of prefetched tracks is accessed. This causes an asynchronous prefetch of the next group of tracks. In an LRU-based cache, this newly fetched group of tracks, along with the asynchronous trigger track, is placed at the MRU end of the list. The unaccessed tracks within the current prefetch group remain where they were in the LRU list and, hence, in some cases, could be near the LRU end of the list. However, observe that these unaccessed tracks within the current prefetch group will be accessed before the tracks in the newly prefetched group. Hence, depending upon the amount of cache space available for sequential data, some of these unaccessed tracks may be evicted from the cache before they are accessed, resulting in a sequential miss. Furthermore, this may happen repeatedly, thus defeating the purpose of employing asynchronous prefetching. This observation is related to the "Do No Harm" rule of [34] in the context of offline policies for integrated caching and prefetching.
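
The following walk-through, built on the hypothetical LRUCache sketch above, illustrates the scenario; the cache size, group boundaries, and track numbers are made up for illustration.

    cache = LRUCache(capacity=8)

    for t in [1, 2, 3, 4]:        # synchronously prefetch the current group;
        cache.touch(t)            # let track 2 be the asynchronous trigger

    cache.touch(1)                # sequential read of track 1
    cache.touch(2)                # trigger accessed: asynchronously prefetch 5..8
    for t in [5, 6, 7, 8]:
        cache.touch(t)            # the new group lands at the MRU end

    for t in [100, 101]:          # unrelated demand traffic arrives, while the
        cache.touch(t)            # unaccessed tracks 3 and 4 sit near the LRU end

    # Tracks 3 and 4 are now evicted even though the sequential stream will
    # request them *before* tracks 5..8 -- a sequential miss on each.
    print(3 in cache.tracks, 4 in cache.tracks)   # -> False False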

In a purely demand-paging context, LRU is a stack algorithm [2]: the contents of a smaller cache are always a subset of the contents of a larger cache, so misses can never increase as the cache grows. Quite surprisingly, when LRU-based caching is used along with the above prefetching strategy, the resulting algorithm violates the stack property. As a result, when the amount of cache space given to sequentially prefetched data increases, sequential misses do not necessarily decrease. This anomaly is illustrated in the right-hand panel of Figure 1. As will be seen in the sequel, the stack property is a crucial ingredient of our algorithm.

At the cost of increasing sequential misses, both of the above problems can be hidden if (i) only synchronous prefetching is used, or (ii) both synchronous and asynchronous prefetching are used and the asynchronous trigger is always set to be the last track in a prefetched group. The first approach amounts to a relapse in which we forgo all the potential benefits of asynchronous prefetching. The second approach is attractive in principle; however, if a track is accessed while it is still being prefetched, that is, before it arrives in the cache, the resulting sequential miss is not avoided. To avoid such misses, in practice, the asynchronous trigger track is set sufficiently before the last prefetched track so that the next prefetched group arrives in the cache before it is actually accessed.
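
As a rough illustration of how far before the end of a group the trigger must be placed, suppose a prefetch takes t seconds to complete and the stream consumes tracks at r tracks per second; then at least ceil(t * r) unaccessed tracks should remain beyond the trigger. This simple model and the numbers below are our own assumptions, not taken from the text.

    import math

    def trigger_offset(prefetch_latency_s, consumption_rate_tps):
        """Number of tracks before the end of a group at which to set the
        asynchronous trigger, so that the next group arrives in the cache
        before it is needed (illustrative model only)."""
        return math.ceil(prefetch_latency_s * consumption_rate_tps)

    # E.g., a 10 ms prefetch against a stream reading 400 tracks/s requires
    # the trigger at least ceil(0.010 * 400) = 4 tracks before the group's end.
    print(trigger_offset(0.010, 400))   # -> 4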

Unlike the above two approaches, we are interested in fixing the anomalous behavior without incurring additional sequential misses. To this end, we now propose a simple-to-implement and elegant algorithmic enhancement. As mentioned above, in an LRU-based cache, the newly prefetched group of tracks, along with the asynchronous trigger track in the current group, is placed at the MRU end of the list. We propose, in addition, to move all unaccessed tracks in the current group to the MRU end of the list. As can be seen in the right-hand panel of Figure 1, this enhancement retains all the benefits of asynchronous prefetching while ridding it of its anomalous behavior.
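
In terms of the earlier hypothetical sketch, the enhancement amounts to one extra loop at trigger time:

    def on_trigger_access(cache, current_group, accessed, next_group):
        """Handle an access to the asynchronous trigger track: besides
        prefetching the next group, promote every not-yet-accessed track of
        the current group so that none of them can be evicted before the
        tracks of the newly prefetched group."""
        for t in current_group:
            if t not in accessed:
                cache.touch(t)        # move unaccessed tracks to the MRU end
        for t in next_group:
            cache.touch(t)            # then place the new group at the MRU end

    # Replaying the earlier walk-through with this handler keeps tracks 3 and 4
    # cached ahead of 5..8, eliminating the spurious sequential misses.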

