

Synchronous and Asynchronous Prefetching

The simplest sequential read-ahead strategy is synchronous prefetching, which, on a sequential miss on track $n$, simply brings tracks $n$ through $n+m$ into the cache, where $m$ is known as the degree of sequential read-ahead and is a tunable parameter [33]. As mentioned in the introduction, the additional cost of read-ahead on a seek is becoming progressively smaller. The number of sequential misses decreases with increasing $m$, provided that (i) all the read-ahead tracks are accessed and (ii) none are discarded before they are accessed. Consider the well-known OBL (one block look-ahead) scheme [20], which uses $m=1$. This scheme halves the number of sequential misses. Generalizing, with $m$-track look-ahead, the number of sequential misses decreases to $1/(m+1)$ of its original value. To eliminate misses purely by using synchronous prefetching, $m$ would need to become infinite. This is impractical, since prefetching too far in advance causes cache pollution and ultimately degrades performance. Moreover, depending upon the amount of cache space available to sequential data, not all tracks can be used before they are evicted. Hence, simply increasing $m$ cannot always drive the number of sequential misses to zero. The behavior of sequential misses for synchronous prefetching is illustrated in the left-hand panel of Figure 1.
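The $1/(m+1)$ behavior can be sketched with a toy simulation (this is our own minimal sketch, not the paper's simulator): a single purely sequential stream with a cache large enough that prefetched tracks are never evicted, so every $(m+1)$-th access is a sequential miss.

```python
# Toy sketch of degree-m synchronous prefetching on one sequential stream.
# Assumes an unbounded cache (no evictions), so conditions (i) and (ii)
# from the text hold and misses fall to 1/(m+1) of the accesses.

def sync_prefetch_misses(num_tracks: int, m: int) -> int:
    cache = set()
    misses = 0
    for n in range(num_tracks):        # sequential access: track 0, 1, 2, ...
        if n not in cache:
            misses += 1                # sequential miss on track n
            # degree-m read-ahead: bring tracks n through n+m into the cache
            cache.update(range(n, n + m + 1))
    return misses

# For 1200 sequential accesses:
#   m=0 -> 1200 misses, m=1 (OBL) -> 600 misses, m=5 -> 200 misses
```

With a bounded cache and evictions, the curve flattens out at a constant instead, matching the left-hand panel of Figure 1.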

Figure 1: Both of the above graphs were obtained via a simulation of a single sequential stream. The left-hand panel depicts the behavior of sequential misses for synchronous prefetching. The lowermost hyperbolic curve corresponds to an idealized situation with an infinite degree of read-ahead. It can be seen that for a fixed, finite degree of read-ahead $m$, sequential misses initially follow the hyperbolic curve and then become constant. Moreover, the higher the degree of read-ahead, the smaller the constant minimum attained. The right-hand panel depicts the behavior of sequential misses for synchronous+asynchronous prefetching with and without an anomaly. The anomalous curve initially follows the hyperbolic curve, but has a bump before going to zero.

To effectively eradicate misses, asynchronous prefetches can be used [29,32]. An asynchronous prefetch is carried out in the absence of a sequential miss, typically on a hit. The basic idea is to read ahead a first group of tracks synchronously, and then, once a preset fraction of the prefetched group of tracks has been accessed, asynchronously (that is, in the absence of a sequential miss) read ahead the next group of tracks, and so on. Typically, asynchronous prefetching is done on an asynchronous trigger, namely, a special track in a prefetched group of tracks. When the asynchronous trigger track is accessed, the next group of tracks is asynchronously read ahead and a new asynchronous trigger is set. The intent is to exploit sequential structure to continuously stage tracks ahead of their access without incurring any miss beyond the initial sequential miss. A good analogy is an ongoing relay race in which each group of tracks passes the baton on to the next.

To summarize, in state-of-the-art sequential prefetching, synchronous prefetching is used initially, when the sequence is first detected. After this bootstrapping stage, asynchronous prefetching can sustain itself as long as all the tracks within the current prefetched group are accessed before they are evicted. As a corollary, the asynchronous trigger track will also be accessed and will, in turn, prefetch the next group of tracks, among which will be the next asynchronous trigger.


Binny Gill 2005-02-14