
Exclusivity via Smart Lower Caches

It has been known that the Least Frequently Used (LFU) replacement algorithm [13,32] performs better at second-level caches than the traditional LRU algorithm. A more sophisticated second-level cache management algorithm is the Multi-Queue (MQ) algorithm [36], which maintains multiple LRU lists geared to capture frequently accessed pages with long reuse intervals. However, MQ has not been studied for more than two levels of cache, and it also cannot achieve complete exclusivity among the caches where exclusivity is desirable.
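The multi-queue idea can be illustrated with a minimal sketch: blocks migrate to higher LRU queues as their access count grows, so frequently accessed blocks survive even when their reuse intervals are long. This is only an illustrative simplification, not the actual MQ algorithm of [36] (which also uses lifetimes and a ghost queue); the class and parameter names are ours.

```python
# Illustrative multi-queue cache: m LRU queues, where a block's queue
# index grows logarithmically with its access frequency. Eviction takes
# the LRU block of the lowest non-empty queue, so frequently accessed
# blocks with long reuse intervals outlive one-time accesses.
from collections import OrderedDict
import math

class MQCache:
    def __init__(self, capacity, num_queues=4):
        self.capacity = capacity
        self.queues = [OrderedDict() for _ in range(num_queues)]
        self.freq = {}   # access count of each resident block
        self.size = 0

    def _queue_index(self, count):
        # Higher-frequency blocks live in higher queues.
        return min(int(math.log2(count)), len(self.queues) - 1)

    def access(self, block):
        if block in self.freq:                     # hit: promote
            old = self._queue_index(self.freq[block])
            del self.queues[old][block]
            self.freq[block] += 1
        else:                                      # miss: make room
            if self.size >= self.capacity:
                self._evict()
            self.freq[block] = 1
            self.size += 1
        q = self._queue_index(self.freq[block])
        self.queues[q][block] = True               # MRU position of queue q

    def _evict(self):
        # Victim: LRU block of the lowest non-empty queue.
        for q in self.queues:
            if q:
                victim, _ = q.popitem(last=False)
                del self.freq[victim]
                self.size -= 1
                return victim
```

For example, after the access stream a, b, a, a, c on a 3-block cache, block a sits in a higher queue, so a subsequent miss evicts b rather than the more frequently used a.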

A more recent algorithm, X-RAY [2], constructs in the RAID controller cache an approximate image of the contents of the filesystem cache by monitoring meta-data updates, which allows for better cache replacement decisions and exclusivity. Such gray-box approaches, however, are domain-specific and do not extend easily to more than two levels of cache.
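The gray-box inference can be sketched roughly as follows: when the controller observes the filesystem writing back an inode's access-time field, it can infer that the file was just read and that its blocks are likely resident in the filesystem cache above, making them good eviction candidates for an exclusive lower cache. This is a loose illustration of the idea under our own assumed structure, not X-RAY's actual design; all names here are hypothetical.

```python
# Illustrative gray-box tracking: infer which files are probably cached
# above us from observed access-time (atime) meta-data updates.
class XRayImage:
    def __init__(self):
        self.last_read = {}   # inode -> inferred timestamp of last read

    def on_metadata_write(self, inode, atime):
        # An atime update implies the client just read this file, so its
        # blocks are likely (re)cached in the filesystem cache above.
        self.last_read[inode] = atime

    def likely_cached(self, top_n):
        # The most recently read files are probably duplicated above, so
        # an exclusivity-seeking lower cache should evict their blocks first.
        ranked = sorted(self.last_read, key=self.last_read.get, reverse=True)
        return set(ranked[:top_n])
```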

A similar approach is to use a Client Cache Tracking (CCT) table [6] in the lower cache to simulate the contents of the higher cache. This allows the lower cache to proactively reload from the storage media the pages evicted by the higher cache. The extra cost of these requests, however, may overburden the storage media, resulting in high read response times.
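A minimal sketch of this tracking-and-reload loop, assuming the higher cache is modeled as a simple LRU (the actual CCT design in [6] may differ; class and parameter names are ours): each observed client read updates the simulated image, and when the model predicts an eviction above, the lower cache issues an extra read to the storage media to fetch that page, which is exactly the extra load the text warns about.

```python
# Illustrative CCT-style lower cache: simulate the client cache with an
# LRU model and proactively reload pages the model predicts were evicted.
from collections import OrderedDict

class CCTLowerCache:
    def __init__(self, client_capacity, own_capacity, read_from_disk):
        self.cct = OrderedDict()     # simulated client cache (LRU order)
        self.client_capacity = client_capacity
        self.cache = OrderedDict()   # the lower cache's own contents
        self.own_capacity = own_capacity
        self.read_from_disk = read_from_disk
        self.disk_reads = 0          # extra requests sent to the media

    def on_client_read(self, block):
        # Update the simulated image exactly as the client's LRU would.
        evicted = None
        if block in self.cct:
            self.cct.move_to_end(block)
        else:
            if len(self.cct) >= self.client_capacity:
                evicted, _ = self.cct.popitem(last=False)
            self.cct[block] = True
        # Proactive reload: the predicted victim is fetched from the
        # storage media into the lower cache, preserving exclusivity
        # at the price of an extra disk request.
        if evicted is not None and evicted not in self.cache:
            self.read_from_disk(evicted)
            self.disk_reads += 1
            if len(self.cache) >= self.own_capacity:
                self.cache.popitem(last=False)
            self.cache[evicted] = True
```

Note that every predicted client eviction turns into a disk read, which is why a busy higher cache can drive the reload traffic high enough to hurt demand-read response times.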

