
Performance Analysis


Table 3: Hit and miss distribution for the Oracle Miss Trace-128M with a 512 MByte cache, broken down by temporal distance (Note: first-time accesses to any block are not counted in either category).

Algorithm    distance $<$ 64k       distance $\ge$ 64k
             #hits     #misses      #hits     #misses
MQ           1553k     293k         1919k     2646k
2Q           1846k     0            1330k     3235k
FBR          1611k     234k         1146k     3418k
LFRU         1412k     434k         1470k     3094k
LRU2         1606k     239k         1179k     3385k
LRU          1846k     0            407k      4157k
LFU          1077k     769k         1196k     3368k
MRU          285k      1560k        434k      4131k


To understand the performance results in more detail, we use temporal distance as a measure to analyze the algorithms. Since the traces in our study have similar access patterns, this section reports the analysis using the Oracle Miss Trace-128M trace as a representative. The performance of a replacement algorithm at server caches primarily depends on how well it satisfies the lifetime property. As observed in Section 3, accesses to server caches tend to have long temporal distances. If the majority of accesses have temporal distances greater than $D$, a replacement algorithm that cannot keep blocks longer than $D$ time is unlikely to perform well. Our method is to classify all accesses into two categories according to their temporal distances: $< C$ and $\ge C$, where $C$ is the number of entries in the server buffer cache. Table 3 shows the number of hits and misses in each category for a 512 MByte server buffer cache.

LRU has no misses in the left category because any access in this category is fewer than $C$ references away from the previous access to the same block; since the buffer cache holds $C$ blocks, the accessed block must still be in the cache. However, LRU has a large number of misses in the right category because any block that has not been accessed for more than $C$ time can be evicted from the cache, leading to a miss on the next access to that block. Since the right category dominates the total number of accesses (Figure 3(a)), LRU does not perform well.

The 2Q, FBR, LFRU and LRU2 algorithms reduce the number of misses in the right category by 15-25% because they can keep warm blocks in the cache longer than $C$ time. To achieve this, however, the FBR, LFRU and LRU2 algorithms have to sacrifice some blocks, which are kept in the cache for only a short period of time. As a result, these three algorithms incur some misses in the left category. But the number of such misses is much smaller than the number of misses avoided in the right category, so overall the three algorithms have fewer misses than LRU. Because the 2Q algorithm has no misses in the left category, it outperforms the FBR, LFRU and LRU2 algorithms.

MQ significantly reduces the number of misses in the right category. As shown in Table 3, MQ has 2,646k misses in the right category, 36% fewer than LRU. Similar to the FBR algorithm, MQ also has some misses in the left category, but their number is so small that they contribute only 10% of the total number of misses. Overall, the MQ algorithm performs better than the other on-line algorithms.
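The classification above can be reproduced with a small simulation. The following Python sketch (our own illustration, not the paper's simulator) runs an LRU cache of $C$ entries over a reference trace and classifies every re-access by its temporal distance, measured here as the number of references since the previous access to the same block. It also demonstrates the property claimed for LRU: an access at temporal distance less than $C$ can never miss.

```python
from collections import OrderedDict

def classify_lru(trace, C):
    """Simulate an LRU cache with C entries and classify each re-access
    as a hit or miss in the 'lt' (distance < C) or 'ge' (distance >= C)
    category. First-time accesses are not counted, as in Table 3."""
    cache = OrderedDict()   # block -> None, ordered from LRU to MRU
    last_seen = {}          # block -> index of its previous access
    counts = {('lt', 'hit'): 0, ('lt', 'miss'): 0,
              ('ge', 'hit'): 0, ('ge', 'miss'): 0}
    for i, block in enumerate(trace):
        hit = block in cache
        if hit:
            cache.move_to_end(block)        # refresh recency
        else:
            cache[block] = None
            if len(cache) > C:
                cache.popitem(last=False)   # evict least-recently-used
        if block in last_seen:              # count re-accesses only
            dist = i - last_seen[block]
            cat = 'lt' if dist < C else 'ge'
            counts[(cat, 'hit' if hit else 'miss')] += 1
        last_seen[block] = i
    return counts
```

A block re-accessed within fewer than $C$ references can have seen at most $C-2$ distinct other blocks in between, so LRU cannot yet have evicted it; hence `counts[('lt', 'miss')]` is always zero, matching the 0 entries for LRU in Table 3.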
Yuanyuan Zhou
2001-04-29