

Traces

We use both synthetic and real-life traces that have been widely used for evaluating caching algorithms.

P1-P14: These traces [25,17] were collected over several months from workstations running Windows NT by using VTrace [23].

Financial1 and Financial2: These traces [16] were collected by monitoring requests to disks of OLTP applications at two large financial institutions.

SPC1: We use a trace captured at a subset of the disks while servicing the SPC-1 benchmark [24]. The benchmark combines three workloads that follow purely random, sequential, and hierarchical-reuse access models. This synthetic workload has been widely used for evaluating cache algorithms [25,14,16,3].

Zipf Like: We use a synthetic trace that follows a Zipf-like [37] distribution, where the probability of the $i^{th}$ page being referenced is proportional to $1/i^\alpha$ ($\alpha = 0.75$, over $400$K blocks). This approximates common access patterns, such as in web traffic [9,5]. Multi-level caching algorithms [33,12] have employed this trace for evaluations.
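
As a concrete illustration, the following Python sketch generates such a Zipf-like reference stream. It is our own minimal example (the function name and parameters are illustrative, not the paper's generator): block $i$ is drawn with probability proportional to $1/i^\alpha$.

    import numpy as np

    def zipf_like_trace(num_blocks=400_000, alpha=0.75, length=2_000_000, seed=0):
        # Block ranked i (1-indexed) is referenced with probability
        # proportional to 1/i^alpha; alpha = 0.75 matches the trace above.
        rng = np.random.default_rng(seed)
        ranks = np.arange(1, num_blocks + 1)
        weights = 1.0 / ranks ** alpha
        probs = weights / weights.sum()
        # Draw `length` block references according to this law.
        return rng.choice(num_blocks, size=length, p=probs)

    trace = zipf_like_trace()  # block indices, skewed toward low ranks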

Since write cache management policies need to leverage both temporal and spatial locality (see [15]), the write cache is typically managed using a policy distinct from the read cache. Following previous work [33,12], we focus on the read component of the caches and ignore writes for simplicity. Including the writes would only skew the comparisons further in favor of PROMOTE, since writes increase contention for the disk and network resources, a scenario in which PROMOTE easily outshines DEMOTE. Each trace is limited to its first two million reads to shorten the experiment durations.
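A minimal preprocessing sketch along these lines keeps only read requests and truncates the stream to its first two million reads. The record format assumed here (whitespace-separated, operation type in the second field) is a hypothetical placeholder; each real trace uses its own format.

    import itertools

    def first_n_reads(records, n=2_000_000):
        # Keep only read requests and stop after the first n of them.
        # Hypothetical format: second whitespace-separated field is the
        # operation type ('R' = read, 'W' = write).
        reads = (r for r in records if r.split()[1] == 'R')
        return itertools.islice(reads, n)

    # Example: count the reads kept from a (hypothetical) trace file.
    with open('trace.txt') as f:
        print(sum(1 for _ in first_n_reads(f)))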

