

Table 1: Characteristics of the four traces used in the study.
Trace                    First-Level Cache    # Reads      # Writes     # Clients or
                         Size (MBytes)        (millions)   (millions)   First-Level Servers
Oracle Miss Trace-128M   128                  7.3          4.3          single
Oracle Miss Trace-16M    16                   3.8          2.0          single
HP Disk Trace            30                   0.2          0.3          multiple
Auspex Server Trace      8 per client         1.8          0.8          multiple

We have collected four server buffer cache traces from file servers, disk subsystems, and database back-end storage servers, chosen to represent different types of workloads. All four traces contain only the misses from one or more client buffer caches that use LRU or one of its variations as the replacement algorithm. The block size for all traces is 8 KBytes. Table 1 shows the characteristics of the four traces. Because the first-level buffer cache size clearly affects server buffer cache performance, we set the first-level cache sizes for the two Oracle traces to represent typical configurations in real systems. We could not change the first-level cache sizes of the other two traces, however, because they were obtained from other sources.

Oracle Miss Trace-128M was collected from a storage system connected to an Oracle 8i database client running the standard TPC-C benchmark [42,27] for about two hours. The Oracle buffer cache replacement algorithm is similar to LRU [5]. The TPC-C database contains 256 warehouses and occupies around 100 GBytes of storage, excluding log disks. The trace captures all I/O accesses from the Oracle process to the storage system; that is, it includes only reads that miss in the Oracle buffer cache and writes that are flushed back to the storage system, either periodically or at commit time. The trace ignores all accesses to log disks. To better represent the workload on a real database system, we used 128 MBytes for the Oracle buffer cache. Oracle Miss Trace-16M was collected with the same setup, except that the database buffer cache (the first-level buffer cache) size is set to 16 MBytes. For both traces, we fixed the execution time at two hours rather than fixing the total number of transactions. Because of its larger first-level cache, Oracle Miss Trace-128M completes many more transactions than the 16-MByte trace, which is why the two traces have a similar number of misses.
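The notion of a miss trace used above can be illustrated with a minimal sketch: a first-level LRU cache filters a stream of block accesses, and only the accesses that miss are forwarded to the second-level (server) trace. The class and function names below are illustrative only, not the instrumentation actually used to collect the Oracle traces.

```python
from collections import OrderedDict

BLOCK_SIZE = 8 * 1024  # 8-KByte blocks, as in all four traces

class LRUCache:
    """First-level buffer cache with LRU replacement."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block id -> resident

    def access(self, block):
        """Return True on a hit; on a miss, insert the block (evicting LRU)."""
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = True
        return False

def miss_trace(accesses, cache_mbytes):
    """Filter an access stream through a first-level LRU cache,
    keeping only the misses (the second-level trace)."""
    cache = LRUCache(cache_mbytes * 1024 * 1024 // BLOCK_SIZE)
    return [b for b in accesses if not cache.access(b)]
```

In this sketch the first-level cache size (e.g., 128 or 16 MBytes for the Oracle traces) determines which accesses survive the filter and reach the server trace.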
HP Disk Trace was collected at Hewlett-Packard Laboratories in 1992 [33,32]. It captures all low-level disk I/O performed by the system. We used the trace gathered on Cello, a timesharing system used by a group of researchers at Hewlett-Packard Laboratories for simulation, compilation, editing, and mail. We also tried other HP disk trace files, and the results are similar. Auspex Server Trace is a trace of NFS file system activity on an Auspex file server at UC Berkeley in 1993 [16]. The system included 237 clients spread over four Ethernets, each of which connected directly to the central server. The trace covers seven days. We preprocessed the trace to include only block and directory read and write accesses. As in [16], we first split the trace into smaller trace files according to the client host ID. We then ran these traces through a multi-node cache simulator and collected the interleaved misses from the different client caches as our server buffer cache trace. The simulator uses an 8-MByte cache for each client and runs the LRU replacement algorithm.
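The multi-node preprocessing step described above can be sketched as follows: each client runs its own fixed-size LRU cache, and the misses from all clients, taken in timestamp order, form the server buffer cache trace. The function name and record format are assumptions for illustration; the actual simulator's details are not given in the text.

```python
from collections import OrderedDict

def multi_client_miss_trace(records, cache_blocks_per_client):
    """records: iterable of (timestamp, client_id, block_id) tuples,
    already sorted by timestamp, as a merged trace would be.
    Each client gets its own LRU cache of cache_blocks_per_client
    blocks; the interleaved misses form the server trace."""
    caches = {}  # client_id -> OrderedDict of resident blocks
    server_trace = []
    for ts, client, block in records:
        cache = caches.setdefault(client, OrderedDict())
        if block in cache:
            cache.move_to_end(block)           # hit: update recency
            continue
        if len(cache) >= cache_blocks_per_client:
            cache.popitem(last=False)          # evict LRU block
        cache[block] = True
        server_trace.append((ts, client, block))  # miss goes to server
    return server_trace
```

With 8-KByte blocks, the 8 MBytes per client used for the Auspex trace corresponds to a cache_blocks_per_client of 1024.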
Yuanyuan Zhou