
Characterizing the Workload

The Andrew benchmark consists of five phases; however, the only phase relevant to predictive prefetching is phase five, the compile phase. So when we refer to the Andrew benchmark, we mean phase five of this benchmark. This test compiles 17 C files and links the resulting object files into two libraries and one executable program. The source code totals 431 kilobytes in 11,215 lines.

Tables 1 and 2 show summaries of time and event count statistics for the Andrew benchmark on the test machine under the unmodified Linux 2.2.12 kernel. The rows marked Cold represent tests where the I/O caches were cleared out prior to each run of the benchmark, while the rows marked Hot represent tests where the I/O caches were not cleared out. Note that the hot cache test required no disk accesses because all of the data for the Andrew benchmark was kept within the I/O caches on the test machine.

Table 1 shows latency statistics. The column marked Elapsed represents the mean elapsed time for that test. The column marked Compute represents the amount of time the benchmark process was computing: the sum of the user time and system time for that test. This time represents a lower bound on how fast we can make our benchmark run. The column marked Read shows the average duration of read system calls. A 90% confidence interval follows each of these measures.
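The confidence intervals in Table 1 can be reproduced with the standard Student's t construction. A minimal sketch, assuming 20 independent runs (as in Table 2) and the two-sided t critical value for 19 degrees of freedom:

```python
import math

def ci90(samples):
    """Return (mean, half-width of a 90% confidence interval) for a sample list.

    Assumes exactly 20 samples; the hard-coded critical value below
    is t_{0.95, 19} and is only valid for that sample size.
    """
    n = len(samples)
    mean = sum(samples) / n
    # Unbiased sample variance
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    t = 1.729  # two-sided 90% critical value for 19 degrees of freedom
    half_width = t * math.sqrt(var / n)
    return mean, half_width
```

For example, `ci90` applied to the 20 elapsed-time measurements of the cold-cache test would yield the 9.15 and 0.05 entries of Table 1.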

Table 2 shows read event count statistics. We divided read calls into three categories: hits, partial hits, and misses. Hits required no disk access: the data was already available in the page or buffer cache. Partial hits represent cases where the necessary data was already in the process of being read but was not yet available. Misses represent events where the data request required new disk activity.
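The three-way classification amounts to a simple two-flag decision. The sketch below uses hypothetical flags `in_cache` and `read_in_progress` (not names from our instrumentation) to describe the state of the requested data at the time of the read call:

```python
def classify_read(in_cache: bool, read_in_progress: bool) -> str:
    """Categorize a read event by the state of the requested data.

    hit     -- data already resident in the page or buffer cache
    partial -- a disk read covering the data has been issued but not completed
    miss    -- the data requires new disk activity
    """
    if in_cache:
        return "hit"
    if read_in_progress:
        return "partial"
    return "miss"
```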

The Andrew benchmark workload is I/O intensive; however, many of its read events are satisfied from the I/O caches. On our test machine this workload generated 919 read events, of which 47 required disk access with a cold cache, a miss ratio of 0.05. The cold cache test spent 7.94 seconds in the running state and had a total elapsed time of 9.15 seconds, so its total I/O latency is bounded above by the difference of these two numbers, 1.21 seconds in this case.
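These figures follow directly from Tables 1 and 2; as a quick check:

```python
# Cold-cache values taken from Tables 1 and 2
reads, misses = 919, 47
elapsed, compute = 9.15, 7.94  # seconds

miss_ratio = misses / reads           # fraction of reads needing disk access
io_latency_bound = elapsed - compute  # upper bound on total I/O latency (s)
```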

Table: Workload time summary for phase five of the Andrew benchmark. Elapsed and compute times are in seconds; read times are in microseconds. The 90% columns give the half-widths of 90% confidence intervals.

Test   Elapsed (s)   90%    Compute (s)   90%    Read (us)   90%
Cold   9.15          0.05   7.94          0.01   646         31.06
Hot    7.95          0.02   7.93          0.00   139         0.31

Table: Read event count summary for the Andrew benchmark. Counts are the number of events that fell in each category, averaged across the last 20 runs of each test.

Test   Calls   Hits   Partial   Misses
Cold   919     334    537       47
Hot    919     382    537       0


Tom M. Kroeger