
  
3.5 Validating the Integrated Simulator


 
Table 1: Platform characteristics.

    Operating system:   Microsoft Windows 2000
    CPU type:           Intel Pentium III, 733 MHz
    Memory:             128 MB
    SCSI interface:     Adaptec 39160
    SCSI bus speed:     160 MB/s
    Disk model:         Seagate ST39133LWV, 9.1 GB
    RPM:                10,000
    Average seek:       5.2 ms (read), 6.0 ms (write)

So far, we have described the architecture and the components of the integrated MimdRAID simulator and device driver. To establish 1) the accuracy of the head-tracking mechanism, and 2) the validity of the simulator, we perform a series of experiments using "Iometer", a benchmark developed by the Intel Server Architecture Lab [13]. Iometer can generate workloads with varying characteristics, including the read/write ratio, the request size, and the maximum number of outstanding requests. We use Iometer to generate equivalent workloads to drive both the device driver and the simulator. Table 1 lists the platform characteristics of the prototype, and Figure 5 shows the Iometer results: under all queueing conditions, the throughput discrepancy between the simulator and the prototype is under 3%.
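To make the validation setup concrete, the following Python sketch feeds one randomly generated request stream to two request-serving back ends and compares the resulting throughputs. This is only a minimal sketch under stated assumptions: it is not Iometer or the MimdRAID driver, the workload parameters merely mirror the knobs mentioned above (read fraction, request size, number of requests), and the two service-time functions are placeholders standing in for the measured prototype and the simulator.

    import random

    def make_workload(n_requests, read_fraction, request_size, disk_capacity,
                      seed=0):
        """Generate a random workload as (offset, size, is_read) tuples.
        The parameter names are illustrative; Iometer exposes similar knobs,
        but this is not its interface."""
        rng = random.Random(seed)
        workload = []
        for _ in range(n_requests):
            offset = rng.randrange(0, disk_capacity, request_size)
            is_read = rng.random() < read_fraction
            workload.append((offset, request_size, is_read))
        return workload

    def measure_throughput(service_time_fn, workload):
        """Replay a workload against a device (real or simulated) exposed as
        a function returning the service time of one request in seconds.
        Returns throughput in IOPS; requests are issued back to back."""
        total_time = sum(service_time_fn(req) for req in workload)
        return len(workload) / total_time

    if __name__ == "__main__":
        # Placeholder service-time functions; in the real validation these
        # times would come from the device driver and from the simulator.
        def prototype_service(req):
            return 0.0060 + random.random() * 0.0005

        def simulator_service(req):
            return 0.0060 + random.random() * 0.0005

        wl = make_workload(n_requests=10000, read_fraction=1.0,
                           request_size=512, disk_capacity=9 * 10**9)
        t_proto = measure_throughput(prototype_service, wl)
        t_sim = measure_throughput(simulator_service, wl)
        print("throughput discrepancy: %.1f%%"
              % (100 * abs(t_proto - t_sim) / t_proto))

In the actual experiments the number of outstanding requests is also varied (the x-axis of Figure 5); the sketch issues requests back to back for simplicity.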


  
Figure 5: Comparison of throughput results from the prototype system and the simulator. We use two random workloads, one with just reads, and another with an equal number of reads and writes. The request size is 512 bytes. The array configuration is a $2 \times 3$ SR-Array based on the RSATF scheduler. Writes are synchronously propagated in the foreground. We vary the number of outstanding requests (on the x-axis).
[Figure: eps/validation_new.eps]

To shed more light on the accuracy of the model, Table 2 gives more detailed statistics from subjecting both the model and the prototype to the "Cello base" file system workload (described in Section 4.1). The low mean prediction error and standard deviation show that there are essentially only two types of requests: 99.8% of the predictions are almost exactly on target, while the remaining 0.2% miss their targets by a very small amount of time and consequently incur a full rotation penalty. The net effect of these rare rotation misses, however, is insignificant in terms of overall access time. These results indicate that the simulator faithfully models a real SR-Array, allowing us to use simulation-based results to understand the behavior of the SR-Array in later sections.


 
Table 2: Detailed statistics of model accuracy under the "Cello base" file system workload. The configuration is a $2 \times 3$ SR-Array based on RSATF scheduling. I/O requests in this experiment are the physical I/O requests sent to the drives, and access time is that of a physical I/O. Prediction error is the difference between the access time predicted by the scheduler and the actual measured access time of a single request. We calculate the demerit figure using the definition of Ruemmler and Wilkes [21].
    Misses:                        0.22%
    Mean prediction error:         3 $\mu s$
    Standard deviation of error:   31 $\mu s$
    Average access time:           2746 $\mu s$
    Demerit:                       52 $\mu s$
    Demerit / access time:         1.9%
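The statistics in Table 2 can be computed from a trace of predicted and measured access times. The sketch below is a minimal illustration of how one might do so: it computes the mean prediction error, its standard deviation, a miss fraction, and a demerit figure in the spirit of Ruemmler and Wilkes [21], i.e. the root-mean-square horizontal distance between the measured and predicted access-time distribution curves, approximated here by comparing the two sorted samples point by point. The miss threshold and the example access times are assumptions made for illustration only.

    import math

    def prediction_stats(predicted, measured, miss_threshold_us=1000.0):
        """Per-request prediction-error statistics: mean error, standard
        deviation of the error, and the fraction of requests whose prediction
        misses by more than miss_threshold_us microseconds (an illustrative
        stand-in for a full-rotation miss)."""
        errors = [p - m for p, m in zip(predicted, measured)]
        mean = sum(errors) / len(errors)
        var = sum((e - mean) ** 2 for e in errors) / len(errors)
        misses = sum(1 for e in errors if abs(e) > miss_threshold_us)
        return mean, math.sqrt(var), misses / len(errors)

    def demerit(predicted, measured):
        """Root-mean-square horizontal distance between the two access-time
        distribution curves, approximated by comparing the sorted samples
        point by point (assumes both lists have the same length)."""
        p, m = sorted(predicted), sorted(measured)
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, m)) / len(p))

    if __name__ == "__main__":
        # Made-up access times in microseconds, for illustration only.
        measured = [2700.0, 2750.0, 2800.0, 2740.0]
        predicted = [2702.0, 2748.0, 2801.0, 2739.0]
        mean_err, std_err, miss_rate = prediction_stats(predicted, measured)
        print(mean_err, std_err, miss_rate, demerit(predicted, measured))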

