

We experimented with identical workloads accessing 16 GB virtual disks from four hosts with equal $ \beta $ values. This is similar to the setup that led to divergent behavior in Figure 3. Using our filesystem-based aggregation, PARDA converges as desired, even in the presence of different latency values observed by hosts. Table 3 presents results for this workload without any control, and with PARDA using equal shares for each host; plots are omitted due to space constraints. With PARDA, latencies drop, making the overall average close to the target $ \cal {L}$ . The aggregate throughput achieved by all hosts is similar with and without PARDA, exhibiting good work-conserving behavior. This demonstrates that the algorithm works correctly in the simple case of equal shares and uniform workloads.

Table 3: Fairness with 16 KB random reads from four hosts.

                 Uncontrolled           PARDA ($ \cal {L}$ = 30 ms)
Host          IOPS   Latency (ms)    $ \beta $   IOPS   Latency (ms)
1              780       41              1        730       34
2              900       34              1        890       29
3              890       35              1        930       29
4              790       40              1        800       33
Aggregate     3360     Avg = 37                  3350     Avg = 31
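The aggregate rows of Table 3 can be checked directly from the per-host numbers; the sketch below assumes the reported latency averages are throughput-weighted, which is consistent with the rounded values in the table:

```python
# Per-host (IOPS, latency in ms) from Table 3.
uncontrolled = [(780, 41), (900, 34), (890, 35), (790, 40)]
parda = [(730, 34), (890, 29), (930, 29), (800, 33)]

def aggregate(rows):
    """Total IOPS and throughput-weighted mean latency (assumption)."""
    total_iops = sum(iops for iops, _ in rows)
    weighted_latency = sum(iops * lat for iops, lat in rows) / total_iops
    return total_iops, round(weighted_latency)

print(aggregate(uncontrolled))  # (3360, 37)
print(aggregate(parda))         # (3350, 31)
```

Under PARDA the weighted average drops from 37 ms to 31 ms while aggregate throughput stays essentially unchanged, matching the work-conserving claim above.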

Figure 8: PARDA Fairness. Four hosts each run a 16 KB random read workload with $ \beta $ values of $ 1:1:2:2$ . Window sizes allocated by PARDA are in proportion to $ \beta $ values, and latency is close to the specified threshold $ \cal {L}$ = 25 ms.


[Figure 8 panels: (a) Window Size, (b) Latency (ms), (c) Throughput (IOPS)]

Next, we experimented with a share ratio of $ 1:1:2:2$ for four hosts, setting $ \cal {L}$ = 25 ms, shown in Figure 8. PARDA converges on window sizes for hosts 1 and 2 that are roughly half those for hosts 3 and 4, demonstrating good fairness. The algorithm also successfully converges latencies to $ \cal {L}$ . Finally, the per-host throughput levels achieved while running this uniform workload also roughly match the specified share ratio. The remaining differences arise because some hosts obtain better throughput from the array even with the same window size. This reflects the true IO costs as seen by the array scheduler; since PARDA operates on window sizes, it maintains high efficiency at the array.
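Why the window sizes settle in proportion to the $ \beta $ values can be seen from a small simulation. The sketch below assumes the FAST TCP-style control law $ w(t+1) = (1-\gamma)w(t) + \gamma(\frac{\cal L}{\ell}w(t) + \beta)$ used by PARDA; the latency model, $ \gamma$ value, and initial windows are purely illustrative:

```python
# Hedged sketch: PARDA-style window update for four hosts with shares 1:1:2:2.
# The latency model (latency grows linearly with total outstanding IOs) is an
# illustrative assumption, not the behavior of a real array.
GAMMA = 0.2           # smoothing parameter (illustrative value)
TARGET_L = 25.0       # latency threshold L in ms, as in Figure 8
BETAS = [1, 1, 2, 2]  # per-host beta (share) values

def simulate(steps=500):
    windows = [16.0] * len(BETAS)  # equal initial window sizes
    for _ in range(steps):
        total = sum(windows)
        ell = 5.0 + total / 4.0  # toy model of observed array latency (ms)
        windows = [(1 - GAMMA) * w + GAMMA * (TARGET_L / ell * w + b)
                   for w, b in zip(windows, BETAS)]
    return windows

w = simulate()
# At the fixed point, w_i = beta_i * ell / (ell - L): all hosts see the same
# latency ell, so equilibrium windows are proportional to the beta values.
print([round(x, 1) for x in w])
```

Setting $ w(t+1) = w(t)$ in the control law gives $ w = \beta\,\ell/(\ell - \cal {L})$ , so when every host observes the same latency $ \ell$ , the window ratio equals the share ratio, which is exactly the $ 1:1:2:2$ allocation seen in Figure 8(a).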

Ajay Gulati 2009-01-14