
Performance Results and Analysis


Table 2: Overall performance of the test applications under generic Linux and under AASFP. All times are reported in seconds; the remaining columns are percentages. The AASFP overhead (included in the AASFP time and also shown in parentheses) is mainly the time to execute the prefetch thread, which does not exist in the conventional approach. The "% of Disk I/O Masked" column shows the percentage of disk I/O time that AASFP can effectively mask.
Test Case    Linux   AASFP (Overhead)   % of Disk     Perf.     CPU     % of Syn.   % of Com.
                                        I/O Masked    Improv.   Time    Overhead    Overhead
---------------------------------------------------------------------------------------------
Vol Vis 1    68.95   31.76 (3.59)       62.14%        53.94%    24.47   0.06%       0.18%
Vol Vis 2    83.05   64.95 (3.22)       12.99%        21.56%    15.18   0.76%       0.09%
Vol Vis 3    36.87   31.23 (3.02)       66.61%        15.30%    25.93   0.00%       0.19%
Vol Vis 4    30.99   29.78 (3.02)       30.00%         3.90%    15.46   2.04%       0.20%
FFT 256K     33.42   33.74 (0.00)        0.00%        -0.94%    24.70   2.16%       0.12%
FFT 512K     66.68   67.84 (0.00)        0.00%        -1.71%    54.75   1.35%       0.06%
Forward 1     4.78    4.76 (0.00)        0.54%         0.42%     0.48   0.21%       0.21%
Backward 1   52.54    7.63 (0.00)       84.75%        85.48%     0.48   0.65%       0.13%
Forward 2     4.61    4.84 (0.00)        0.00%        -4.75%     0.48   0.00%       0.21%
Backward 2   25.02    6.19 (0.00)       78.40%        75.26%     0.48   0.32%       0.16%


Table 2 shows the measured performance of the test cases in Table 1 under generic Linux and under AASFP. All numbers are averages over 5 runs. To obtain correct measurements, the file system buffer cache must be flushed before each run. Instead of rebooting the test machine every time, we generate a 128-MByte file with random contents; this size is no smaller than the system's physical memory plus its swap space, so a sequential read through the entire file should evict all content cached by the previous run. To verify this, we compare the results of Backward 1 under generic Linux with the machine rebooted each time and with the above large file read each time; the numbers are shown in Table 3.
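A minimal sketch of this flushing step is shown below; the file name and read-buffer size are illustrative assumptions, not details from the paper.

    /* Sequentially read a large file (here "flushfile", 128 MBytes of
     * random data, at least as large as physical memory plus swap) so
     * that its pages evict all previously cached file data.  The file
     * name and buffer size are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("flushfile", "rb");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }

        static char buf[64 * 1024];      /* 64-KByte read buffer */
        while (fread(buf, 1, sizeof buf, fp) == sizeof buf)
            ;                            /* discard the data */

        fclose(fp);
        return 0;
    }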

Under Linux, only sequential disk prefetching is supported; under AASFP, only application-specific disk prefetching is performed. The number in parentheses shows the AASFP overhead, which is due to the extra time needed to run the prefetch thread. This overhead is in general insignificant because the cost of the prefetch thread's computation, together with the associated process scheduling and context switching, is small relative to the application's own computation. The fourth column lists the percentage of disk I/O time that AASFP can mask, calculated as the ratio between the disk I/O time that is masked and the total disk I/O time without prefetching.
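In symbols (the notation is ours, introduced for clarity):

\[ \%\ \mathrm{Masked} = \frac{T_{\mathrm{I/O}}^{\mathrm{masked}}}{T_{\mathrm{I/O}}^{\mathrm{no\ prefetch}}} \times 100\% \]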

The fifth column in Table 2 shows the performance improvement of AASFP over Linux. For reference, the sixth column lists the CPU Time, i.e., the pure computation time of the application excluding I/O. The seventh column shows the overhead due to synchronization, as explained in Section 3.2. The last column shows the compilation overhead AASFP incurs to extract the prefetch thread from each program.
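In symbols, with T_Linux and T_AASFP denoting the total elapsed times in the second and third columns (notation ours), the improvement corresponds to

\[ \mathrm{Perf.\ Improv.} = \frac{T_{\mathrm{Linux}} - T_{\mathrm{AASFP}}}{T_{\mathrm{Linux}}} \times 100\% \]

For Backward 1, for instance, (52.54 - 7.63)/52.54 ≈ 85.48%, which matches the table.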

For the volume visualization application with a 4-KByte block size, AASFP achieves 54% and 22% overall performance improvement for orthonormal and non-orthonormal viewing directions, respectively. There is little performance gain for the cases that use a 32-KByte block size. Retrieving a 32-KByte block corresponds to fetching eight 4-KByte pages consecutively; this access pattern has substantial spatial locality, which reduces the relative performance gain of AASFP. This also explains why generic Linux prefetching is comparable to AASFP when 32-KByte blocks are used.

For the out-of-core FFT there is no significant performance improvement, because its access patterns are almost entirely sequential. Although FFT is well known for its butterfly algorithmic structure, which suggests random disk access patterns, a closer examination revealed that not only out-of-core FFT but also many other out-of-core applications exhibit sequential disk access patterns. Nevertheless, our results show that even under such regular access patterns AASFP performs as well as sequential prefetching. This means that AASFP does not mistakenly prefetch unnecessary data and thereby hurt overall performance, and that the prefetch thread adds no noticeable overhead in this case.

For the micro-benchmark, AASFP provides an 86% performance improvement in the Backward 1 case, which represents the worst case for the sequential prefetching scheme used in Linux. Note that AASFP loses almost no performance in the Forward 1 and Forward 2 cases when compared to Linux; these are the best cases for the Linux kernel. These measurements again demonstrate that AASFP performs as well as generic Linux for sequential access patterns.
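As an illustration, the backward-read pattern of Backward 1 can be sketched as follows; the file name, block size, and program structure are our assumptions rather than the paper's actual benchmark code.

    /* Hypothetical sketch of a backward-read micro-benchmark: read a
     * file one 4-KByte block at a time, from its end toward its
     * beginning.  This access order defeats Linux's sequential
     * read-ahead.  File name and block size are illustrative. */
    #include <stdio.h>

    #define BLOCK_SIZE 4096L

    int main(void)
    {
        FILE *fp = fopen("datafile", "rb");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }

        fseek(fp, 0, SEEK_END);
        long nblocks = ftell(fp) / BLOCK_SIZE;
        char buf[BLOCK_SIZE];

        for (long i = nblocks - 1; i >= 0; i--) {
            fseek(fp, i * BLOCK_SIZE, SEEK_SET);
            if (fread(buf, 1, BLOCK_SIZE, fp) != BLOCK_SIZE)
                break;                   /* read each block exactly once */
        }

        fclose(fp);
        return 0;
    }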

Another potential factor that can hurt prefetching efficiency is synchronization. Too many forced synchronizations can limit how far ahead the prefetch thread runs, thus limiting the performance improvement of AASFP. As Table 2 shows, in all the above test cases the synchronization-related overhead is either non-existent or negligible, because the test programs contain neither conditional branches nor user inputs (for deciding which data file to use).

Finally, the associated compilation overhead is also minimal; furthermore, the prefetch-thread extraction needs to be done only once, as a preprocessing step. This demonstrates not only the simplicity of AASFP but also its applicability to other similar programs.

Table 3: Comparison between rebooting the system and reading a gigantic file to flush the file system's buffer cache. Reported numbers are the times, in seconds, of 5 runs of the Backward 1 case under Linux.

        Run 1   Run 2   Run 3   Run 4   Run 5
Reboot  52.31   52.60   52.87   52.47   52.88
Flush   52.40   52.53   52.60   52.53   52.58


