
The Virtual Log Disk

 

The VLD adds the virtual log to the disk simulator described above. It exports the same device driver interface so it can be used by existing file systems. The VLD maintains an in-memory indirection map while updating the on-disk representation as described in Section 3.2. To avoid trapping the disk head in regions of high utilization during eager writing, the VLD performs cylinder seeks only in one direction until it reaches the last cylinder, at which point it starts from the first cylinder again.
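The unidirectional seek policy can be sketched as follows. This is our illustrative reconstruction, not code from the VLD; the function name and the per-cylinder free-block bookkeeping are hypothetical:

```python
def next_write_cylinder(current, free_blocks):
    """Pick the next cylinder for eager writing, scanning in one
    direction only and wrapping from the last cylinder back to the
    first, so the head is never trapped in a highly utilized region.
    free_blocks[c] is the free-block count of cylinder c.
    Returns None if no cylinder has free space."""
    n = len(free_blocks)
    for step in range(1, n + 1):
        c = (current + step) % n
        if free_blocks[c] > 0:
            return c
    return None
```

For example, from cylinder 2 on a four-cylinder disk where only cylinder 3 has room, the head moves forward to cylinder 3; from the last cylinder it wraps around to cylinder 0 rather than seeking backward.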

One challenge of implementing the VLD while preserving the existing disk interface is handling deletes, which are not visible to the device driver. This is a common problem faced by logical disks. Our solution is to monitor overwrites: when a logical address is reused, the VLD detects that the old logical-to-physical mapping can be freed. The disadvantage of this approach is that it cannot reclaim freed blocks that have not yet been overwritten.
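A minimal sketch of this overwrite monitoring, with hypothetical names (the actual VLD maintains its map on disk as well, per Section 3.2):

```python
class IndirectionMap:
    """Sketch of reclaiming physical blocks by monitoring overwrites:
    when a logical address is written again, the old logical-to-physical
    mapping is detected and its physical block freed."""

    def __init__(self):
        self.map = {}    # logical address -> physical block
        self.freed = []  # physical blocks reclaimed via overwrite

    def write(self, logical, physical):
        old = self.map.get(logical)
        if old is not None:
            # The logical address is being reused, so the previous
            # physical block is now garbage and can be reclaimed.
            self.freed.append(old)
        self.map[logical] = physical
```

A block that the file system deletes but never overwrites stays in `map` indefinitely, which is exactly the limitation noted above.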

Another question that arose during the implementation of the VLD is the choice of the physical disk block size. As shown in Appendix A.1, latency is best when the physical block size matches the file system's logical block size. In all our experiments, we have used a physical block size of 4 KB. The resulting internal fragmentation when writing data or metadata blocks that are smaller only biases against the performance of UFS running on the VLD. Each physical block requires a four-byte map entry, so the entire map consumes 24 KB.
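The map-size arithmetic is simply one fixed-size entry per physical block; a sketch of the relationship (the function name is ours, and the disk capacity is whatever the simulator is configured with):

```python
def map_size_bytes(disk_capacity_bytes, block_size_bytes, entry_bytes=4):
    """In-memory indirection map size: one fixed-size entry per
    physical block on the disk."""
    num_blocks = disk_capacity_bytes // block_size_bytes
    return num_blocks * entry_bytes
```

For instance, a 1 GB disk with 4 KB blocks and four-byte entries would need a 1 MB map; larger physical blocks shrink the map but increase internal fragmentation.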

The third issue concerns the interaction between eager writing and the disk track buffer read-ahead algorithm. When reading, the Dartmouth simulator keeps in cache only the sectors from the beginning of the current request through the current read-ahead point, and discards data whose addresses are lower than that of the current request. This algorithm makes sense for sequential reads of data whose physical addresses increase monotonically, as is the case under traditional disk allocation strategies. For the VLD, however, the combination of eager writing and logical-to-physical address translation means that sequentially read data do not necessarily have monotonically increasing physical addresses. As a result, the Dartmouth simulator tends to purge data prematurely from its read-ahead buffer under the VLD. The solution is to aggressively prefetch the entire track as soon as the head reaches the target track and, during sequential reads, not discard data until it is delivered to the host. The measurements of sequential reads on the VLD in Section 5 were taken with this modification.
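The two purge policies can be contrasted with a small sketch (names and the dict-of-sectors representation are ours, not the simulator's):

```python
def purge_below(track_cache, request_sector):
    """Default Dartmouth-style policy: discard cached sectors whose
    addresses are lower than the current request. Correct when
    sequential reads see monotonically increasing physical addresses."""
    return {s: d for s, d in track_cache.items() if s >= request_sector}

def purge_after_delivery(track_cache, delivered):
    """Modified policy for the VLD: keep everything prefetched from the
    track; discard a sector only after it has been delivered to the
    host, so out-of-order physical addresses still hit in the buffer."""
    return {s: d for s, d in track_cache.items() if s not in delivered}
```

Under the default policy, a request for sector 2 evicts sector 1; if the VLD's address translation later maps the next logical block to sector 1, that read misses. The modified policy retains sector 1 until it is actually consumed.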

Lastly, the VLD module also implements a free space compactor. Although the eager writing strategy should allow the compactor to take advantage of idle intervals of arbitrary length, for simplicity our implementation compacts free space at the granularity of tracks. During idle periods, the compactor reads the current track and uses eager writing to copy the live data to other tracks, in a way similar to hole-plugging under LFS [22, 36]. Currently, we choose compaction targets randomly; we plan to investigate optimal VLD compaction algorithms (e.g., ones that preserve or enhance read and write locality) in the future. Applying the lessons learned from the models in Section 2.3, the VLD fills empty tracks to a certain threshold (75% in the experiments). After exhausting the empty tracks generated by the compactor, the VLD reverts to the greedy algorithm modeled in Section 2.2.
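The track-granularity hole-plugging step can be sketched as follows. This is a simplified illustration under our own assumptions (tracks modeled as lists of slots, `None` meaning free, random target selection as in the current implementation); it ignores the 75% fill threshold and the seek costs that the real compactor must consider:

```python
import random

def compact_one_track(tracks, rng=random):
    """Pick a compaction source track at random, plug its live blocks
    into free slots on the other tracks, and free the whole source
    track. Returns the freed track's index, or None if the rest of
    the disk cannot absorb the live data."""
    src = rng.randrange(len(tracks))
    live = [b for b in tracks[src] if b is not None]
    free = sum(slot is None
               for t, track in enumerate(tracks) if t != src
               for slot in track)
    if free < len(live):
        return None  # not enough holes elsewhere; skip this target
    tracks[src] = [None] * len(tracks[src])
    for t, track in enumerate(tracks):
        if t == src:
            continue
        for i in range(len(track)):
            if track[i] is None and live:
                track[i] = live.pop()  # hole-plug a live block
    return src
```

After enough idle-time passes of this loop, eager writes land on wholly empty tracks until those are exhausted, at which point the greedy algorithm of Section 2.2 takes over.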



Randolph Wang
Tue Jan 5 14:30:32 PST 1999