We employ volatile DDR memory as our write cache, or NVS. The fact that this memory is not battery-backed does not affect the correctness or relevance of our results to real-life storage controllers; we shall, therefore, still refer to this memory as NVS. The write cache is implemented in shared memory and is managed by user-space libraries linked into every application that accesses this shared-memory cache. The NVS can be set to any size up to the maximum size of the shared memory. This approach provides tremendous flexibility for our experiments by allowing us to benchmark the various algorithms across a large range of NVS sizes.
For our experiments, we do not use a read cache: all disk I/Os are direct (or raw) and bypass the Linux buffer cache. This eliminates an unnecessary degree of freedom from our experiments. Recall that read misses must be served concurrently and disrupt the sequential destaging operation of WOW and CSCAN; read misses also compete for head time and so affect even LRW. Eliminating the read cache maximizes read misses, making our setup the most adversarial one for NVS destage algorithms. In a real-life storage controller equipped with a read cache, aggregate performance will depend even more critically on the write caching algorithm, magnifying the performance differences between these algorithms even further.
A side benefit of maintaining a write cache is the read hits that it produces. The write caching algorithms are not intended primarily to improve the read hit ratio, because the read cache is larger and more effective at producing read hits. Nevertheless, for consistency, our setup does check the write cache on every read and returns data from the write cache on these not-so-numerous hits, since the write cache may hold a newer copy of the data than the disk.