Toward a Principled Framework for Benchmarking Consistency
Muntasir Raihan Rahman, HP Labs, Palo Alto and University of Illinois at Urbana Champaign; Wojciech Golab, Alvin AuYoung, Kimberly Keeton, and Jay J. Wylie, HP Labs, Palo Alto
Large-scale key-value storage systems sacrifice consistency in the interest of dependability (i.e., partition tolerance and availability), as well as performance (i.e., latency). Such systems provide eventual consistency, which, to this point, has been difficult to quantify in real systems. Given the many implementations and deployments of eventually-consistent systems (e.g., NoSQL systems), attempts have been made to measure this consistency empirically, but they suffer from important drawbacks. For example, state-of-the-art consistency benchmarks exercise the system only in restricted ways and disrupt the workload, which limits their accuracy.
In this paper, we take the position that a consistency benchmark should paint a comprehensive picture of the relationship between the storage system under consideration, the workload, the pattern of failures, and the consistency observed by clients. To illustrate our point, we first survey prior efforts to quantify eventual consistency. We then present a benchmarking technique that overcomes the shortcomings of existing techniques to measure the consistency observed by clients as they execute the workload under consideration. This method is versatile and minimally disruptive to the system under test. As a proof of concept, we demonstrate this tool on Cassandra.
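To make the notion of client-observed consistency concrete, one can think of it in terms of staleness: how long a newer write had already been complete when a read nevertheless returned an older value. The sketch below illustrates this idea over a client-side operation trace with invocation and response timestamps; the `Op` record and `read_staleness` function are illustrative assumptions, not the paper's actual benchmarking tool.

```python
# Hypothetical sketch: estimating observed staleness from a client-side
# operation trace. All names here (Op, read_staleness) are illustrative
# assumptions, not taken from the paper's implementation.
from dataclasses import dataclass

@dataclass
class Op:
    kind: str      # "write" or "read"
    key: str
    value: str
    start: float   # invocation timestamp (seconds)
    finish: float  # response timestamp (seconds)

def read_staleness(trace):
    """For each read, report how long a newer write had already finished
    before the read began, even though the read returned the old value."""
    results = []
    for r in (op for op in trace if op.kind == "read"):
        writes = [w for w in trace if w.kind == "write" and w.key == r.key]
        # the write that produced the value this read returned
        w_v = next(w for w in writes if w.value == r.value)
        # newer writes that had already completed when the read started
        overwrites = [w for w in writes
                      if w.finish > w_v.finish and w.finish < r.start]
        if overwrites:
            newest = max(overwrites, key=lambda w: w.finish)
            results.append((r, r.start - newest.finish))  # staleness in seconds
        else:
            results.append((r, 0.0))  # read observed the latest completed write
    return results
```

For example, if a write of "2" to key `k` finishes at t=3 and a read starting at t=5 still returns the earlier value "1", the read observed a staleness of 2 seconds. A passive variant of this computation, run over traces of the live workload, is what lets a benchmark remain minimally disruptive to the system under test.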