GL-Cache: Group-level learning for efficient and high-performance caching

Authors: 

Juncheng Yang, Carnegie Mellon University; Ziming Mao, Yale University; Yao Yue, Pelikan Foundation; K. V. Rashmi, Carnegie Mellon University

Abstract: 

Web applications rely heavily on software caches to achieve low-latency, high-throughput services. To adapt to changing workloads, three types of learned caches (learned evictions) have been designed in recent years: object-level learning, learning-from-distribution, and learning-from-simple-experts. However, we argue that the learning granularity in existing approaches is either too fine (object-level), incurring significant computation and storage overheads, or too coarse (workload- or expert-level) to capture the differences between objects, leaving a considerable efficiency gap.

In this work, we propose a new approach for learning in caches (group-level learning), which clusters similar objects into groups and performs learning and eviction at the group level. Learning at the group level accumulates more signals for learning, leverages more features with adaptive weights, and amortizes overheads over objects, thereby achieving both high efficiency and high throughput.
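The group-level eviction idea described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the class names, the hit-density heuristic standing in for the learned utility model, and the eviction loop are not GL-Cache's actual implementation, which learns group utilities from accumulated features.

```python
# Hypothetical sketch of group-level learned eviction: objects are clustered
# into groups, a model scores each group's utility, and the lowest-utility
# group is evicted as a whole. The utility function below is a hand-written
# stand-in (hit density decayed by age), NOT GL-Cache's learned model.
from dataclasses import dataclass, field


@dataclass
class Group:
    objects: dict = field(default_factory=dict)  # key -> object size in bytes
    hits: int = 0       # group-level signal accumulated over member objects
    created_at: int = 0  # insertion time of the group


def group_utility(group: Group, now: int) -> float:
    """Stand-in for the learned model: score a group by accumulated
    hits per byte, decayed by the group's age."""
    size = sum(group.objects.values()) or 1
    age = max(now - group.created_at, 1)
    return group.hits / (size * age)


def evict_one_group(groups: list, now: int) -> Group:
    """Evict the group with the lowest predicted utility. One model
    inference and one eviction decision are amortized over all
    objects in the group."""
    victim = min(groups, key=lambda g: group_utility(g, now))
    groups.remove(victim)
    return victim
```

Evicting whole groups rather than individual objects is what amortizes the learning and inference overheads that make object-level approaches such as LRB expensive.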

We designed and implemented GL-Cache on an open-source production cache to demonstrate group-level learning. Evaluations on 118 production block I/O and CDN cache traces show that GL-Cache achieves a higher hit ratio and higher throughput than state-of-the-art designs. Compared to LRB (object-level learning), GL-Cache improves throughput by 228× and hit ratio by 7% on average across cache sizes; for 10% of the traces (P90), GL-Cache provides a 25% hit ratio increase over LRB. Compared to the best of all learned caches, GL-Cache achieves 64% higher throughput and a 3% higher hit ratio on average, and a 13% hit ratio increase at the P90.

FAST '23 Open Access Sponsored by NetApp


BibTeX
@inproceedings {285742,
author = {Juncheng Yang and Ziming Mao and Yao Yue and K. V. Rashmi},
title = {{GL-Cache}: Group-level learning for efficient and high-performance caching},
booktitle = {21st USENIX Conference on File and Storage Technologies (FAST 23)},
year = {2023},
isbn = {978-1-939133-32-8},
address = {Santa Clara, CA},
pages = {115--134},
url = {https://www.usenix.org/conference/fast23/presentation/yang-juncheng},
publisher = {USENIX Association},
month = feb
}
