Driving Cache Replacement with ML-based LeCaR


Giuseppe Vietri, Liana V. Rodriguez, Wendy A. Martinez, Steven Lyons, Jason Liu, and Raju Rangaswami, Florida International University; Ming Zhao, Arizona State University; Giri Narasimhan, Florida International University


Can machine learning (ML) be used to improve on existing cache replacement strategies? We propose a general framework called LeCaR that uses the ML technique of regret minimization to answer this question in the affirmative. Surprisingly, we show that the LeCaR framework outperforms ARC using only two fundamental eviction policies, LRU and LFU. We also show that the performance gap increases as the available cache becomes smaller relative to the size of the working set.
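The core idea of regret minimization applied to eviction can be sketched as a multiplicative-weights scheme that arbitrates between LRU and LFU: whenever a miss is traced back to a policy's earlier eviction decision, that policy's weight is shrunk. The sketch below is illustrative only; the parameter names (`learning_rate`, `discount`) and the exact update rule are assumptions, not the paper's implementation.

```python
import random

class PolicyChooser:
    """Minimal sketch of multiplicative-weights arbitration between
    LRU and LFU, in the spirit of regret minimization. Parameters and
    update rule are illustrative assumptions, not LeCaR's exact method."""

    def __init__(self, learning_rate=0.45, discount=0.995):
        self.weights = {"LRU": 0.5, "LFU": 0.5}
        self.learning_rate = learning_rate  # how strongly to penalize regret
        self.discount = discount            # older mistakes matter less

    def choose(self):
        # Sample an eviction policy in proportion to the current weights.
        r = random.random() * sum(self.weights.values())
        return "LRU" if r < self.weights["LRU"] else "LFU"

    def penalize(self, policy, time_since_eviction):
        # A miss on a page that `policy` evicted t requests ago incurs a
        # time-discounted regret; shrink that policy's weight accordingly.
        regret = self.discount ** time_since_eviction
        self.weights[policy] *= (1 - self.learning_rate) ** regret
        total = sum(self.weights.values())
        for p in self.weights:
            self.weights[p] /= total  # renormalize to a distribution
```

With a small cache, misses attributable to one policy accumulate quickly, so the weights shift decisively toward the better-performing policy for the current workload phase.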


@inproceedings {216890,
author = {Giuseppe Vietri and Liana V. Rodriguez and Wendy A. Martinez and Steven Lyons and Jason Liu and Raju Rangaswami and Ming Zhao and Giri Narasimhan},
title = {Driving Cache Replacement with ML-based LeCaR},
booktitle = {10th {USENIX} Workshop on Hot Topics in Storage and File Systems (HotStorage 18)},
year = {2018},
address = {Boston, MA},
url = {https://www.usenix.org/conference/hotstorage18/presentation/vietri},
publisher = {{USENIX} Association},
month = jul
}