OmniCache: Collaborative Caching for Near-storage Accelerators


Jian Zhang and Yujie Ren, Rutgers University; Marie Nguyen, Samsung; Changwoo Min, Igalia; Sudarsun Kannan, Rutgers University


We propose OmniCache, a novel caching design for near-storage accelerators that combines near-storage and host memory capabilities to accelerate both I/O and data processing. First, OmniCache introduces a "near-cache" approach that directs each data access to the nearest cache for I/O and processing operations. Second, OmniCache enables collaborative caching, using the host and device caches concurrently for I/O and data processing. Third, OmniCache incorporates dynamic, model-driven offloading support that actively monitors hardware and software metrics to distribute processing efficiently across host and device processors. Finally, OmniCache explores the extensibility of its design to CXL, a newly introduced memory expansion technology. Evaluation of OmniCache demonstrates significant performance gains of up to 3.24X for I/O workloads and 3.06X for data processing workloads.
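The abstract's two central ideas, serving each request from the nearest cache and deciding at runtime whether to offload processing to the accelerator, can be illustrated with a minimal sketch. All class names, thresholds, and the policy inputs below are illustrative assumptions, not the paper's actual interfaces or model.

```python
# Hypothetical sketch of a "near-cache" lookup and a model-driven offload
# policy in the spirit of OmniCache; names and thresholds are assumed, not
# taken from the paper.

class CollaborativeCache:
    """Serve reads from the nearest cache that holds the block: the
    host-memory cache first, then the near-storage (device) cache, and
    finally storage itself on a miss in both."""

    def __init__(self):
        self.host_cache = {}    # host DRAM cache: block_id -> data
        self.device_cache = {}  # near-storage accelerator cache

    def read(self, block_id, fetch_from_storage):
        if block_id in self.host_cache:       # hit nearest to the host CPU
            return self.host_cache[block_id]
        if block_id in self.device_cache:     # hit nearest to storage
            return self.device_cache[block_id]
        data = fetch_from_storage(block_id)   # miss in both caches
        self.device_cache[block_id] = data    # admit into the device cache
        return data


def should_offload(host_cpu_util, device_queue_depth, bytes_to_process,
                   util_threshold=0.8, queue_limit=16, min_bytes=4096):
    """Toy model-driven policy: offload to the accelerator only when the
    host is busy, the device queue has slack, and the operation is large
    enough to amortize the offload cost."""
    return (host_cpu_util > util_threshold
            and device_queue_depth < queue_limit
            and bytes_to_process >= min_bytes)
```

In this sketch, a block fetched from storage is admitted into the device cache rather than the host cache, reflecting the near-storage placement the abstract describes; a real system would also need eviction, coherence between the two caches, and a richer offloading model.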

@inproceedings{294779,
  author = {Jian Zhang and Yujie Ren and Marie Nguyen and Changwoo Min and Sudarsun Kannan},
  title = {{OmniCache}: Collaborative Caching for Near-storage Accelerators},
  booktitle = {22nd USENIX Conference on File and Storage Technologies (FAST 24)},
  year = {2024},
  isbn = {978-1-939133-38-0},
  address = {Santa Clara, CA},
  pages = {35--50},
  url = {},
  publisher = {USENIX Association},
  month = feb
}