FairRide: Near-Optimal, Fair Cache Sharing

Authors: 

Qifan Pu and Haoyuan Li, University of California, Berkeley; Matei Zaharia, Massachusetts Institute of Technology; Ali Ghodsi and Ion Stoica, University of California, Berkeley

Abstract: 

Memory caches continue to be a critical component of many systems. In recent years, increasing amounts of data have been placed in main memory, especially in shared environments such as the cloud. The nature of such environments requires resource allocation to provide both performance isolation for multiple users/applications and high utilization for the system. We study the problem of fairly allocating memory cache among multiple users with shared files. We find that, surprisingly, no memory allocation policy can provide all three desirable properties (isolation-guarantee, strategy-proofness, and Pareto-efficiency) that are typically achievable for other types of resources, e.g., CPU or network. We also show that there exist policies that achieve any two of the three properties. We find that the only way to achieve both isolation-guarantee and strategy-proofness is through blocking, which we efficiently adapt in a new policy called FairRide. We implement FairRide in a popular memory-centric storage system using an efficient form of blocking called expected delaying, and demonstrate that FairRide can deliver both better cache efficiency (2.6× over isolated caches) and fairness in many scenarios.
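The blocking idea from the abstract can be sketched concretely. In FairRide, a user who did not pay for caching a file is blocked from the cached copy with probability 1/(n+1), where n is the number of other users caching it; "expected delaying" replaces the random block with a deterministic delay equal to its expected cost. A minimal illustration, with a hypothetical two-level latency model (the function name and latency values are assumptions for this sketch, not the paper's implementation):

```python
def fairride_latency(user, file_id, cache_owners,
                     hit_latency=1.0, miss_latency=10.0):
    """Expected access latency under FairRide's expected-delaying scheme.

    cache_owners maps a file id to the set of users currently paying
    to cache it (their cache-allocation charge covers that file).
    """
    owners = cache_owners.get(file_id, set())
    if user in owners:
        return hit_latency            # user caches the file herself: plain hit
    if not owners:
        return miss_latency           # nobody caches it: plain miss
    # Free rider: block with probability 1/(n+1). Expected delaying folds
    # the random block into its expected extra latency, so responses are
    # deterministic and no request is ever actually rejected.
    n = len(owners)
    p_block = 1.0 / (n + 1)
    return hit_latency + p_block * (miss_latency - hit_latency)
```

For example, with one other owner the free rider pays the full miss cost half the time in expectation; as more users cache the file, the blocking probability shrinks and free riding becomes cheaper, which is what removes the incentive to cheat on cached-file declarations.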


Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

BibTeX
@inproceedings {194950,
author = {Qifan Pu and Haoyuan Li and Matei Zaharia and Ali Ghodsi and Ion Stoica},
title = {FairRide: Near-Optimal, Fair Cache Sharing},
booktitle = {13th {USENIX} Symposium on Networked Systems Design and Implementation ({NSDI} 16)},
year = {2016},
address = {Santa Clara, CA},
pages = {393--406},
url = {https://www.usenix.org/conference/nsdi16/technical-sessions/presentation/pu},
publisher = {{USENIX} Association},
month = mar,
}
Download
  • Paper (PDF)
  • Slides
  • Presentation audio (MP3)

