Spyre: A Resource Management Framework for Container-Based Clouds

Karthick Rajamani, Wes Felter, Alexandre Ferreira, and Juan Rubio, IBM Research—Austin

Abstract: 

Linux container technology has seen rapid adoption over the last few years, driven by its use and enhancement as a platform for building and deploying applications, for example by Docker. Services are being stood up in public clouds that offer different frameworks for launching and managing user-built containers. At the same time, cloud services are beginning to examine container-based deployment as a possible alternative to, or complement of, virtual machines for deploying those services. However, when multiple tenants’ containers are deployed on the same system, existing containers-as-a-service platforms provide little inherent isolation between tenants.

In the Spyre project we focus on developing a resource management framework that provides performance isolation between multiple tenants deploying containers in the cloud. We introduce the notion of a slice: the resource partition for one tenant within a system, comprising a defined set of resources (cores, memory, memory bandwidth, network bandwidth, storage, storage bandwidth, etc.) that are unique to that slice and disjoint from the resources commandeered for any other slice.
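The abstract describes a slice only conceptually, so the short Python sketch below is purely illustrative and not the authors' implementation: the Slice class, its field names, and the "spyre.*" budget keys are hypothetical. It shows how a per-tenant slice could map core and memory assignments onto Linux cgroup-v1 attributes, while the memory, network, and storage bandwidth dimensions are kept as plain budgets, since cgroups alone do not enforce them.

# Illustrative sketch only: the Spyre abstract describes a "slice" as a disjoint
# per-tenant resource partition; the class and cgroup mapping below are
# hypothetical, not the authors' implementation.
from dataclasses import dataclass

@dataclass
class Slice:
    tenant: str
    cores: list          # CPU cores dedicated to this tenant, e.g. [0, 1, 2, 3]
    memory_bytes: int    # memory capacity reserved for the slice
    mem_bw_mbps: int     # memory bandwidth budget (enforcement is platform-specific)
    net_bw_mbps: int     # network bandwidth budget
    disk_bw_mbps: int    # storage bandwidth budget

def cgroup_settings(s: Slice) -> dict:
    """Map a slice onto per-controller attributes (cgroup-v1 names).

    Cores and memory map directly to the cpuset and memory controllers;
    the bandwidth dimensions are recorded only as budgets here, since their
    enforcement needs extra mechanisms (e.g. tc for network, blkio throttling
    for storage).
    """
    return {
        "cpuset.cpus": ",".join(str(c) for c in s.cores),
        "memory.limit_in_bytes": str(s.memory_bytes),
        # Hypothetical budget keys; enforcement would live outside cgroups.
        "spyre.mem_bw_mbps": str(s.mem_bw_mbps),
        "spyre.net_bw_mbps": str(s.net_bw_mbps),
        "spyre.disk_bw_mbps": str(s.disk_bw_mbps),
    }

if __name__ == "__main__":
    tenant_a = Slice("tenant-a", cores=[0, 1, 2, 3],
                     memory_bytes=8 << 30, mem_bw_mbps=10_000,
                     net_bw_mbps=1_000, disk_bw_mbps=500)
    for attr, value in cgroup_settings(tenant_a).items():
        print(f"{attr} = {value}")

Running the script simply prints the attribute/value pairs for one example tenant; a real framework would additionally have to write them under the cgroup filesystem and coordinate enforcement of the bandwidth budgets across slices.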

In this presentation, we socialize our concept of slices and discuss how they provide the performance isolation that matters to multi-tenant cloud services concerned with, among other things, tail latencies. We discuss how we think slices relate to concepts in existing container platforms and how they can be adopted to meet the performance-sensitive needs of cloud services. We then introduce our current implementation and invite the audience to participate in a broader discussion of the challenges of performance isolation and management for container-based cloud frameworks.

Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

BibTeX
@conference {208652,
author = {Karthick Rajamani and Wes Felter and Alexandre Ferreira and Juan Rubio},
title = {Spyre: A Resource Management Framework for {Container-Based} Clouds},
year = {2015},
address = {Washington, D.C.},
publisher = {USENIX Association},
month = nov,
}
Download
View the slides