USENIX ATC '16

Coherence Stalls or Latency Tolerance: Informed CPU Scheduling for Socket and Core Sharing

Authors: 

Sharanyan Srikanthan, Sandhya Dwarkadas, and Kai Shen, University of Rochester

Abstract: 

The efficiency of modern multiprogrammed multicore machines is heavily impacted by traffic due to data sharing and by contention for shared resources. In this paper, we demonstrate the importance of identifying latency tolerance, coupled with instruction-level parallelism, when assessing the benefits of colocating threads on the same socket or physical core for parallel efficiency. By adding hardware-counted CPU stall cycles due to cache misses to the measured statistics, we show that it is possible to infer latency tolerance at low cost. We develop and evaluate SAM-MPH, a multicore CPU scheduler that combines information on sources of traffic with tolerance for latency and need for computational resources. We also show the benefits of using a history of past intervals to introduce hysteresis into mapping decisions, thereby avoiding oscillatory mappings and transient migrations that would hurt performance. Experiments with a broad range of multiprogrammed parallel, graph processing, and data management workloads on 40-CPU and 80-CPU machines show that SAM-MPH obtains ideal performance for standalone applications and improves performance by up to 61% over the default Linux scheduler for mixed workloads.
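
As a rough illustration of the measurement step described in the abstract, the sketch below (not taken from the paper) uses Linux perf_event_open() to count total cycles and backend stall cycles for the calling thread over one interval and derives a stall fraction as a proxy for latency tolerance; the specific counters, the 100 ms interval, and the 0.5 threshold are assumptions made for illustration, not details of SAM-MPH.

/*
 * Illustrative sketch only (not the SAM-MPH implementation): estimate a
 * thread's latency tolerance by comparing hardware-counted stall cycles
 * against total cycles over one sampling interval via Linux perf_event_open().
 */
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int open_counter(uint64_t config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = config;
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    /* perf_event_open has no glibc wrapper; pid 0 = calling thread, cpu -1 = any CPU. */
    return (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    /* PERF_COUNT_HW_STALLED_CYCLES_BACKEND is not exposed by every PMU. */
    int cycles_fd = open_counter(PERF_COUNT_HW_CPU_CYCLES);
    int stalls_fd = open_counter(PERF_COUNT_HW_STALLED_CYCLES_BACKEND);
    if (cycles_fd < 0 || stalls_fd < 0) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(cycles_fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(stalls_fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(cycles_fd, PERF_EVENT_IOC_ENABLE, 0);
    ioctl(stalls_fd, PERF_EVENT_IOC_ENABLE, 0);

    usleep(100 * 1000);   /* stand-in for one scheduling interval of real work */

    ioctl(cycles_fd, PERF_EVENT_IOC_DISABLE, 0);
    ioctl(stalls_fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t cycles = 0, stalls = 0;
    if (read(cycles_fd, &cycles, sizeof(cycles)) != (ssize_t)sizeof(cycles) ||
        read(stalls_fd, &stalls, sizeof(stalls)) != (ssize_t)sizeof(stalls)) {
        fprintf(stderr, "failed to read counters\n");
        return 1;
    }

    /* A low stall fraction suggests enough instruction-level parallelism to
     * tolerate the added latency of sharing a core or socket; a high fraction
     * suggests the thread is stall-bound. The 0.5 cutoff is arbitrary. */
    double stall_frac = cycles ? (double)stalls / (double)cycles : 0.0;
    printf("stall fraction = %.2f -> %s\n", stall_frac,
           stall_frac < 0.5 ? "latency tolerant" : "stall bound");

    close(cycles_fd);
    close(stalls_fd);
    return 0;
}

In the paper's scheduler, this kind of stall statistic is combined with information on sources of traffic and with a history of past intervals; the fixed threshold above merely stands in for that decision logic.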

Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

BibTeX
@inproceedings {196290,
author = {Sharanyan Srikanthan and Sandhya Dwarkadas and Kai Shen},
title = {Coherence Stalls or Latency Tolerance: Informed {CPU} Scheduling for Socket and Core Sharing},
booktitle = {2016 USENIX Annual Technical Conference (USENIX ATC 16)},
year = {2016},
isbn = {978-1-931971-30-0},
address = {Denver, CO},
pages = {323--336},
url = {https://www.usenix.org/conference/atc16/technical-sessions/presentation/srikanthan},
publisher = {USENIX Association},
month = jun,
}
Download
  • Srikanthan PDF
  • Presentation slides
  • Presentation audio (MP3)
