SENIC: Scalable NIC for End-Host Rate Limiting

Authors: 

Sivasankar Radhakrishnan, University of California, San Diego; Yilong Geng and Vimalkumar Jeyakumar, Stanford University; Abdul Kabbani, Google Inc.; George Porter, University of California, San Diego; Amin Vahdat, Google Inc. and University of California, San Diego

Abstract: 

Rate limiting is an important primitive for managing server network resources. Unfortunately, software-based rate limiting suffers from limited accuracy and high CPU overhead, and modern NICs only support a handful of rate limiters. We present SENIC, a NIC design that can natively support tens of thousands of rate limiters—100x to 1000x the number available in NICs today. The key idea is that the host CPU only classifies packets, enqueues them in per-class queues in host memory, and specifies rate limits for each traffic class. The NIC maintains class metadata, computes the transmit schedule, and pulls packets from host memory only when they are due for transmission. We implemented SENIC on NetFPGA, with 1,000 rate limiters requiring just 30KB of SRAM, and it paced packets accurately. Further, in a memcached benchmark against software rate limiters, SENIC sustains up to 250% higher load while simultaneously keeping tail latency under 4ms at 90% network utilization.
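The host/NIC division of labor described in the abstract can be illustrated with a small sketch. The C program below is not from the paper; all names (class_meta, host_enqueue, senic_schedule_next, senic_transmit) are hypothetical. It mocks the pull-based design: the host only enqueues into per-class queues, while the NIC side keeps a few bytes of metadata per class and fetches a packet only once its rate limit makes it eligible.

/*
 * Minimal sketch of the SENIC split, under assumed names and types:
 * the host classifies and enqueues; the "NIC" side keeps per-class
 * metadata and pulls a packet only when its rate limit says it is due.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_CLASSES 4            /* SENIC scales this to tens of thousands */

struct class_meta {              /* NIC-resident metadata, a few bytes/class */
    uint64_t rate_bps;           /* configured rate limit */
    uint64_t next_tx_ns;         /* earliest time the next packet may depart */
    uint32_t head, tail;         /* indices into the host-memory queue */
};

static struct class_meta classes[NUM_CLASSES];

/* Host side: classify a packet into a class queue. The queue itself lives
 * in host memory; for brevity we only track occupancy here. */
static void host_enqueue(int cls) { classes[cls].tail++; }

/* NIC side: pick the non-empty class whose next transmit time is earliest
 * and already due. Returns -1 if nothing is eligible. */
static int senic_schedule_next(uint64_t now_ns)
{
    int best = -1;
    for (int c = 0; c < NUM_CLASSES; c++) {
        struct class_meta *m = &classes[c];
        if (m->head == m->tail || m->next_tx_ns > now_ns)
            continue;            /* empty, or not yet eligible */
        if (best < 0 || m->next_tx_ns < classes[best].next_tx_ns)
            best = c;
    }
    return best;
}

/* NIC side: "pull" one packet just in time (a real NIC would DMA it from
 * host memory here), then push the class's next eligible time forward by
 * pkt_bits / rate, which is what paces the class. */
static void senic_transmit(int cls, uint64_t now_ns, uint32_t pkt_bytes)
{
    struct class_meta *m = &classes[cls];
    m->head++;
    m->next_tx_ns = now_ns + (pkt_bytes * 8ULL * 1000000000ULL) / m->rate_bps;
    printf("class %d: sent %u bytes, next eligible at %llu ns\n",
           cls, pkt_bytes, (unsigned long long)m->next_tx_ns);
}

int main(void)
{
    classes[0].rate_bps = 1000000000ULL;   /* 1 Gb/s  */
    classes[1].rate_bps = 100000000ULL;    /* 100 Mb/s */
    host_enqueue(0); host_enqueue(0); host_enqueue(1);

    uint64_t now = 0;
    int cls;
    while ((cls = senic_schedule_next(now)) >= 0) {
        senic_transmit(cls, now, 1500);
        now = classes[cls].next_tx_ns;     /* advance the mock clock */
    }
    return 0;
}

A real NIC would replace the linear scan with a hardware structure such as a timing wheel or pipelined heap to scale to tens of thousands of classes; the sketch only conveys the division of labor between host CPU and NIC.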


Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

BibTeX
@inproceedings {179716,
author = {Sivasankar Radhakrishnan and Yilong Geng and Vimalkumar Jeyakumar and Abdul Kabbani and George Porter and Amin Vahdat},
title = {{SENIC}: Scalable {NIC} for {End-Host} Rate Limiting},
booktitle = {11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14)},
year = {2014},
isbn = {978-1-931971-09-6},
address = {Seattle, WA},
pages = {475--488},
url = {https://www.usenix.org/conference/nsdi14/technical-sessions/presentation/radhakrishnan},
publisher = {USENIX Association},
month = apr,
}
Download
  • Paper: Radhakrishnan PDF
  • Slides
  • Presentation Video
  • Presentation Audio (MP3)
