Julius Plenz, Google
Randomized load balancing is a common strategy to distribute requests across a server farm. When M requests are randomly assigned to N servers, each server is responsible for M/N requests on average. But in practice the distribution is not uniform: how many requests does the busiest server receive, relative to the average? This is the "peak-to-average load ratio". It is an important quantity that describes how much capacity is "wasted" when a system is provisioned for peak load. The closer this ratio is to one, the more uniformly the servers are utilized.
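As a rough illustration (not part of the talk itself), the ratio can be estimated with a short simulation; the function name and the sample values of M and N below are made up for this sketch:

import random
from collections import Counter

def peak_to_average(m, n):
    """Assign m requests to n servers uniformly at random and return
    the busiest server's load divided by the average load m/n."""
    loads = Counter(random.randrange(n) for _ in range(m))
    return max(loads.values()) / (m / n)

for m, n in [(1_000, 1_000), (100_000, 1_000), (10_000_000, 1_000)]:
    print(f"M={m:>10} N={n}: peak/average = {peak_to_average(m, n):.2f}")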
This talk gives a quick overview of two theorems from the following papers: "Balls into Bins" and "Small Cache, Big Effect."
Applied to the "requests to servers" scenario, the first theorem gives a closed-form expression for the number of requests hitting the busiest server (with high probability). One somewhat surprising conclusion is that the peak-to-average ratio gets worse if the number of requests and the number of servers grow proportionally. The second theorem analyzes how effective small, fast caches can be, and how effective they remain as a system scales in size.
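To give the flavor of the first result, stated loosely here (the precise ranges of M and N and the lower-order terms are in "Balls into Bins"), the load of the busiest server behaves roughly as:

\[
\text{max load} \approx \frac{\ln N}{\ln \ln N} \quad (M = N),
\qquad
\text{max load} \approx \frac{M}{N} + \sqrt{2\,\frac{M}{N}\,\ln N} \quad (M \gg N \ln N).
\]

Dividing by the average M/N, the peak-to-average ratio in the second regime is roughly 1 + sqrt(2 N ln N / M), which only approaches one when M grows much faster than N; this is consistent with the observation that growing M and N proportionally does not improve the ratio.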
Julius Plenz, Google
Julius Plenz is an SRE at Google in Sydney, where he works on a large internal storage system.