FireBox: A Hardware Building Block for 2020 Warehouse-Scale Computers
Krste Asanović, University of California, Berkeley
The first generation of Warehouse-Scale Computers (WSC) built everything from commercial off-the-shelf (COTS) components: computers, switches, and racks. The second generation, which is being deployed today, uses custom computers, custom switches, and even custom racks, albeit all built using COTS chips. We believe the third generation of WSC in 2020 will be built from custom chips. If WSC architects are free to design custom chips, what should they do differently?
FireBox is a new project at UC Berkeley proposing a system architecture for these third-generation WSCs. FireBox is a 50kW WSC building block containing a thousand compute sockets and 100 Petabytes (2^57 B) of non-volatile memory connected via a low-latency, high-bandwidth optical switch. We expect a 2020 WSC to be composed of 200 to 400 FireBoxes instead of 20,000 to 40,000 servers, thereby reducing management overhead. Each compute socket contains a System-on-a-Chip (SoC) with around 100 cores connected to high-bandwidth on-package DRAM. Fast SoC network interfaces reduce the software overhead of communicating between application services, and high-radix network backplane switches connected by Terabit/sec optical fibers reduce the network's contribution to tail latency. The very large non-volatile store directly supports in-memory databases, and pervasive encryption ensures that data is always protected in transit and in storage.
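The scale implied by these figures can be sanity-checked with some back-of-envelope arithmetic. The sketch below uses only the numbers stated above (sockets, cores per SoC, memory capacity, and the server-replacement ratio); the derived per-socket and aggregate values are illustrative, not project specifications.

```python
# Aggregate figures for one FireBox, derived from the abstract's numbers.
SOCKETS_PER_FIREBOX = 1_000   # compute sockets per FireBox
CORES_PER_SOCKET = 100        # ~100 cores per SoC
NVM_BYTES = 2**57             # non-volatile memory, stated as ~100 PB

cores_per_firebox = SOCKETS_PER_FIREBOX * CORES_PER_SOCKET
nvm_per_socket_tb = NVM_BYTES / SOCKETS_PER_FIREBOX / 1e12

# 200-400 FireBoxes replace 20,000-40,000 servers: ~100 servers each.
servers_per_firebox = 20_000 // 200

print(cores_per_firebox)       # 100000 cores per FireBox
print(round(nvm_per_socket_tb))  # ~144 TB of NVM per socket
print(servers_per_firebox)     # 100
```

Note that 2^57 bytes works out to roughly 144 TB of non-volatile memory per socket, so "100 Petabytes" is a round-number description of the 2^57 B total.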
To explore the many design options before building FireBox, we are building on DIABLO-1 (Datacenter-in-a-Box at Low Cost), our prior work simulating a WSC using FPGAs. DIABLO-2 will simulate an entire FireBox, including the fiber-optic network, the switch, the NICs, and 1,000 SoCs, with every core running the full BDAS stack (from the AMP Lab) and the Linux OS, as well as interactive services and batch applications, with only a 1000x slowdown relative to real time.
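To put the 1000x slowdown in perspective, the short sketch below converts simulated time to wall-clock time; the slowdown factor is from the text, while the target durations are illustrative assumptions, not DIABLO-2 benchmarks.

```python
# Wall-clock cost of simulating a FireBox at a 1000x slowdown.
SLOWDOWN = 1_000  # DIABLO-2 runs ~1000x slower than real time

for target_seconds in (1, 60):  # illustrative simulated durations
    wall_seconds = target_seconds * SLOWDOWN
    print(f"{target_seconds}s of simulated time -> "
          f"{wall_seconds / 3600:.1f} h of wall clock")
```

So one simulated second costs about seventeen minutes of wall-clock time, and a simulated minute costs most of a day, which is why a 1000x slowdown is considered fast for full-system simulation at this scale.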
Krste Asanović received a B.A. in Electrical and Information Sciences from Cambridge University in 1987 and a Ph.D. in Computer Science from U.C. Berkeley in 1998. He was an Assistant and Associate Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, Cambridge, from 1998 to 2007. In 2007, he joined the faculty at the University of California, Berkeley, where he co-founded the Berkeley Parallel Computing Laboratory. He is currently a Professor of Electrical Engineering and Computer Sciences and Director of the Berkeley ASPIRE Laboratory, which is developing new techniques to increase computing efficiency above the transistor level. He is an IEEE Fellow and an ACM Distinguished Scientist.
Open Access Media
USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.