If you're not happy, we're not happy. If you feel a tutorial does not meet the high standards you have come to expect from USENIX, let us know by the first break and we will transfer you to any other available tutorial immediately.
Continuing Education Units (CEUs)
USENIX provides Continuing Education Units for a small additional administrative fee. The CEU is a nationally recognized standard unit of measure for continuing education and training and is used by thousands of organizations.
Two half-day tutorials qualify for 0.6 CEUs. You can request CEU credit by completing the CEU section on the registration form. USENIX provides a certificate for each attendee taking a tutorial for CEU credit. CEUs are not the same as college credits. Consult your employer or school to determine their applicability.
Training Materials on USB Drives
Training materials will be provided to you on an 8GB USB drive. If you'd like to access them during your class, remember to bring a laptop.
Half Day Morning
Write optimization refers to a set of techniques used to improve the performance of databases and file systems. Examples of write-optimized data structures include log-structured merge-trees (LSM-trees) and Bε-trees. Systems that use such data structures include BetrFS, HBase, LevelDB, TableFS, TokuMX, and TokuDB, among others.
This tutorial reviews write optimization from the perspectives of analysis and engineering. We provide a framework for understanding which data structure will perform well on which workloads.
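To give a rough feel for the buffering idea behind write-optimized structures, here is a minimal, illustrative sketch (not the tutorial's material, and far simpler than a real LSM-tree): recent writes accumulate in a small in-memory buffer, which is flushed as a sorted, immutable run when it fills; reads consult the buffer first, then runs from newest to oldest.

```python
# Minimal sketch of an LSM-style key-value store, for illustration only.
# Writes go to an in-memory buffer; when the buffer fills, it is flushed
# as a sorted immutable "run". Reads check the buffer, then runs from
# newest to oldest. Real systems (LevelDB, HBase, etc.) keep runs on
# disk and add compaction and Bloom filters.

class TinyLSM:
    def __init__(self, buffer_limit=4):
        self.buffer = {}             # mutable in-memory buffer (memtable)
        self.runs = []               # sorted immutable runs, newest last
        self.buffer_limit = buffer_limit

    def put(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_limit:
            # Flush: one batched sequential write instead of many random ones.
            self.runs.append(sorted(self.buffer.items()))
            self.buffer = {}

    def get(self, key):
        if key in self.buffer:       # freshest data wins
            return self.buffer[key]
        for run in reversed(self.runs):
            for k, v in run:
                if k == key:
                    return v
        return None
```

The sketch shows why inserts are cheap (buffered and batched) while a point read may have to touch several runs; a real implementation binary-searches each sorted run and periodically merges runs (compaction) to bound read cost.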
Michael A. Bender is a professor of computer science at Stony Brook University. He was Founder and Chief Scientist at Tokutek, Inc, an enterprise database company, which was acquired by Percona in 2015.
Bender's research interests span the areas of data structures and algorithms, I/O-efficient computing, scheduling, and parallel computing. He has coauthored over 125 articles on these and other topics. He has won several awards, including an R&D 100 Award, a Test-of-Time award, two Best Paper Awards, and five awards for graduate and undergraduate teaching.
Bender received his B.A. in Applied Mathematics from Harvard University in 1992 and obtained a D.E.A. in Computer Science from the Ecole Normale Superieure de Lyon, France in 1993. He completed a Ph.D. on Scheduling Algorithms from Harvard University in 1998. He has held Visiting Scientist positions at both MIT and King's College London.
Martin Farach-Colton is a professor of computer science at Rutgers University. He was Founder and CTO at Tokutek, Inc, an enterprise database company, which was acquired by Percona in 2015.
Farach-Colton works on pure and applied algorithms in I/O-efficient storage systems, streaming algorithms and string matching. He has coauthored over 150 articles. He has won several awards, including a Sloan Foundation Fellowship, a Test-of-Time award, a Best Paper award, and teaching awards. He was named a distinguished alum of the University of Maryland Computer Science Department on the occasion of their 40th anniversary.
Farach-Colton received his B.S. in Mathematics and Chemistry from the University of South Carolina in 1984. He received his M.D. from Johns Hopkins in 1988 and his Ph.D. from the University of Maryland in 1991. He has been a Member of Technical Staff at Bell Labs (1997-98) and was an early employee of Google, Inc. (2000-2002).
Rob Johnson is a Research Assistant Professor at Stony Brook University and conducts research on Security, Big Data Algorithms, and Cryptography. He is director of the Security, Programming Languages, And Theory (SPLAT) lab at Stony Brook, the Cryptography Lab at the New York Center for Excellence in Wireless and Information Technology (CEWIT), and the Smart Grid Cyber-Security Testing Lab of the New York Advanced Energy Research and Technology Center (AERTC).
He does theoretical work with an impact on the real world. He developed BetrFS, a file system that uses recent advances in data structures to improve performance on some operations by over an order of magnitude. He invented the quotient filter, a high-performance alternative to the Bloom filter for Big Data applications. He founded cache-adaptive analysis, a theoretical framework for designing and analyzing algorithms that dynamically share memory with other processes. He broke the High-bandwidth Digital Content Protection (HDCP) crypto-system used in almost all DVD players and TVs. He co-authored CQual, a static analysis tool that has found dozens of bugs in the Linux kernel and has been used to audit the entire Debian Linux distribution for format-string bugs.
Rob completed his Ph.D. at UC Berkeley in 2006.
This tutorial is oriented toward administrators and developers who manage and use large-scale storage systems. An important goal of the tutorial is to give the audience the foundation for effectively comparing different storage system options, as well as a better understanding of the systems they already have.
Cluster-based parallel storage technologies manage millions of files and thousands of concurrent jobs, delivering performance that scales from tens to hundreds of GB/sec. This tutorial will examine current state-of-the-art high-performance file systems and the underlying technologies employed to deliver scalable performance across a range of scientific and industrial applications.
The tutorial starts with a look at storage devices and SSDs in particular, which are growing in importance in all storage systems. Next we look at how a file system is put together, comparing and contrasting SAN file systems, scale-out NAS, object-based parallel file systems, and cloud-based storage systems.
Topics include SSD technology, scaling the data path, scaling metadata, fault tolerance, manageability, and cloud storage. Specific systems are discussed, including Ceph, Lustre, GPFS, PanFS, HDFS (Hadoop Distributed File System), and OpenStack.
Brent Welch is a senior staff software engineer at Google, where he works on their public Cloud platform. He was Chief Technology Officer at Panasas and has also worked at Xerox-PARC and Sun Microsystems Laboratories. Brent has experience building software systems from the device driver level up through network servers, user applications, and graphical user interfaces. While getting his Ph.D. at the University of California, Berkeley, Brent designed and built the Sprite distributed file system. He is the creator of the TclHttpd web server, the exmh email user interface, and the author of Practical Programming in Tcl and Tk.
Half Day Afternoon
In this tutorial, we will introduce the audience to the lunatic fringe of extreme high-performance computing and its storage systems. The most difficult challenge in HPC storage is caused by millions (soon to be billions) of simultaneously writing threads. Although cloud providers handle workloads of comparable, or larger, aggregate scale, the HPC challenge is unique because the concurrent writers are modifying shared data.
We will begin with a brief history of HPC computing covering the previous few decades, bringing us into the petaflop era, which started in 2009. Then we will discuss the unique computational science in HPC so that the audience can understand the unavoidability of its unique storage challenges. We will then move into a discussion of archival storage and the hardware and software technologies needed to store today's exabytes of data forever. From archive we will move into the parallel file systems of today and will end the lecture portion of the tutorial with a discussion of anticipated HPC storage systems of tomorrow. Of particular focus will be namespaces handling concurrent modifications to billions of entries, as this is what we believe will be the largest challenge in the exascale era.
The tutorial will end with a free-ranging, audience-directed panel.
- A brief history lesson about the past 30 years of supercomputers
- An understanding of what makes HPC computing unique and the entailing storage challenges
- An overview of current HPC storage technologies such as burst buffers, parallel file systems, and archival storage
- A glimpse into the future of HPC storage technologies for both hardware and software
- Insights into unique research opportunities to advance HPC storage
Gary Grider is the Leader of the High Performance Computing (HPC) Division at Los Alamos National Laboratory. As Division Leader, Gary is responsible for all aspects of High Performance Computing technologies and deployment at Los Alamos. Additionally, Gary is responsible for managing the R&D portfolio for keeping the new technology pipeline full to provide solutions to problems in the Lab’s HPC environment, through funding of university and industry partners.
Gary is also the US Department of Energy Exascale Storage, IO, and Data Management National Co-Coordinator. In this role, Gary helps manage the U.S. government investments in Data Management, Mass Storage, and IO. Gary has 30 active patents/applications in the data storage area and has been working in HPC and HPC-related storage since 1984.
John Bent, Chief Architect for Seagate Government Solutions, has been working on storage systems for over 20 years. After completing his data-aware scheduling dissertation at Wisconsin in 2005, John spent the next 10 years working for Gary Grider at LANL designing, maintaining, and measuring some of the world's largest parallel storage systems. Now at Seagate Gov, John helps design and map Seagate storage products, both hardware and systems, to emerging workloads in both Enterprise and Extreme IO.
Some of John’s more influential research has been the Parallel Log-structured File System and the DOE sponsored FastForward project prototyping an exascale storage system with Intel and The HDF Group. John is a former anthropology major who spent two years spearfishing on the equator while working as a Peace Corps volunteer.
Both Windows and Linux now contain support for Persistent Memory, an emerging non-volatile memory (NVM) technology. Persistent Memory is available today in the form of NVDIMMs and is expected to explode in capacity in the near future. Unlike other NVM technologies, such as SSDs, Persistent Memory provides a byte-addressable programming model, allowing direct memory access like DRAM, but retaining its contents across power loss. Technologies such as Intel’s 3D XPoint are expected to provide terabytes of NVM per CPU socket, with performance near DRAM speeds. The result offers applications a new tier for data placement in addition to the traditional memory and storage tiers: the persistent memory tier. While there are numerous ways for an OS to leverage Persistent Memory in a way that is transparent to the application, converting an application to be "persistent memory aware" will allow the highest performance benefit.
This tutorial will start with the basic SNIA NVM Programming Model used by operating systems to expose Persistent Memory to applications. We will walk through code examples showing how applications get access to Persistent Memory and we will pay special attention to safe programming practices such as flushing to persistence, atomic operations, and writing power-fail safe code. We will look at CPU instructions designed for atomic operations, cache flushing, and fencing, and how they interact with Persistent Memory.
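As a rough illustration of the load/store programming model described above (a hedged sketch, not the tutorial's code): the program maps a file into memory, stores bytes directly into the mapping, and then explicitly flushes to make them durable. This example uses an ordinary memory-mapped file with an msync-style flush as a stand-in; with true persistent memory exposed through a DAX mapping, the flush step would instead be CPU cache-flush instructions plus a fence, as the tutorial covers.

```python
import mmap
import os

# Sketch of the load/store + explicit-flush model. An ordinary file
# stands in for an NVDIMM here; on real persistent memory (DAX-mapped),
# durability would come from cache-flush and fence instructions rather
# than msync. The file name is made up for this example.

PATH = "pmem_demo.bin"
SIZE = 4096

with open(PATH, "wb") as f:      # create and size the backing "device"
    f.write(b"\x00" * SIZE)

fd = os.open(PATH, os.O_RDWR)
m = mmap.mmap(fd, SIZE)          # map it: loads/stores now hit the mapping

m[0:5] = b"hello"                # a direct store into the mapping
m.flush()                        # explicit flush: make the stores durable

m.close()
os.close(fd)
```

The key point the model makes is that a store alone is not durable: until the explicit flush (msync here; cache flush plus fence on real persistent memory), a power failure may lose the data.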
Next, the tutorial will provide a brief survey of available libraries, compilers, and research in this area. We will then walk through some more complex examples of persistent memory programming in C, C++, and Java. Using the open source NVM Libraries from http://pmem.io, we will show how to solve common programming pain points and how the higher-level languages can help avoid common persistent memory programming mistakes.
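One classic pain point the libraries address is making a multi-step update power-fail safe. The ordering discipline can be sketched with ordinary files (an illustrative analogy, not the pmem.io API): write the new version to a scratch location, flush it durably, and only then atomically switch to it, so a crash at any instant leaves either the complete old value or the complete new value, never a torn mix.

```python
import os

# Sketch of a power-fail-safe update via copy-on-write plus an atomic
# switch: write the new version to a scratch file, fsync it, then rename
# it over the old version (rename is atomic on POSIX file systems). On
# persistent memory, libraries such as libpmemobj provide the analogous
# guarantee with transactions instead of rename. File names are made up.

def atomic_write(path, data):
    tmp = path + ".tmp"                 # scratch copy, never read by others
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)                    # step 1: new copy is durable
    finally:
        os.close(fd)
    os.rename(tmp, path)                # step 2: atomic switch to new copy
    # (Full durability of the rename itself would also require fsyncing
    # the containing directory; omitted here for brevity.)

atomic_write("counter.dat", b"41")
atomic_write("counter.dat", b"42")      # old value replaced atomically
```

The essential idea carries over to persistent memory programming: durability (flush) must be ordered before visibility (the atomic switch), which is exactly what the flush-and-fence patterns and transactional libraries discussed in the tutorial enforce.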
- The SNIA NVM Programming Model
- How the Intel Architecture Supports Persistent Memory
- The Challenges of Persistent Memory Programming
- The Current State of the Persistent Memory Ecosystem
- Programming Using the NVM Libraries from http://pmem.io
- C, C++, and Java Persistent Memory Programming Techniques
Andy Rudoff is a Principal Engineer at Intel Corporation, focusing on Non-Volatile Memory programming. He is a contributor to the SNIA NVM Programming Technical Work Group. His more than 30 years of industry experience includes design and development work in operating systems, file systems, networking, and fault management at companies large and small, including Sun Microsystems and VMware. Andy has taught various Operating Systems classes over the years and is a co-author of the popular UNIX Network Programming textbook.