FAST '05
TUTORIAL PROGRAM


Tuesday, December 13, 2005
Tuesday Morning Half-Day Tutorials
T1 ERASURE CODES FOR STORAGE APPLICATIONS
Jim Plank, University of Tennessee
9:00 a.m.–12:30 p.m.

Who should attend: Anyone who deals with storage systems. This includes researchers, programmers, and managers of disk array systems, distributed storage systems, wide-area storage systems, and peer-to-peer storage systems. No hard-core math or programming knowledge is required. The goal is for the participants to come away with a basic understanding of how erasure codes may be used in storage systems, including all the available techniques and technologies, plus their performance tradeoffs and their complexity.

We all know that RAID Level 5, or parity coding, can protect storage systems composed of multiple disks from single disk failures. However, disk array systems are becoming so large that multiple disk failures occur with a high enough probability to become worrisome. With wide-area, distributed, and peer-to-peer storage systems, storage failures and network outages are everyday occurrences.
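The single-failure protection described above rests on a simple property of XOR: a parity block computed over a stripe can rebuild any one lost data block. As a minimal illustrative sketch (not part of the tutorial materials), this can be shown in a few lines of Python:

```python
# Illustrative sketch of RAID-5-style parity coding.
# The parity block is the bytewise XOR of all data blocks; any single
# lost block can be rebuilt by XOR-ing the parity with the survivors.

def xor_blocks(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks):
    """Compute the parity block for a stripe of data blocks."""
    parity = bytes(len(blocks[0]))  # all-zero block
    for blk in blocks:
        parity = xor_blocks(parity, blk)
    return parity

def recover(blocks, parity, lost_index):
    """Rebuild the block at lost_index from parity and the survivors."""
    rebuilt = parity
    for i, blk in enumerate(blocks):
        if i != lost_index:
            rebuilt = xor_blocks(rebuilt, blk)
    return rebuilt

data = [b"AAAA", b"BBBB", b"CCCC"]
p = make_parity(data)
# Pretend block 1 is lost; the parity lets us rebuild it.
assert recover([data[0], None, data[2]], p, 1) == b"BBBB"
```

This works for exactly one failure per stripe; tolerating multiple simultaneous failures requires the richer codes the tutorial covers.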

Unfortunately, solutions to tolerating multiple failures are not as standardized as RAID Level 5. The goal of this tutorial is for the participants to understand all of the options for tolerating multiple failures in storage systems.

Topics include:

  • Erasure coding technologies
    • Standard Reed-Solomon coding, for tolerating any number of failures with optimal space overhead.
    • Cauchy Reed-Solomon coding, which improves the performance of Reed-Solomon coding.
    • Parity-Array coding, including EVENODD coding, plus other Horizontal and Vertical codes. These are parity-based codes that tolerate small numbers of failures with much better performance than Reed-Solomon codes.
    • Low-Density, Parity-Check (LDPC) coding, including Tornado Codes. These are parity-based codes that tolerate large numbers of failures with much better performance than Reed-Solomon codes.
  • A comparison of the above codes, with a discussion of programming/implementation options

Jim Plank (T1) is an Associate Professor of Computer Science at the University of Tennessee. His field of specialty is fault-tolerant computing, with an emphasis on checkpointing and wide-area storage systems. He approaches the difficult field of coding theory from the point of view of a systems researcher, and he has published multiple articles that explain the mechanics of erasure codes and their practical value to the systems community. A seasoned instructor, Professor Plank has won six teaching awards during his twelve years at Tennessee. He received his Ph.D. from Princeton University in 1993.


T2 OBJECT BASED CLUSTER STORAGE SYSTEMS
David Nagle and Brent Welch, Panasas
9:00 a.m.–12:30 p.m.

Who should attend: Anyone interested in the fundamentals of object-based storage (OSD) and their file systems. The tutorial describes the current state of object-based storage technologies, including the T10 OSD interface, OSD security models, and details of the file systems that use OSD technology.

The past few years have seen significant advances in cluster-based storage, with new systems embracing object-based storage to provide the scalability, performance, and fault-tolerance necessary to meet the demands of cluster applications. Products adopting the object-model include Panasas, Lustre, and Centera. This tutorial will present the fundamentals of object-based storage, including the underlying architectural principles and how various products have adapted those principles into their product designs.

Topics include:

  • An overview of the object-based storage device (OSD) interface as defined by the ANSI/T10 standard:
    • The object model
    • The OSD command set
    • OSD security
  • The decoupled data/metadata storage architecture commonly found in cluster storage systems
  • How the OSD interface, security model, networking, and RAID play critical roles in the performance and fault-tolerance of these systems
  • An in-depth comparison of the various object-based storage systems available today
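The central idea behind the topics above is that an OSD exposes variable-length objects addressed by ID, with access mediated by per-object capabilities, rather than exposing raw blocks. The following toy sketch (a hypothetical interface, not the T10 wire protocol) illustrates that distinction:

```python
# Toy sketch of an object-based storage device (OSD) interface.
# Unlike a block device, an OSD addresses variable-length objects by
# ID and can enforce a per-object capability (the OSD security model).
# This is an illustrative abstraction, not the ANSI/T10 command set.

class ToyOSD:
    def __init__(self):
        self.objects = {}  # object ID -> bytearray
        self.caps = {}     # object ID -> capability token

    def create(self, obj_id, cap):
        self.objects[obj_id] = bytearray()
        self.caps[obj_id] = cap

    def _check(self, obj_id, cap):
        if self.caps[obj_id] != cap:
            raise PermissionError("bad capability")

    def write(self, obj_id, cap, offset, data):
        self._check(obj_id, cap)
        buf = self.objects[obj_id]
        buf[offset:offset + len(data)] = data

    def read(self, obj_id, cap, offset, length):
        self._check(obj_id, cap)
        return bytes(self.objects[obj_id][offset:offset + length])

osd = ToyOSD()
osd.create(42, cap="token-from-metadata-server")
osd.write(42, "token-from-metadata-server", 0, b"hello")
assert osd.read(42, "token-from-metadata-server", 0, 5) == b"hello"
```

In the decoupled architecture the tutorial describes, a metadata server hands out such capabilities while clients transfer data directly to the OSDs.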

David Nagle (T2) is currently the Advanced Development Architect for Panasas. Before joining Panasas, David was Director of Carnegie Mellon University's Parallel Data Lab, the leading university research group in object-based storage. During his tenure at CMU he was co-PI of the Network-attached Secure Disk Project (NASD) and led the Active Storage Networks and MEMS-based storage research projects. During the past 3 years David has worked with the ANSI T10 committee to complete the proposed standard (ratified in late 2004) and the upcoming OSD V2.0 standard.

Brent Welch (T2) is Director of Software Architecture at Panasas. Panasas has developed a scalable, high-performance, object-based distributed file system that is used in a variety of HPC environments, including many of the Top500 supercomputers. He has worked at Xerox PARC and Sun Microsystems Laboratories. Brent has experience building software systems from the device driver level up through network servers, user applications, and graphical user interfaces. While getting his Ph.D. at UC Berkeley, he designed and built the Sprite distributed file system. Brent is the creator of the TclHttpd Web server and the exmh email user interface and is the author of Practical Programming in Tcl and Tk.
Tuesday Afternoon Half-Day Tutorials
T3 NFS VERSION 4 (THE INSIDE STORY)
Spencer Shepler, Sun Microsystems
1:30 p.m.–5:00 p.m.

Who should attend: System administrators and designers involved with the development, deployment, or maintenance of distributed data systems, and specifically installations that use or may use NFS. Participants should know the basics of TCP/IP networking and understand the general concepts of distributed filesystems.

The Network File System (NFS) has been in existence in one form or another for 20 years. Even though NFS version 4 builds on the general principles of previous versions, new features and protocol constructs have been introduced that have changed the playing field. This tutorial will briefly review the basic NFS principles and then move quickly into a detailed review of the latest NFS protocol.

The information provided will be useful to anyone who will need to evaluate or support NFSv4 usage.

Topics include:

  • The mandatory security framework
  • COMPOUND operations and their construction and use
  • The state model employed by NFSv4 for support of OPEN/CLOSE
  • Extended attributes
  • Access Control Lists (ACLs)
  • File Delegations (support for aggressive file data caching)
  • Replication/migration
  • Current implementations and their support for various features of the NFSv4 protocol
  • The potential performance impact of NFSv4 and its implementations
  • IETF activities to extend the current NFSv4 protocol through its minor versioning mechanism
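Of the constructs listed above, COMPOUND is perhaps the most visible departure from earlier NFS versions: several operations are bundled into one RPC, evaluated in order against a server-side "current filehandle," with processing stopping at the first failure. The following toy sketch (illustrative only; not the real NFSv4 wire format or error model) shows that evaluation pattern:

```python
# Toy sketch of NFSv4 COMPOUND evaluation (not the real wire format).
# A COMPOUND bundles operations into one request; the server evaluates
# them in order, threading a "current filehandle" (cfh) through the
# sequence and stopping at the first failing operation.

def compound(root, ops):
    results = []
    cfh = None  # current filehandle
    for op, arg in ops:
        if op == "PUTROOTFH":
            cfh = root
            results.append(("PUTROOTFH", "OK"))
        elif op == "LOOKUP":
            if isinstance(cfh, dict) and arg in cfh:
                cfh = cfh[arg]
                results.append(("LOOKUP", "OK"))
            else:
                results.append(("LOOKUP", "NFS4ERR_NOENT"))
                break  # stop at first failure, as COMPOUND does
        elif op == "READ":
            results.append(("READ", cfh))
    return results

# Hypothetical in-memory namespace standing in for the server's tree.
root = {"etc": {"motd": b"hello"}}
out = compound(root, [("PUTROOTFH", None),
                      ("LOOKUP", "etc"),
                      ("LOOKUP", "motd"),
                      ("READ", None)])
assert out[-1] == ("READ", b"hello")
```

Batching a whole pathname traversal plus a read into one round trip is a large part of how NFSv4 reduces the chattiness of earlier versions over wide-area links.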

Spencer Shepler (T3) has been involved with NFS and its design and implementation for more than 16 years, first with IBM and more recently with Sun Microsystems. Spencer has also been involved with the definition and delivery of the SPEC SFS benchmark used to measure NFS server performance. Spencer has been involved with NFS version 4 design and development from the beginning and is currently the IETF NFSv4 working group co-chair and document editor.


T4 SNIA TUTORIALS: SAS, SATA, FIBRE CHANNEL, AND SCSI STORAGE PROTOCOLS
Ahmad Zamer, Intel Corporation, and M.K. Jibbe, Engenio Information Technologies
1:30 p.m.–5:00 p.m.

Who should attend: Researchers, system administrators, IT managers, and others wanting a technical look under the covers of the leading and coming storage protocols. This tutorial is provided to FAST by the Tutorials Committee of the Storage Networking Industry Association (SNIA).

The choices in magnetic disk storage protocols, until recently primarily SCSI for the enterprise and ATA for the desktop, are expanding. Serial versions of both protocols, SATA and SAS, are emerging with greater degrees of interoperability. Even more novel is the growing interest in mapping the SCSI protocol on top of the TCP/IP transport. Comparing these with Fibre Channel, the first Storage Area Network (SAN) technology and still the growing, dominant replacement for SCSI in the enterprise, gives the attendee a broad view of the coming choices in storage protocols. This tutorial will be composed of three one-hour mini-tutorials covering technical and market issues of SAS/SATA, Fibre Channel, and iSCSI.

  • Serial Attached SCSI (SAS) Transforming the Storage Market (Ahmad Zamer)
    This tutorial provides an introduction to two important hard disk serial interface technologies. Serial Attached SCSI (SAS) and Serial ATA (SATA) protocols are explained and their benefits outlined. The session also explains the compatibilities between the two protocols and their benefits to consumers. The interoperability of a SAS infrastructure with both SATA and SAS disk drives will give IT managers storage subsystems with unprecedented flexibility and a wide range of price/performance points. Also discussed will be the dynamics behind the shift from parallel storage interconnects to serial technologies. More emphasis is placed on SAS, due to its role in the enterprise.
  • Fibre Channel Technologies (M. K. Jibbe)
    This tutorial will educate the user by providing fundamental knowledge of the Fibre Channel protocol, an overview of the functionality of the numerous components that comprise a Fibre Channel SAN, and discussion of the connectivity characteristics, architectural designs, and applications of Fibre Channel SANs.
  • IP Storage Protocols: iSCSI (Ahmad Zamer)
    This tutorial will explain the fundamentals of iSCSI and its deployment in various environments. The protocol and its relationship to SCSI will be explained and the use of software and hardware iSCSI initiators and targets will be discussed, along with the companion protocols for discovery and security.

Martin "Marty" Czekalski (T4) brings over twenty years of senior engineering management experience in advanced architecture development for ASICs, memory, and I/O subsystem design.

He currently serves as Interface Architecture Initiatives Manager within Maxtor's Enterprise Products Technical and Strategic Marketing Group.

Previous industry experience includes engineering management roles at Quantum and Digital Equipment Corporation. Additionally, at Digital Equipment Corp., he was a key member of the Storage Strategy Task Force and the Next Generation IO Task Force, setting the directions for storage and interface strategy.

Mr. Czekalski has participated in several interface standards committees and industry storage groups. He was an Executive Committee Member and a Founding Member of the Serial Attached SCSI Working Group, the founding organization for the SAS standard. He currently serves as Vice President and member of the Board of Directors of the SCSI Trade Association. Mr. Czekalski is also active with the T10 committee and the Trusted Computing Group.

Mr. Czekalski earned his MS degree in Electrical Engineering from the University of Maryland, and his BE degree in Electrical Engineering from the Stevens Institute of Technology.

John Hufferd (T4) is Senior Executive Director of Technology for Brocade.

M.K. Jibbe (T4) manages the test architect group at Engenio Information Technologies, Inc. (formerly LSI Logic Storage Systems) in Wichita, Kansas. As an architect lead, Dr. Jibbe and his group define and design the test requirements and test processes for all the Engenio Storage products. Dr. Jibbe is an interoperability architect at Engenio. In this role, Dr. Jibbe verifies that the Engenio Storage products are compliant with various standards and interoperable with all the third-party vendors supported by LSI Logic. Dr. Jibbe has been awarded 16 U.S. patents and has published 27 papers. The patents and papers are related to areas such as hardware development (chips emulating RAID technology), software development related to RAID technology, protocols and network compliance, and test tools. Dr. Jibbe is an Adjunct Professor at The Wichita State University, where he teaches classes on hardware simulation, MPP, SCSI and Fibre Channel protocols, computer architecture, and other basic electrical and computer engineering classes.


Last changed: 5 Dec. 2005 rc