HotStorage '18 Workshop Program

All sessions will be held in the Essex Ballroom North unless otherwise noted.

Papers are available for download below to registered attendees now and to everyone beginning July 9, 2018. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author(s).

Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)

This content is available to:

HotStorage '18 Attendee List (PDF)
HotStorage '18 Paper Archive (ZIP)

Monday, July 9, 2018

7:50 am–8:50 am

Continental Breakfast

Essex Ballroom Foyer

8:50 am–9:00 am

Opening Remarks

Program Co-Chairs: Ashvin Goel, University of Toronto; Nisha Talagala, ParallelM

9:00 am–10:30 am

Revisiting Caching

Session Chair: Ioan Stefanovici, MSR

Caching or Not: Rethinking Virtual File System for Non-Volatile Main Memory

Ying Wang, Dejun Jiang, and Jin Xiong, SKL Computer Architecture, ICT, CAS; University of Chinese Academy of Sciences

Available Media

The Virtual File System (VFS) conventionally provides an abstraction over multiple instances of underlying physical file systems, as well as metadata caching, concurrency control, and permission checks, which benefits disk-based file systems. However, in this paper we find that VFS brings extra overhead when interacting with persistent memory (PM) file systems. We explore the opportunity of shortening the VFS stack for PM file systems. We present ByVFS, an optimization of VFS that directly accesses metadata in PM file systems, bypassing the VFS caching layer. The results show that ByVFS outperforms conventional VFS with a cold cache and provides comparable performance against conventional VFS with a warm cache. We also present potential issues when reducing VFS overhead.

DenseFS: a Cache-Compact Filesystem

Zev Weiss, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau, University of Wisconsin-Madison

Available Media

As nonvolatile memory technologies with access latencies comparable to DRAM proliferate, the CPU performance of previously storage-bound workloads becomes increasingly important. In this paper we examine the effects of the filesystem on cache behavior, a key aspect of CPU performance. We then develop DenseFS, a specialized filesystem that aims for a highly compact cache footprint and hence tries to minimize its cache pollution and the performance penalties it incurs. We find that DenseFS is effective in reducing the performance penalty of filesystem operations on user code, and can achieve dramatic reductions in cache miss rates as compared to existing filesystems.

Driving Cache Replacement with ML-based LeCaR

Giuseppe Vietri, Liana V. Rodriguez, Wendy A. Martinez, Steven Lyons, Jason Liu, and Raju Rangaswami, Florida International University; Ming Zhao, Arizona State University; Giri Narasimhan, Florida International University

Available Media

Can machine learning (ML) be used to improve on existing cache replacement strategies? We propose a general framework called LeCaR that uses the ML technique of regret minimization to answer the question in the affirmative. Surprisingly, we show that the LeCaR framework outperforms ARC using only two fundamental eviction policies, LRU and LFU. We also show that the performance gap increases as the size of the available cache gets smaller relative to the size of the working set.
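
The regret-minimization idea over two eviction experts can be sketched as follows. This is an illustrative simplification, not the authors' implementation; the class name, learning rate, and history handling are assumptions:

```python
import random
from collections import OrderedDict

class LeCaRSketch:
    """Toy cache combining LRU and LFU experts via multiplicative-weights
    regret minimization, a simplified sketch of the LeCaR framework."""

    def __init__(self, capacity, learning_rate=0.45, seed=0):
        self.capacity = capacity
        self.lr = learning_rate
        self.rng = random.Random(seed)
        self.cache = OrderedDict()                 # page -> frequency, in LRU order
        self.hist = {"LRU": set(), "LFU": set()}   # recently evicted pages, per expert
        self.w = {"LRU": 0.5, "LFU": 0.5}          # expert weights

    def _penalize(self, policy):
        # The expert responsible for a regretted eviction loses weight.
        self.w[policy] *= (1.0 - self.lr)
        total = self.w["LRU"] + self.w["LFU"]
        self.w = {k: v / total for k, v in self.w.items()}

    def _evict(self):
        lru_victim = next(iter(self.cache))               # oldest page
        lfu_victim = min(self.cache, key=self.cache.get)  # least-frequent page
        policy = "LRU" if self.rng.random() < self.w["LRU"] else "LFU"
        victim = lru_victim if policy == "LRU" else lfu_victim
        del self.cache[victim]
        self.hist[policy].add(victim)

    def access(self, page):
        if page in self.cache:
            self.cache[page] += 1
            self.cache.move_to_end(page)   # refresh LRU position
            return True                    # hit
        for policy in ("LRU", "LFU"):
            if page in self.hist[policy]:  # miss on a page we recently evicted
                self._penalize(policy)
                self.hist[policy].discard(page)
        if len(self.cache) >= self.capacity:
            self._evict()
        self.cache[page] = 1
        return False                       # miss
```

On a miss for a page that one expert recently evicted, that expert's weight decays, so the other policy is preferred for subsequent evictions.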

10:30 am–11:00 am

Break with Refreshments

Essex Ballroom Foyer

11:00 am–12:30 pm

New Memory Hierarchies

Session Chair: Garrett Swart, Oracle

Towards Transparent and Seamless Storage-As-You-Go with Persistent Memory

Hyeonho Song and Sam H. Noh, UNIST

Available Media

In traditional computer systems, memory and storage are statically divided and separately allocated. In such a strictly dichotomized system, resource usage becomes unbalanced; more memory is always needed, yet large portions of storage remain unused. The goal of this paper is to break this strict division of memory and storage and to present a system that can dynamically move the boundary between memory and storage as needed. Commercial Persistent Memory (PM) is now on the horizon. PM products based on the DIMM interface are expected to be available soon as well. In this paper, we assume a hybrid memory system composed of DRAM and DIMM-interface PM, where the intention is to use PM as storage space. For such a system, we propose Storage-As-You-Go (SAY-Go), a system that transparently adjusts the use of PM so that PM can be used as memory as well as storage as needed. To this end, we present the design and implementation of Persistent Memory Buddy (PMB), a memory allocation service that can freely grow and shrink the memory it manages.

BIBIM: A Prototype Multi-Partition Aware Heterogeneous New Memory

Gyuyoung Park and Miryeong Kwon, Yonsei University; Pratyush Mahapatra and Michael Swift, University of Wisconsin-Madison; Myoungsoo Jung, Yonsei University

Available Media

We describe a prototype multi-partition aware new memory controller and subsystem that precisely integrates DRAM with 3x nm phase change RAM (PRAM), referred to as BIBIM. In this work, we reveal the main challenges of bringing a new type of PRAM closer to main processors by evaluating our real 3x nm PRAM with persistent memory benchmarks. BIBIM implements hybrid cache logic in a 2x nm FPGA device, which can hide the long latency imposed by the underlying PRAM modules as well as support persistent operations. The cache logic of our controller can also serve multiple read requests while writing data into a target PRAM bank by taking into account PRAM's multi-partition architecture. The evaluation results demonstrate that the read and write latencies of BIBIM are 115 ns and 125 ns, which are 38% and 99% better than a pure PRAM-based memory subsystem. In addition, BIBIM can reduce blocking reads by 53%, on average, thereby shortening average write-after-read latency by 48%.

SecPM: a Secure and Persistent Memory System for Non-volatile Memory

Pengfei Zuo and Yu Hua, Huazhong University of Science and Technology

Available Media

In non-volatile memory, ensuring the security and correctness of persistent data is fundamental. However, the security and persistence issues are usually studied independently in existing work. To achieve both data security and persistence, simply combining existing persistence schemes with memory encryption is inefficient due to crash inconsistency and significant performance degradation. To bridge the gap between security and persistence, this paper proposes SecPM, a Secure and Persistent Memory system, which consists of a counter cache write-through (CWT) scheme and a locality-aware counter write reduction (CWR) scheme. Specifically, SecPM leverages the CWT scheme to guarantee crash consistency by ensuring that both the data and its counter are durable before the data flush completes, and leverages the CWR scheme to improve performance by exploiting the spatial locality of counter storage, log, and data writes. Preliminary experimental results demonstrate that SecPM significantly reduces the number of write requests and improves transaction throughput by using the CWR scheme.
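
The CWT invariant, that a data flush completes only after the matching encryption counter is also durable, can be modeled in a few lines. This is a toy model with illustrative names; SecPM enforces the invariant in the memory controller, not in software:

```python
class CounterWriteThroughModel:
    """Toy model of a counter write-through (CWT) scheme: a write is
    acknowledged only after both the data and the counter it was
    encrypted with are durable, so a crash can never leave data
    without its matching counter."""

    def __init__(self):
        self.counter_cache = {}   # volatile counter cache
        self.durable = {}         # models persistent memory

    def write(self, addr, value):
        ctr = self.counter_cache.get(addr, 0) + 1
        self.counter_cache[addr] = ctr
        # Write-through: the counter persists together with the data,
        # before the flush is considered complete.
        self.durable[("ctr", addr)] = ctr
        self.durable[("data", addr)] = (value, ctr)

    def recover(self, addr):
        # After a crash, data and counter are guaranteed consistent.
        value, ctr = self.durable[("data", addr)]
        assert self.durable[("ctr", addr)] == ctr
        return value
```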

12:30 pm–2:00 pm

Luncheon for Workshop Attendees

Essex Ballroom South

2:00 pm–3:30 pm

File Systems and Namespaces

Session Chair: Ric Wheeler, Red Hat

Tintenfisch: File System Namespace Schemas and Generators

Michael A. Sevilla, Reza Nasirigerdeh, Carlos Maltzahn, Jeff LeFevre, Noah Watkins, and Peter Alvaro, University of California, Santa Cruz; Margaret Lawson and Jay Lofstead, Sandia National Laboratories; Jim Pivarski, Princeton University

Available Media

The file system metadata service is the scalability bottleneck for many of today's workloads. Common approaches for attacking this "metadata scaling wall" include: caching inodes on clients and servers, caching parent inodes for path traversal, and dynamic caching policies that exploit workload locality. These caches reduce the number of remote procedure calls (RPCs), but their effectiveness depends on the overhead of maintaining cache coherence and the administrator's ability to select the best cache size for the given workloads. Recent work reduces the number of metadata RPCs to 1 without using a cache at all, by letting clients "decouple" subtrees from the global namespace so that they can perform metadata operations locally. Even with this technique, we show that file system metadata is still a bottleneck because namespaces for today's workloads can be very large. The size is problematic for reads because metadata needs to be transferred and materialized. The management techniques for file system metadata assume that namespaces have no structure, but we observe that this is not the case for all workloads. We propose Tintenfisch, a file system that allows users to succinctly express the structure of the metadata they intend to create. If a user can express the structure of the namespace, Tintenfisch clients and servers can (1) compact metadata, (2) modify large namespaces more quickly, and (3) generate only relevant parts of the namespace. This reduces network traffic, storage footprints, and the number of overall metadata operations needed to complete a job.
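
A namespace generator in this spirit can be tiny: instead of shipping every entry, clients exchange a compact schema and expand only the subtree they need locally. The naming scheme below is hypothetical, not Tintenfisch's actual generator format:

```python
def namespace_entries(prefix, fanout, depth):
    """Generate a balanced directory namespace from a compact schema
    (prefix, fanout, depth) instead of materializing and shipping
    every entry -- the idea behind namespace generators."""
    if depth == 0:
        return
    for i in range(fanout):
        child = f"{prefix}/d{i}"
        yield child
        yield from namespace_entries(child, fanout, depth - 1)

# A two-level namespace with fanout 3 is fully described by the
# tuple ("/ckpt", 3, 2); any client can regenerate it on demand.
```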

Breaking Apart the VFS for Managing File Systems

Kuei Sun, Matthew Lakier, Angela Demke Brown, and Ashvin Goel, University of Toronto

Available Media

File system management applications, such as data scrubbers, defragmentation tools, resizing tools, and partition editors, are essential for maintaining, optimizing, and administering storage systems. These applications require fine-grained control over file-system metadata and data, such as the ability to migrate a data block to another physical location. Such control is not available with the VFS API, and so these applications bypass the VFS and access and modify file-system metadata directly. As a result, these applications do not work across file systems, and must be developed from scratch for each file system, which involves significant engineering effort and impedes adoption of new file systems.

Our goal is to design an interface that allows these management applications to be written once and be usable for all file systems that support the interface. Our key insight is that these applications operate on common file system abstractions, such as file system objects (e.g., blocks, inodes, and directory entries), and the mappings from logical blocks of a file to their physical locations. We propose the Extended Virtual File System (eVFS) interface that provides fine-grained access to these abstractions, allowing the development of generic file system management applications. We demonstrate the benefits of our approach by building a file-system agnostic conversion tool that performs in-place conversion of a source file system to a completely different destination file system, showing that arbitrary modifications to the file system format can be handled by the interface.
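
The insight can be illustrated with a toy file system that exposes only mapping and migration primitives, against which a generic management tool is written once. The interface and method names here are assumptions for illustration, not the eVFS API:

```python
class ToyFS:
    """In-memory stand-in for one concrete file system exposing
    eVFS-style fine-grained primitives."""

    def __init__(self, n_blocks=16):
        self.extents = {}                  # (inode, logical) -> physical
        self.data = {}                     # physical block -> bytes
        self.free = set(range(n_blocks))   # free physical blocks

    def get_mapping(self, inode, logical):
        return self.extents[(inode, logical)]

    def move_block(self, src, dst):
        """Migrate a block and update all metadata referencing it."""
        assert dst in self.free
        self.data[dst] = self.data.pop(src)
        self.free.discard(dst)
        self.free.add(src)
        for key, phys in self.extents.items():
            if phys == src:
                self.extents[key] = dst

def compact(fs):
    """A file-system-agnostic management tool: written once against the
    interface above, it packs blocks toward the lowest addresses."""
    for key in sorted(fs.extents):
        phys = fs.get_mapping(*key)
        target = min(fs.free)
        if target < phys:
            fs.move_block(phys, target)
```

The same `compact` (or a scrubber, resizer, or converter) would work unchanged against any file system implementing the two primitives.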

How to Teach an Old File System Dog New Object Store Tricks

Eunji Lee and Youil Han, Chungbuk National University; Suli Yang, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau, University of Wisconsin-Madison

Available Media

Many data service platforms use local file systems as their backend storage. Although this approach offers advantages in portability, extensibility, and ease of development, it may suffer from severe performance degradation if the mapping between the services required by the data service platform and the functions provided by the local file system is not carefully managed. This paper presents in-depth analysis of performance problems in current data service platforms that use file systems as their backend storage and proposes three novel strategies that are essential to solving the current performance problems. We demonstrate the efficacy of our strategies by implementing a prototype object store in Ceph, called SwimStore (Shadowing with Immutable Metadata Store). We experimentally show that SwimStore provides high performance with little variation, as well as a large reduction in write traffic.

3:30 pm–4:00 pm

Break with Refreshments

Essex Ballroom Foyer

4:00 pm–5:30 pm

Key Value Stores and Applications

Session Chair: Yu Hua, Huazhong University of Science and Technology

mLSM: Making Authenticated Storage Faster in Ethereum

Pandian Raju, Soujanya Ponnapalli, Evan Kaminsky, Gilad Oved, and Zachary Keener, University of Texas at Austin; Vijay Chidambaram, University of Texas at Austin and VMware Research; Ittai Abraham, VMware Research

Available Media

Ethereum provides authenticated storage: each read returns a value and a proof that allows the client to verify the value returned is correct. We experimentally show that such authentication leads to high read and write amplification (64x in the worst case). We present a novel data structure, Merkelized LSM (mLSM), that significantly reduces the read and write amplification while still allowing client verification of reads. mLSM significantly increases the performance of the storage subsystem in Ethereum, thereby increasing the performance of a wide range of Ethereum applications.
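
The client-side verification that mLSM preserves rests on standard Merkle authentication, which works as in this generic sketch; mLSM's actual node layout and its coupling to LSM levels differ:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Root hash of a binary Merkle tree over the given leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to authenticate leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))   # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Client-side check that a returned value belongs to the tree."""
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root
```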

ElasticBF: Fine-grained and Elastic Bloom Filter Towards Efficient Read for LSM-tree-based KV Stores

Yueming Zhang, Yongkun Li, Fan Guo, Cheng Li, and Yinlong Xu, University of Science and Technology of China

Available Media

LSM-tree-based KV stores suffer from severe read amplification, especially large ones. Even worse, many applications may issue a large number of lookup operations for nonexistent keys, which wastes many extra I/Os. Even though Bloom filters can be used to speed up read performance, existing designs usually adopt a uniform setting for all Bloom filters and fail to support dynamic adjustment, thus resulting in a high false positive rate or large memory consumption. To address this issue, we propose ElasticBF, which constructs multiple small filters for each SSTable and dynamically loads them into memory as needed based on access frequency, realizing fine-grained and elastic adjustment at run time with the same memory usage. Experiments show that ElasticBF achieves 1.94×–2.24× the read throughput of LevelDB under different workloads while preserving the same write performance. More importantly, ElasticBF is orthogonal to existing work optimizing the structure of KV stores, so it can be used as an accelerator to further speed up their read performance.
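
The fine-grained filter idea can be sketched as follows: several independent small filters per SSTable, of which only a "loaded" prefix is consulted, with more units loaded as the table gets hot. Filter sizes and the loading threshold below are illustrative assumptions:

```python
import hashlib

class BloomFilter:
    """Small Bloom filter; the salt makes each filter unit independent."""

    def __init__(self, salt, m_bits=1024, k=2):
        self.salt, self.m, self.k = salt, m_bits, k
        self.bits = 0

    def _positions(self, key):
        for i in range(self.k):
            d = hashlib.sha256(f"{self.salt}:{i}:{key}".encode()).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def may_contain(self, key):
        return all((self.bits >> pos) & 1 for pos in self._positions(key))

class ElasticFilterGroup:
    """Per-SSTable group of small filter units; hot tables get more
    units loaded (lower false-positive rate), cold tables fewer."""

    def __init__(self, keys, units=4):
        self.filters = [BloomFilter(salt=u) for u in range(units)]
        for f in self.filters:
            for key in keys:
                f.add(key)
        self.loaded = 1      # start with one unit in memory
        self.accesses = 0

    def may_contain(self, key):
        self.accesses += 1
        if self.accesses % 64 == 0 and self.loaded < len(self.filters):
            self.loaded += 1   # table is hot: load another filter unit
        return all(f.may_contain(key) for f in self.filters[:self.loaded])
```

Each extra loaded unit multiplies down the false-positive rate, so memory follows access frequency instead of being fixed per table.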

Revisiting Concurrency in High-Performance NoSQL Databases

Yuvraj Patel, University of Wisconsin-Madison; Mohit Verma, NVIDIA; Andrea Arpaci-Dusseau and Remzi Arpaci-Dusseau, University of Wisconsin-Madison

Available Media

We measure the performance of five popular databases and show that single-node performance does not scale while hosting data on high-performance storage systems (e.g., Flash-based SSDs). We then analyze each system, unveiling techniques each system uses to increase concurrent performance; our taxonomy places said approaches into six different categories (thread architecture, batching, granularity, partitioning, scheduling and low-level efficiency) and thus points towards possible remedies that can scale the system. Finally, we introduce Xyza, a modified version of MongoDB that uses a wide range of classic and novel techniques to improve performance under concurrent, write-heavy workloads. Empirical analysis reveals that Xyza is 2x to 3x faster than MongoDB and scales well (up to 32 processing cores).

5:30 pm–6:30 pm

Poster Session and Happy Hour

Essex Ballroom South
Sponsored by NetApp

Check out the cool new ideas and the latest preliminary research on display at the Poster Session and Happy Hour. Take part in discussions with your colleagues over complimentary food and drinks.

Tuesday, July 10, 2018

8:00 am–9:00 am

Continental Breakfast

Essex Ballroom Foyer

9:00 am–10:30 am

New Media Technologies

Session Chair: Ethan Miller, University of California, Santa Cruz

Glass: A New Media for a New Era?

Patrick Anderson and Richard Black, Microsoft Research; Ausra Cerkauskaite, University of Southampton; Andromachi Chatzieleftheriou, James Clegg, Chris Dainty, Raluca Diaconu, Rokas Drevinskas, Austin Donnelly, Alexander L. Gaunt, Andreas Georgiou, and Ariel Gomez Diaz, Microsoft Research; Peter G. Kazansky, University of Southampton; David Lara, Sergey Legtchenko, Sebastian Nowozin, Aaron Ogus, Douglas Phillips, and Antony Rowstron, Microsoft Research; Masaaki Sakakura, University of Southampton; Ioan Stefanovici and Benn Thomsen, Microsoft Research; Lei Wang, University of Southampton; Hugh Williams and Mengyang Yang, Microsoft Research

Available Media

In the foreseeable future, major cloud vendors will need to store multiple zettabytes of data in their cloud storage infrastructure. Like all storage, cloud storage systems need to trade performance for cost, and they currently achieve this by using storage tiers backed by different storage media. The ultimate goal for cloud storage would be to provide zero-cost storage with low access latencies and high throughput. But all the storage media being deployed in the cloud today were created before the cloud existed, and were designed to support many usage scenarios. With cloud storage, the focus is on cost, and storage needs to be designed to be right-provisionable. The limits of what is possible with existing storage technologies are being reached, and a new clean-slate approach is needed for cloud storage. Hence, the time is right to seek out a new physical media to underpin novel storage systems designed exclusively to support the cloud. In Project Silica, Microsoft Research and the University of Southampton are exploring whether quartz glass could be the future media for mass storage in the cloud. In this paper, we describe the basis for the technology and discuss conventional assumptions about storage that we are challenging in Project Silica.

Data Management Design for Interlaced Magnetic Recording

Fenggang Wu, Baoquan Zhang, Zhichao Cao, Hao Wen, Bingzhe Li, and Jim Diehl, University of Minnesota, Twin Cities; Guohua Wang, South China University of Technology; David H.C. Du, University of Minnesota, Twin Cities

Available Media

Interlaced Magnetic Recording (IMR) is a promising technology which achieves higher data density and lower write amplification than Shingled Magnetic Recording (SMR) when used with Heat-Assisted Magnetic Recording (HAMR). In IMR, top (narrower) tracks and bottom (wider) tracks are interlaced so that each bottom track is partially overlapped with two adjacent top tracks. Top tracks can be updated without any write amplification, but updating a data block in a bottom track requires reading and rewriting of the affected data on the two neighboring top tracks if they contain valid data. We investigate efficient data management schemes for IMR in this paper. First, we design a Three-Phase data management algorithm that allocates disk space in three stages according to disk usage. We further propose two techniques, Top-Buffer and Block-Swap, which can be used in IMR to improve the performance of the Three-Phase algorithm. Top-Buffer opportunistically makes use of unallocated top track space as a buffer for updates to the bottom tracks, while Block-Swap progressively swaps hot data in bottom tracks with cold data in top tracks. Finally, we propose our Data Management design for IMR, or DM-IMR, by integrating Top-Buffer and Block-Swap with the Three-Phase scheme. Evaluations with Microsoft Research Cambridge traces show that DM-IMR can increase the throughput and reduce the write amplification for all traces when compared with the Three-Phase baseline scheme.
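
The source of IMR's write amplification, which Top-Buffer and Block-Swap try to avoid, can be seen in a toy track model. This is an illustrative model, not the paper's exact layout:

```python
class IMRTrackModel:
    """Toy interlaced-magnetic-recording model: bottom track i is
    partially overlapped by top tracks i and i+1, so updating a bottom
    track must re-write any valid data on those two neighbors."""

    def __init__(self, n_bottom):
        self.bottom = [None] * n_bottom
        self.top = [None] * (n_bottom + 1)
        self.tracks_written = 0

    def write_top(self, i, data):
        # Top tracks overlap nothing above them: no amplification.
        self.top[i] = data
        self.tracks_written += 1

    def write_bottom(self, i, data):
        # Save valid data on the two overlapping top tracks first.
        saved = {j: self.top[j] for j in (i, i + 1) if self.top[j] is not None}
        self.bottom[i] = data
        self.tracks_written += 1
        for j, d in saved.items():
            self.top[j] = d               # re-write the affected top tracks
            self.tracks_written += 1
```

Keeping hot data on top tracks (as Block-Swap does) keeps updates on the amplification-free path.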

Exploring System Challenges of Ultra-Low Latency Solid State Drives

Sungjoon Koh, Changrim Lee, Miryeong Kwon, and Myoungsoo Jung, Yonsei University

Available Media

We quantitatively characterize the performance behaviors of a real ultra-low latency (ULL) SSD using an 800GB Z-SSD prototype, and analyze system-level challenges that the current storage stack exhibits. Specifically, our comprehensive empirical evaluations and studies demonstrate i) diverse performance analyses of the ULL SSD, including a wide range of latency and queue examinations, ii) I/O interference characteristics, which are considered one of the great performance bottlenecks of modern SSDs, and iii) efficiency and challenge analyses of a polling-based I/O service (newly added to the Linux 4.4 kernel) by comparing it with conventional interrupt-based I/O services. In addition to these performance characterizations, we discuss several system implications that are required to take full advantage of ULL SSDs in the future.

10:30 am–11:00 am

Break with Refreshments

Essex Ballroom Foyer

11:00 am–12:30 pm

Solid State Performance

Session Chair: Mike Mesnier, Intel

Utilitarian Performance Isolation in Shared SSDs

Bryan S. Kim, Seoul National University

Available Media

This paper proposes a utilitarian performance isolation (UPI) scheme for shared SSD settings. UPI exploits SSD’s abundant parallelism to maximize the utility of all tenants while providing performance isolation. Our approach is in contrast to static resource partitioning techniques that bind parallelism, isolation, and capacity altogether. We demonstrate that our proposed scheme reduces the 99th percentile response time by 38.5% for a latency-critical workload, and the average response time by 16.1% for a high-throughput workload compared to the static approaches.

PCStream: Automatic Stream Allocation Using Program Contexts

Taejin Kim and Sangwook Shane Hahn, Seoul National University; Sungjin Lee, DGIST; Jooyoung Hwang and Jongyoul Lee, Samsung Electronics; Jihong Kim, Seoul National University

Available Media

We propose a fully automatic stream management technique, called PCStream, for multi-streamed SSDs. PCStream is based on our observation that data lifetimes can be reliably predicted using write program contexts. By extracting program contexts at runtime, PCStream automates the data-to-stream mapping. When data mapped to the same stream show large differences in their lifetimes, PCStream moves the long-lived data of the current stream to its substream during garbage collection. Our experimental results show that PCStream reduces garbage collection overhead as much as a highly optimized manual stream management technique while requiring no code modification.
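
The idea of keying streams on program context can be sketched in user space by hashing the call stack at each write. PCStream extracts contexts at a lower level; the helper names and stream count here are illustrative:

```python
import hashlib
import traceback

def program_context_id(n_streams=8):
    """Derive a stream ID from the write's program context, i.e. a hash
    of the current call stack: writes issued from the same code path
    tend to share data lifetimes, so they map to the same stream."""
    frames = traceback.extract_stack()[:-1]   # drop this helper's frame
    sig = ";".join(f"{f.filename}:{f.name}" for f in frames)
    digest = hashlib.sha256(sig.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_streams

def write_with_stream(dev, data):
    # Stand-in for a write command tagged with a stream ID.
    dev.append((program_context_id(), data))
```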

vStream: Virtual Stream Management for Multi-streamed SSDs

Hwanjin Yong, Samsung Electronics and Sungkyunkwan University; Kisik Jeong and Joonwon Lee, Sungkyunkwan University; Jin-Soo Kim, Seoul National University

Available Media

Recently, multi-streamed SSDs have been proposed to reduce the cost of garbage collection in NAND flash-based SSDs. In multi-streamed SSDs, application developers can control data placement within the SSD by specifying a stream ID along with the write command. However, commercial SSDs support only a small number of streams due to limited hardware resources in the device. This makes it difficult to fully realize the benefit of multi-streamed SSDs.

In this paper, we propose a new concept of virtual streams (vStreams) for the multi-streamed SSDs. vStreams allow application developers to manage a sufficient number of streams regardless of the physical streams supported by the device. We also present novel mechanisms to monitor the lifetime of each stream with a negligible memory overhead and to map one or more vStreams into a physical stream at run time. Our evaluations with RocksDB show that the proposed vStream SSD improves the throughput by 70% compared to the legacy SSD with no stream support.
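
The vStream-to-physical-stream mapping can be sketched as grouping virtual streams with similar observed lifetimes. The equal-sized grouping below is an illustrative stand-in for the paper's runtime mechanism:

```python
def map_vstreams(lifetimes, n_physical):
    """Map many virtual streams onto a few physical streams by grouping
    vStreams with similar observed data lifetimes, so each physical
    stream holds data that dies at roughly the same time."""
    order = sorted(lifetimes, key=lifetimes.get)   # vStreams by lifetime
    group = -(-len(order) // n_physical)           # ceiling division
    return {vs: i // group for i, vs in enumerate(order)}
```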

12:30 pm–2:00 pm

Luncheon for Workshop Attendees

Essex Ballroom South

2:00 pm–4:00 pm

Consistency and Reliability

Session Chair: Fred Douglis, Perspecta Labs

DREAM: Data Representation Aware of Damage to Extend the Lifetime of MLC NAND Flash Memory

Ting Ye, Wuhan National Laboratory for Optoelectronics, Huazhong University of Sci. and Tech., China; Shenggang Wan, School of Computer Science and Technology, Huazhong University of Sci. and Tech., China; Xubin He, Department of Computer and Information Sciences, Temple University, USA; Weijun Xiao, Virginia Commonwealth University, USA; Changsheng Xie, Wuhan National Laboratory for Optoelectronics, Huazhong University of Sci. and Tech., China

Available Media

MLC NAND flash memory uses the voltages of memory cells to represent bits. High voltages cause much more damage to the cells than low voltages. Free space that need not store bits can be leveraged to reduce the use of those high voltages and thus extend the lifetime of the MLC memory. However, limited by the conventional data representation rule, which represents bits by the voltage of one single cell, the high voltages are still used with high probability. To fully explore the potential of free space for reducing the use of high voltages, we propose a novel damage-aware data representation, named DREAM. DREAM uses the voltage combinations of multiple cells, instead of the voltage of one single cell, to represent bits. This makes it possible to represent the same bits by flexibly replacing the high voltages in some cells with low voltages in other cells when free space is available. Hence, the high voltages that cause more damage are used less, and the lifetime of the MLC memory is extended. Theoretical analysis results demonstrate the effectiveness and efficiency of DREAM.

A Case for Biased Programming in Flash

Eitan Yaakobi, Gala Yadgar, Nachum Bundak, and Lior Gilon, Computer Science Department, Technion

Available Media

The voltage level of flash cells is directly correlated with the wear they experience. Previous studies showed that increasing the ratio of ones to zeroes within a flash page can reduce the amount of bit errors in this page as well as the long-term wear of its cells. Biased programming ensures more ones are programmed than zeroes by employing specialized codes which, in turn, incur non-negligible storage overhead.

We propose a novel approach that utilizes the page spare area for biased programming, introducing a new tradeoff: while using the spare area for a stronger ECC can correct more errors, biased programming can reduce the number of those errors. We show that as long as the bit error rate is below a pre-determined threshold, biased programming can be applied without compromising the data's durability. When the threshold is reached, we revert to normal programming, but we can use the chip for up to 24% additional writes thanks to its reduced wear. We demonstrate the applicability of our approach on real MLC chips. We also perform an initial evaluation on a TLC chip, which exposes the challenges in applying any type of biased programming to TLC flash.
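
One simple way to bias a chunk toward ones is flag-based inversion, shown below; the paper's codes and use of the spare area are more involved than this sketch:

```python
def bias_encode(bits):
    """Encode a chunk so it contains at least as many ones as zeroes:
    invert the chunk when zeroes dominate and record a one-bit flag
    (which would live in the page spare area)."""
    ones = sum(bits)
    if ones * 2 < len(bits):
        return [1 - b for b in bits], 1   # inverted, flag set
    return list(bits), 0                  # stored as-is

def bias_decode(bits, flag):
    """Recover the original chunk from its stored form and flag."""
    return [1 - b for b in bits] if flag else list(bits)
```

Since higher one-ratios correspond to lower programmed voltages on the studied cells, biasing every chunk this way bounds the worst-case share of zeroes at one half.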

When Address Remapping Techniques Meet Consistency Guarantee Mechanisms

Dong Hyun Kang and Gihwan Oh, Sungkyunkwan University; Dongki Kim and In Hwan Doh, Samsung Electronics; Changwoo Min, Virginia Tech; Sang-Won Lee and Young Ik Eom, Sungkyunkwan University

Available Media

Guaranteeing crash consistency is still one of the most expensive operations in traditional file systems because it causes redundant writes in a journaling file system, excessive reads/writes in a log-structured file system, and the tree-wandering problem in a copy-on-write file system. In this paper, we argue that such overhead can be significantly reduced by leveraging address remapping, a technique that is already essential in many flash SSD devices. We thoroughly explore the potential of address remapping to ease the cost of guaranteeing consistency in two traditional file systems (i.e., Ext4 and F2FS) and one database system (i.e., MySQL). In particular, we introduce address remapping-based techniques to guarantee consistency for file system journaling (i.e., SOJ and SDJ), segment cleaning (i.e., SSC), and application-level data journaling (i.e., SADJ). To evaluate the proposed techniques, we developed a PCIe SSD prototype that exposes the address remapping capability to the upper layer as a share command. Our experimental results using the PCIe SSD with the share command confirm that address remapping, though simple, is very effective in reducing the read/write amplification caused by the conventional ways of guaranteeing consistency in existing file systems and database applications.
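
The benefit of a share-style remap for journaling can be seen in a toy FTL model: after a journal commit, the home address is remapped to the journal block's physical page instead of writing the data a second time. The command name and semantics are simplified from the paper's prototype:

```python
class RemapSSDModel:
    """Toy FTL with a logical-to-physical map and a 'share'-style remap
    command that aliases one logical address to another's physical page."""

    def __init__(self):
        self.l2p = {}          # logical address -> physical page
        self.flash = {}        # physical page -> data
        self.next_page = 0
        self.page_writes = 0

    def write(self, lba, data):
        self.flash[self.next_page] = data
        self.l2p[lba] = self.next_page
        self.next_page += 1
        self.page_writes += 1

    def share(self, dst_lba, src_lba):
        # Remap instead of copy: no flash write happens.
        self.l2p[dst_lba] = self.l2p[src_lba]

    def read(self, lba):
        return self.flash[self.l2p[lba]]
```

Journaling a block then costs one flash write plus a mapping update, rather than the two writes (journal, then home location) of conventional checkpointing.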

Reducing NVM Writes with Optimized Shadow Paging

Yuanjiang Ni, UC Santa Cruz; Jishen Zhao, UC San Diego; Daniel Bittman, UC Santa Cruz; Ethan Miller, UC Santa Cruz and Pure Storage

Available Media

Byte-addressable non-volatile memory (BNVM) technologies are closing the performance gap between traditional storage and memory. However, the integrity of persistent data structures after an unclean shutdown remains a major concern. Logging and shadow paging are commonly used to ensure consistency in BNVM systems, but both approaches can impose significant performance and energy overhead by writing extra data into BNVM. Our approach leverages the indirection of virtual memory to avoid the need for logging actual data and uses a novel cache line-level mapping mechanism to eliminate the need to write unnecessary data. Thus, our approach is able to significantly reduce the overhead of committing data to BNVM. Our preliminary evaluation results show that using OSP for transactions reduces the overhead necessary to persist data by up to 1.96× compared to undo logging. Moreover, our approach can be used to provide fast, low-overhead persistence for hardware transactional memory, further facilitating the acceptance of BNVM into computing systems.
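
The cache line-level mapping idea can be modeled as a per-line version flip at commit, so clean lines are never rewritten. This is a simplified software model; the real design works through virtual-memory indirection, and reads within an open transaction would be redirected rather than returning the old version as here:

```python
LINE = 64  # bytes per cache line (typical)

class LineShadowPage:
    """Toy cache line-granular shadow paging: each line has two
    versions and a mapping bit; transactional writes go to the
    inactive version, and commit atomically flips the bits, so
    unmodified lines incur no write at all."""

    def __init__(self, n_lines=8):
        self.versions = [[b"\x00" * LINE, None] for _ in range(n_lines)]
        self.active = [0] * n_lines    # per-line mapping bit
        self.pending = set()
        self.nvm_line_writes = 0

    def tx_write(self, line, data):
        inactive = 1 - self.active[line]
        self.versions[line][inactive] = data   # write the shadow version
        self.pending.add(line)
        self.nvm_line_writes += 1

    def commit(self):
        # Flip mapping bits for dirty lines only: no data is copied,
        # and clean lines are untouched.
        for line in self.pending:
            self.active[line] = 1 - self.active[line]
        self.pending.clear()

    def read(self, line):
        return self.versions[line][self.active[line]]
```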

4:00 pm–4:30 pm

Break with Refreshments

Essex Ballroom Foyer

4:30 pm–5:30 pm

Breakout Presentations

Session Chairs: Ashvin Goel, University of Toronto; Nisha Talagala, ParallelM

Presented by:

  • Carlos Maltzahn, UC Santa Cruz
  • Adam Manzanares, Western Digital
  • Swaminathan Sundararaman, ParallelM

Available Media