Managing Array of SSDs When the Storage Device Is No Longer the Performance Bottleneck


Byungseok Kim, Jaeho Kim, and Sam H. Noh, UNIST


With the advent of high-performing NVMe SSDs, the system performance bottleneck is shifting away from the traditional storage device. In particular, the software layers of the I/O stack have been recognized as a heavy burden on overall I/O performance, and efforts to alleviate this burden have been pursued. More recently, the spotlight has turned to the CPU. With computing capacity, as well as the means to move data to the processor, now becoming the limiting factor, recent studies have suggested pushing processing power to where the data resides. With devices such as 3D XPoint on the horizon, this trend is expected to intensify.

In this paper, we focus on another component affected by these changes. In particular, it has been observed that the bandwidth of the network connecting clients to storage servers is now being surpassed by storage bandwidth. Figure 1 shows the changes that are happening. We observe that advances in the storage interface are allowing storage bandwidth to surpass that of the network. As shown in Table 1, recent developments in SSDs have resulted in individual SSDs providing read and write bandwidth in the 5GB/s and 3GB/s range, respectively, which surpasses or is close to that of the 10/25/40GbE (Gigabit Ethernet) links that make up the majority of networks deployed today.

Based on this observation, in this paper, we revisit the organization of disk arrays. Specifically, we target write performance in all-flash arrays, which we interchangeably refer to as SSD arrays and which are emerging as a solution for high-end storage. As shown in Table 2, most major storage vendors offer such a solution, and these products employ large numbers of SSDs to achieve large capacity and high performance. Figure 2 shows how typical all-flash arrays would be connected to the network and the host. Our goal is to provide high, sustained, and consistent write performance in such a storage environment.

@inproceedings {203352,
author = {Byungseok Kim and Jaeho Kim and Sam H. Noh},
title = {Managing Array of {SSDs} When the Storage Device Is No Longer the Performance Bottleneck},
booktitle = {9th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 17)},
year = {2017},
address = {Santa Clara, CA},
url = {},
publisher = {USENIX Association},
month = jul
}