Fisc: A Large-scale Cloud-native-oriented File System


Qiang Li, Alibaba Group; Lulu Chen, Fudan University and Alibaba Group; Xiaoliang Wang, Nanjing University; Shuo Huang, Alibaba Group; Qiao Xiang, Xiamen University; Yuanyuan Dong, Wenhui Yao, Minfei Huang, Puyuan Yang, Shanyang Liu, Zhaosheng Zhu, Huayong Wang, Haonan Qiu, Derui Liu, Shaozong Liu, Yujie Zhou, Yaohui Wu, Zhiwu Wu, Shang Gao, Chao Han, Zicheng Luo, Yuchao Shao, Gexiao Tian, Zhongjie Wu, Zheng Cao, and Jinbo Wu, Alibaba Group; Jiwu Shu, Xiamen University; Jie Wu, Fudan University; Jiesheng Wu, Alibaba Group


The wide adoption of cloud-native computing shifts the boundary between cloud users and CSPs (Cloud Service Providers) from VM-based infrastructure to container-based applications. However, traditional file systems face challenges in this setting. First, the clients of traditional file systems (e.g., Tectonic, Colossus, HDFS) are sophisticated and compete for the scarce resources of application containers. Second, it is challenging for CSPs to pass I/O from containers to storage clusters while guaranteeing its security, availability, and performance.

To provide file system service for cloud-native applications, we design Fisc, a cloud-native-oriented file system. Fisc introduces four key designs: 1) a lightweight file system client in the container, 2) a DPU-based virtio-Fisc device that offloads client functions to hardware, 3) a storage-aware mechanism that routes I/O to storage nodes to improve availability and enable local reads, and 4) a full-path QoS mechanism that guarantees the QoS of hybrid-deployed applications. Fisc has been deployed in production for over three years and now serves cloud-native applications running on over 3 million cores. Results show that the Fisc client consumes only 80% of the CPU resources of a traditional file system client. In the production environment, the latency of online search tasks stays below 500 $\mu$s when accessing remote storage clusters.


@inproceedings {285790,
author = {Qiang Li and Lulu Chen and Xiaoliang Wang and Shuo Huang and Qiao Xiang and Yuanyuan Dong and Wenhui Yao and Minfei Huang and Puyuan Yang and Shanyang Liu and Zhaosheng Zhu and Huayong Wang and Haonan Qiu and Derui Liu and Shaozong Liu and Yujie Zhou and Yaohui Wu and Zhiwu Wu and Shang Gao and Chao Han and Zicheng Luo and Yuchao Shao and Gexiao Tian and Zhongjie Wu and Zheng Cao and Jinbo Wu and Jiwu Shu and Jie Wu and Jiesheng Wu},
title = {Fisc: A Large-scale Cloud-native-oriented File System},
booktitle = {21st USENIX Conference on File and Storage Technologies (FAST 23)},
year = {2023},
isbn = {978-1-939133-32-8},
address = {Santa Clara, CA},
pages = {231--246},
publisher = {USENIX Association},
month = feb,
}
