Chrisma Pakha, University of Chicago; Aakanksha Chowdhery, Google; Junchen Jiang, University of Chicago/Microsoft
Driven by the ubiquity of camera-equipped devices and the prohibitive cost of modern vision techniques, we see a growing need for a custom video streaming protocol that streams videos from cameras to cloud servers for neural-network-based video analytics. In the past decade, numerous efforts have optimized video streaming protocols to provide better quality of experience to users. In this paper, we call on this community to similarly develop custom streaming protocols that improve the inference accuracy of vision analytics powered by deep neural networks. We highlight new opportunities to substantially improve the tradeoff between bandwidth usage and inference accuracy. The key insight is that existing streaming protocols are essentially client (camera)-driven; by instead letting the analytics server decide what and when to stream from the camera, new protocols can directly optimize inference accuracy while minimizing bandwidth usage. Preliminary evaluation shows that a simple protocol can reduce bandwidth consumption by 4-23x compared to traditional streaming protocols and other distributed video analytics pipelines, while maintaining at least 95% inference accuracy.
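One way to picture the server-driven idea is a pull loop in which the analytics server first requests a cheap, low-quality version of each frame, runs inference on it, and escalates to a high-quality re-fetch only when its confidence falls below a threshold. The sketch below is illustrative only, not the paper's actual protocol: the function names, byte costs, and the stub confidence model are all hypothetical.

```python
# Illustrative sketch of a server-driven streaming loop (hypothetical,
# not the protocol from the paper): the analytics server, rather than
# the camera, decides what to fetch and at what quality.

LOW, HIGH = "low", "high"

def camera_fetch(frame_id, quality):
    """Stand-in for the camera side: returns a frame with a byte cost."""
    cost = 10 if quality == LOW else 100  # hypothetical relative sizes
    return {"id": frame_id, "quality": quality, "bytes": cost}

def infer(frame):
    """Stub DNN: low-quality frames yield low confidence on 'hard' frames."""
    if frame["quality"] == HIGH:
        return 0.99
    return 0.97 if frame["id"] % 4 else 0.60  # every 4th frame is "hard"

def server_driven_stream(num_frames, conf_threshold=0.95):
    """Pull low-quality frames first; re-fetch in high quality only
    when the low-quality inference is not confident enough."""
    total_bytes = 0
    results = []
    for fid in range(num_frames):
        frame = camera_fetch(fid, LOW)      # server pulls cheap frame first
        total_bytes += frame["bytes"]
        conf = infer(frame)
        if conf < conf_threshold:           # escalate only when needed
            frame = camera_fetch(fid, HIGH)
            total_bytes += frame["bytes"]
            conf = infer(frame)
        results.append((fid, conf))
    return total_bytes, results
```

Under these toy assumptions, streaming 8 frames this way costs 280 "bytes" instead of the 800 an always-high-quality stream would use, while every frame still ends above the confidence threshold; the real protocol in the paper targets the same bandwidth/accuracy tradeoff with actual video and DNN inference.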
@inproceedings{pakha,
author = {Chrisma Pakha and Aakanksha Chowdhery and Junchen Jiang},
title = {Reinventing Video Streaming for Distributed Vision Analytics},
booktitle = {10th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 18)},
year = {2018},
address = {Boston, MA},
url = {https://www.usenix.org/conference/hotcloud18/presentation/pakha},
publisher = {USENIX Association},
month = jul
}