Parallelization Primitives for Dynamic Sparse Computations
Tsung-Han Lin, Stephen J. Tarsa, and H.T. Kung, Harvard University
We characterize a general class of algorithms common in machine learning, scientific computing, and signal processing, whose computational dependencies are both sparse and dynamically defined throughout execution. Existing parallel computing runtimes, like MapReduce and GraphLab, are a poor fit for this class because they assume statically defined dependencies for resource allocation and scheduling decisions. As a result, changing load characteristics and straggling compute units degrade performance significantly. However, we show that the sparsity of computational dependencies and these algorithms’ natural error tolerance can be exploited to implement a flexible execution model with large efficiency gains, using two simple primitives: selective push-pull and statistical barriers. With reconstruction for compressive time-lapse MRI as a motivating application, we deploy a large Orthogonal Matching Pursuit (OMP) computation on an Amazon EC2 cluster to demonstrate a 19x speedup over current static execution models.
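To give a rough sense of one of the two primitives, the Python sketch below shows a statistical barrier: a synchronization point that releases as soon as a configurable fraction of workers has arrived, so stragglers cannot stall an iteration of an error-tolerant algorithm. The class name, the quorum_fraction parameter, and the shared-memory threading setting are illustrative assumptions only; the paper's primitives target a distributed EC2 deployment, and this is not the authors' implementation.

import threading

class StatisticalBarrier:
    """Barrier that releases once a fraction (quorum) of workers has arrived.

    Illustrative sketch: names and the threading-based, shared-memory setting
    are assumptions, not the paper's distributed implementation.
    """

    def __init__(self, num_workers, quorum_fraction=0.9):
        # Release as soon as quorum_fraction of the workers have arrived.
        self.quorum = max(1, int(quorum_fraction * num_workers))
        self.arrived = 0
        self.released = False
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            if self.released:
                return  # straggler: barrier already released, pass through
            self.arrived += 1
            if self.arrived >= self.quorum:
                # Quorum reached: release everyone currently waiting.
                self.released = True
                self.cond.notify_all()
            else:
                # Block until the quorum is reached.
                while not self.released:
                    self.cond.wait()

In this sketch, each worker thread would call wait() at the end of an iteration; once, say, 90% of workers have arrived, all waiting workers proceed, and any later stragglers pass straight through rather than forcing a full synchronization.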
@inproceedings {lin,
author = {Tsung-Han Lin and Stephen J. Tarsa and H.T. Kung},
title = {Parallelization Primitives for Dynamic Sparse Computations},
booktitle = {5th USENIX Workshop on Hot Topics in Parallelism (HotPar 13)},
year = {2013},
address = {San Jose, CA},
url = {https://www.usenix.org/conference/hotpar13/workshop-program/presentation/lin},
publisher = {USENIX Association},
month = jun
}