Sparse Tensor Algebra as a Parallel Programming Model

30 Nov 2015 · Edgar Solomonik, Torsten Hoefler

Dense and sparse tensors allow the representation of most bulk data structures in computational science applications. We show that sparse tensor algebra can also be used to express many of the transformations on these datasets, especially those that are parallelizable. Tensor computations are a natural generalization of matrix and graph computations. We extend the usual basic operations of tensor summation and contraction to arbitrary functions, and add further operations such as reductions and mapping. Expressing these transformations in a high-level sparse linear algebra domain-specific language allows our framework to reason about their properties at runtime and select the preferred communication-avoiding algorithm. To demonstrate the efficacy of our approach, we show how key graph algorithms as well as common numerical kernels can be expressed succinctly using our interface, and we provide performance results for a general library implementation.
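
The claim that graph algorithms reduce to sparse tensor operations with user-defined element-wise functions can be made concrete with a small example. The sketch below is a minimal, library-agnostic Python illustration, not the authors' interface; the helper `minplus_spmv` is hypothetical. It shows that replacing the usual (+, ×) scalar operations with the tropical (min, +) semiring turns a sparse matrix-vector product into one Bellman-Ford relaxation step for single-source shortest paths.

```python
# Illustration only: a sparse "matrix-vector product" whose scalar operations
# are swapped for (min, +), yielding a shortest-path relaxation step.
import numpy as np
from scipy.sparse import csr_matrix

def minplus_spmv(A: csr_matrix, d: np.ndarray) -> np.ndarray:
    """Compute d'[i] = min(d[i], min_j (A[i, j] + d[j])) over stored edges only."""
    out = d.copy()
    for i in range(A.shape[0]):
        start, end = A.indptr[i], A.indptr[i + 1]
        cols = A.indices[start:end]          # in-neighbors j of vertex i
        wts = A.data[start:end]              # edge weights A[i, j] = w(j -> i)
        if len(cols):
            out[i] = min(out[i], np.min(wts + d[cols]))
    return out

# Example: 4-vertex weighted digraph with edges 0->1 (2), 0->2 (5), 1->2 (1), 2->3 (2)
rows = np.array([1, 2, 2, 3])
cols = np.array([0, 0, 1, 2])
wts  = np.array([2.0, 5.0, 1.0, 2.0])
A = csr_matrix((wts, (rows, cols)), shape=(4, 4))

d = np.full(4, np.inf)
d[0] = 0.0                                   # source vertex 0
for _ in range(3):                           # n - 1 relaxation rounds
    d = minplus_spmv(A, d)
print(d)                                     # -> [0. 2. 3. 5.]
```

In a distributed sparse tensor algebra framework, the same computation would be written as a single contraction with a user-supplied semiring, leaving data distribution and communication-avoiding algorithm selection to the runtime.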
