Sparse Tensor Algebra as a Parallel Programming Model (1512.00066v1)

Published 30 Nov 2015 in cs.MS

Abstract: Dense and sparse tensors allow the representation of most bulk data structures in computational science applications. We show that sparse tensor algebra can also be used to express many of the transformations on these datasets, especially those which are parallelizable. Tensor computations are a natural generalization of matrix and graph computations. We extend the usual basic operations of tensor summation and contraction to arbitrary functions, as well as further operations such as reductions and mapping. The expression of these transformations in a high-level sparse linear algebra domain-specific language allows our framework to understand their properties at runtime to select the preferred communication-avoiding algorithm. To demonstrate the efficacy of our approach, we show how key graph algorithms as well as common numerical kernels can be succinctly expressed using our interface, and we provide performance results of a general library implementation.
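The central idea of the abstract, replacing the scalar (+, ×) of a tensor contraction with arbitrary user-defined functions, can be illustrated with a small self-contained sketch. The code below is not the paper's interface (the library described, Cyclops Tensor Framework, exposes a C++ API); it is a hypothetical `contract` helper in plain Python showing how swapping (+, ×) for (min, +) turns an ordinary sparse matrix-vector product into a Bellman-Ford relaxation sweep for single-source shortest paths.

```python
# Minimal sketch, assuming a toy sparse edge-list representation; the
# names `contract`, `edges`, and `vec` are illustrative, not from the paper.
import math

def contract(edges, vec, add=min, mul=lambda a, b: a + b, identity=math.inf):
    """Generalized sparse matrix-vector contraction
        out[v] = add over u of mul(w(u, v), vec[u]),
    with user-supplied add/mul. With (add, mul) = (min, +) this is one
    Bellman-Ford relaxation sweep; with (+, *) and identity 0 it is
    ordinary sparse matrix-vector multiplication."""
    out = {}
    for (u, v), w in edges.items():
        out[v] = add(out.get(v, identity), mul(w, vec.get(u, identity)))
    return out

# Single-source shortest paths from "s" by iterating the (min, +) sweep
# |V| - 1 times and folding each result back into the distance vector.
edges = {("s", "a"): 1.0, ("a", "b"): 2.0, ("s", "b"): 5.0}
d = {"s": 0.0}
num_vertices = len({u for e in edges for u in e})
for _ in range(num_vertices - 1):
    y = contract(edges, d)
    for v, val in y.items():
        d[v] = min(d.get(v, math.inf), val)
print(d)  # {'s': 0.0, 'a': 1.0, 'b': 3.0}
```

Because the sweep is a single bulk contraction rather than a loop over individual edge relaxations, it is exactly the kind of operation a runtime like the one described in the abstract can parallelize and schedule with communication-avoiding algorithms.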

Citations (47)
