High-Performance Deep Learning via a Single Building Block (1906.06440v2)

Published 15 Jun 2019 in cs.LG, cs.DC, and stat.ML

Abstract: Deep learning (DL) is one of the most prominent branches of machine learning. Due to the immense computational cost of DL workloads, industry and academia have developed DL libraries with highly specialized kernels for each workload/architecture, leading to numerous, complex code bases that strive for performance yet are hard to maintain and do not generalize. In this work, we introduce the batch-reduce GEMM kernel and show how the most popular DL algorithms can be formulated with this kernel as the basic building block. Consequently, DL library development degenerates to mere (potentially automatic) tuning of loops around this sole optimized kernel. By exploiting our new kernel we implement Recurrent Neural Network, Convolutional Neural Network, and Multilayer Perceptron training and inference primitives in just 3K lines of high-level code. Our primitives outperform vendor-optimized libraries on multi-node CPU clusters, and we also provide proof-of-concept CNN kernels targeting GPUs. Finally, we demonstrate that the batch-reduce GEMM kernel within a tensor compiler yields high-performance CNN primitives, further amplifying the viability of our approach.

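The central idea described in the abstract is that a single small kernel, a batch-reduce GEMM that accumulates a sum of small matrix products into one output block, can serve as the building block for MLP, CNN, and RNN primitives, with only the surrounding loops left to (potentially automatic) tuning. The sketch below is a minimal NumPy illustration of that idea, assuming a hypothetical `batch_reduce_gemm` signature and illustrative block sizes; it is not the paper's actual kernel interface or implementation.

```python
import numpy as np

def batch_reduce_gemm(a_blocks, b_blocks, c_block):
    """Accumulate C += sum_i A_i @ B_i over a batch of small matrix blocks.

    Hypothetical stand-in for the batch-reduce GEMM kernel described in the
    abstract; the real kernel is a highly optimized, architecture-specific routine.
    """
    for a, b in zip(a_blocks, b_blocks):
        c_block += a @ b  # in-place accumulation into the output block
    return c_block

def blocked_mlp_forward(x, w, bm=64, bn=64, bk=64):
    """Fully connected (MLP) forward pass y = x @ w, written purely as loops
    over output blocks with batch-reduce GEMM as the only compute kernel.
    Block sizes bm/bn/bk are illustrative tuning parameters."""
    m, k = x.shape
    k2, n = w.shape
    assert k == k2 and m % bm == 0 and n % bn == 0 and k % bk == 0
    y = np.zeros((m, n), dtype=x.dtype)
    for i in range(0, m, bm):          # output row blocks
        for j in range(0, n, bn):      # output column blocks
            a_blocks = [x[i:i + bm, p:p + bk] for p in range(0, k, bk)]
            b_blocks = [w[p:p + bk, j:j + bn] for p in range(0, k, bk)]
            batch_reduce_gemm(a_blocks, b_blocks, y[i:i + bm, j:j + bn])
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((128, 256))
    w = rng.standard_normal((256, 192))
    assert np.allclose(blocked_mlp_forward(x, w), x @ w)
```

In the paper itself the inner kernel is an optimized routine rather than a Python loop, and, per the abstract, the same pattern with different outer loops is what yields the convolution and RNN training and inference primitives.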
Authors (9)
  1. Evangelos Georganas (18 papers)
  2. Kunal Banerjee (12 papers)
  3. Dhiraj Kalamkar (15 papers)
  4. Sasikanth Avancha (20 papers)
  5. Anand Venkat (5 papers)
  6. Michael Anderson (22 papers)
  7. Greg Henry (7 papers)
  8. Hans Pabst (10 papers)
  9. Alexander Heinecke (21 papers)
Citations (12)
