ALCOP: Automatic Load-Compute Pipelining in Deep Learning Compiler for AI-GPUs (2210.16691v2)

Published 29 Oct 2022 in cs.DC

Abstract: Pipelining between data loading and computation is a critical tensor program optimization for GPUs. To unleash the high performance of the latest GPUs, we must perform a synergistic optimization of multi-stage pipelining across the multi-level buffer hierarchy of the GPU. Existing frameworks rely on hand-written libraries such as cuBLAS to perform pipelining optimization, which is not extensible to new operators and not composable with prior tensor compiler optimizations. This paper presents ALCOP, the first framework that is compiler-native and fully supports multi-stage, multi-level pipelining. ALCOP overcomes three critical obstacles in generating code for pipelining: detection of pipelining-applicable buffers, program transformation for multi-level, multi-stage pipelining, and efficient schedule parameter search incorporating static analysis. Experiments show that ALCOP generates programs with a 1.23x average speedup (up to 1.73x) over vanilla TVM. On end-to-end models, ALCOP improves upon TVM by up to 1.18x and XLA by up to 1.64x. Moreover, our performance model significantly improves the efficiency of the schedule tuning process and can find schedules with 99% of the performance given by exhaustive search while requiring 40x fewer trials.
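To make the optimization concrete, below is a minimal CUDA sketch of the load-compute pipelining pattern the abstract describes: a two-stage double buffer in shared memory, where the load of tile t+1 overlaps the computation on tile t. This is an illustrative sketch, not ALCOP's generated code; the kernel name, TILE size, and toy neighbor-sum computation are assumptions, and a real multi-stage pipeline on recent GPUs would also use asynchronous copies (cp.async) and pipeline across both the shared-memory and register levels.

```cuda
// Hand-written sketch of two-stage load-compute pipelining (not ALCOP output).
#include <cstdio>
#include <cuda_runtime.h>

constexpr int TILE = 256;  // illustrative tile size

__global__ void pipelined_stencil(const float* __restrict__ in,
                                  float* __restrict__ out, int ntiles) {
    __shared__ float buf[2][TILE];  // two stages: one being loaded, one computed
    int tid = threadIdx.x;

    buf[0][tid] = in[tid];          // prologue: preload tile 0
    __syncthreads();

    for (int t = 0; t < ntiles; ++t) {
        int cur = t & 1, nxt = cur ^ 1;
        if (t + 1 < ntiles)         // issue the load for the NEXT tile...
            buf[nxt][tid] = in[(t + 1) * TILE + tid];
        // ...while computing on the CURRENT tile (toy neighbor sum).
        out[t * TILE + tid] = buf[cur][tid] + buf[cur][(tid + 1) % TILE];
        __syncthreads();            // publish buf[nxt] before it becomes cur
    }
}

int main() {
    const int ntiles = 64, n = ntiles * TILE;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    pipelined_stencil<<<1, TILE>>>(in, out, ntiles);  // one block walks all tiles
    cudaDeviceSynchronize();
    printf("out[0] = %f\n", out[0]);                  // expect 2.0
    cudaFree(in); cudaFree(out);
    return 0;
}
```

ALCOP's contribution is automating this transformation in the compiler: detecting which buffers admit pipelining, rewriting the program for an arbitrary number of stages across buffer levels, and using static analysis to prune the schedule search, rather than relying on hand-tuned kernels like the one above.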

Authors (7)
  1. Guyue Huang (11 papers)
  2. Yang Bai (205 papers)
  3. Liu Liu (190 papers)
  4. Yuke Wang (23 papers)
  5. Bei Yu (113 papers)
  6. Yufei Ding (81 papers)
  7. Yuan Xie (188 papers)
Citations (9)
