
Bring Your Own Codegen to Deep Learning Compiler (2105.03215v1)

Published 3 May 2021 in cs.LG, cs.PF, and cs.PL

Abstract: Deep neural networks (DNNs) have been ubiquitously applied in many applications, and accelerators have emerged as an enabler for the fast and efficient inference these applications require. However, to achieve high model coverage with high performance, each accelerator vendor has to develop a full compiler stack to ingest, optimize, and execute the DNNs. This poses significant challenges in the development and maintenance of the software stack. In addition, vendors have to continuously update their hardware and/or software to cope with the rapid evolution of DNN model architectures and operators. To address these issues, this paper proposes an open-source framework that enables users to concentrate solely on the development of their proprietary code generation tools, reusing as many components as possible from existing deep learning compilers. Our framework provides users with flexible and easy-to-use interfaces to partition their models into segments that can be executed on "the best" processors, taking advantage of the powerful computation capability of accelerators. Our case study shows that our framework has been deployed in multiple commercial vendors' compiler stacks with only a few thousand lines of code.
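
The framework described in the abstract is the Bring Your Own Codegen (BYOC) flow in Apache TVM. The sketch below illustrates the partitioning interface the abstract refers to: a vendor declares which operators its codegen supports, and the compiler annotates, merges, and splits the graph so that supported segments are offloaded while the rest falls back to the default backend. It is a minimal illustration, assuming a TVM installation with Relay; the target name "myaccel" is a hypothetical external codegen, not a backend shipped with TVM.

```python
import tvm
from tvm import relay

# Declare which operators the hypothetical external codegen "myaccel"
# supports. Unannotated operators fall back to TVM's own code generation.
@tvm.ir.register_op_attr("nn.conv2d", "target.myaccel")
def _conv2d_supported(expr):
    return True

# A small Relay module: conv2d followed by ReLU.
x = relay.var("x", shape=(1, 3, 224, 224))
w = relay.var("w", shape=(16, 3, 3, 3))
y = relay.nn.relu(relay.nn.conv2d(x, w))
mod = tvm.IRModule.from_expr(relay.Function([x, w], y))

# Partition the graph: annotate supported operators, merge adjacent
# supported regions, and split them into functions handed to the
# external codegen at build time.
seq = tvm.transform.Sequential([
    relay.transform.AnnotateTarget("myaccel"),
    relay.transform.MergeCompilerRegions(),
    relay.transform.PartitionGraph(),
])
mod = seq(mod)
print(mod)  # offloaded functions carry the attribute Compiler="myaccel"
```

In this flow, a vendor's proprietary compiler only has to translate the partitioned functions it claimed; the rest of the stack (frontends, graph optimization, runtime) is reused, which is how the paper's case studies stay within a few thousand lines of code.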

Authors (9)
  1. Zhi Chen (235 papers)
  2. Cody Hao Yu (13 papers)
  3. Trevor Morris (1 paper)
  4. Jorn Tuyls (1 paper)
  5. Yi-Hsiang Lai (2 papers)
  6. Jared Roesch (8 papers)
  7. Elliott Delaye (2 papers)
  8. Vin Sharma (6 papers)
  9. Yida Wang (62 papers)
Citations (14)
