Relay: A High-Level Compiler for Deep Learning (1904.08368v2)

Published 17 Apr 2019 in cs.LG, cs.PL, and stat.ML

Abstract: Frameworks for writing, compiling, and optimizing deep learning (DL) models have recently enabled progress in areas like computer vision and natural language processing. Extending these frameworks to accommodate the rapidly diversifying landscape of DL models and hardware platforms presents challenging tradeoffs between expressivity, composability, and portability. We present Relay, a new compiler framework for DL. Relay's functional, statically typed intermediate representation (IR) unifies and generalizes existing DL IRs to express state-of-the-art models. The introduction of Relay's expressive IR requires careful design of domain-specific optimizations, addressed via Relay's extension mechanisms. Using these extension mechanisms, Relay supports a unified compiler that can target a variety of hardware platforms. Our evaluation demonstrates Relay's competitive performance for a broad class of models and devices (CPUs, GPUs, and emerging accelerators). Relay's design demonstrates how a unified IR can provide expressivity, composability, and portability without compromising performance.
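
The reference implementation of Relay is distributed as part of Apache TVM. As a rough illustration of the functional, statically typed IR described in the abstract, the sketch below (an illustrative example, not code from the paper; it assumes a recent TVM install and its tvm.relay Python bindings) builds a small dense layer, runs Relay's type inference, and compiles it for a CPU target.

    import tvm
    from tvm import relay

    # Relay expressions are statically typed: each variable carries
    # a tensor type (shape and dtype).
    x = relay.var("x", shape=(1, 784), dtype="float32")
    w = relay.var("w", shape=(128, 784), dtype="float32")
    b = relay.var("b", shape=(128,), dtype="float32")

    # Operators compose as pure functional expressions.
    y = relay.nn.relu(relay.nn.bias_add(relay.nn.dense(x, w), b))
    func = relay.Function([x, w, b], y)

    # Wrap the function in a module and run Relay's type inference pass.
    mod = tvm.IRModule.from_expr(func)
    mod = relay.transform.InferType()(mod)
    print(mod)  # textual form of the typed Relay IR

    # Compile for a generic CPU target; other targets (e.g. "cuda")
    # go through the same unified compilation path.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm")

Changing only the target string retargets the same IR to GPUs or accelerator backends, which is the portability property the abstract highlights.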

Authors (10)
  1. Jared Roesch (8 papers)
  2. Steven Lyubomirsky (6 papers)
  3. Marisa Kirisame (6 papers)
  4. Logan Weber (3 papers)
  5. Josh Pollock (4 papers)
  6. Luis Vega (60 papers)
  7. Ziheng Jiang (23 papers)
  8. Tianqi Chen (77 papers)
  9. Thierry Moreau (11 papers)
  10. Zachary Tatlock (29 papers)
Citations (20)
