Compiling ONNX Neural Network Models Using MLIR (2008.08272v2)

Published 19 Aug 2020 in cs.PL and cs.LG

Abstract: Deep neural network models are becoming increasingly popular and have been used in various tasks such as computer vision, speech recognition, and natural language processing. Machine learning models are commonly trained in a resource-rich environment and then deployed in a distinct environment such as high availability machines or edge devices. To assist the portability of models, the open-source community has proposed the Open Neural Network Exchange (ONNX) standard. In this paper, we present a high-level, preliminary report on our onnx-mlir compiler, which generates code for the inference of deep neural network models described in the ONNX format. Onnx-mlir is an open-source compiler implemented using the Multi-Level Intermediate Representation (MLIR) infrastructure recently integrated in the LLVM project. Onnx-mlir relies on the MLIR concept of dialects to implement its functionality. We propose here two new dialects: (1) an ONNX specific dialect that encodes the ONNX standard semantics, and (2) a loop-based dialect to provide for a common lowering point for all ONNX dialect operations. Each intermediate representation facilitates its own characteristic set of graph-level and loop-based optimizations respectively. We illustrate our approach by following several models through the proposed representations and we include some early optimization work and performance results.
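To make the compiler's input concrete, the sketch below builds a minimal ONNX model (a single element-wise Add) with the Python onnx package; a file produced this way is the kind of artifact onnx-mlir imports into its ONNX dialect before lowering to the loop-based dialect. This is an illustrative sketch, not code from the paper; the tensor shapes, node choice, and file name are arbitrary assumptions.

```python
# Minimal sketch (not from the paper): create a one-node ONNX model
# of the kind onnx-mlir ingests and lowers through its dialects.
import onnx
from onnx import helper, TensorProto

# Two 2x3 float inputs and one output of the same shape.
a = helper.make_tensor_value_info("A", TensorProto.FLOAT, [2, 3])
b = helper.make_tensor_value_info("B", TensorProto.FLOAT, [2, 3])
c = helper.make_tensor_value_info("C", TensorProto.FLOAT, [2, 3])

# A single element-wise Add node: C = A + B.
add_node = helper.make_node("Add", inputs=["A", "B"], outputs=["C"])

graph = helper.make_graph([add_node], "add_example", [a, b], [c])
model = helper.make_model(graph)

onnx.checker.check_model(model)      # validate against the ONNX standard
onnx.save(model, "add_example.onnx")  # serialized model for the compiler
```

In the flow the abstract describes, such a model would first be encoded as operations in the ONNX-specific dialect (preserving the standard's semantics and enabling graph-level optimizations), and then lowered to the loop-based dialect where loop-level optimizations apply.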

Authors (11)
  1. Tian Jin (24 papers)
  2. Gheorghe-Teodor Bercea (9 papers)
  3. Tung D. Le (3 papers)
  4. Tong Chen (200 papers)
  5. Gong Su (5 papers)
  6. Haruki Imai (5 papers)
  7. Yasushi Negishi (4 papers)
  8. Anh Leu (1 paper)
  9. Kevin O'Brien (14 papers)
  10. Kiyokuni Kawachiya (4 papers)
  11. Alexandre E. Eichenberger (1 paper)
Citations (48)