
oneDNN Graph Compiler: A Hybrid Approach for High-Performance Deep Learning Compilation (2301.01333v3)

Published 3 Jan 2023 in cs.LG and cs.PF

Abstract: With the rapid development of deep learning models and hardware support for dense computing, deep learning workload characteristics have shifted significantly from a few hot spots on compute-intensive operations to a broad range of operations scattered across the models. Accelerating a few compute-intensive operations using expert-tuned implementations of primitives does not fully exploit the performance potential of AI hardware. Various efforts have been made to compile full deep neural network (DNN) graphs. One of the biggest challenges is to achieve high-performance tensor compilation by generating expert-level code for the dense compute-intensive operations while applying compilation optimizations at the scope of the DNN computation graph, across multiple compute-intensive operations. We present oneDNN Graph Compiler, a tensor compiler that employs a hybrid approach, combining techniques from compiler optimization with expert-tuned kernels to generate high-performance code for deep neural network graphs. oneDNN Graph Compiler addresses optimization challenges unique to the deep learning domain, such as low-precision computation, aggressive fusion of graph operations, optimization for static tensor shapes and memory layouts, constant weight optimization, and memory buffer reuse. Experimental results demonstrate significant performance gains over existing tensor compilers and primitives libraries for performance-critical DNN computation graphs and end-to-end models on Intel Xeon Scalable Processors.
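The fusion and buffer-reuse ideas in the abstract can be illustrated with a minimal conceptual sketch. This is not the oneDNN Graph Compiler API; it only shows, in plain NumPy, how fusing a matmul / bias-add / ReLU chain avoids materializing separate intermediate tensors between kernels.

```python
import numpy as np

def unfused(x, w, b):
    # Three separate "kernels", each producing a full intermediate tensor.
    t1 = x @ w                   # matmul
    t2 = t1 + b                  # bias add (new buffer)
    return np.maximum(t2, 0.0)   # ReLU (another new buffer)

def fused(x, w, b):
    # One fused pass: bias add and ReLU are applied in place on the
    # matmul output, reusing a single buffer for the whole chain.
    out = x @ w
    out += b
    np.maximum(out, 0.0, out=out)
    return out

x = np.random.randn(4, 8)
w = np.random.randn(8, 16)
b = np.random.randn(16)
assert np.allclose(unfused(x, w, b), fused(x, w, b))
```

A real tensor compiler performs this fusion at the code-generation level, emitting one loop nest per fused group rather than one per operation, which is what eliminates the intermediate memory traffic.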

Authors (14)
  1. Jianhui Li (26 papers)
  2. Zhennan Qin (1 paper)
  3. Yijie Mei (2 papers)
  4. Jingze Cui (1 paper)
  5. Yunfei Song (11 papers)
  6. Ciyong Chen (1 paper)
  7. Yifei Zhang (167 papers)
  8. Longsheng Du (1 paper)
  9. Xianhang Cheng (3 papers)
  10. Baihui Jin (1 paper)
  11. Yan Zhang (954 papers)
  12. Jason Ye (1 paper)
  13. Eric Lin (12 papers)
  14. Dan Lavery (1 paper)
Citations (4)