NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers (2207.13066v2)

Published 26 Jul 2022 in cs.LG and cs.SE

Abstract: Deep-learning (DL) compilers such as TVM and TensorRT are increasingly being used to optimize deep neural network (DNN) models to meet performance, resource utilization, and other requirements. Bugs in these compilers can result in models whose semantics differ from the original ones, producing incorrect results that corrupt the correctness of downstream applications. However, finding bugs in these compilers is challenging due to their complexity. In this work, we propose a new fuzz testing approach for finding bugs in deep-learning compilers. Our core approach consists of (i) generating diverse yet valid DNN test models that can exercise a large part of the compiler's transformation logic using lightweight operator specifications; (ii) performing gradient-based search to find model inputs that avoid any floating-point exceptional values during model execution, reducing the chance of missed bugs or false alarms; and (iii) using differential testing to identify bugs. We implemented this approach in NNSmith, which has found 72 new bugs for TVM, TensorRT, ONNXRuntime, and PyTorch to date. Of these, 58 have been confirmed and 51 have been fixed by their respective project maintainers.
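As a rough illustration of steps (ii) and (iii), the sketch below runs the same tiny model under two stand-in "backends" and flags any output divergence, while skipping inputs that produce non-finite values (which would otherwise cause false alarms). The model, backends, and tolerance here are hypothetical placeholders, not NNSmith's actual implementation:

```python
import numpy as np

def model_reference(x):
    # Reference execution of a tiny "model": ReLU followed by mean.
    return float(np.maximum(x, 0.0).mean())

def model_optimized(x):
    # Stand-in for a compiled/optimized backend of the same model
    # (here ReLU is expressed via clip instead of maximum).
    return float(np.clip(x, 0.0, None).mean())

def differential_test(inputs, atol=1e-6):
    """Return inputs on which the two backends disagree beyond a tolerance."""
    mismatches = []
    for x in inputs:
        ref, opt = model_reference(x), model_optimized(x)
        # Skip exceptional values (NaN/Inf) to reduce false alarms,
        # analogous to NNSmith's goal of avoiding them during execution.
        if not (np.isfinite(ref) and np.isfinite(opt)):
            continue
        if abs(ref - opt) > atol:
            mismatches.append(x)
    return mismatches

rng = np.random.default_rng(0)
test_inputs = [rng.normal(size=(4, 4)) for _ in range(10)]
print(len(differential_test(test_inputs)))  # equivalent backends: 0 mismatches
```

In the real system, the two executions would be the original model (e.g., run by an interpreter or a reference runtime) and the compiler-optimized version, and any disagreement beyond numerical tolerance points to a potential compiler bug.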

Authors (7)
  1. Jiawei Liu (156 papers)
  2. Jinkun Lin (8 papers)
  3. Fabian Ruffy (5 papers)
  4. Cheng Tan (140 papers)
  5. Jinyang Li (67 papers)
  6. Aurojit Panda (27 papers)
  7. Lingming Zhang (48 papers)
Citations (45)