Towards Training Reproducible Deep Learning Models (2202.02326v1)

Published 4 Feb 2022 in cs.LG, cs.AI, and cs.SE

Abstract: Reproducibility is an increasing concern in AI, particularly in the area of Deep Learning (DL). Being able to reproduce DL models is crucial for AI-based systems, as it is closely tied to various tasks like training, testing, debugging, and auditing. However, DL models are challenging to reproduce due to issues like randomness in the software (e.g., DL algorithms) and non-determinism in the hardware (e.g., GPU). There are various practices to mitigate some of the aforementioned issues. However, many of them are either too intrusive or can only work for a specific usage context. In this paper, we propose a systematic approach to training reproducible DL models. Our approach includes three main parts: (1) a set of general criteria to thoroughly evaluate the reproducibility of DL models for two different domains, (2) a unified framework which leverages a record-and-replay technique to mitigate software-related randomness and a profile-and-patch technique to control hardware-related non-determinism, and (3) a reproducibility guideline which explains the rationales and the mitigation strategies for conducting a reproducible training process for DL models. Case study results show our approach can successfully reproduce six open source and one commercial DL models.
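
The abstract distinguishes two sources of irreproducibility: software-related randomness and hardware-related non-determinism. The paper's own record-and-replay and profile-and-patch framework is not reproduced here; as a point of reference, the sketch below shows the kind of seed-pinning and deterministic-kernel settings commonly used to address these two sources in PyTorch. The function name `make_training_deterministic`, the choice of PyTorch, and the specific settings are assumptions for illustration, not the authors' implementation.

```python
import os
import random

import numpy as np
import torch


def make_training_deterministic(seed: int = 42) -> None:
    """Best-effort sketch: pin common software RNGs and request
    deterministic GPU kernels. Not the paper's record-and-replay
    or profile-and-patch framework."""
    # Software-related randomness: seed every RNG the training loop touches.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds CUDA RNGs on current PyTorch builds

    # Hardware-related non-determinism: ask cuDNN/cuBLAS for deterministic
    # kernels instead of auto-tuned, potentially non-deterministic ones.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    torch.use_deterministic_algorithms(True)  # raises if an op has no deterministic variant


if __name__ == "__main__":
    make_training_deterministic(seed=0)
    # Two runs of the same script should now print identical values.
    print(torch.randn(3))
```

Settings like these trade some training speed for run-to-run consistency, which is the same trade-off the paper's mitigation strategies aim to manage more systematically.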

Authors (6)
  1. Boyuan Chen (75 papers)
  2. Mingzhi Wen (1 paper)
  3. Yong Shi (138 papers)
  4. Dayi Lin (22 papers)
  5. Gopi Krishnan Rajbahadur (22 papers)
  6. Zhen Ming (Jack) Jiang
Citations (33)
