Fully Decoupled Neural Network Learning Using Delayed Gradients (1906.09108v3)

Published 21 Jun 2019 in cs.CV

Abstract: Training neural networks with back-propagation (BP) requires sequential passing of activations and gradients, which forces the network modules to work synchronously. These constraints are known as the forward, backward, and update lockings inherited from BP. In this paper, we propose a fully decoupled training scheme using delayed gradients (FDG) to break all of these lockings. FDG splits a neural network into multiple modules and trains them independently and asynchronously on different workers (e.g., GPUs). We also introduce a gradient shrinking process to reduce the stale-gradient effect caused by the delayed gradients. In addition, we prove that the proposed FDG algorithm guarantees statistical convergence during training. Experiments training deep convolutional neural networks on benchmark classification datasets show results comparable to or better than state-of-the-art methods and BP in terms of both generalization and acceleration. In particular, we show that FDG can also train very wide networks (e.g., WRN-28-10) and extremely deep networks (e.g., ResNet-1202). Code is available at https://github.com/ZHUANGHP/FDG.
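As a rough illustration of the scheme described in the abstract, below is a minimal, single-process sketch of training two decoupled modules where the earlier module is updated with a delayed, shrunk gradient. The two-module split, the one-step delay, the shrink factor `beta`, and recomputing the stored batch's forward pass are illustrative assumptions made for readability, not the authors' implementation; see the official repository linked above for the actual FDG code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Split a small classifier into two modules updated by separate optimizers.
module1 = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
module2 = nn.Sequential(nn.Linear(16, 2))
opt1 = torch.optim.SGD(module1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(module2.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

beta = 0.5       # illustrative gradient-shrinking factor applied to the stale gradient
delayed = None   # (input batch, gradient at the module boundary) from the previous step

for step in range(100):
    x = torch.randn(32, 8)
    y = torch.randint(0, 2, (32,))

    # Forward through module 1; the boundary activation is detached so that
    # module 2's backward pass stops at the split point.
    with torch.no_grad():
        h = module1(x)
    h.requires_grad_(True)

    # Module 2 is trained on the current batch as usual.
    loss = loss_fn(module2(h), y)
    opt2.zero_grad()
    loss.backward()
    opt2.step()

    # Module 1 is updated with the boundary gradient produced one step earlier
    # (a delayed gradient), shrunk by beta to damp its staleness.
    if delayed is not None:
        old_x, old_grad = delayed
        opt1.zero_grad()
        module1(old_x).backward(beta * old_grad)  # recompute the old batch's forward pass
        opt1.step()

    # Buffer this step's input and boundary gradient for the next iteration.
    delayed = (x, h.grad.detach())
```

In the full method each module runs asynchronously on its own worker, so the delay experienced by earlier modules is generally larger than the single step simulated here, which is where the gradient shrinking matters most.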

Authors (5)
  1. Huiping Zhuang (45 papers)
  2. Yi Wang (1038 papers)
  3. Qinglai Liu (1 paper)
  4. Shuai Zhang (320 papers)
  5. Zhiping Lin (22 papers)
Citations (26)
