Gradient Noise Convolution (GNC): Smoothing Loss Function for Distributed Large-Batch SGD (1906.10822v1)

Published 26 Jun 2019 in cs.LG and stat.ML

Abstract: Large-batch stochastic gradient descent (SGD) is widely used for training in distributed deep learning because of its training-time efficiency; however, extremely large-batch SGD leads to poor generalization and easily converges to sharp minima, which prevents naive large-scale data-parallel SGD (DP-SGD) from reaching good minima. To overcome this difficulty, we propose gradient noise convolution (GNC), which effectively smooths the sharper minima of the loss function. For DP-SGD, GNC utilizes the so-called gradient noise induced by stochastic gradient variation and convolves it with the loss function to produce a smoothing effect. GNC can be computed by simply evaluating the stochastic gradient on each parallel worker and merging the results, so it is extremely easy to implement. Because the gradient noise tends to spread along the sharper directions of the loss function, convolving with it lets GNC effectively smooth sharp minima and achieve better generalization, whereas isotropic random noise cannot. We empirically demonstrate this effect by comparing GNC with isotropic random noise, and show that GNC achieves state-of-the-art generalization performance for large-scale deep neural network optimization.
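
The abstract gives only a high-level recipe: each parallel worker computes a stochastic gradient, the gradients are merged, and their deviation from the merged gradient serves as anisotropic "gradient noise" that smooths the loss. The NumPy sketch below illustrates one plausible reading of that recipe on a toy regression problem; the perturbation scale, the choice of which worker's noise to use, and all function and variable names are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch of the gradient-noise-convolution idea from the abstract.
# Names and the perturbation scheme are invented; this is not the authors'
# reference implementation.
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # Simple least-squares loss on one mini-batch.
    return 0.5 * np.mean((x @ w - y) ** 2)

def grad(w, x, y):
    # Gradient of the least-squares loss above.
    return x.T @ (x @ w - y) / len(y)

# Synthetic regression data and true weights.
n, d, workers = 4096, 20, 8
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
Y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
lr = 0.1
batch = 64  # per-worker batch; effective batch = workers * batch

for step in range(200):
    # Each worker draws its own mini-batch and computes a stochastic gradient.
    idx = [rng.choice(n, batch, replace=False) for _ in range(workers)]
    g_workers = np.stack([grad(w, X[i], Y[i]) for i in idx])
    g_mean = g_workers.mean(axis=0)           # merged large-batch gradient

    # "Gradient noise": deviation of one worker's gradient from the merged one.
    # Perturbing the parameters with this anisotropic noise before the update
    # acts like convolving the loss with the gradient-noise distribution, the
    # smoothing effect the abstract describes (details assumed here).
    noise = g_workers[step % workers] - g_mean
    w_perturbed = w - lr * noise              # perturbation scale is a guess
    g_smooth = np.mean(
        np.stack([grad(w_perturbed, X[i], Y[i]) for i in idx]), axis=0
    )

    w -= lr * g_smooth

print("final loss:", loss(w, X, Y))
```

Because the noise is built from the spread of the workers' own gradients rather than drawn isotropically, its largest components align with the sharper directions of the loss, which is the property the abstract credits for the improved generalization.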

Authors (7)
  1. Kosuke Haruki (3 papers)
  2. Taiji Suzuki (119 papers)
  3. Yohei Hamakawa (6 papers)
  4. Takeshi Toda (1 paper)
  5. Ryuji Sakai (1 paper)
  6. Masahiro Ozawa (1 paper)
  7. Mitsuhiro Kimura (2 papers)
Citations (17)
