Anticorrelated Noise Injection for Improved Generalization (2202.02831v3)

Published 6 Feb 2022 in stat.ML, cs.LG, and math.OC

Abstract: Injecting artificial noise into gradient descent (GD) is commonly employed to improve the performance of machine learning models. Usually, uncorrelated noise is used in such perturbed gradient descent (PGD) methods. It is, however, not known if this is optimal or whether other types of noise could provide better generalization performance. In this paper, we zoom in on the problem of correlating the perturbations of consecutive PGD steps. We consider a variety of objective functions for which we find that GD with anticorrelated perturbations ("Anti-PGD") generalizes significantly better than GD and standard (uncorrelated) PGD. To support these experimental findings, we also derive a theoretical analysis that demonstrates that Anti-PGD moves to wider minima, while GD and PGD remain stuck in suboptimal regions or even diverge. This new connection between anticorrelated noise and generalization opens the field to novel ways to exploit noise for training machine learning models.
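To make the method in the abstract concrete, here is a minimal NumPy sketch of gradient descent with anticorrelated noise injection. It follows the common construction of taking each perturbation to be the increment of i.i.d. Gaussian draws, which makes consecutive perturbations negatively correlated; the function name, hyperparameters, and test objective below are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def anti_pgd(grad, x0, lr=0.1, sigma=0.01, steps=1000, seed=0):
    """Sketch of gradient descent with anticorrelated perturbations.

    Each step adds (xi_k - xi_{k-1}), the increment of i.i.d. Gaussian
    draws, so consecutive perturbations have correlation -1/2 instead of
    the independent noise used in standard perturbed GD.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    xi_prev = rng.normal(0.0, sigma, size=x.shape)
    for _ in range(steps):
        xi = rng.normal(0.0, sigma, size=x.shape)
        # GD step plus the anticorrelated perturbation (increment of i.i.d. noise).
        x = x - lr * grad(x) + (xi - xi_prev)
        xi_prev = xi
    return x

# Illustrative usage on a simple quadratic, L(x) = ||x||^2.
if __name__ == "__main__":
    grad = lambda x: 2.0 * x
    print(anti_pgd(grad, np.ones(2)))
```

The design choice to use increments of i.i.d. noise is what makes the perturbations anticorrelated: for i.i.d. draws xi_k with variance sigma^2, consecutive increments (xi_k - xi_{k-1}) and (xi_{k+1} - xi_k) have covariance -sigma^2, i.e. correlation -1/2, whereas standard PGD injects each xi_k independently.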

Authors (5)
  1. Antonio Orvieto (46 papers)
  2. Hans Kersting (12 papers)
  3. Frank Proske (32 papers)
  4. Francis Bach (249 papers)
  5. Aurelien Lucchi (75 papers)
Citations (43)
