
Entropic gradient descent algorithms and wide flat minima (2006.07897v4)

Published 14 Jun 2020 in cs.LG, cond-mat.dis-nn, and stat.ML

Abstract: The properties of flat minima in the empirical risk landscape of neural networks have been debated for some time. Increasing evidence suggests that they generalize better than sharp minima. First, we discuss Gaussian mixture classification models and show analytically that there exist Bayes-optimal pointwise estimators which correspond to minimizers belonging to wide flat regions. These estimators can be found by applying maximum-flatness algorithms either directly to the classifier (which is norm independent) or to the differentiable loss function used in learning. Next, we extend the analysis to the deep learning scenario through extensive numerical validation. Using two algorithms, Entropy-SGD and Replicated-SGD, which explicitly include in the optimization objective a non-local flatness measure known as local entropy, we consistently improve the generalization error for common architectures (e.g. ResNet, EfficientNet). An easy-to-compute flatness measure shows a clear correlation with test accuracy.
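
The local-entropy objective behind Entropy-SGD replaces the raw loss f(w) with F(w) = log ∫ exp(-f(w') - (γ/2)||w - w'||²) dw', whose gradient is -γ(w - ⟨w'⟩), where ⟨w'⟩ is an average under the local Gibbs measure; in practice that average is estimated with a few steps of stochastic gradient Langevin dynamics (SGLD). Below is a minimal PyTorch sketch of one such update, together with a simple perturbation-based flatness probe in the spirit of the abstract's easy-to-compute flatness measure. The names `model`, `loss_fn`, `batch` and all hyperparameter values are illustrative assumptions, not the authors' released code.

```python
import torch

def entropy_sgd_step(model, loss_fn, batch, *,
                     inner_steps=5, gamma=0.03, sgld_lr=0.1,
                     temperature=1e-4, outer_lr=1.0):
    """One Entropy-SGD-style update: estimate <w'> with a short SGLD run
    on the local Gibbs measure, then move the weights toward it."""
    x, y = batch
    center = [p.detach().clone() for p in model.parameters()]  # current weights w
    mu = [c.clone() for c in center]                           # running mean of SGLD iterates

    for t in range(1, inner_steps + 1):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for p, c, m in zip(model.parameters(), center, mu):
                # Langevin step on f(w') + (gamma/2)||w' - w||^2, plus Gaussian noise
                p.add_(-sgld_lr * (p.grad + gamma * (p - c)))
                p.add_((2 * sgld_lr * temperature) ** 0.5 * torch.randn_like(p))
                m.mul_((t - 1) / t).add_(p / t)  # online average, approximates <w'>

    with torch.no_grad():
        for p, c, m in zip(model.parameters(), center, mu):
            # outer step: w <- w - eta * gamma * (w - <w'>), i.e. ascend local entropy
            p.copy_(c - outer_lr * gamma * (c - m))

def flatness_probe(model, loss_fn, batch, sigma=0.05, trials=10):
    """Crude flatness proxy (an assumption, not the paper's exact measure):
    average loss increase under random multiplicative Gaussian weight
    perturbations; smaller values indicate a flatter minimum."""
    x, y = batch
    with torch.no_grad():
        base = loss_fn(model(x), y).item()
        saved = [p.detach().clone() for p in model.parameters()]
        deltas = []
        for _ in range(trials):
            for p, s in zip(model.parameters(), saved):
                p.copy_(s * (1 + sigma * torch.randn_like(s)))
            deltas.append(loss_fn(model(x), y).item() - base)
        for p, s in zip(model.parameters(), saved):
            p.copy_(s)  # restore original weights
    return sum(deltas) / trials
```

In the published algorithm the inner loop draws a fresh mini-batch at each SGLD step and uses an exponential moving average for ⟨w'⟩; the single reused batch and arithmetic mean here are simplifications to keep the sketch short.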

Authors (7)
  1. Fabrizio Pittorino (14 papers)
  2. Carlo Lucibello (38 papers)
  3. Christoph Feinauer (12 papers)
  4. Gabriele Perugini (10 papers)
  5. Carlo Baldassi (36 papers)
  6. Elizaveta Demyanenko (3 papers)
  7. Riccardo Zecchina (48 papers)
Citations (30)
