Exponential convergence rates for Batch Normalization: The power of length-direction decoupling in non-convex optimization (1805.10694v3)

Published 27 May 2018 in stat.ML and cs.LG

Abstract: Normalization techniques such as Batch Normalization have been applied successfully for training deep neural networks. Yet, despite its apparent empirical benefits, the reasons behind the success of Batch Normalization are mostly hypothetical. We here aim to provide a more thorough theoretical understanding from a classical optimization perspective. Our main contribution towards this goal is the identification of various problem instances in the realm of machine learning where, under certain assumptions, Batch Normalization can provably accelerate optimization. We argue that this acceleration is due to the fact that Batch Normalization splits the optimization task into optimizing length and direction of the parameters separately. This allows gradient-based methods to leverage a favourable global structure in the loss landscape that we prove to exist in Learning Halfspace problems and neural network training with Gaussian inputs. We thereby turn Batch Normalization from an effective practical heuristic into a provably converging algorithm for these settings. Furthermore, we substantiate our analysis with empirical evidence that suggests the validity of our theoretical results in a broader context.
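
To make the length-direction decoupling concrete, here is a minimal illustrative sketch (the notation is not taken verbatim from the paper): for zero-mean inputs $x$ with covariance $\Sigma$, batch-normalizing a single linear unit $w^\top x$ with learnable scale $\gamma$ gives

\[
  \mathrm{BN}_{\gamma}(w^\top x)
  \;=\; \gamma \, \frac{w^\top x}{\sqrt{\operatorname{Var}\!\left[w^\top x\right]}}
  \;=\; \gamma \, \frac{w^\top x}{\sqrt{w^\top \Sigma\, w}},
  \qquad
  \mathrm{BN}_{\gamma}\!\big((c\,w)^\top x\big) \;=\; \mathrm{BN}_{\gamma}(w^\top x)
  \quad \text{for all } c > 0 .
\]

Because the output is invariant to rescaling of $w$, gradient updates on $w$ can only change its direction, while the effective length is carried entirely by $\gamma$. In this sense the optimization task splits into separate length and direction sub-problems, which is the structure the paper exploits to prove accelerated convergence.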

Authors (6)
  1. Jonas Kohler (34 papers)
  2. Hadi Daneshmand (20 papers)
  3. Ming Zhou (182 papers)
  4. Klaus Neymeyr (6 papers)
  5. Thomas Hofmann (121 papers)
  6. Aurelien Lucchi (75 papers)
Citations (87)
