Scalable and Practical Natural Gradient for Large-Scale Deep Learning (2002.06015v1)

Published 13 Feb 2020 in cs.LG and stat.ML

Abstract: Large-scale distributed training of deep neural networks results in models with worse generalization performance as a result of the increase in the effective mini-batch size. Previous approaches attempt to address this problem by varying the learning rate and batch size over epochs and layers, or ad hoc modifications of batch normalization. We propose Scalable and Practical Natural Gradient Descent (SP-NGD), a principled approach for training models that allows them to attain similar generalization performance to models trained with first-order optimization methods, but with accelerated convergence. Furthermore, SP-NGD scales to large mini-batch sizes with a negligible computational overhead as compared to first-order methods. We evaluated SP-NGD on a benchmark task where highly optimized first-order methods are available as references: training a ResNet-50 model for image classification on ImageNet. We demonstrate convergence to a top-1 validation accuracy of 75.4% in 5.5 minutes using a mini-batch size of 32,768 with 1,024 GPUs, as well as an accuracy of 74.9% with an extremely large mini-batch size of 131,072 in 873 steps of SP-NGD.
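
For context, the update rule that SP-NGD makes practical at scale is the standard natural gradient step, which preconditions the loss gradient with the inverse Fisher information matrix. The sketch below is generic; the paper's contribution lies in the scalable Fisher approximation and distributed scheme, which are not reproduced here.

$$\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta \mathcal{L}(\theta_t), \qquad F(\theta) = \mathbb{E}\!\left[\nabla_\theta \log p(y \mid x, \theta)\, \nabla_\theta \log p(y \mid x, \theta)^{\top}\right]$$

Computing and inverting the full Fisher matrix is infeasible for a network like ResNet-50, so SP-NGD relies on approximations that keep the per-step overhead close to that of first-order methods.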

Authors (6)
  1. Kazuki Osawa (10 papers)
  2. Yohei Tsuji (3 papers)
  3. Yuichiro Ueno (4 papers)
  4. Akira Naruse (4 papers)
  5. Chuan-Sheng Foo (41 papers)
  6. Rio Yokota (64 papers)
Citations (34)
