
Wide Neural Networks Forget Less Catastrophically (2110.11526v3)

Published 21 Oct 2021 in cs.LG, cs.AI, and cs.CV

Abstract: A primary focus area in continual learning research is alleviating the "catastrophic forgetting" problem in neural networks by designing new algorithms that are more robust to distribution shifts. While recent progress in the continual learning literature is encouraging, our understanding of which properties of neural networks contribute to catastrophic forgetting is still limited. To address this, instead of focusing on continual learning algorithms, we focus on the model itself and study the impact of the "width" of the neural network architecture on catastrophic forgetting, showing that width has a surprisingly significant effect on forgetting. To explain this effect, we study the learning dynamics of the network from several perspectives, such as gradient orthogonality, sparsity, and the lazy training regime, and provide potential explanations that are consistent with the empirical results across different architectures and continual learning benchmarks.
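
The width-versus-forgetting effect described in the abstract can be probed with a very small experiment: train networks of increasing width sequentially on two tasks and record how much task-1 accuracy drops after training on task 2. The sketch below is not the authors' code; the synthetic binary-classification tasks, the three-layer MLP, and the widths (16, 256, 4096) are illustrative assumptions meant only to show the measurement, and this toy setup is not guaranteed to reproduce the paper's trend.

```python
# Hypothetical sketch (not the paper's code): measure catastrophic forgetting
# as the drop in task-1 accuracy after sequentially training on task 2,
# for MLPs of increasing hidden width.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n=2000, dim=32):
    """Synthetic binary classification task defined by a random linear boundary."""
    x = torch.randn(n, dim)
    w = torch.randn(dim)
    y = (x @ w > 0).long()
    return x, y

def make_mlp(width, dim=32):
    """Two-hidden-layer ReLU MLP whose hidden width we vary."""
    return nn.Sequential(
        nn.Linear(dim, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 2),
    )

def train(model, x, y, epochs=20, lr=1e-2):
    """Full-batch SGD on one task (illustrative hyperparameters)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

task1, task2 = make_task(), make_task()

for width in (16, 256, 4096):
    model = make_mlp(width)
    train(model, *task1)
    acc_before = accuracy(model, *task1)   # task-1 accuracy right after task 1
    train(model, *task2)
    acc_after = accuracy(model, *task1)    # task-1 accuracy after task 2
    print(f"width={width:5d}  forgetting={acc_before - acc_after:.3f}")
```

Forgetting here is the simple accuracy-drop metric commonly used in continual learning; in the paper's experiments the synthetic tasks would be replaced by standard continual learning benchmarks and a range of architectures.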

Authors (7)
  1. Seyed Iman Mirzadeh (6 papers)
  2. Arslan Chaudhry (15 papers)
  3. Dong Yin (36 papers)
  4. Huiyi Hu (14 papers)
  5. Razvan Pascanu (138 papers)
  6. Dilan Gorur (10 papers)
  7. Mehrdad Farajtabar (56 papers)
Citations (54)
