
On improving deep learning generalization with adaptive sparse connectivity (1906.11626v1)

Published 27 Jun 2019 in cs.NE and cs.LG

Abstract: Large neural networks are very successful in various tasks. However, with limited data, the generalization capabilities of deep neural networks are also limited. In this paper, we present empirical evidence that intrinsically sparse neural networks with adaptive sparse connectivity, which by design have a strict parameter budget during the training phase, have better generalization capabilities than their fully-connected counterparts. In addition, we propose a new technique to train these sparse models by combining the Sparse Evolutionary Training (SET) procedure with neuron pruning. Applied to Multilayer Perceptrons (MLPs) and evaluated on 15 datasets, the proposed technique zeros out around 50% of the hidden neurons during training, while keeping the number of parameters to optimize linear in the number of neurons. The results show competitive classification and generalization performance.

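The mechanism the abstract describes, rewiring a fixed budget of sparse connections during training and zeroing out hidden neurons that lose all of their connections, can be sketched in a few lines. The snippet below is an illustrative sketch only, not the authors' released implementation: the function names, the rewiring fraction `zeta`, and the re-initialization scale for regrown weights are assumptions, and the original SET procedure additionally initializes connectivity with an Erdős–Rényi topology and rewires once per training epoch.

```python
import numpy as np

def set_prune_and_regrow(W, mask, zeta=0.3, rng=None):
    """One SET-style rewiring step for a sparsely connected layer.

    W    : dense weight matrix of shape (n_in, n_out); only entries where
           mask == 1 are treated as existing connections.
    mask : binary connectivity mask with the same shape as W.
    zeta : fraction of the smallest-magnitude live weights to remove and regrow.
    """
    rng = rng or np.random.default_rng()
    live = np.flatnonzero(mask)
    n_rewire = int(zeta * live.size)

    # Remove the weakest (smallest-magnitude) live connections.
    order = np.argsort(np.abs(W.ravel()[live]))
    mask.ravel()[live[order[:n_rewire]]] = 0

    # Regrow the same number of connections at random inactive positions,
    # so the total parameter budget stays fixed throughout training.
    dead = np.flatnonzero(mask.ravel() == 0)
    grown = rng.choice(dead, size=n_rewire, replace=False)
    mask.ravel()[grown] = 1
    W.ravel()[grown] = rng.normal(0.0, 0.01, size=n_rewire)  # re-init new weights
    return W, mask

def prune_dead_neurons(mask_in, mask_out):
    """Neuron pruning: zero out hidden units with no incoming or outgoing connections."""
    alive = (mask_in.sum(axis=0) > 0) & (mask_out.sum(axis=1) > 0)
    mask_in[:, ~alive] = 0
    mask_out[~alive, :] = 0
    return alive
```

In this reading, repeated prune-and-regrow steps concentrate connections on a subset of useful hidden units, and the neuron-pruning step then removes the units that end up fully disconnected, which is consistent with the roughly 50% of hidden neurons the paper reports zeroing out during training.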
Authors (3)
  1. Shiwei Liu (76 papers)
  2. Decebal Constantin Mocanu (52 papers)
  3. Mykola Pechenizkiy (118 papers)
Citations (7)
