Intra-Ensemble in Neural Networks (1904.04466v2)

Published 9 Apr 2019 in cs.CV

Abstract: Improving model performance is a central problem in machine learning, including deep learning. However, stand-alone neural networks suffer from diminishing returns as more layers are stacked. Ensembling is a useful technique for further enhancing model performance, but training several independent deep neural networks for an ensemble multiplies the resource cost. Is it possible, then, to realize an ensemble within a single neural network? In this work, we propose Intra-Ensemble, an end-to-end ensemble strategy that uses stochastic channel recombination operations to train several sub-networks simultaneously within one neural network. The additional parameter cost is marginal, since the majority of parameters are mutually shared. Meanwhile, stochastic channel recombination significantly increases the diversity of the sub-networks, which in turn enhances ensemble performance. Extensive experiments and ablation studies demonstrate the applicability of intra-ensemble to various datasets and network architectures.
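
The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of the idea rather than the authors' implementation. It assumes stochastic channel recombination can be illustrated as a fixed random channel permutation per sub-network, inserted between shared convolutional layers; the names (StochasticChannelRecombination, IntraEnsembleNet, num_subnets, width) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StochasticChannelRecombination(nn.Module):
    """One fixed random channel permutation per sub-network.

    Hypothetical reading of "stochastic channel recombination":
    each sub-network routes the shared feature channels in its own
    random order, so the shared weights downstream compute a
    different function for each sub-network.
    """

    def __init__(self, num_channels: int, num_subnets: int):
        super().__init__()
        perms = torch.stack(
            [torch.randperm(num_channels) for _ in range(num_subnets)]
        )
        self.register_buffer("perms", perms)

    def forward(self, x: torch.Tensor, subnet: int) -> torch.Tensor:
        return x[:, self.perms[subnet]]  # permute the channel dimension


class IntraEnsembleNet(nn.Module):
    """Toy CIFAR-style backbone. The conv weights are shared by all
    sub-networks; only the small per-sub-network heads add parameters."""

    def __init__(self, num_classes: int = 10, num_subnets: int = 4, width: int = 32):
        super().__init__()
        self.num_subnets = num_subnets
        self.conv1 = nn.Conv2d(3, width, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        self.recombine = StochasticChannelRecombination(width, num_subnets)
        self.heads = nn.ModuleList(
            nn.Linear(width, num_classes) for _ in range(num_subnets)
        )

    def forward(self, x: torch.Tensor, subnet: int) -> torch.Tensor:
        h = F.relu(self.bn1(self.conv1(x)))
        h = self.recombine(h, subnet)  # sub-network-specific channel routing
        h = F.relu(self.bn2(self.conv2(h)))
        h = F.adaptive_avg_pool2d(h, 1).flatten(1)
        return self.heads[subnet](h)


net = IntraEnsembleNet()
x = torch.randn(8, 3, 32, 32)

# Training: optimize one randomly chosen sub-network per step, so the
# sub-networks are trained simultaneously on the shared weights.
k = int(torch.randint(net.num_subnets, (1,)))
loss = F.cross_entropy(net(x, subnet=k), torch.randint(10, (8,)))
loss.backward()

# Inference: ensemble by averaging the sub-network logits.
with torch.no_grad():
    logits = torch.stack(
        [net(x, subnet=k) for k in range(net.num_subnets)]
    ).mean(0)
```

Under these assumptions, all sub-networks reuse the same convolutional weights and differ only in channel routing and a small linear head, so the parameter overhead stays marginal while the averaged predictions behave like an ensemble, consistent with the abstract's claims.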

Citations (5)
