Intra-Ensemble in Neural Networks

Published 9 Apr 2019 in cs.CV (arXiv:1904.04466v2)

Abstract: Improving model performance is a central problem in machine learning, including deep learning. However, stand-alone neural networks suffer from diminishing returns when stacking more layers. Ensembling is a useful technique for further enhancing model performance, but training several independent deep neural networks for an ensemble multiplies the required resources. Is it possible, then, to exploit ensembling within a single neural network? In this work, we propose Intra-Ensemble, an end-to-end ensemble strategy with stochastic channel recombination operations that trains several sub-networks simultaneously within one neural network. The additional parameter overhead is marginal, since the majority of parameters are mutually shared. Meanwhile, stochastic channel recombination significantly increases the diversity of the sub-networks, which in turn enhances ensemble performance. Extensive experiments and ablation studies demonstrate the applicability of intra-ensemble to various datasets and network architectures.
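The core idea can be sketched in a few lines. Below is a minimal PyTorch sketch under simplifying assumptions: the class name `IntraEnsembleNet`, the two-layer backbone, and the use of one fixed random channel permutation per sub-network are illustrative stand-ins, not the paper's exact architecture or recombination operation. It shows the shared-weight idea: all sub-networks reuse the same convolutions and classifier, diverging only in how channels are recombined between layers.

```python
import torch
import torch.nn as nn


class IntraEnsembleNet(nn.Module):
    """Toy intra-ensemble: sub-networks share all conv/classifier weights
    but recombine channels differently between the shared layers."""

    def __init__(self, num_classes=10, width=64, num_subnets=2):
        super().__init__()
        self.num_subnets = num_subnets
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1),
            nn.BatchNorm2d(width), nn.ReLU())
        self.conv2 = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1),
            nn.BatchNorm2d(width), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(width, num_classes)  # shared classifier
        # One fixed random channel permutation per sub-network, drawn once
        # at construction (an illustrative stand-in for the paper's
        # stochastic channel recombination).
        for k in range(num_subnets):
            self.register_buffer(f"perm{k}", torch.randperm(width))

    def forward(self, x):
        h = self.conv1(x)  # computation shared by all sub-networks
        logits = []
        for k in range(self.num_subnets):
            perm = getattr(self, f"perm{k}")
            z = self.conv2(h[:, perm])  # per-path channel recombination
            logits.append(self.head(self.pool(z).flatten(1)))
        return torch.stack(logits).mean(0)  # average sub-network outputs


# Usage: one forward pass yields the ensembled prediction.
model = IntraEnsembleNet()
out = model(torch.randn(4, 3, 32, 32))  # -> shape (4, 10)
```

In a sketch like this, the only non-shared state is the set of channel permutations, so the parameter overhead is essentially zero; during training one would presumably apply a loss to each sub-network's own logits, with the averaged logits serving as the ensemble prediction at inference, though whether this matches the paper's exact training objective is an assumption.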

Citations (5)

Authors (3)
