Model Agnostic Combination for Ensemble Learning

Published 16 Jun 2020 in cs.LG and stat.ML (arXiv:2006.09025v1)

Abstract: Ensembles of models are well known to improve on single-model performance. We present a novel ensembling technique, coined MAC (Model Agnostic Combination), that is designed to find the optimal function for combining models while remaining invariant to the number of sub-models involved in the combination. Being agnostic to the number of sub-models enables the addition and replacement of sub-models even after deployment. This contrasts with many current ensembling methods, such as stacking, boosting, mixture of experts, and super learners, which lock in the models used for combination during training and therefore require retraining whenever a new model is introduced into the ensemble. We show that on the Kaggle RSNA Intracranial Hemorrhage Detection challenge, MAC outperforms classical averaging methods, is competitive with boosting via XGBoost for a fixed number of sub-models, and outperforms it when sub-models are added to the combination without retraining.
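
The abstract does not spell out MAC's architecture, but a combiner that stays invariant to the number of sub-models can be realized with set-style pooling over per-model predictions. Below is a minimal sketch of that idea, assuming a Deep Sets-style shared embedding followed by mean pooling; the class name SetCombiner, the layer sizes, and the 6-class output (matching the RSNA challenge's six labels) are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch: one way to combine a variable-size set of sub-model
# predictions without conditioning on a fixed model count. This is an
# illustration of the "agnostic to the number of sub-models" property,
# not MAC's published architecture.
import torch
import torch.nn as nn


class SetCombiner(nn.Module):
    """Combines a variable number of sub-model outputs into one prediction.

    Each sub-model's prediction is embedded by a shared MLP (phi), the
    embeddings are mean-pooled across the model axis (so the result does
    not depend on how many sub-models there are, or on their order), and
    a second MLP (rho) maps the pooled embedding to final logits.
    """

    def __init__(self, n_classes: int, hidden: int = 64):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(n_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, preds: torch.Tensor) -> torch.Tensor:
        # preds: (batch, n_models, n_classes); n_models may vary per call.
        pooled = self.phi(preds).mean(dim=1)  # pooling removes n_models
        return self.rho(pooled)               # ensemble logits


# Usage: sub-models can be added at inference time without retraining the
# combiner, since it never sees a fixed model count.
combiner = SetCombiner(n_classes=6)      # 6 labels, as in RSNA ICH
three_models = torch.rand(8, 3, 6)       # batch of 8, three sub-models
five_models = torch.rand(8, 5, 6)        # same combiner, five sub-models
print(combiner(three_models).shape)      # torch.Size([8, 6])
print(combiner(five_models).shape)       # torch.Size([8, 6])
```

Mean pooling is one of several permutation-invariant reductions (sum, max, or attention pooling would also work); it is chosen here only because it keeps the sketch short and its output scale independent of the ensemble size.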
