Liquid Ensemble Selection for Continual Learning (2405.07327v1)
Abstract: Continual learning aims to enable machine learning models to continually learn from a shifting data distribution without forgetting what has already been learned. Such shifting distributions can be broken into disjoint subsets of related examples; by training each member of an ensemble on a different subset, it is possible for the ensemble as a whole to achieve much higher accuracy with less forgetting than a naive model. We address the problem of selecting which models within an ensemble should learn on any given data, and which should predict. By drawing on work from delegative voting, we develop an algorithm that uses delegation to dynamically select which models in an ensemble are active. We explore a variety of delegation methods and performance metrics, ultimately finding that delegation is able to provide a significant performance boost over naive learning in the face of distribution shifts.
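The abstract only sketches the mechanism; the concrete delegation rules and metrics are developed in the paper itself. As a rough illustration of the idea, here is a minimal Python sketch, not the paper's algorithm: it assumes each ensemble member exposes `predict`/`partial_fit`, scores members by a rolling accuracy over recent batches, uses a hypothetical `margin` threshold to decide when a member delegates to the current best performer, and lets only the active (non-delegating) members train on each batch and cast delegation-weighted votes at prediction time.

```python
from collections import Counter, deque

class Member:
    """One ensemble member plus a rolling record of its recent correctness."""
    def __init__(self, model, history=50):
        self.model = model                    # assumed to expose predict / partial_fit
        self.correct = deque(maxlen=history)  # 1 = correct, 0 = wrong on recent examples
        self.delegate_to = None               # None => active ("guru")

    def accuracy(self):
        return sum(self.correct) / len(self.correct) if self.correct else 0.0

def redelegate(members, margin=0.05):
    """Point each member at the current best performer if that performer beats it
    by more than `margin` (a hypothetical threshold); otherwise the member is active."""
    best = max(members, key=lambda m: m.accuracy())
    for m in members:
        if m is not best and best.accuracy() - m.accuracy() > margin:
            m.delegate_to = best
        else:
            m.delegate_to = None

def step(members, x_batch, y_batch):
    """One continual-learning step: score every member on the incoming batch,
    re-delegate, then train only the active members on that batch."""
    for m in members:
        preds = m.model.predict(x_batch)
        m.correct.extend(int(p == y) for p, y in zip(preds, y_batch))
    redelegate(members)
    active = [m for m in members if m.delegate_to is None]
    for m in active:
        m.model.partial_fit(x_batch, y_batch)
    return active

def ensemble_predict(members, x_batch):
    """Delegation-weighted plurality vote: an active member votes with weight
    1 plus the number of members currently delegating to it."""
    active = [m for m in members if m.delegate_to is None]
    weight = {id(m): 1 for m in active}
    for m in members:
        if m.delegate_to is not None:
            weight[id(m.delegate_to)] += 1
    preds = {id(m): m.model.predict(x_batch) for m in active}
    out = []
    for i in range(len(x_batch)):
        tally = Counter()
        for m in active:
            tally[preds[id(m)][i]] += weight[id(m)]
        out.append(tally.most_common(1)[0][0])
    return out
```

In this sketch, which members learn and which merely delegate shifts automatically as the rolling accuracies change under distribution shift; the paper itself compares several delegation methods and performance metrics for making this choice.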