Continual Learning in the Presence of Repetition (2405.04101v2)
Abstract: Continual learning (CL) provides a framework for training models in ever-evolving environments. Although the recurrence of previously seen objects or tasks is common in real-world problems, repetition in the data stream is rarely considered in standard CL benchmarks. Unlike the rehearsal mechanism of buffer-based strategies, where sample repetition is controlled by the strategy, repetition in the data stream stems naturally from the environment. This report summarizes the CLVision challenge at CVPR 2023, which focused on repetition in class-incremental learning. It first outlines the challenge objective and then describes three solutions proposed by finalist teams, each aiming to exploit repetition in the stream to learn continually. The experimental results from the challenge highlight the effectiveness of ensemble-based solutions that employ multiple versions of similar modules, each trained on different but overlapping subsets of classes. The report underscores the transformative potential of taking a different perspective in CL by employing repetition in the data stream to foster innovative strategy design.
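To make the scenario concrete, the sketch below builds a toy class-incremental stream in which earlier classes can reappear in later experiences. This is an illustrative assumption of how such a stream might be generated, not the challenge's actual generator (the function name, parameters, and mixing policy are all hypothetical):

```python
import random

def make_stream_with_repetition(num_classes=10, num_experiences=6,
                                classes_per_exp=3, seed=0):
    """Toy class-incremental stream with repetition: each experience
    mixes a few unseen classes with classes seen in earlier experiences.
    Hypothetical sketch; the CLVision challenge used its own generator."""
    rng = random.Random(seed)
    all_classes = list(range(num_classes))
    seen = set()
    stream = []
    for _ in range(num_experiences):
        unseen = [c for c in all_classes if c not in seen]
        n_new = min(len(unseen), 2)                       # up to 2 new classes
        n_rep = min(classes_per_exp - n_new, len(seen))   # fill with repeats
        new = rng.sample(unseen, n_new)
        repeated = rng.sample(sorted(seen), n_rep)
        exp = sorted(new + repeated)
        seen.update(exp)
        stream.append(exp)
    return stream

for i, classes in enumerate(make_stream_with_repetition()):
    print(f"experience {i}: classes {classes}")
```

In this toy setting repetition emerges from the stream itself: once all classes have been introduced, later experiences consist entirely of repeated classes, which is the property the challenge strategies were designed to exploit.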