Densely Distilling Cumulative Knowledge for Continual Learning (2405.09820v1)
Abstract: Continual learning, involving sequential training on diverse tasks, often faces catastrophic forgetting. While knowledge distillation-based approaches exhibit notable success in preventing forgetting, we pinpoint a limitation in their ability to distill the cumulative knowledge of all the previous tasks. To remedy this, we propose Dense Knowledge Distillation (DKD). DKD uses a task pool to track the model's capabilities. It partitions the output logits of the model into dense groups, each corresponding to a task in the task pool, and then distills all tasks' knowledge using all groups. However, since using all groups can be computationally expensive, we also suggest random group selection in each optimization step. Moreover, we propose an adaptive weighting scheme, which balances the learning of new classes and the retention of old classes based on the count and similarity of the classes. Our DKD outperforms recent state-of-the-art baselines across diverse benchmarks and scenarios. Empirical analysis underscores DKD's ability to enhance model stability, promote flatter minima for improved generalization, and remain robust across various memory budgets and task orders. Moreover, it seamlessly integrates with other CL methods to boost performance and proves versatile in offline scenarios like model compression.
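The group-wise distillation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `dense_distillation_loss`, the representation of the task pool as a list of class-index arrays, and the plain cross-entropy distillation objective are all assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dense_distillation_loss(student_logits, teacher_logits, task_groups,
                            num_sampled=None, rng=None):
    """Sketch of DKD-style dense distillation (hypothetical interface).

    student_logits, teacher_logits: arrays of shape (batch, num_classes).
    task_groups: list of class-index arrays, one per task in the task pool.
    num_sampled: if set, randomly select this many groups per optimization
                 step instead of using all of them (the paper's cheaper variant).
    """
    rng = rng or np.random.default_rng()
    groups = task_groups
    if num_sampled is not None and num_sampled < len(groups):
        chosen = rng.choice(len(groups), size=num_sampled, replace=False)
        groups = [task_groups[i] for i in chosen]
    loss = 0.0
    for g in groups:
        # Restrict both models' logits to this task's classes, so the
        # distillation target is the teacher's distribution *within* the task.
        p = softmax(teacher_logits[:, g])
        q = softmax(student_logits[:, g])
        loss += -(p * np.log(q + 1e-12)).sum(axis=-1).mean()
    return loss / len(groups)
```

Under this sketch, each group's loss only depends on that task's logits, so distilling every group transfers the cumulative knowledge of all previous tasks, while sampling a subset trades fidelity for compute.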