INCPrompt: Task-Aware Incremental Prompting for Rehearsal-Free Class-Incremental Learning (2401.11667v3)
Abstract: This paper introduces INCPrompt, an innovative continual learning solution that effectively addresses catastrophic forgetting. INCPrompt's key innovation lies in its use of an adaptive key-learner and task-aware prompts that capture task-relevant information. This combination encapsulates general knowledge shared across tasks while encoding task-specific knowledge. Our comprehensive evaluation across multiple continual learning benchmarks demonstrates INCPrompt's superiority over existing algorithms, showing its effectiveness in mitigating catastrophic forgetting while maintaining high performance. These results highlight the significant impact of task-aware incremental prompting on continual learning performance.
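The abstract pairs an adaptive key-learner with task-aware prompts. A common way such key-based prompt selection works in rehearsal-free methods of this family (e.g. L2P, DualPrompt) is to keep one learnable key per task and, at inference, match the input's query feature against the keys to pick which task prompt to attach. The sketch below illustrates that matching step only; it is not the authors' implementation, and all names, dimensions, and values are illustrative assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def select_prompt(query, keys, prompts):
    """Return the prompt whose (learnable) key best matches the query feature.

    query:   feature vector extracted from the input (assumed given here)
    keys:    one key vector per task, trained to attract that task's inputs
    prompts: the task-specific prompt associated with each key
    """
    scores = [cosine(query, k) for k in keys]
    best = max(range(len(scores)), key=scores.__getitem__)
    return prompts[best], best

# Toy example: two task keys; the query is closer to task 1's key,
# so task 1's prompt is selected.
keys = [[1.0, 0.0], [0.0, 1.0]]
prompts = ["prompt_task_0", "prompt_task_1"]
prompt, idx = select_prompt([0.1, 0.9], keys, prompts)
print(prompt, idx)  # prompt_task_1 1
```

In the actual method, the selected prompt would be prepended to the transformer's input tokens, and the keys would be trained jointly so that each task's inputs score highest against that task's key.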