CP-Prompt: Composition-Based Cross-modal Prompting for Domain-Incremental Continual Learning (2407.21043v2)
Abstract: The key challenge of cross-modal domain-incremental learning (DIL) is to enable the model to continuously learn from novel data with different feature distributions under the same task without forgetting old ones. However, existing top-performing methods still suffer from high forgetting rates, as they lack both intra-domain knowledge extraction and an inter-domain common prompting strategy. In this paper, we propose a simple yet effective framework, CP-Prompt, which trains a limited set of parameters to instruct a pre-trained model to learn new domains while avoiding forgetting of existing feature distributions. CP-Prompt captures intra-domain knowledge by compositionally inserting personalized prompts into multi-head self-attention layers, and then learns inter-domain knowledge with a common prompting strategy. CP-Prompt outperforms state-of-the-art baselines on three widely evaluated DIL tasks. The source code is available at https://github.com/dannis97500/CP_Prompt.
- Yu Feng
- Zhen Tian
- Yifan Zhu
- Zongfu Han
- Haoran Luo
- Guangwei Zhang
- Meina Song
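The abstract describes inserting personalized prompts into multi-head self-attention layers. A minimal single-head sketch of this prefix-style prompting idea is shown below; all names and shapes here are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_with_prompts(x, prompts, Wq, Wk, Wv):
    """Single-head self-attention where learnable prompt tokens are
    prepended to the input sequence (prefix-style prompting sketch).

    x:       (n, d) input token embeddings
    prompts: (p, d) domain-specific ("personalized") prompt vectors
    """
    z = np.concatenate([prompts, x], axis=0)          # (p + n, d)
    q, k, v = z @ Wq, z @ Wk, z @ Wv                  # project queries/keys/values
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))    # scaled dot-product attention
    out = attn @ v
    return out[prompts.shape[0]:]                     # keep outputs for original tokens

# Toy usage: 4 tokens of dimension 8, 2 prompt tokens.
rng = np.random.default_rng(0)
d, n, p = 8, 4, 2
x = rng.normal(size=(n, d))
prompts = rng.normal(size=(p, d))                     # frozen backbone, trainable prompts
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
y = self_attention_with_prompts(x, prompts, Wq, Wk, Wv)
print(y.shape)  # (4, 8)
```

Because only the prompt vectors (and not the attention weights) would be trained per domain, this illustrates how a small number of parameters can steer a frozen pre-trained model toward a new feature distribution.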