CP-Prompt: Composition-Based Cross-modal Prompting for Domain-Incremental Continual Learning (2407.21043v2)

Published 22 Jul 2024 in cs.CL, cs.AI, and cs.LG

Abstract: The key challenge of cross-modal domain-incremental learning (DIL) is to enable a model to continuously learn from novel data with different feature distributions under the same task without forgetting old ones. However, existing top-performing methods still suffer high forgetting rates because they lack intra-domain knowledge extraction and an inter-domain common prompting strategy. In this paper, we propose a simple yet effective framework, CP-Prompt, which trains a limited number of parameters to instruct a pre-trained model to learn new domains without forgetting existing feature distributions. CP-Prompt captures intra-domain knowledge by compositionally inserting personalized prompts into the multi-head self-attention layers and then learns inter-domain knowledge with a common prompting strategy. CP-Prompt outperforms state-of-the-art baselines on three widely evaluated DIL tasks. The source code is available at https://github.com/dannis97500/CP_Prompt.
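
To make the composition idea concrete, here is a minimal PyTorch sketch of the prompting scheme the abstract describes: a shared common prompt (inter-domain knowledge) is composed with a per-domain personalized prompt (intra-domain knowledge) and attached to a self-attention layer in prefix-tuning style, so only the small prompt tensors are trained. The class name, shapes, and the key/value-prefix mechanism are assumptions for illustration, not the authors' exact implementation; consult the linked repository for the real method.

```python
import torch
import torch.nn as nn

class ComposedPromptAttention(nn.Module):
    """Self-attention with composed common + domain-specific prompts (sketch)."""

    def __init__(self, dim, n_heads, n_domains, prompt_len=5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # One shared prompt intended to capture inter-domain knowledge (assumption).
        self.common_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # One personalized prompt per domain for intra-domain knowledge (assumption).
        self.domain_prompts = nn.Parameter(
            torch.randn(n_domains, prompt_len, dim) * 0.02)

    def forward(self, x, domain_id):
        # x: (batch, seq_len, dim); domain_id selects the personalized prompt.
        b = x.size(0)
        # Compose the shared and domain-specific prompts along the token axis.
        prompts = torch.cat(
            [self.common_prompt, self.domain_prompts[domain_id]], dim=0)
        prompts = prompts.unsqueeze(0).expand(b, -1, -1)
        # Prepend the composed prompts to keys/values; queries remain the
        # input tokens, so the frozen backbone attends over the prompts.
        kv = torch.cat([prompts, x], dim=1)
        out, _ = self.attn(x, kv, kv)
        return out
```

In a continual-learning loop, the pre-trained backbone would stay frozen while only `common_prompt` and the current domain's row of `domain_prompts` receive gradients, which is what keeps the trained parameter count small and limits forgetting of earlier domains.
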

Authors (7)
  1. Yu Feng (216 papers)
  2. Zhen Tian (60 papers)
  3. Yifan Zhu (84 papers)
  4. Zongfu Han (2 papers)
  5. Haoran Luo (31 papers)
  6. Guangwei Zhang (3 papers)
  7. Meina Song (14 papers)