
Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA (2412.01004v5)

Published 1 Dec 2024 in cs.CV

Abstract: Continual learning (CL) aims to accumulate knowledge from sequential data and task streams. Leveraging their strong generalization and flexibility, pre-trained vision-language embedding models such as CLIP (Contrastive Language-Image Pre-training) have been widely adopted and validated in CL. In addition to learning new knowledge, we investigate whether the pre-trained knowledge in CLIP can be retained, or even enhanced, in CL while incorporating new knowledge from a data stream. Existing CL methods primarily focus on continual downstream adaptation using components isolated from the pre-trained model (PTM), increasing inference complexity and limiting improvements to the PTM itself; some also retain knowledge by relying on additional reference data, resulting in high training costs. To address these limitations, we propose a universal and efficient CL approach for CLIP based on Dynamic Rank-Selective LoRA (CoDyRA), which directly improves the PTM while preserving the existing knowledge from both pre-training and CL. By analyzing how LoRA rank and placement affect learning and forgetting in CL, we design CoDyRA to adaptively perform rank-minimized parameter updates in different modules, based on their importance to the current data. This ensures a balance between knowledge acquisition (plasticity) and forgetting mitigation (stability). Our method operates without explicit domain or distribution prediction and does not rely on reference data, enabling seamless task integration while maintaining pre-trained capabilities. Moreover, CoDyRA preserves the original model architecture and deployment pipeline, introducing no additional inference overhead. Extensive experiments show that our approach enhances representations for new downstream data while retaining pre-trained knowledge, achieving state-of-the-art results.
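The core mechanism described in the abstract, low-rank updates whose effective rank is selected per module according to importance and then folded back into the pre-trained weights, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration rather than the paper's implementation: the class name `DynamicRankLoRALinear`, the per-rank gate parameter, the L1 sparsity penalty, and the `merge()` step are hypothetical stand-ins for the adaptive rank selection the abstract outlines.

```python
import torch
import torch.nn as nn


class DynamicRankLoRALinear(nn.Module):
    """Hypothetical sketch: a LoRA-style linear layer with per-rank gates.

    Each of the `max_rank` rank-1 components has a learnable gate; a sparsity
    penalty on the gates lets unimportant ranks shrink toward zero, so each
    module keeps only the ranks it needs for the current data.
    """

    def __init__(self, base_linear: nn.Linear, max_rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear                     # frozen pre-trained projection
        self.base.weight.requires_grad_(False)
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.A = nn.Parameter(torch.randn(max_rank, in_f) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_f, max_rank))        # up-projection
        self.gates = nn.Parameter(torch.ones(max_rank))            # per-rank importance
        self.scaling = alpha / max_rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gated low-rank update: W x + B diag(g) A x
        lora = (x @ self.A.t()) * self.gates        # (..., max_rank)
        lora = lora @ self.B.t() * self.scaling     # (..., out_f)
        return self.base(x) + lora

    def rank_sparsity_loss(self) -> torch.Tensor:
        # L1 penalty encouraging a low effective rank in this module
        return self.gates.abs().sum()

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        # Fold the gated update into the frozen weight: no extra inference cost
        delta = self.B @ torch.diag(self.gates) @ self.A * self.scaling
        self.base.weight += delta
        return self.base
```

In use, one might wrap selected attention and MLP linears of a CLIP backbone with such a module, add the rank-sparsity penalty to the task loss during continual training, and call `merge()` before deployment. Merging restores the original architecture and adds no inference overhead, consistent with the behavior the abstract claims for CoDyRA; the specific wrapping and penalty weighting here are assumptions for illustration.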
