CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning (2505.24816v1)

Published 30 May 2025 in cs.CV

Abstract: Class-Incremental Learning (CIL) aims to learn new classes sequentially while retaining the knowledge of previously learned classes. Recently, pre-trained models (PTMs) combined with parameter-efficient fine-tuning (PEFT) have shown remarkable performance in rehearsal-free CIL without requiring exemplars from previous tasks. However, existing adapter-based methods, which incorporate lightweight learnable modules into PTMs for CIL, create new adapters for each new task, leading to both parameter redundancy and failure to leverage shared knowledge across tasks. In this work, we propose ContinuaL Low-Rank Adaptation (CL-LoRA), which introduces a novel dual-adapter architecture combining task-shared adapters to learn cross-task knowledge and task-specific adapters to capture unique features of each new task. Specifically, the shared adapters utilize random orthogonal matrices and leverage knowledge distillation with gradient reassignment to preserve essential shared knowledge. In addition, we introduce learnable block-wise weights for task-specific adapters, which mitigate inter-task interference while maintaining the model's plasticity. We demonstrate CL-LoRA consistently achieves promising performance under multiple benchmarks with reduced training and inference computation, establishing a more efficient and scalable paradigm for continual learning with pre-trained models.

Authors (3)
  1. Jiangpeng He (41 papers)
  2. Zhihao Duan (38 papers)
  3. Fengqing Zhu (77 papers)

Summary

Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning

The paper "CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning" introduces a novel approach to tackle Class-Incremental Learning (CIL) with pre-trained models. The authors propose a dual-adapter architecture named CL-LoRA, integrating the strengths of both task-shared and task-specific adapters. This paper concerns itself with the challenges presented by CIL—specifically, catastrophic forgetting—and aims to provide solutions within the framework of parameter-efficient tuning.

Problem Statement

Class-Incremental Learning, where new classes are learned sequentially, poses significant challenges to maintaining performance on previously learned classes. Traditional methods often rely on storing past data for rehearsal, but this is impractical due to privacy and storage issues. Pre-trained models combined with Parameter-Efficient Fine-Tuning (PEFT) methods have shown promise, but existing adapter-based approaches suffer from parameter redundancy and limited knowledge sharing across tasks.

Contributions

The paper introduces a Continual Low-Rank Adaptation (CL-LoRA) framework with the following contributions:

  1. Dual-Adapter Architecture: A combination of task-shared adapters for cross-task knowledge retention and task-specific adapters for learning features unique to each new task. The shared adapters use random orthogonal matrices, and learnable block-wise weights with orthogonal constraints are introduced for the task-specific adapters to mitigate inter-task interference (see the first sketch after this list).
  2. Knowledge Distillation and Gradient Reassignment: Knowledge distillation is applied with an early-exit strategy at the transition point between shared and task-specific adapters. In addition, gradient reassignment based on the L2 norm of the weight vectors of the task-shared adapters promotes reliable knowledge transfer (see the second sketch after this list).
  3. Efficient and Scalable Approach: The design reduces both training and inference computation relative to methods that add a new adapter for every task, establishing a more efficient and scalable paradigm for continual learning with pre-trained models.
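
To make the dual-adapter design concrete, the following is a minimal PyTorch sketch of a LoRA-augmented linear layer with a frozen, randomly orthogonal task-shared down-projection, a shared up-projection, and a growing set of task-specific adapters gated by learnable block-wise weights. This is not the authors' implementation; the names (DualLoRALinear, random_orthogonal) and details such as which factor of the shared adapter is frozen are illustrative assumptions.

```python
# Hedged sketch of a dual-adapter LoRA layer (not the authors' code).
import torch
import torch.nn as nn


def random_orthogonal(rows: int, cols: int) -> torch.Tensor:
    """Random matrix with orthonormal columns, obtained via QR decomposition."""
    q, _ = torch.linalg.qr(torch.randn(rows, cols))
    return q[:, :cols]


class DualLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base                              # frozen pre-trained layer
        for p in self.base.parameters():
            p.requires_grad_(False)
        d_in, d_out = base.in_features, base.out_features
        # Task-shared adapter: random orthogonal down-projection kept fixed,
        # shared up-projection reused and updated across tasks (assumption).
        self.shared_down = nn.Parameter(random_orthogonal(d_in, rank), requires_grad=False)
        self.shared_up = nn.Parameter(torch.zeros(rank, d_out))
        # Task-specific adapters, one per task, each gated by a learnable
        # block-wise scalar weight.
        self.task_down = nn.ParameterList()
        self.task_up = nn.ParameterList()
        self.block_weight = nn.ParameterList()

    def add_task(self, rank: int = 4) -> None:
        """Grow a new task-specific adapter when a new task arrives."""
        d_in, d_out = self.base.in_features, self.base.out_features
        self.task_down.append(nn.Parameter(torch.randn(d_in, rank) * 0.01))
        self.task_up.append(nn.Parameter(torch.zeros(rank, d_out)))
        self.block_weight.append(nn.Parameter(torch.ones(1)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x) + (x @ self.shared_down) @ self.shared_up
        for down, up, w in zip(self.task_down, self.task_up, self.block_weight):
            out = out + torch.sigmoid(w) * ((x @ down) @ up)
        return out
```

In a continual-learning loop, adapters from earlier tasks would typically be frozen once their task is finished, so only the shared up-projection, the current task's adapter, and the block-wise weights receive gradients.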
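
The gradient reassignment for task-shared adapters is described as relying on the L2 norm of weight vectors; the exact rule is not reproduced here, so the sketch below is one hedged interpretation. After the combined task and distillation loss has been back-propagated, per-column gradients of the shared up-projection are rescaled by the inverse of their relative column norms, so that columns assumed to carry more consolidated shared knowledge are updated more conservatively.

```python
# Hedged sketch of L2-norm-based gradient reassignment (illustrative only).
import torch


@torch.no_grad()
def reassign_gradients(shared_up: torch.Tensor, eps: float = 1e-8) -> None:
    """Rescale each column's gradient of a (rank, d_out) shared up-projection.

    Columns with larger L2 norm, assumed to hold more consolidated cross-task
    knowledge, receive proportionally smaller gradient updates.
    """
    if shared_up.grad is None:
        return
    col_norm = shared_up.norm(dim=0)              # per-column weight norm, shape (d_out,)
    scale = col_norm.mean() / (col_norm + eps)    # relative inverse norm
    shared_up.grad.mul_(scale.clamp(max=1.0))     # shrink, never amplify, updates


# Illustrative use inside a training step:
#   loss = task_loss + distill_loss      # KD applied at the shared/specific split
#   loss.backward()
#   reassign_gradients(layer.shared_up)
#   optimizer.step()
```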

Results and Analysis

Experiments were conducted on multiple benchmarks, including CIFAR-100, ImageNet-R, ImageNet-A, and VTAB. CL-LoRA achieves high final and average accuracy while significantly reducing the number of trainable parameters compared to existing state-of-the-art methods. On the more challenging ImageNet-R and ImageNet-A benchmarks in particular, CL-LoRA shows robust improvements, indicating that it handles distribution shifts effectively.

Implications and Future Directions

The proposed CL-LoRA framework provides practical improvements in rehearsal-free continual learning and highlights the importance of both shared knowledge retention and task-specific adaptation. The findings suggest that leveraging shared knowledge can reduce reliance on task identity during inference, potentially paving the way for advances in online settings and settings with blurry task boundaries.

Future work could explore other LoRA configurations, including adapting different components of the multi-head self-attention (MHSA) layers and further optimizing where shared adapters are placed within transformer blocks. Moreover, dynamically adjusting the split between shared and task-specific adapters based on task complexity and data characteristics remains an open challenge. The paper lays a valuable foundation for these investigations and establishes a pathway toward scalable, efficient continual learning with pre-trained models.
