DER: Dynamically Expandable Representation for Class Incremental Learning (2103.16788v1)

Published 31 Mar 2021 in cs.CV and cs.LG

Abstract: We address the problem of class incremental learning, which is a core step towards achieving adaptive vision intelligence. In particular, we consider the task setting of incremental learning with limited memory and aim to achieve a better stability-plasticity trade-off. To this end, we propose a novel two-stage learning approach that utilizes a dynamically expandable representation for more effective incremental concept modeling. Specifically, at each incremental step, we freeze the previously learned representation and augment it with additional feature dimensions from a new learnable feature extractor. This enables us to integrate new visual concepts while retaining learned knowledge. We dynamically expand the representation according to the complexity of novel concepts by introducing a channel-level mask-based pruning strategy. Moreover, we introduce an auxiliary loss to encourage the model to learn diverse and discriminative features for novel concepts. We conduct extensive experiments on three class incremental learning benchmarks, and our method consistently outperforms other methods by a large margin.

Citations (380)

Summary

  • The paper proposes a two-stage learning strategy that dynamically expands representations by integrating new feature extractors while preserving old knowledge.
  • It employs an auxiliary loss and channel-level mask-based pruning to effectively mitigate catastrophic forgetting in class incremental learning.
  • Experimental results on CIFAR-100, ImageNet-100, and ImageNet-1000 demonstrate superior forward and backward transfer compared to existing methods.

DER: Dynamically Expandable Representation for Class Incremental Learning

The paper "DER: Dynamically Expandable Representation for Class Incremental Learning" addresses the intricate problem of class incremental learning (CIL) with a focus on achieving a balanced stability-plasticity trade-off using limited memory resources. The authors propose a novel approach termed Dynamically Expandable Representation (DER) that incrementally expands the learned representation while retaining old knowledge to improve performance in class incremental learning scenarios.

Methodology

The core contribution of the paper is a two-stage learning strategy built on a dynamically expandable representation. At each incremental step, the previously learned representation is frozen and augmented with a new learnable feature extractor. This allows new visual concepts to be integrated while old knowledge is retained, which is crucial for tackling catastrophic forgetting and for balancing stability (retention of old knowledge) with plasticity (acquisition of new knowledge).
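
Concretely, using notation adapted from the paper (the symbols here are paraphrased, not verbatim): if $\Phi_{t-1}$ is the representation frozen after step $t-1$, $\mathcal{F}_t$ the new extractor added at step $t$, and $\mathcal{H}_t$ the unified classifier head, the expanded "super-feature" and the prediction are

$$
u_t(x) = \big[\,\Phi_{t-1}(x),\ \mathcal{F}_t(x)\,\big], \qquad
p_t(y \mid x) = \operatorname{softmax}\!\big(\mathcal{H}_t(u_t(x))\big),
$$

where $[\cdot,\cdot]$ denotes feature concatenation and only $\mathcal{F}_t$ and $\mathcal{H}_t$ receive gradients at step $t$.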

Key methodological components include:

  1. Super-Feature Extractor: The model expands its feature dimensionality at each incremental step by concatenating the output of a new learnable extractor with the existing frozen representation. This structure preserves past knowledge by keeping previously learned features intact while adding capacity for new ones (a PyTorch-style sketch of this structure follows the list).
  2. Auxiliary Loss Function: An auxiliary classifier, applied only to the newly added features, discriminates the novel classes from the old ones (collapsed into a single category). This loss encourages the new extractor to learn diverse, discriminative features for the novel concepts rather than duplicating the frozen representation.
  3. Mask-Based Pruning: A channel-level mask-based pruning strategy dynamically adjusts the size of each new extractor to the complexity of the novel concepts, keeping the expanded representation compact and reducing redundancy (see the gating sketch below).
  4. Two-Stage Strategy: Feature representation learning and classifier learning are decoupled: after the representation is trained in stage one, the classifier head is re-trained on class-balanced data in stage two, reducing the bias induced by the imbalance between old and new classes and improving overall performance.
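
The structure in components 1 and 2 can be made concrete with a minimal PyTorch-style sketch. This is an illustrative reconstruction under stated assumptions, not the authors' released code: `make_extractor` is assumed to return a backbone mapping images to `feat_dim`-dimensional features, and the class and method names are ours.

```python
# Illustrative sketch of DER's expandable representation (components 1-2).
# Assumption: make_extractor() returns a backbone producing feat_dim features.
import torch
import torch.nn as nn

class ExpandableNet(nn.Module):
    def __init__(self, make_extractor, feat_dim, num_classes):
        super().__init__()
        self.make_extractor = make_extractor
        self.feat_dim = feat_dim
        self.extractors = nn.ModuleList([make_extractor()])
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.aux_classifier = None

    def expand(self, num_new_classes):
        """Start an incremental step: freeze old extractors, add a new one."""
        for ext in self.extractors:              # stability: old features fixed
            ext.requires_grad_(False)
            ext.eval()
        self.extractors.append(self.make_extractor())   # plasticity
        total_dim = self.feat_dim * len(self.extractors)
        num_old = self.classifier.out_features
        # Unified head over the concatenated (super-)feature.
        self.classifier = nn.Linear(total_dim, num_old + num_new_classes)
        # Auxiliary head on the *new* features only: new classes vs. all old
        # classes collapsed into one extra category.
        self.aux_classifier = nn.Linear(self.feat_dim, num_new_classes + 1)

    def forward(self, x):
        feats = [ext(x) for ext in self.extractors]
        logits = self.classifier(torch.cat(feats, dim=1))  # super-feature
        aux_logits = None
        if self.aux_classifier is not None:
            aux_logits = self.aux_classifier(feats[-1])
        return logits, aux_logits
```

During stage one, the training loss would combine standard cross-entropy on `logits` with the auxiliary cross-entropy on `aux_logits` (all old classes remapped to a single extra label), pushing the newest extractor toward features that discriminate the novel classes instead of duplicating the frozen ones.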

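Component 3 can likewise be sketched as a learnable per-channel gate in the spirit of HAT-style masking; the `ChannelGate` and `sparsity_loss` below are a hedged approximation of the paper's channel-level mask, not its exact formulation.

```python
# Hedged sketch of channel-level mask-based pruning (component 3).
# A differentiable per-channel gate scales each channel; a sparsity penalty
# drives gate values toward zero so low-valued channels can be pruned later.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, num_channels, temperature=4.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_channels))
        self.temperature = temperature   # higher -> mask closer to hard 0/1

    def mask(self):
        return torch.sigmoid(self.temperature * self.logits)

    def forward(self, x):                # x: (B, C, H, W) conv activations
        return x * self.mask().view(1, -1, 1, 1)

def sparsity_loss(gates):
    """Mean mask activation across all gated layers; added to the stage-one
    loss so the new extractor keeps only the channels the novel classes need."""
    return torch.stack([g.mask().mean() for g in gates]).mean()
```

After stage one, channels whose mask value falls below a threshold would be pruned, keeping the expanded extractor compact. In stage two, the representation is frozen and only the unified classifier is re-trained on class-balanced data, which is the decoupling described in component 4.
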
Experimental Results

The authors validate their approach on three well-known CIL benchmarks: CIFAR-100, ImageNet-100, and ImageNet-1000. Across these datasets, DER not only outperforms existing approaches such as iCaRL, PODNet, and TPCIL, but also scales robustly as the number of incremental steps and the number of classes per step vary. Notably, DER achieves significant improvements in average incremental accuracy and is strongest in protocols with many incremental steps, illustrating its effectiveness at both backward and forward knowledge transfer.

Implications and Future Work

The positive backward and forward transfer observed in DER marks a significant advancement in class incremental learning methodologies, suggesting new pathways for developing truly adaptive vision systems that mimic human learning processes over time. The ability to dynamically and efficiently expand model capacity without catastrophic forgetting aligns with potential applications in dynamic real-world environments, such as robotics and autonomous systems, where new data is frequently encountered.

Theoretically, this work points to further architectural innovations in neural networks designed specifically for incremental learning. Future work could enhance DER's adaptability by incorporating meta-learning paradigms or reinforcement learning strategies to continuously refine representation learning.

In conclusion, DER represents a substantial step forward in class incremental learning, providing a sophisticated yet practical approach to expanding neural representations dynamically. This work paves the way for further research and development of systems capable of more efficient lifelong learning, balancing innovation in architecture design with practical applicability across a broad range of domains.