
Few-Shot Incremental Learning with Continually Evolved Classifiers (2104.03047v1)

Published 7 Apr 2021 in cs.CV, cs.AI, and cs.LG

Abstract: Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points, without forgetting knowledge of old classes. The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems. Moreover, as training data come in sequence in FSCIL, the learned classifier can only provide discriminative information in individual sessions, while FSCIL requires all classes to be involved for evaluation. In this paper, we address the FSCIL problem from two aspects. First, we adopt a simple but effective decoupled learning strategy of representations and classifiers, in which only the classifiers are updated in each incremental session, which avoids knowledge forgetting in the representations. By doing so, we demonstrate that a pre-trained backbone plus a non-parametric class mean classifier can beat state-of-the-art methods. Second, to make the classifiers learned on individual sessions applicable to all classes, we propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation. To enable the learning of CEC, we design a pseudo incremental learning paradigm that episodically constructs a pseudo incremental learning task to optimize the graph parameters by sampling data from the base dataset. Experiments on three popular benchmark datasets, including CIFAR100, miniImageNet, and Caltech-UCSD Birds-200-2011 (CUB200), show that our method significantly outperforms the baselines and sets new state-of-the-art results with remarkable advantages.

Few-Shot Incremental Learning with Continually Evolved Classifiers: A Detailed Examination

In machine learning research, few-shot class-incremental learning (FSCIL) is a challenging paradigm that combines incremental learning with the constraints of few-shot learning. The goal is to develop algorithms that incrementally learn new classes from limited data while retaining knowledge of previously acquired classes. The inherent difficulty in FSCIL is twofold: the scarce data of new classes invites overfitting, and sequential training exacerbates catastrophic forgetting of old classes. This paper addresses the FSCIL problem by combining a decoupled learning strategy with graph-based classifier evolution, preserving previously learned representations while keeping classifiers discriminative over all classes seen across sessions.

Key Innovations and Methodology

The paper proposes a two-pronged approach to tackle the challenges of FSCIL. First, it employs a decoupled learning strategy for representations and classifiers: the representation is trained in the initial session on abundant base-class data and then frozen in subsequent sessions to prevent knowledge forgetting. New classes are handled by a non-parametric class-mean classifier built on top of the fixed backbone, and this simple baseline alone is shown to beat prior state-of-the-art methods.
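The following is a minimal sketch of this decoupled baseline, assuming a generic PyTorch backbone that maps images to embedding vectors; function and variable names are illustrative, not the authors' code.

```python
# Sketch of the decoupled strategy: the backbone is trained only on the base
# session and then frozen; each class (base or novel) is represented by the
# mean of its normalized embeddings, and prediction is nearest class mean
# under cosine similarity. Assumes each class has at least one sample.
import torch
import torch.nn.functional as F


@torch.no_grad()
def class_mean_prototypes(backbone, images, labels, num_classes):
    """Average the L2-normalized embeddings of each class into a prototype."""
    backbone.eval()
    feats = F.normalize(backbone(images), dim=-1)            # [N, D]
    protos = feats.new_zeros(num_classes, feats.size(-1))    # [C, D]
    for c in range(num_classes):
        protos[c] = feats[labels == c].mean(dim=0)
    return F.normalize(protos, dim=-1)


@torch.no_grad()
def predict(backbone, images, prototypes):
    """Nearest class-mean classification via cosine similarity."""
    feats = F.normalize(backbone(images), dim=-1)             # [N, D]
    logits = feats @ prototypes.t()                           # [N, C]
    return logits.argmax(dim=-1)
```

Because the backbone is never updated after the base session, old-class features cannot drift, which is what makes this simple baseline surprisingly strong.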

The second innovation is the Continually Evolved Classifier (CEC), which adapts classifier weights learned in individual sessions so that they remain applicable across all classes. This is achieved with a graph model that propagates contextual information between classifiers: the CEC uses a graph attention network (GAT) to refine the classifier weights so that they stay discriminative over both historical and current classes. Because incremental sessions provide too few samples to train the graph directly, the authors introduce a pseudo incremental learning paradigm that episodically constructs pseudo incremental tasks from the base dataset to optimize the graph parameters.
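The sketch below illustrates the core idea of graph-based classifier adaptation; it uses a single-head self-attention layer over the stacked classifier vectors as a stand-in for the paper's GAT, so the class and parameter names are assumptions rather than the authors' exact architecture.

```python
# Illustrative graph-style adaptation in the spirit of CEC: the per-session
# classifier vectors are treated as nodes of a fully connected graph, and one
# attention layer lets each classifier absorb context from all the others
# before scoring features. Not the paper's exact GAT implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassifierGraphAdapter(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)

    def forward(self, classifiers):                  # [C, D] stacked weights
        q, k, v = self.q(classifiers), self.k(classifiers), self.v(classifiers)
        attn = F.softmax(q @ k.t() / classifiers.size(-1) ** 0.5, dim=-1)
        # Residual update: propagate context between classifiers, then renormalize.
        return F.normalize(classifiers + attn @ v, dim=-1)


# Usage sketch: adapt the concatenated base + novel classifiers, then score
# normalized features against the adapted weights.
# adapter = ClassifierGraphAdapter(dim=512)
# adapted = adapter(torch.cat([base_protos, novel_protos], dim=0))
# logits = F.normalize(features, dim=-1) @ adapted.t()
```

In the paper, the parameters of this adaptation module are learned only on the base dataset via the pseudo incremental episodes, so no gradient updates are needed when real novel classes arrive.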

Experimental Validation and Results

The efficacy of the proposed method is validated through experiments on the CIFAR100, miniImageNet, and CUB200 benchmarks. The proposed approach sets new state-of-the-art results, significantly outperforming existing methods. Notably, the decoupled training strategy substantially reduces overfitting and forgetting, while the CEC further improves classification accuracy across incremental sessions.

Implications and Future Directions

The implications of these findings are profound both in theory and practice. The decoupled training strategy represents an important advancement in understanding how to maintain representation fidelity in the face of incremental exposure to new classes. The use of graph-based adaptation for classifiers underscores the value of incorporating relational models into the learning process to better capture class relationships and adapt to new data effectively.

Looking forward, this research opens several avenues for further investigation. Continued exploration into optimizing graph model architectures for better context propagation could yield even more robust few-shot incremental learning systems. Additionally, the integration of these strategies with other paradigms such as reinforcement learning could further expand the applicability of FSCIL methodologies in real-world scenarios where data collection and class definitions are dynamically evolving.

In summary, the paper successfully advances the field of FSCIL by introducing a novel, effective method combining decoupled training of representations and graph-based classifier adaptation, demonstrating significant improvements in managing class diversity and learning transfer across incremental sessions. The strong numerical results and the strategic approach undertaken by the authors signify an important contribution to the ongoing research in this challenging domain.

Authors (6)
  1. Chi Zhang (566 papers)
  2. Nan Song (18 papers)
  3. Guosheng Lin (157 papers)
  4. Yun Zheng (49 papers)
  5. Pan Pan (24 papers)
  6. Yinghui Xu (48 papers)
Citations (249)