Class-incremental learning: survey and performance evaluation on image classification (2010.15277v3)

Published 28 Oct 2020 in cs.LG and cs.CV

Abstract: For future learning systems, incremental learning is desirable because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required to be stored -- also important when privacy limitations are imposed; and learning that more closely resembles human learning. The main challenge for incremental learning is catastrophic forgetting, which refers to the precipitous drop in performance on previously learned tasks after learning a new one. Incremental learning of deep neural networks has seen explosive growth in recent years. Initial work focused on task-incremental learning, where a task-ID is provided at inference time. Recently, we have seen a shift towards class-incremental learning where the learner must discriminate at inference time between all classes seen in previous tasks without recourse to a task-ID. In this paper, we provide a complete survey of existing class-incremental learning methods for image classification, and in particular, we perform an extensive experimental evaluation on thirteen class-incremental methods. We consider several new experimental scenarios, including a comparison of class-incremental methods on multiple large-scale image classification datasets, an investigation into small and large domain shifts, and a comparison of various network architectures.

Authors (6)
  1. Marc Masana (20 papers)
  2. Xialei Liu (35 papers)
  3. Mikel Menta (3 papers)
  4. Andrew D. Bagdanov (47 papers)
  5. Joost van de Weijer (133 papers)
  6. Bartlomiej Twardowski (9 papers)
Citations (551)

Summary

  • The paper compares 13 class-incremental learning methods for image classification by evaluating performance metrics across diverse datasets.
  • It demonstrates that regularization and exemplar-based techniques significantly mitigate catastrophic forgetting and improve classification accuracy.
  • The study highlights architecture-dependent effects and calls for hybrid approaches to tackle inter-task confusion and large domain shifts.

Insights into Class-Incremental Learning for Image Classification

The paper provides a comprehensive survey and performance evaluation of Class-Incremental Learning (CIL) methods, focusing on their application to image classification. In particular, the paper scrutinizes thirteen methods across various datasets and scenarios, exploring their efficacy in addressing the challenges of CIL, such as catastrophic forgetting, inter-task confusion, and task-recency bias.

Experimental Setup and Methodologies

The experiments are conducted using diverse datasets, including CIFAR-100, VGGFace2, and ImageNet, among others. Each dataset is partitioned in various ways to simulate incremental learning scenarios. These include equal distribution of classes across tasks and variations with larger initial tasks, which serve as a pre-trained starting point. The evaluation employs average accuracy as the primary metric, considering performance across the sequence of learned tasks.
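
As a concrete illustration of this setup, the minimal Python sketch below partitions class indices into tasks (optionally with a larger first task) and computes the average accuracy over all tasks seen so far. The function names and the `acc_matrix` layout are illustrative assumptions, not taken from the paper's code.

```python
# Hedged sketch of the experimental protocol described above.
# `acc_matrix[t][k]` is assumed to hold accuracy on task k's test set
# after training on tasks 0..t; all names are illustrative.

def split_classes_into_tasks(num_classes, num_tasks, first_task_size=None):
    """Partition class indices into tasks, optionally with a larger first task."""
    classes = list(range(num_classes))
    if first_task_size is None:
        per_task = num_classes // num_tasks
        return [classes[i * per_task:(i + 1) * per_task] for i in range(num_tasks)]
    rest = classes[first_task_size:]
    per_task = len(rest) // (num_tasks - 1)
    return [classes[:first_task_size]] + [
        rest[i * per_task:(i + 1) * per_task] for i in range(num_tasks - 1)
    ]

def average_accuracy(acc_matrix, t):
    """Average accuracy over all tasks seen so far, after learning task t."""
    return sum(acc_matrix[t][:t + 1]) / (t + 1)

# Example: 100 classes split into a 50-class first task plus five 10-class tasks.
tasks = split_classes_into_tasks(100, 6, first_task_size=50)
```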

Key Findings and Results

The paper analyzes regularization methods that mitigate catastrophic forgetting by constraining how much the network's parameters or outputs drift while new tasks are learned. Elastic Weight Consolidation (EWC) and Learning without Forgetting (LwF) are representative of weight regularization and data regularization, respectively. In the reported experiments, EWC often outperforms LwF, especially in scenarios with smaller domain shifts.
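
For reference, the sketch below writes out the two regularization terms in a PyTorch-style form: the quadratic EWC penalty weighted by a precomputed Fisher information estimate, and the LwF knowledge-distillation loss against the previous model's logits. The `fisher` and `old_params` dictionaries, the regularization strength, and the temperature are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def ewc_penalty(model, fisher, old_params, lam=5000.0):
    """EWC: penalize moving parameters that were important for previous tasks.

    `fisher` and `old_params` are dicts keyed by parameter name, computed
    after finishing the previous task (illustrative names).
    """
    loss = 0.0
    for name, param in model.named_parameters():
        loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

def lwf_distillation(new_logits, old_logits, T=2.0):
    """LwF: distill the previous model's soft predictions on the current batch."""
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
```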

Exemplar-based methods, such as iCaRL and EEIL, show significant advantages by replaying stored samples from previous tasks to maintain classification accuracy. The choice of exemplar sampling strategy, herding or random, has only a minor effect on performance, though herding is slightly favored over longer task sequences.
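
The herding strategy mentioned above can be sketched as a greedy selection that keeps the mean of the chosen exemplars close to the class mean in feature space. The snippet assumes L2-normalised per-class feature vectors and uses illustrative names; it is a simplified reading of iCaRL-style herding, not the authors' implementation.

```python
import numpy as np

def herding_selection(features, m):
    """Greedily pick m exemplars whose mean best approximates the class mean.

    `features` is an (N, D) array of L2-normalised feature vectors for one class.
    """
    class_mean = features.mean(axis=0)
    selected, running_sum = [], np.zeros_like(class_mean)
    for k in range(1, m + 1):
        # score each sample by how close the running mean gets to the class mean
        gains = features @ (k * class_mean - running_sum)
        gains[selected] = -np.inf  # never pick the same sample twice
        idx = int(np.argmax(gains))
        selected.append(idx)
        running_sum += features[idx]
    return selected

def random_selection(features, m, seed=0):
    """Baseline: sample m exemplars uniformly at random."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(features), size=m, replace=False).tolist()
```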

The paper also highlights bias-correction methods like BiC and LUCIR, which explicitly address the task-recency bias. These methods demonstrate superior performance in scenarios with a large number of classes, confirming the importance of addressing class imbalance during training and evaluation phases.
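
As an illustration of the bias-correction idea, the sketch below shows a BiC-style layer with two parameters (alpha, beta) that rescales only the logits of the newly added classes; in BiC these parameters are fitted on a small balanced validation set after training each task. The class name, shapes, and mask convention are assumptions for illustration, not the authors' implementation.

```python
import torch

class BiasLayer(torch.nn.Module):
    """Two-parameter correction applied to the logits of the current task's classes."""

    def __init__(self):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.ones(1))
        self.beta = torch.nn.Parameter(torch.zeros(1))

    def forward(self, logits, new_class_mask):
        # new_class_mask: boolean tensor of shape (num_classes,), True for new classes
        corrected = self.alpha * logits + self.beta
        return torch.where(new_class_mask, corrected, logits)

# Hypothetical usage: 100 classes in total, the last 10 added by the current task.
logits = torch.randn(32, 100)
mask = torch.zeros(100, dtype=torch.bool)
mask[90:] = True
corrected = BiasLayer()(logits, mask)
```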

Network Architecture and Domain Shifts

Experiments with different architectures, such as ResNet and MobileNet, reveal that the relative performance of CIL methods is architecture-dependent. Networks with skip connections (e.g., ResNet) often favor different strategies than architectures without them, such as AlexNet.

In scenarios characterized by varying domain shifts, the results indicate that larger shifts between tasks introduce challenges inadequately addressed by current methods. The research suggests that overcoming inter-task confusion remains an open problem in these contexts.

Implications and Future Directions

The findings suggest that while current CIL methods have progressed significantly, substantial improvements are needed, especially in handling large domain shifts and optimizing exemplar usage. Future research might focus on developing hybrid approaches that better integrate exemplar learning and feature rehearsal, while leveraging meta-learning and unsupervised methods.

Moreover, task-free settings and losses beyond cross-entropy are emerging areas that could redefine the efficiency and applicability of incremental learning systems. The paper offers rich ground for further exploration and encourages the development of privacy-preserving techniques, which remain crucial in real-world applications.

In summary, the paper serves as a vital resource, bridging the gap between foundational principles and emergent trends in class-incremental learning, encouraging robust evaluation protocols and enhanced model architectures for real-world applications.