
12 mJ per Class On-Device Online Few-Shot Class-Incremental Learning (2403.07851v1)

Published 12 Mar 2024 in cs.LG and cs.CV

Abstract: Few-Shot Class-Incremental Learning (FSCIL) enables machine learning systems to expand their inference capabilities to new classes using only a few labeled examples, without forgetting the previously learned classes. Classical backpropagation-based learning and its variants are often unsuitable for battery-powered, memory-constrained systems at the extreme edge. In this work, we introduce Online Few-Shot Class-Incremental Learning (O-FSCIL), based on a lightweight model consisting of a pretrained and metalearned feature extractor and an expandable explicit memory storing the class prototypes. The architecture is pretrained with a novel feature orthogonality regularization and metalearned with a multi-margin loss. For learning a new class, our approach extends the explicit memory with novel class prototypes, while the remaining architecture is kept frozen. This allows learning previously unseen classes based on only a few examples with one single pass (hence online). O-FSCIL obtains an average accuracy of 68.62% on the FSCIL CIFAR100 benchmark, achieving state-of-the-art results. Tailored for ultra-low-power platforms, we implement O-FSCIL on the 60 mW GAP9 microcontroller, demonstrating online learning capabilities within just 12 mJ per new class.

Summary

  • The paper introduces an online few-shot class-incremental learning method that adapts to new classes using only 12 mJ of energy per class.
  • It employs a frozen MobileNetV2 backbone, feature orthogonality regularization, and a multi-margin loss to efficiently learn from minimal samples.
  • Experiments on CIFAR100 demonstrate state-of-the-art performance with 68.62% accuracy, enabling practical deployment on energy-constrained edge devices.

On-Device Online Few-Shot Class-Incremental Learning with Ultra-Low Energy Requirements

Introduction

Few-Shot Class-Incremental Learning (FSCIL) represents a critical challenge in deploying intelligent systems in dynamic environments, particularly on edge devices. The objective in FSCIL is to enable models to expand their knowledge by learning new classes from a very limited number of samples, without forgetting previously acquired information. This paper introduces Online Few-Shot Class-Incremental Learning (O-FSCIL), an approach that stands out for its lightweight architecture and extremely low energy consumption, making it well suited to ultra-low-power edge devices. The approach incorporates a novel feature orthogonality regularization during pretraining and a multi-margin loss during the metalearning stage, significantly improving the model's ability to generalize from few examples.

Approach

The essence of the O-FSCIL method lies in its ability to adapt to new classes on-device with minimal computational and memory overhead. The system relies on a pretrained feature extractor that remains frozen during the class-incremental learning phase, avoiding retraining and its associated computational cost. Each new class is learned by directly updating an explicit memory module that stores class prototypes, enabling the model to learn from a single pass over the new class samples, which is why the process is called online learning.
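To make the prototype-update mechanism concrete, the sketch below shows one minimal way such an expandable explicit memory could work: a frozen backbone embeds the few support examples in a single forward pass, their normalized mean is appended as a new class prototype, and inference is nearest-prototype matching by cosine similarity. The class and method names here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class ExplicitMemory:
    """Expandable prototype store; a minimal sketch of the explicit-memory
    idea, not the paper's actual code."""

    def __init__(self, feat_dim: int):
        self.prototypes = torch.empty(0, feat_dim)  # one row per learned class

    @torch.no_grad()
    def add_class(self, backbone, support_x: torch.Tensor) -> None:
        # Single forward pass over the few labeled examples (online):
        # the frozen backbone embeds them, and their normalized mean
        # becomes the prototype of the new class.
        feats = F.normalize(backbone(support_x), dim=1)
        proto = F.normalize(feats.mean(dim=0, keepdim=True), dim=1)
        self.prototypes = torch.cat([self.prototypes, proto], dim=0)

    @torch.no_grad()
    def classify(self, backbone, x: torch.Tensor) -> torch.Tensor:
        # Nearest-prototype inference via cosine similarity.
        feats = F.normalize(backbone(x), dim=1)
        return (feats @ self.prototypes.T).argmax(dim=1)
```

Because only the prototype matrix grows, adding a class costs one forward pass and one row append, with no gradient computation at all.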

The architecture employs a MobileNetV2 backbone, striking a balance between computational efficiency and model capacity; this choice is instrumental in conserving energy on resource-constrained devices. Moreover, O-FSCIL introduces a feature orthogonality regularization during pretraining that enhances feature separability, while a multi-margin loss is used in the metalearning phase to foster better generalization to new classes, addressing the challenge of learning from few samples without catastrophic forgetting.
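For intuition, the snippet below sketches plausible forms of these two objectives: an orthogonality penalty that pushes features of different classes toward zero cosine similarity (the pretraining regularizer), and PyTorch's standard multi-class margin loss as a stand-in for the multi-margin loss used in metalearning. The exact formulations and the margin value in the paper may differ.

```python
import torch
import torch.nn.functional as F

def orthogonality_penalty(feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Pretraining regularizer (assumed form): penalize squared cosine
    # similarity between normalized features of *different* classes.
    z = F.normalize(feats, dim=1)
    sim = z @ z.T                                    # pairwise cosine similarities
    diff_class = labels[:, None] != labels[None, :]  # mask out same-class pairs
    return (sim[diff_class] ** 2).mean()

# Metalearning objective: PyTorch's built-in multi-class margin loss,
# used here as a stand-in; the margin value is an assumption.
margin_loss = torch.nn.MultiMarginLoss(margin=0.1)
```

Keeping the two objectives in separate stages matches the paper's pipeline: the orthogonality term shapes the feature space during pretraining, and the margin loss is applied during metalearning.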

Results

The efficacy of O-FSCIL is demonstrated through extensive experiments on the CIFAR100 FSCIL benchmark. The method achieves state-of-the-art performance, with an average accuracy of 68.62% across incremental learning sessions when using a ResNet-12 backbone. When adapted to the more lightweight MobileNetV2 backbone variants, the system maintains high accuracy while significantly reducing computational and storage requirements. Notably, the implementation on the GAP9 microcontroller performs online learning within an energy budget of just 12 mJ per new class, underscoring its suitability for battery-powered edge devices.
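As a quick sanity check on these figures (assuming the stated 60 mW is the average power draw during learning), the reported energy budget corresponds to roughly 0.2 s of on-device computation per new class:

```python
# Energy = power x time, so time = energy / power.
energy_per_class_j = 12e-3            # 12 mJ per new class
power_w = 60e-3                       # GAP9 running at 60 mW
print(energy_per_class_j / power_w)   # -> 0.2 seconds per class
```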

Implications and Future Directions

The O-FSCIL framework marks a significant step forward in on-device class-incremental learning, making it particularly relevant for real-world applications where adaptability and low energy consumption are paramount. Its efficiency opens new avenues for deploying intelligent systems in scenarios previously considered impractical due to energy and computational limits. Future research might explore the applicability of these methods to other domains and datasets, further refine the balance between model compactness and performance, and investigate ways to reduce the energy consumed during learning even further. Additionally, quantizing model components without sacrificing accuracy remains an exciting area for continued exploration.