Coarse-To-Fine Few-Shot Class-Incremental Learning

Updated 26 September 2025
  • The paper introduces a novel hyperbolic embedding framework that aligns with hierarchical data to improve fine-grained recognition in few-shot class-incremental learning.
  • It leverages contrastive optimization, weight normalization, and freezing in the Poincaré ball to preserve coarse-class knowledge while integrating new fine classes.
  • Empirical benchmarks demonstrate significant improvements in classification accuracy and scalability, underscoring the method's practical impact on hierarchical and incremental learning tasks.

Coarse-To-Fine Few-Shot Class-Incremental Learning (C2FSCIL) addresses the challenge of incrementally learning a sequence of novel, fine-grained classes with only a few annotated examples per new class, while preserving recognition ability for previously learned, coarser-grained (e.g., superclass) labels. Standard approaches to few-shot class-incremental learning struggle when the underlying data and label structures are hierarchical. Recent methods leverage hyperbolic embeddings to align model geometry with hierarchical semantics and introduce novel optimization and augmentation techniques adapted to the hyperbolic context, achieving improved scalability and performance under few-shot and incremental constraints.

1. Hyperbolic Space Representation for Hierarchical Data

Hierarchical structures such as coarse-to-fine (superclass/subclass) taxonomies naturally require an embedding space that can represent exponentially expanding relationships efficiently. Hyperbolic space, characterized by negative curvature, provides a geometric growth rate well matched to such semantic hierarchies. The Poincaré ball model is commonly used, where each point $\mathbf{x}$ lies in an $n$-dimensional ball of radius $1/\sqrt{c}$ and the Riemannian metric is

$$G_H(\mathbf{x}) = \left(\frac{2}{1 - c\|\mathbf{x}\|^2}\right)^2 I_n,$$

where $c > 0$ is the curvature parameter (the ball has constant negative curvature $-c$). The hyperbolic distance between vectors $\mathbf{x}$ and $\mathbf{y}$ is given by

$$d_p^c(\mathbf{x},\mathbf{y}) = \frac{1}{\sqrt{c}} \cosh^{-1}\!\left(1 + \frac{2c\|\mathbf{x}-\mathbf{y}\|^2}{(1-c\|\mathbf{x}\|^2)(1-c\|\mathbf{y}\|^2)}\right),$$

which reflects the exponential expansion away from the ball center. This property yields compact and discriminative representations for both coarse (central, generic) and fine (peripheral, specific) class semantics (Dai et al., 23 Sep 2025).
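
The distance function is simple to compute directly. The following NumPy sketch (an illustration written for this article, not the authors' implementation; the curvature value and sample points are arbitrary assumptions) evaluates $d_p^c$ and shows the boundary effect numerically:

```python
import numpy as np

def poincare_distance(x: np.ndarray, y: np.ndarray, c: float = 1.0) -> float:
    """Poincare-ball distance d_p^c for points with ||v|| < 1/sqrt(c)."""
    sq = lambda v: float(np.dot(v, v))
    arg = 1.0 + 2.0 * c * sq(x - y) / ((1.0 - c * sq(x)) * (1.0 - c * sq(y)))
    return float(np.arccosh(arg)) / np.sqrt(c)

a = np.array([0.1, 0.0])
b = np.array([0.0, 0.1])
print(poincare_distance(a, b))          # ~0.285: near the origin, roughly 2x the Euclidean gap
print(poincare_distance(9 * a, 9 * b))  # ~5.2: the same pair pushed toward the boundary
```

Scaling the same pair of points toward the boundary inflates their separation by more than an order of magnitude, which is precisely the room that peripheral, fine-grained class prototypes exploit.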

2. The Knowe Approach: Learning, Normalizing, and Freezing Weights

The Knowe algorithm forms the foundation of C2FSCIL (Xiang et al., 2021). It begins by learning a general-purpose embedding via supervised and contrastive learning on coarse labels. The base session constructs an embedding space wherein samples from the same superclass are mapped nearby, with inter-class repulsion. After this contrastive pre-training, incrementally learned fine classes are added via classifier weights (prototypes), which are $L_2$-normalized and then frozen, ensuring stability (retention of old knowledge) and plasticity (adaptation to new classes). In the hyperbolic variant, these steps are performed in the Poincaré ball, and the classifier weights are normalized and frozen with respect to the geometry of the space. This strict freezing avoids catastrophic forgetting and prevents bias in the evolving classifier as new fine-grained categories are encountered in later sessions (Dai et al., 23 Sep 2025).
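
A minimal PyTorch sketch of this normalize-then-freeze bookkeeping follows (class and method names are our own assumptions, not the Knowe codebase; the hyperbolic variant would additionally route these steps through the ball's geometry):

```python
import torch
import torch.nn.functional as F

class IncrementalCosineClassifier(torch.nn.Module):
    """Grows a prototype bank session by session; old prototypes stay frozen."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.frozen = torch.nn.Parameter(torch.empty(0, feat_dim), requires_grad=False)
        self.current = None  # trainable prototypes for the ongoing session

    def start_session(self, num_new_classes: int) -> None:
        # New fine-class prototypes are learned during the session
        # (a per-session optimizer over `self.current` would run here).
        self.current = torch.nn.Parameter(
            0.01 * torch.randn(num_new_classes, self.frozen.shape[1]))

    @torch.no_grad()
    def end_session(self) -> None:
        # ...then L2-normalized, appended to the frozen bank, and never updated again.
        fresh = F.normalize(self.current, dim=1)
        self.frozen = torch.nn.Parameter(
            torch.cat([self.frozen, fresh]), requires_grad=False)
        self.current = None

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        w = self.frozen if self.current is None else torch.cat([self.frozen, self.current])
        return F.normalize(feats, dim=1) @ w.t()  # cosine logits over all classes so far
```

Because only `current` ever receives gradients, each session's optimizer touches the new prototypes alone, which is what produces the stability-plasticity split described above.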

3. Hyperbolic Embedding and Model Architecture

Rather than extracting features in a Euclidean space, the model incorporates a hyperbolic mapping layer, typically using the exponential map to endow deep feature vectors with hyperbolic structure. The mapping transforms a tangent-space vector $\mathbf{x}$ (often from a learned base point $\mathbf{w}$) into the Poincaré ball as

$$\operatorname{TP}(\mathbf{x}) = \exp_{\mathbf{w}}^c(\mathbf{x}),$$

yielding vectors confined to $\mathcal{B}_c^n = \{\mathbf{v} \mid \|\mathbf{v}\| < 1/\sqrt{c}\}$. Downstream, classification is handled by hyperbolic fully connected (HypFC) layers, which employ Möbius addition and multiplication to retain hyperbolic compatibility:

$$\mathbf{y} = \operatorname{HypFC}(\mathbf{x}) = W \otimes_c \mathbf{x} \oplus_c b,$$

with $\otimes_c$ and $\oplus_c$ denoting Möbius matrix multiplication and addition, respectively. All operations, including the comparison of features and prototypes, use hyperbolic distance rather than Euclidean or dot-product similarity, maintaining geometric consistency throughout the network (Dai et al., 23 Sep 2025).
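
The Möbius operations and the exponential map have closed forms on the Poincaré ball. The PyTorch sketch below implements them using the standard Ganea et al.-style formulas; it is an illustration under those standard definitions, not the paper's code:

```python
import torch

def mobius_add(x: torch.Tensor, y: torch.Tensor, c: float) -> torch.Tensor:
    # x (+)_c y: Mobius addition on the Poincare ball of curvature -c.
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c ** 2 * x2 * y2)

def mobius_matvec(W: torch.Tensor, x: torch.Tensor, c: float) -> torch.Tensor:
    # W (x)_c x: Mobius matrix-vector multiplication.
    sqrt_c = c ** 0.5
    x_norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    mx = x @ W.t()
    mx_norm = mx.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    scale = torch.tanh(mx_norm / x_norm * torch.atanh((sqrt_c * x_norm).clamp(max=1 - 1e-5)))
    return scale * mx / (sqrt_c * mx_norm)

def exp_map(w: torch.Tensor, x: torch.Tensor, c: float) -> torch.Tensor:
    # exp_w^c(x): map tangent vector x at base point w into the ball (the TP layer above).
    sqrt_c = c ** 0.5
    x_norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    lam_w = 2 / (1 - c * (w * w).sum(-1, keepdim=True))  # conformal factor at w
    step = torch.tanh(sqrt_c * lam_w * x_norm / 2) * x / (sqrt_c * x_norm)
    return mobius_add(w, step, c)

# A HypFC layer then reads: y = mobius_add(mobius_matvec(W, x, c), b, c)
```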

4. Hyperbolic Contrastive Optimization and Few-Shot Augmentation

To exploit the advantages of hyperbolic geometry during learning, the standard contrastive loss is reformulated for hyperbolic space:

$$\mathcal{L}_{\mathrm{Con}}^{\mathrm{hyp}} = -\sum_{n=1}^{N} \log\frac{\exp\!\left[-\frac{d_p^c(\mathbf{q}_n,\mathbf{k}_n^+)}{\tau}\right]}{\exp\!\left[-\frac{d_p^c(\mathbf{q}_n,\mathbf{k}_n^+)}{\tau}\right] + \sum_{m\neq n} \exp\!\left[-\frac{d_p^c(\mathbf{q}_n,\mathbf{k}_m^-)}{\tau}\right]},$$

where $\mathbf{q}_n$ is a query, $\mathbf{k}_n^+$ is its positive sample, and $\tau$ is a temperature parameter. Classification and optimization are thus fully aligned with the underlying negative curvature. In few-shot settings, the method estimates the fine-class feature distribution in hyperbolic space using the maximum entropy principle, modeled with the wrapped normal distribution on the Poincaré ball:

$$\mathcal{N}_W^c(\mathbf{x};\mu,\Sigma) = \mathcal{N}_{\mathbb{H}^c}(\mathbf{x};\mu,\Sigma) \left(\frac{\sqrt{c}\, d_p^c(\mu,\mathbf{x})}{\sinh\!\left(\sqrt{c}\, d_p^c(\mu,\mathbf{x})\right)}\right)^{d-1}.$$

Augmented features are sampled from this distribution, supplementing the meager set of real examples for each novel fine class, thus alleviating overfitting and improving classifier generalization under few-shot constraints (Dai et al., 23 Sep 2025).
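
For concreteness, here is a batched PyTorch sketch of the contrastive objective above (tensor shapes and names are our assumptions; negatives are passed in explicitly, and the wrapped-normal augmentation step is omitted):

```python
import torch
import torch.nn.functional as F

def poincare_dist(x: torch.Tensor, y: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    # Batched Poincare distance over the last dimension; inputs lie inside the ball.
    diff2 = ((x - y) ** 2).sum(-1)
    x2, y2 = (x ** 2).sum(-1), (y ** 2).sum(-1)
    arg = 1 + 2 * c * diff2 / ((1 - c * x2) * (1 - c * y2))
    return torch.acosh(arg.clamp_min(1 + 1e-7)) / c ** 0.5

def hyperbolic_contrastive_loss(q, k_pos, k_neg, c: float = 1.0, tau: float = 0.1):
    """q: (N, d) queries; k_pos: (N, d) positives; k_neg: (N, M, d) negatives."""
    pos = -poincare_dist(q, k_pos, c) / tau               # (N,)
    neg = -poincare_dist(q.unsqueeze(1), k_neg, c) / tau  # (N, M) via broadcasting
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)    # positive sits at index 0
    targets = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, targets)  # softmax ratio from the loss above

# Toy usage: random features shrunk to lie well inside the unit ball (c = 1).
q, kp = 0.1 * torch.randn(8, 16), 0.1 * torch.randn(8, 16)
kn = 0.1 * torch.randn(8, 5, 16)
print(hyperbolic_contrastive_loss(q, kp, kn).item())
```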

5. Empirical Evaluation and Impact on C2FSCIL Benchmarks

Experimental results on benchmarks including CIFAR-100, tieredImageNet, and OpenEarthSensing demonstrate that hyperbolic embedding and contrastive loss yield marked improvements in both coarse-class and fine-class accuracy (Dai et al., 23 Sep 2025). Specifically, embeddings in hyperbolic space lead to superior hierarchical separation, which manifests as gains under incremental conditions (e.g., higher overall and fine-grained accuracy after multiple incremental sessions and a stable forgetting rate). Augmentation via maximum entropy sampling is especially effective for mitigating overfitting in the regime of very few labeled samples per new class. This suggests that the hyperbolic geometry is well suited to the hierarchical, coarse-to-fine progression inherent in C2FSCIL: it allows both stable partitioning for base (coarse) classes and flexible allocation for novel (fine) subcategories.

6. Discussion and Future Directions

The integration of hyperbolic geometry into C2FSCIL offers several advantages: it aligns model representation with the underlying data hierarchy, naturally supports scalable partitioning for increasingly fine labels, reduces catastrophic forgetting through weight normalization and freezing, and leverages geometric data augmentation to support generalization from scant supervision. Future directions include extending these techniques to more complex, multi-modal hierarchical tasks, devising adaptive curvature adjustment schemes, and exploring variants of hyperbolic metric augmentation tailored to specific hierarchical domains. Applications are anticipated in domains where data is both incremental and hierarchical, such as remote sensing, taxonomic biology, and real-time object recognition for robotics or autonomous vehicles (Dai et al., 23 Sep 2025).
