Coarse-to-Fine Registration Paradigm

Updated 27 September 2025
  • The paradigm is an algorithmic strategy that first estimates broad, global transformations and then refines them with localized adjustments to align complex data.
  • By mapping extracted features into hyperbolic space via the Poincaré ball model, it efficiently encodes hierarchical relationships for improved class-incremental learning.
  • Empirical results demonstrate enhanced accuracy, reduced forgetting, and effective augmentation in few-shot scenarios with limited fine-class data.

The coarse-to-fine registration paradigm is an algorithmic strategy for aligning complex data (such as images, point clouds, or feature manifolds) by first estimating broad, global transformations and then refining these with increasingly detailed, localized adjustments or representations. It is characterized by a hierarchical process: initial learning or registration at a "coarse" scale (capturing global structures or categories), followed by incremental "fine" stages that introduce or adapt to additional details (such as fine-grained class distinctions or pixel-level deformations). This approach is especially prominent in modern computer vision, geometric learning, representation learning, and few-shot class-incremental learning, where hierarchical structure and data scarcity intersect.

1. Hierarchical Learning in the Coarse-to-Fine Paradigm

The coarse-to-fine paradigm separates the learning process into sequential stages: an initial coarse representation captures broad, high-level structures or classes and is subsequently refined via fine-grained learning. In the context of coarse-to-fine few-shot class-incremental learning (C2FSCIL) (Dai et al., 23 Sep 2025), this means:

  • Coarse Stage: The network is trained with abundant labeled data for coarse (superclass) categories using contrastive methods, leveraging the rich data to learn robust, general features.
  • Fine Stage: New, fine-grained classes with few examples are incrementally introduced. The network’s feature extractor, frozen from the coarse stage, is paired with new classifier weights trained under few-shot conditions.

Table: Stages in the C2FSCIL Paradigm

| Stage           | Training Data              | Model Adaptation                |
|-----------------|----------------------------|---------------------------------|
| Coarse          | Many labels, many samples  | Train extractor + classifier    |
| Fine (few-shot) | Few labels, few samples    | Learn only new classifier heads |

The hierarchical coarse-to-fine process is motivated by the intuition that broad category separation (e.g., distinguishing "birds" from "dogs") is easier given more data, and that downstream fine-scale distinctions (e.g., between bird species) benefit from initial robust superclass separation.
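
A minimal sketch of this staged procedure is given below, assuming a PyTorch-style setup. The backbone, head sizes, and random tensors are placeholders rather than the architecture or data of (Dai et al., 23 Sep 2025), and the hyperbolic components introduced in later sections are omitted here.

```python
import torch
import torch.nn as nn

# Placeholder backbone and coarse classifier; dimensions are illustrative.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
coarse_head = nn.Linear(128, 20)   # e.g. 20 superclasses with abundant data

# --- Coarse stage: train extractor and coarse classifier jointly ---
opt = torch.optim.SGD(list(backbone.parameters()) + list(coarse_head.parameters()), lr=0.1)
x, y = torch.randn(64, 3, 32, 32), torch.randint(0, 20, (64,))   # stand-in batch
loss = nn.functional.cross_entropy(coarse_head(backbone(x)), y)
opt.zero_grad()
loss.backward()
opt.step()

# --- Fine stage: freeze the extractor, learn only the new classifier head ---
for p in backbone.parameters():
    p.requires_grad_(False)

fine_head = nn.Linear(128, 5)      # a new session introduces 5 fine classes
opt_fine = torch.optim.SGD(fine_head.parameters(), lr=0.01)
x_few, y_few = torch.randn(25, 3, 32, 32), torch.randint(0, 5, (25,))  # few-shot data
with torch.no_grad():
    feats = backbone(x_few)        # frozen features from the coarse stage
loss_fine = nn.functional.cross_entropy(fine_head(feats), y_few)
opt_fine.zero_grad()
loss_fine.backward()
opt_fine.step()
```

Freezing the extractor after the coarse stage is what keeps previously learned classes stable when fine-class sessions arrive.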

2. Hyperbolic Geometry for Hierarchical Representation

Hyperbolic space, particularly the Poincaré ball model, is leveraged for its intrinsic suitability to capture hierarchical and tree-like structures—qualities not naturally encoded in Euclidean space. In this setting (Dai et al., 23 Sep 2025):

  • The extracted features are explicitly mapped into hyperbolic space via the exponential map (a code sketch of these operations appears at the end of this section),

\text{TP}(x) = \exp^c_w(x),

where w is a learnable reference point in the ball and c is the curvature parameter.

  • Probability distributions in hyperbolic space, necessary for modeling uncertainty and augmentation, are constructed using a wrapped normal or maximum entropy distribution. The density is

\mathcal{N}_{\mathbb{H}}^c(x; \mu, \Sigma) = k \exp\left( -\frac{z^\top \Sigma^{-1} z}{2} \right), \quad z = \lambda_{\mu}^c \log_{\mu}^c x.

  • Hyperbolic mappings and contrastive objectives preserve hierarchical distances, such that child nodes (fine classes) branch naturally from coarse parent nodes.

Significance: Hyperbolic embeddings partition hierarchical levels efficiently, thus preserving multi-level and fine-grained relationships vital for incremental class learning.
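
As a rough illustration of these mappings, the sketch below implements the standard closed-form Poincaré-ball operations (Möbius addition, conformal factor, exponential map, geodesic distance) commonly used in the hyperbolic deep learning literature; the function names are illustrative and the snippet is not the paper's implementation.

```python
import torch

def mobius_add(x, y, c=1.0):
    """Mobius addition x (+)_c y on the Poincare ball of curvature -c."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den.clamp_min(1e-15)

def conformal_factor(x, c=1.0):
    """lambda_x^c = 2 / (1 - c ||x||^2)."""
    return 2.0 / (1.0 - c * (x * x).sum(-1, keepdim=True)).clamp_min(1e-15)

def exp_map(v, w, c=1.0):
    """Exponential map exp_w^c(v): tangent vector v at w -> point on the ball."""
    sqrt_c = c ** 0.5
    v_norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    scaled = torch.tanh(sqrt_c * conformal_factor(w, c) * v_norm / 2) * v / (sqrt_c * v_norm)
    return mobius_add(w, scaled, c)

def poincare_dist(x, y, c=1.0):
    """d_p^c(x, y) = (2 / sqrt(c)) * artanh(sqrt(c) * ||(-x) (+)_c y||)."""
    sqrt_c = c ** 0.5
    diff = mobius_add(-x, y, c)
    return (2.0 / sqrt_c) * torch.atanh((sqrt_c * diff.norm(dim=-1)).clamp(max=1 - 1e-5))

# Map Euclidean backbone features onto the ball around a reference point w.
feats = torch.randn(8, 64) * 0.1   # stand-in for CNN features
w = torch.zeros(64)                # reference point (here fixed at the origin)
z = exp_map(feats, w, c=1.0)       # TP(x) = exp_w^c(x)
print(poincare_dist(z[0], z[1]))   # hyperbolic distance between two embeddings
```

In the paper the reference point w is learnable; fixing it at the origin here is only for brevity.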

3. Model Architecture and Embedding

In C2FSCIL, the neural architecture is modified to operate natively in hyperbolic space:

  • Feature extraction: The output of a standard CNN or backbone is mapped into the Poincaré ball using the exponential map exp_w^c(x) with a learnable center w and curvature c.
  • Classifier adaptation: Fully-connected classifier layers are replaced by hyperbolic fully-connected (HypFC) layers. The HypFC operation is

y = \text{HypFC}(x) = W \otimes_c x \oplus_c b

where ⊗_c and ⊕_c denote Möbius matrix-vector multiplication and Möbius addition, so that the layer's outputs remain on the Poincaré ball and are consistent with the hyperbolic metric.
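
A minimal sketch of such a layer is shown below. It assumes the widely used closed-form Möbius matrix-vector product from the hyperbolic neural network literature; the class name, initialization, and bias handling are illustrative choices rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

def mobius_add(x, y, c=1.0):
    """Mobius addition on the Poincare ball of curvature -c."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c ** 2 * x2 * y2).clamp_min(1e-15)

def mobius_matvec(W, x, c=1.0):
    """Mobius matrix-vector product W (x)_c x (closed-form expression)."""
    sqrt_c = c ** 0.5
    x_norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    Wx = x @ W.t()
    Wx_norm = Wx.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    scale = torch.tanh(Wx_norm / x_norm * torch.atanh((sqrt_c * x_norm).clamp(max=1 - 1e-5)))
    return scale * Wx / (sqrt_c * Wx_norm)

class HypFC(nn.Module):
    """Sketch of a hyperbolic fully-connected layer: y = W (x)_c x (+)_c b."""
    def __init__(self, in_dim, out_dim, c=1.0):
        super().__init__()
        self.W = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(out_dim))  # bias point, kept near the origin
        self.c = c

    def forward(self, x):
        return mobius_add(mobius_matvec(self.W, x, self.c), self.b, self.c)

layer = HypFC(64, 10)
z = torch.randn(8, 64) * 0.01   # points assumed to lie inside the ball
print(layer(z).shape)           # torch.Size([8, 10])
```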

Model optimization is guided by hyperbolic contrastive loss, rather than standard (Euclidean) similarity. This loss uses the negative hyperbolic distance:

L_{\text{Con}}^{\text{hyp}} = -\sum_n \log \left\{ \frac{\exp \left[ -d_p^c(q_n, k_n^+)/\tau \right]}{ \exp \left[ -d_p^c(q_n, k_n^+)/\tau \right] + \sum_{m \neq n} \exp \left[ -d_p^c(q_n, k_m^-)/\tau \right] } \right\}

with d_p^c(·, ·) denoting the hyperbolic distance and τ a temperature parameter.
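
The objective can be sketched as an InfoNCE-style loss whose similarity is the negative hyperbolic distance. In the toy version below the other positives in the batch act as negatives for each query (a common simplification; the paper's choice of negatives k_m^- may differ), and the pairwise distance uses the standard arcosh form of the Poincaré metric.

```python
import torch
import torch.nn.functional as F

def poincare_dist_matrix(x, y, c=1.0):
    """All-pairs hyperbolic distance between rows of x (N, d) and y (M, d)."""
    x2 = (x * x).sum(-1, keepdim=True)                 # (N, 1)
    y2 = (y * y).sum(-1, keepdim=True).t()             # (1, M)
    sq = (x2 - 2 * x @ y.t() + y2).clamp_min(0.0)      # squared Euclidean distances
    denom = ((1 - c * x2) * (1 - c * y2)).clamp_min(1e-15)
    arg = 1 + 2 * c * sq / denom
    return torch.acosh(arg.clamp_min(1.0 + 1e-7)) / c ** 0.5  # clamp for numerical stability

def hyperbolic_contrastive_loss(q, k_pos, tau=0.1, c=1.0):
    """InfoNCE with similarity = -d_p^c(., .); positives sit on the diagonal."""
    logits = -poincare_dist_matrix(q, k_pos, c) / tau   # (N, N)
    targets = torch.arange(q.size(0))                   # q_n is matched with k_n^+
    return F.cross_entropy(logits, targets)

# Toy usage with small-norm embeddings assumed to lie inside the ball.
q = torch.randn(16, 32) * 0.05
k = q + torch.randn(16, 32) * 0.01
print(hyperbolic_contrastive_loss(q, k))
```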

4. Hyperbolic Optimization and Augmentation

Few-shot fine-class incremental steps are prone to overfitting due to sample scarcity. The approach in (Dai et al., 23 Sep 2025) addresses this by:

  • Maximum entropy estimation: For each fine class, the mean and variance of the class feature distribution are estimated in hyperbolic space. Samples are drawn from a wrapped normal (Riemannian normal) distribution (a sampling sketch follows after this list) as

x = \exp^c_\mu(v \cdot \lambda_\mu^c)

where v is sampled from a standard normal in Euclidean space.

  • Augmentation: These synthetic, geometry-respecting samples are used to train classifier heads for fine classes, mitigating overfitting by enlarging the effective sample support for each class.
  • Classifier training: Only the weights corresponding to the new fine classes are learned; the extractor and previous classifier heads remain fixed, ensuring stability and minimizing catastrophic forgetting.
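
A rough sketch of this augmentation step follows. It fits a diagonal Gaussian to the few-shot support embeddings, samples v in Euclidean space, scales by the conformal factor at the class mean, and maps the result back with the exponential map, mirroring the sampling rule stated above; using the Euclidean mean and per-dimension standard deviation of the support as proxies for the hyperbolic class statistics is our simplification.

```python
import torch

def mobius_add(x, y, c=1.0):
    """Mobius addition on the Poincare ball of curvature -c."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c ** 2 * x2 * y2).clamp_min(1e-15)

def exp_map(v, mu, c=1.0):
    """Exponential map exp_mu^c(v) onto the Poincare ball."""
    sqrt_c = c ** 0.5
    lam = 2.0 / (1.0 - c * (mu * mu).sum(-1, keepdim=True)).clamp_min(1e-15)
    v_norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    return mobius_add(mu, torch.tanh(sqrt_c * lam * v_norm / 2) * v / (sqrt_c * v_norm), c)

def augment_fine_class(support, n_samples=20, c=1.0):
    """Draw synthetic embeddings for one fine class from its few-shot support set."""
    mu = support.mean(dim=0, keepdim=True)                   # class centre (Euclidean proxy)
    std = support.std(dim=0, keepdim=True).clamp_min(1e-4)   # per-dimension spread
    v = torch.randn(n_samples, support.size(1)) * std        # v ~ N(0, Sigma) in Euclidean space
    lam = 2.0 / (1.0 - c * (mu * mu).sum(-1, keepdim=True)).clamp_min(1e-15)
    return exp_map(v * lam, mu.expand(n_samples, -1), c)     # x = exp_mu^c(v * lambda_mu^c)

support = torch.randn(5, 32) * 0.05        # 5-shot class, embeddings inside the ball
synthetic = augment_fine_class(support)    # extra samples for training the new head
print(synthetic.shape)                     # torch.Size([20, 32])
```

The synthetic points enlarge the effective support of each fine class before the new classifier heads are trained.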

5. Empirical Results and Impact

Experiments on C2FSCIL benchmarks (including CIFAR-100, tieredImageNet, OpenEarthSensing) demonstrated:

  • Improved coarse and fine class accuracy: Embedding the feature extractor and classifier in hyperbolic space led to higher superclass discrimination and improved fine-class recognition under few-shot conditions.
  • Mitigated forgetting: The hierarchical, frozen-extractor design prevents previously learned classes from degrading in performance upon the introduction of new fine classes.
  • Augmentation benefit: Generative sampling in the Poincaré ball (hyperbolic augmentation) further boosted fine-class accuracy and average overall performance.

Table: Empirical Findings

| Design                | Coarse Acc. | Fine Acc. | Forgetting |
|-----------------------|-------------|-----------|------------|
| Euclidean (Baseline)  | Lower       | Lower     | —          |
| Hyperbolic Embedding  | Higher      | Higher    | Reduced    |
| + Hyperbolic Augment  | —           | ↑↑        | ≈ baseline |

This suggests that hyperbolic geometry is particularly beneficial when both hierarchical structure and limited fine-scale supervision are present, as in class-incremental few-shot settings.

6. Context, Limitations, and Extensions

The coarse-to-fine paradigm's hierarchical strategy is well-aligned with emerging interests in geometric deep learning, representation learning on non-Euclidean manifolds, and lifelong or continual learning:

  • Advantages over Euclidean embedding: Hyperbolic geometry allows for exponential separation of nodes, compactly representing tree- or DAG-like class hierarchies, which is difficult in flat Euclidean spaces.
  • Limitations: The performance gains hinge on the degree to which class relationships are inherently hierarchical; tasks lacking such structures may not benefit equivalently from hyperbolic embeddings.
  • Open challenges: Selecting or learning optimal curvature parameters c, improving the tractability of hyperbolic distribution normalization constants, and further refining augmentation strategies in the hyperbolic domain.

A plausible implication is that other tasks with hierarchical, compositional, or scale-structured outputs (such as scene understanding or multi-level semantic parsing) may benefit from coarse-to-fine learning in hyperbolic space.

7. Conclusion

The coarse-to-fine registration paradigm, particularly as formalized in hyperbolic space (Dai et al., 23 Sep 2025), integrates hierarchical learning, hyperbolic geometry, tailored optimization, and generative augmentation for robust few-shot class-incremental learning. By leveraging the Poincaré ball model and contrastive hyperbolic objectives, it captures both global hierarchies and fine-class details efficiently, offering improvements in accuracy and stability under data-scarce conditions and opening new research directions in geometric and hierarchical deep learning.

