Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation (2001.08735v3)

Published 23 Jan 2020 in cs.CV and cs.LG

Abstract: Few-shot classification aims to recognize novel categories with only a few labeled images in each class. Existing metric-based few-shot classification algorithms predict categories by comparing the feature embeddings of query images with those from a few labeled images (support examples) using a learned metric function. While promising performance has been demonstrated, these methods often fail to generalize to unseen domains due to the large discrepancy of feature distributions across domains. In this work, we address the problem of few-shot classification under domain shifts for metric-based methods. Our core idea is to use feature-wise transformation layers to augment the image features with affine transforms, simulating various feature distributions under different domains in the training stage. To capture variations of the feature distributions under different domains, we further apply a learning-to-learn approach to search for the hyper-parameters of the feature-wise transformation layers. We conduct extensive experiments and ablation studies under the domain generalization setting using five few-shot classification datasets: mini-ImageNet, CUB, Cars, Places, and Plantae. Experimental results demonstrate that the proposed feature-wise transformation layer is applicable to various metric-based models, and provides consistent improvements in few-shot classification performance under domain shifts.

Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

The paper addresses the challenge of few-shot classification under domain shifts by proposing a novel approach using feature-wise transformation layers. Few-shot classification traditionally involves recognizing instances from novel categories with minimal labeled examples. While existing metric-based methods, which utilize similarity measures between query and support embeddings, have demonstrated success, their performance often degrades when facing unseen domains due to domain discrepancies in feature distributions.
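
As background, the metric-based pipeline can be illustrated with a short, generic sketch: embed support and query images with a shared encoder, then score each query against a per-class support representative with a similarity function. This simplification uses cosine similarity against class-mean embeddings; the models studied in the paper (Matching Networks, Relation Networks, graph neural networks) learn richer metric functions.

```python
import torch
import torch.nn.functional as F

def metric_episode_logits(encoder, support_x, support_y, query_x, n_way):
    """Generic metric-based scoring for one few-shot episode (illustrative).

    support_x: (n_way * k_shot, C, H, W) labeled support images
    support_y: (n_way * k_shot,) integer labels in [0, n_way)
    query_x:   (n_query, C, H, W) query images
    Returns:   (n_query, n_way) similarity logits
    """
    z_support = encoder(support_x)   # (n_support, D) feature embeddings
    z_query = encoder(query_x)       # (n_query, D)
    # Average each class's support embeddings into a single representative.
    reps = torch.stack([z_support[support_y == c].mean(dim=0)
                        for c in range(n_way)])          # (n_way, D)
    # Cosine similarity between every query and every class representative.
    return F.cosine_similarity(z_query.unsqueeze(1),     # (n_query, 1, D)
                               reps.unsqueeze(0),        # (1, n_way, D)
                               dim=-1)                   # (n_query, n_way)
```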

Methodology

The core innovation of this research is the integration of feature-wise transformation layers into the feature encoder. During training, these layers apply stochastic affine transformations to intermediate feature activations, simulating the varied feature distributions of different domains and thereby preparing the model for domain shift. The hyper-parameters of the transformation layers are optimized with a learning-to-learn approach, which tunes them so that a model trained on seen domains performs well on domains held out during training. A minimal sketch of such a layer appears below.
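
The following is a minimal PyTorch sketch of such a layer, not the authors' exact implementation: a per-channel scale is sampled around 1 and a per-channel bias around 0, with learnable standard deviations passed through a softplus to keep them positive. The initial values are illustrative, and the layer acts only in training mode, reflecting its role as a training-time augmentation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureWiseTransform(nn.Module):
    """Feature-wise transformation layer (sketch).

    For a feature map z of shape (N, C, H, W), sample a per-channel scale
    gamma ~ N(1, sigma_gamma) and bias beta ~ N(0, sigma_beta), then return
    gamma * z + beta. The standard deviations are the learnable
    hyper-parameters that the learning-to-learn procedure optimizes.
    """
    def __init__(self, num_channels, init_gamma=0.3, init_beta=0.5):
        super().__init__()
        # Raw hyper-parameters; a softplus keeps the sampled stds positive.
        # The initial values here are illustrative, not the paper's.
        self.theta_gamma = nn.Parameter(
            torch.full((1, num_channels, 1, 1), init_gamma))
        self.theta_beta = nn.Parameter(
            torch.full((1, num_channels, 1, 1), init_beta))

    def forward(self, z):
        if not self.training:        # a training-time augmentation only
            return z
        n, c = z.shape[:2]
        sigma_gamma = F.softplus(self.theta_gamma)
        sigma_beta = F.softplus(self.theta_beta)
        gamma = 1.0 + torch.randn(n, c, 1, 1, device=z.device) * sigma_gamma
        beta = torch.randn(n, c, 1, 1, device=z.device) * sigma_beta
        return gamma * z + beta
```

In the paper, layers of this kind are inserted after the batch-normalization layers of the feature encoder, so every stage of the representation is perturbed.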

Experimental Design and Results

Experiments were conducted across five distinct datasets — mini-ImageNet, CUB, Cars, Places, and Plantae — under a domain generalization setting. The paper evaluates three metric-based models: Matching Networks, Relation Networks, and Graph Neural Networks. The feature-wise transformation layers consistently improved model performance in cross-domain settings. For instance, improvements were evident when the model was trained on mini-ImageNet and tested on markedly different domains like CUB and Cars.

Contributions

  1. Feature-Wise Transformation Layers: The transformation layers effectively simulate cross-domain feature distributions. They are model-agnostic and enhance generalization across different metric-based methods.
  2. Learning-to-Learn Algorithm: This algorithm optimizes the transformation layers' hyper-parameters, capturing variations in feature distributions and improving cross-domain adaptability without manual tuning (see the sketch after this list).
  3. Extensive Evaluation: Results demonstrate that the proposed method improves few-shot classification accuracy under domain shift, outperforming the corresponding baselines in cross-domain evaluations.
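
The learning-to-learn step sketched below is illustrative rather than a faithful reproduction of the paper's algorithm: `episode_loss` is a hypothetical helper that returns the few-shot loss on one episode, with `use_ft` toggling the transformation layers and `weights` evaluating the loss under a temporary set of model weights. The idea is to take a differentiable inner gradient step on a pseudo-seen domain with the transforms active, then backpropagate the loss on a pseudo-unseen domain through that step into the transformation hyper-parameters.

```python
import torch

def learn_to_learn_step(model, ft_params, pseudo_seen, pseudo_unseen,
                        theta_opt, inner_lr=1e-3):
    """One meta-update of the transformation hyper-parameters (sketch).

    `model.episode_loss` is a hypothetical helper returning the few-shot
    loss on one episode. `ft_params` lists the theta_gamma / theta_beta
    tensors; `theta_opt` is an optimizer over exactly those tensors.
    """
    ft_ids = {id(p) for p in ft_params}
    weights = [p for p in model.parameters()
               if p.requires_grad and id(p) not in ft_ids]

    # Inner step: loss on a pseudo-seen episode with the transforms active.
    # create_graph=True keeps the step differentiable w.r.t. ft_params.
    loss_seen = model.episode_loss(pseudo_seen, use_ft=True)
    grads = torch.autograd.grad(loss_seen, weights, create_graph=True)
    updated = [w - inner_lr * g for w, g in zip(weights, grads)]

    # Outer step: evaluate the updated weights on a pseudo-unseen episode
    # with the transforms disabled, and push the gradient into ft_params.
    loss_unseen = model.episode_loss(pseudo_unseen, use_ft=False,
                                     weights=updated)
    theta_opt.zero_grad()
    loss_unseen.backward()
    theta_opt.step()
    return loss_unseen.item()
```

Because the transforms influence the pseudo-unseen loss only through the inner update, the outer gradient measures how well the simulated feature distributions prepared the model for a domain it did not train on.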

Implications and Future Directions

The integration of feature-wise transformations presents a significant step in mitigating domain discrepancies, a pertinent issue in few-shot learning. These techniques not only enhance model robustness but also offer a modular approach that can be applied to various architectures with minimal configuration adjustments.

Theoretical implications extend to broader discussions on domain adaptation and generalization strategies, suggesting potential intersections with adversarial learning and conditional normalization methods. Practically, these findings could facilitate advancements in applications where domain-specific labeled data is scarce, such as medical image analysis or rare species identification.

Future research may explore extending this work by integrating unsupervised learning techniques to further reduce the reliance on labeled data in new domains. Additionally, examining the effectiveness of this approach in more dynamic or evolving environments could uncover further insights into the adaptability of metric-based methods in real-world scenarios. This line of inquiry holds promise for robust AI systems capable of learning and adapting in diverse settings.

Authors (4)
  1. Hung-Yu Tseng
  2. Hsin-Ying Lee
  3. Jia-Bin Huang
  4. Ming-Hsuan Yang
Citations (370)