Enhancing Few-Shot Image Classification with Unlabelled Examples (2006.12245v6)

Published 17 Jun 2020 in cs.CV, cs.LG, and stat.ML

Abstract: We develop a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance. Our approach combines a regularized Mahalanobis-distance-based soft k-means clustering procedure with a modified state of the art neural adaptive feature extractor to achieve improved test-time classification accuracy using unlabelled data. We evaluate our method on transductive few-shot learning tasks, in which the goal is to jointly predict labels for query (test) examples given a set of support (training) examples. We achieve state of the art performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks. All trained models and code have been made publicly available at github.com/plai-group/simple-cnaps.

Citations (50)

Summary

  • The paper introduces Transductive CNAPS as its main contribution by integrating unlabelled query examples with labelled support data to refine class predictions.
  • It employs a novel transductive task encoder that adapts the feature extractor using both support and query sets for improved out-of-domain performance.
  • The study demonstrates state-of-the-art results on benchmarks like mini-ImageNet and Meta-Dataset, underscoring the potential of unlabelled data in few-shot learning.

Enhancing Few-Shot Image Classification with Unlabelled Examples: An Expert's Perspective

The paper explores leveraging unlabelled data within the framework of few-shot learning (FSL) for image classification, proposing a sophisticated methodology named Transductive CNAPS (Conditional Neural Adaptive Processes). At its core, the paper builds upon and extends the Simple CNAPS architecture by incorporating unlabelled query examples to refine class predictions. This work is particularly relevant for tasks where acquiring ample labelled training data is impractical.

Methodological Advancements

Transductive CNAPS integrates two primary innovations. The first involves the adaptation of the feature extractor by a transductive task-encoding mechanism that leverages both support (labelled) and query (unlabelled) sets. The transductive task encoder exploits the entire query set simultaneously to provide a task representation that augments the adaptability of the feature extractor. This transductive conditioning ensures relevant features are captured even in the presence of unlabelled examples, enhancing out-of-domain performance.
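To make the conditioning concrete, here is a minimal sketch of how a task representation might be built from both labelled support and unlabelled query features and then used to modulate the feature extractor. This is an illustrative stand-in, not the paper's learned encoder: mean-pooling, the concatenation, and the FiLM-style `film_params` mapping (with hypothetical weight matrices `W_gamma`, `W_beta`) are all simplifying assumptions.

```python
import numpy as np

def transductive_task_representation(support_feats, query_feats):
    """Pool support and query features into one task vector.

    A hedged sketch: the paper learns its task encoder end-to-end;
    mean-pooling plus concatenation is an illustrative simplification
    of how unlabelled queries can inform the task representation.
    """
    return np.concatenate([support_feats.mean(axis=0),
                           query_feats.mean(axis=0)])

def film_params(task_repr, W_gamma, W_beta):
    """Map the task representation to per-channel FiLM-style scale and
    shift parameters (hypothetical linear maps, scale near identity)."""
    gamma = 1.0 + task_repr @ W_gamma   # multiplicative modulation
    beta = task_repr @ W_beta           # additive modulation
    return gamma, beta

# Toy task: 10 support and 15 query examples with 8-dim features.
rng = np.random.default_rng(1)
support_feats = rng.normal(size=(10, 8))
query_feats = rng.normal(size=(15, 8))
z = transductive_task_representation(support_feats, query_feats)
gamma, beta = film_params(z,
                          rng.normal(size=(16, 8)) * 0.01,
                          rng.normal(size=(16, 8)) * 0.01)
```

Because the query set enters the task vector, a distribution shift in the unlabelled test examples directly shifts the modulation parameters, which is the intuition behind the improved out-of-domain behaviour.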

The second advancement is an iterative soft k-means clustering procedure based on a Mahalanobis distance metric, designed to progressively refine class mean and covariance estimates. This transductive algorithm initializes class assignments from the labelled support examples and then refines them over the entire query set. The refinement is inspired by Bregman soft clustering approaches but is tailored to few-shot classification, accounting for per-class variance through estimated covariances rather than a fixed distance metric.
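The procedure above can be sketched as follows. This is a simplified illustration, not the paper's exact estimator: support responsibilities stay fixed at their one-hot labels, query responsibilities start uniform, and each pass re-estimates class means and covariances and re-assigns queries by Mahalanobis distance. The `reg` term blending each covariance with the identity is an assumed regularizer for numerical stability; the paper uses its own regularization scheme.

```python
import numpy as np

def soft_kmeans_mahalanobis(support, support_labels, query,
                            n_iters=3, reg=1.0):
    """Iterative soft k-means refinement with Mahalanobis distances."""
    n_classes = int(support_labels.max()) + 1
    d = support.shape[1]

    # Support responsibilities are fixed one-hot labels; query
    # responsibilities start uniform and are refined each iteration.
    resp_s = np.eye(n_classes)[support_labels]
    resp_q = np.full((len(query), n_classes), 1.0 / n_classes)
    X = np.vstack([support, query])

    for _ in range(n_iters):
        R = np.vstack([resp_s, resp_q])
        means, precisions = [], []
        for k in range(n_classes):
            w = R[:, k] / R[:, k].sum()            # normalized soft weights
            mu = w @ X                             # weighted class mean
            diff = X - mu
            cov = (w[:, None] * diff).T @ diff     # weighted class covariance
            precisions.append(np.linalg.inv(reg * np.eye(d) + cov))
            means.append(mu)
        # Soft-assign query points from squared Mahalanobis distances.
        dists = np.stack(
            [np.einsum('ni,ij,nj->n', query - means[k], precisions[k],
                       query - means[k]) for k in range(n_classes)],
            axis=1)
        logits = -0.5 * dists
        logits -= logits.max(axis=1, keepdims=True)
        resp_q = np.exp(logits)
        resp_q /= resp_q.sum(axis=1, keepdims=True)
    return resp_q

# Toy two-class task with well-separated clusters.
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0.0, 0.1, (5, 2)),
                     rng.normal(3.0, 0.1, (5, 2))])
labels = np.array([0] * 5 + [1] * 5)
query = np.array([[0.1, 0.0], [2.9, 3.1]])
probs = soft_kmeans_mahalanobis(support, labels, query)
print(probs.argmax(axis=1))  # → [0 1]
```

Keeping the support responsibilities pinned to their true labels is what anchors the refinement: the unlabelled queries sharpen the class statistics without being able to drag a cluster away from its labelled examples.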

Numerical Results and Performance

The authors demonstrate that Transductive CNAPS achieves state-of-the-art results across several few-shot classification benchmarks, including Meta-Dataset, mini-ImageNet and tiered-ImageNet. It outperforms previous methods, particularly in low-shot settings with few labelled examples per class, underscoring its effectiveness in practical low-data regimes. Particularly noteworthy is its superior performance on out-of-domain datasets, which highlights the robustness of the transductive feature adaptation approach.

Ablations provided within the paper reveal the dual nature of the improvements, with both transductive feature extraction and iterative clustering contributing to the overall performance boost. It is critical to note that while the transductive task encoder prominently enhanced out-of-domain task performance, the iterative clustering mainly bolstered accuracy on in-domain tasks.

Implications and Future Directions

This research contributes significantly to the FSL landscape by effectively utilizing unlabelled data, which is abundantly available in most real-world settings. The work aligns well with the growing interest in reducing dependence on labelled data, which dovetails with trends in semi-supervised and unsupervised learning.

The results of this paper suggest several avenues for future research. Firstly, exploring alternative mechanisms to train the network end-to-end with soft k-means clustering could address the instability observed during training. Additionally, more sophisticated ways of combining labelled and unlabelled data could further improve the refinement procedure, potentially drawing inspiration from Gaussian mixture models or other probabilistic methods without compromising computational efficiency.

Overall, the transductive approach proposed in this paper is a compelling paradigm for enhancing FSL with unlabelled data, offering a scalable solution to the pervasive problem of data scarcity in machine learning. As AI continues to advance, such transductive learning techniques are likely to become a mainstay in the push towards more efficient and effective model training schemes.
