
TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification (2003.06670v1)

Published 14 Mar 2020 in cs.CV

Abstract: The field of Few-Shot Learning (FSL), or learning from very few (typically $1$ or $5$) examples per novel class (unseen during training), has received a lot of attention and significant performance advances in the recent literature. While a number of techniques have been proposed for FSL, several factors have emerged as most important for FSL performance, awarding SOTA even to the simplest of techniques. These are: the backbone architecture (bigger is better), type of pre-training on the base classes (meta-training vs regular multi-class, currently regular wins), quantity and diversity of the base classes set (the more the merrier, resulting in richer and better adaptive features), and the use of self-supervised tasks during pre-training (serving as a proxy for increasing the diversity of the base set). In this paper we propose yet another simple technique that is important for the few-shot learning performance - a search for a compact feature sub-space that is discriminative for a given few-shot test task. We show that the Task-Adaptive Feature Sub-Space Learning (TAFSSL) can significantly boost the performance in FSL scenarios when some additional unlabeled data accompanies the novel few-shot task, be it either the set of unlabeled queries (transductive FSL) or some additional set of unlabeled data samples (semi-supervised FSL). Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state-of-the-art in both transductive and semi-supervised FSL settings by more than $5\%$, while increasing the benefit of using unlabeled data in FSL to above $10\%$ performance gain.

Task-Adaptive Feature Sub-Space Learning for Few-Shot Learning

The paper "TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification" tackles the challenge of Few-Shot Learning (FSL), a paradigm that focuses on developing models capable of learning from a very limited number of examples per class, typically one or five. The research contributes a method that enhances FSL performance by leveraging both transductive and semi-supervised learning settings.

Core Methodology

The proposed method, Task-Adaptive Feature Sub-Space Learning (TAFSSL), seeks to identify and utilize a discriminative feature sub-space tailored for a specific few-shot learning task. The technique relies on additional unlabeled data that accompany the novel few-shot task, either in the form of a set of unlabeled queries (for transductive FSL) or additional unlabeled samples (for semi-supervised FSL). The primary aim is to improve the efficacy of FSL by finding a compact feature sub-space that is optimized for the few-shot task under consideration.
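The idea above can be illustrated with a minimal sketch of a PCA-based variant of the approach. This is an assumption-laden illustration, not the paper's exact algorithm: it assumes a pre-trained backbone has already produced feature vectors, fits a compact sub-space on the task's own features (support plus unlabeled queries, i.e. the transductive setting), and then classifies queries by nearest class prototype in that sub-space. The function name `tafssl_pca` and the Euclidean nearest-prototype classifier are illustrative choices, not details confirmed by the source.

```python
import numpy as np

def tafssl_pca(support_feats, support_labels, query_feats, n_components=10):
    """Sketch of task-adaptive sub-space learning (PCA variant, assumed):
    fit a compact linear sub-space on the task's own features, project
    everything into it, and classify queries by nearest class prototype."""
    # Fit the sub-space on support + unlabeled query features (transductive).
    X = np.vstack([support_feats, query_feats])
    mean = X.mean(axis=0)
    Xc = X - mean
    # Top principal directions via SVD of the centered task feature matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T                    # (d, n_components) projection
    S = (support_feats - mean) @ W             # support set in the sub-space
    Q = (query_feats - mean) @ W               # queries in the sub-space
    # Nearest-prototype classification in the adapted sub-space.
    classes = np.unique(support_labels)
    protos = np.stack([S[support_labels == c].mean(axis=0) for c in classes])
    dists = ((Q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```

Because the sub-space is re-fit per task from the task's own (mostly unlabeled) data, the retained directions are those that actually vary within the novel classes, which is what makes the representation "task-adaptive" rather than fixed at pre-training time.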

Key Experiments

The authors present empirical evaluations on popular benchmarks such as miniImageNet and tieredImageNet to demonstrate the effectiveness of the TAFSSL approach. The results highlight that TAFSSL provides significant performance enhancements, improving current state-of-the-art results by over 5% in both transductive and semi-supervised FSL settings. In particular, the experiments reveal that utilizing unlabeled data can result in a more than 10% gain in performance.

Analytical Insight

The paper explores several factors that have been identified as crucial for enhancing FSL performance: the backbone neural architecture, the pre-training methodology (meta-training versus regular multi-class), the diversity of the training classes, and the incorporation of self-supervised tasks. TAFSSL adds to this list by focusing on the adaptive learning of feature sub-spaces, which allows the model to retain essential discriminative attributes that might otherwise be lost when faced with novel, unseen classes.

Theoretical and Practical Implications

Theoretically, TAFSSL suggests that by adapting feature spaces closer to the novel task's specific requirements, models can achieve better discrimination despite limited data. This approach opens avenues for further exploration into dynamic feature adaptation not only within the field of few-shot learning but also in low-resource learning scenarios.

Practically, TAFSSL can be particularly beneficial in real-world settings where additional unlabeled data is readily available, hence effectively bridging the gap between theory and practice. The framework offers a promising direction for improving the deployment of machine learning models in environments where labeled data is scarce.

Future Directions

Given the promising results, future work may explore several extensions of TAFSSL. Potential areas of exploration include integrating non-linear sub-space learning approaches, examining the impact on different neural network architectures, and extending the framework to handle varying task complexities and domain-specific challenges. Moreover, incorporating TAFSSL into end-to-end meta-learning frameworks could further refine its applicability and performance.

Conclusion

TAFSSL marks a step toward more adaptive and robust few-shot learning models. By harnessing the power of task-specific feature sub-spaces, this method not only sets a new standard for performance in FSL on established benchmarks but also enhances the applicability of FSL techniques in practical scenarios, paving the way for solutions capable of thriving in data-scarce environments.

Authors: Moshe Lichtenstein, Prasanna Sattigeri, Rogerio Feris, Raja Giryes, Leonid Karlinsky
Citations (75)