
Boosting Few-Shot Visual Learning with Self-Supervision (1906.05186v1)

Published 12 Jun 2019 in cs.CV and cs.LG

Abstract: Few-shot learning and self-supervised learning address different facets of the same problem: how to train a model with little or no labeled data. Few-shot learning aims for optimization methods and models that can learn efficiently to recognize patterns in the low data regime. Self-supervised learning focuses instead on unlabeled data and looks into it for the supervisory signal to feed high capacity deep neural networks. In this work we exploit the complementarity of these two domains and propose an approach for improving few-shot learning through self-supervision. We use self-supervision as an auxiliary task in a few-shot learning pipeline, enabling feature extractors to learn richer and more transferable visual representations while still using few annotated samples. Through self-supervision, our approach can be naturally extended towards using diverse unlabeled data from other datasets in the few-shot setting. We report consistent improvements across an array of architectures, datasets and self-supervision techniques.

Authors (5)
  1. Spyros Gidaris
  2. Andrei Bursuc
  3. Nikos Komodakis
  4. Matthieu Cord
  5. Patrick Pérez
Citations (388)

Summary

  • The paper introduces a novel framework that integrates self-supervision with few-shot learning to enrich feature representations.
  • It augments traditional learning with auxiliary tasks like rotation prediction and patch location to leverage unlabeled data.
  • Results demonstrate significant accuracy gains on benchmarks such as MiniImageNet and CIFAR-FS, reducing the need for extensive labeled datasets.

Overview of "Boosting Few-Shot Visual Learning with Self-Supervision"

The paper "Boosting Few-Shot Visual Learning with Self-Supervision" presents a novel approach that synergistically combines few-shot learning (FSL) and self-supervised learning (SSL) to enhance the learning performance of models with limited annotated data. Few-shot learning focuses on enabling models to learn from a minimal amount of labeled data, while self-supervised learning leverages unlabeled data by employing auxiliary pretext tasks to learn features that can be transferred to downstream tasks.

Methodology

The authors introduce a framework that integrates self-supervision into few-shot learning pipelines to improve the generalization of the trained models. Specifically, they augment the training objective of few-shot models with an auxiliary self-supervised loss during the initial learning stage. This self-supervised component lets the model also learn from unlabeled data and thereby acquire richer feature representations.
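As a toy illustration of this multi-task objective, the sketch below combines a few-shot classification loss with a weighted self-supervised loss. This is a minimal NumPy sketch, not the authors' implementation; the function names and the weighting factor `alpha` are illustrative.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch of raw logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def combined_loss(cls_logits, cls_labels, ssl_logits, ssl_labels, alpha=1.0):
    """Few-shot classification loss plus a weighted auxiliary self-supervised loss."""
    return cross_entropy(cls_logits, cls_labels) + alpha * cross_entropy(ssl_logits, ssl_labels)

# toy example: 4 images, 5 base classes, 4 self-supervised classes (e.g. rotations)
rng = np.random.default_rng(0)
cls_logits = rng.normal(size=(4, 5))
ssl_logits = rng.normal(size=(4, 4))
loss = combined_loss(cls_logits, np.array([0, 1, 2, 3]),
                     ssl_logits, np.array([0, 1, 2, 3]), alpha=1.0)
print(loss)
```

Setting `alpha=0` recovers the plain few-shot loss, so the auxiliary task can be ablated by a single hyperparameter.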

The paper explores two paradigms for exploiting self-supervision:

  1. Auxiliary Loss: The primary few-shot learning loss is combined with a self-supervised task loss, pushing the model to learn additional visual patterns or features. The tasks of rotation prediction and relative patch location are utilized as self-supervised tasks.
  2. Semi-Supervised Learning: The approach is extended to incorporate unlabeled data from different but related datasets during training. This allows the model to leverage a larger and more diverse set of visual features.
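The rotation-prediction pretext task from the first paradigm can be sketched as follows (a NumPy illustration, not the paper's code): each image is rotated by 0°, 90°, 180°, and 270°, and the rotation index serves as the self-supervised label the network must predict.

```python
import numpy as np

def rotation_pretext_batch(images):
    """Build a 4x larger batch of rotated copies with rotation labels 0..3.

    images: array of shape (N, H, W); each image is rotated by k quarter-turns
    for k in {0, 1, 2, 3}, and k is the self-supervised target label.
    """
    rotated, labels = [], []
    for k in range(4):
        rotated.append(np.rot90(images, k=k, axes=(1, 2)))  # rotate in the H-W plane
        labels.append(np.full(len(images), k))
    return np.concatenate(rotated), np.concatenate(labels)

batch = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # two toy 4x4 "images"
x, y = rotation_pretext_batch(batch)
print(x.shape, y.shape)  # (8, 4, 4) (8,)
```

No annotation is needed to produce these labels, which is what lets the auxiliary task consume extra unlabeled images.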

Results

The paper reports consistent improvements across benchmark datasets: MiniImageNet, CIFAR-FS, and tiered-MiniImageNet. The methodology yields significant gains in recognition accuracy on novel classes compared to existing few-shot learning methods. The improvements are particularly pronounced for high-capacity architectures such as WRN-28-10, and a notable boost in accuracy is achieved when combining self-supervision with the Cosine Classifier approach in the few-shot learning stage.
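A cosine classifier scores a query feature by the scaled cosine similarity between the L2-normalized feature and per-class weight vectors. The sketch below is a minimal NumPy illustration; the scale value `10.0` and the toy prototypes are assumptions for the example, not values from the paper.

```python
import numpy as np

def cosine_scores(features, weights, scale=10.0):
    """Cosine-classifier logits: scaled cosine similarity between
    L2-normalized feature vectors and per-class weight vectors."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * f @ w.T

feats = np.array([[1.0, 0.0], [0.0, 2.0]])      # two query features
protos = np.array([[2.0, 0.0], [0.0, 0.5]])     # two class weight vectors
print(cosine_scores(feats, protos))             # [[10.  0.], [ 0. 10.]]
```

Because only directions matter after normalization, such classifiers reward feature extractors that separate classes angularly, which is one reason richer self-supervised representations pair well with them.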

Implications and Future Research

The proposed integration of self-supervision into few-shot learning showcases a viable pathway to reduce the reliance on large amounts of labeled data, making model training more efficient in real-world scenarios where labeled data is scarce. By utilizing unlabeled data effectively, this approach paves the way for more adaptable and robust machine learning models that can be deployed across various tasks with minimal task-specific data preparation.

Looking forward, further exploration could assess additional self-supervised tasks or architectures to further enhance feature learning. The approach could also potentially be adapted to domains beyond visual learning, such as natural language processing, where few-shot learning remains a challenging yet rewarding task. This work also opens avenues for future research into optimizing the balance between the self-supervised and few-shot learning components to tailor models to specific applications and datasets.