
Learning to Propagate Labels: Transductive Propagation Network for Few-shot Learning (1805.10002v5)

Published 25 May 2018 in cs.LG, cs.CV, cs.NE, and stat.ML

Abstract: The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class. The recently introduced meta-learning approaches tackle this problem by learning a generic classifier across a large number of multiclass classification tasks and generalizing the model to a new task. Yet, even with such meta-learning, the low-data problem in the novel classification task still remains. In this paper, we propose Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem. Specifically, we propose to learn to propagate labels from labeled instances to unlabeled test instances, by learning a graph construction module that exploits the manifold structure in the data. TPN jointly learns both the parameters of feature embedding and the graph construction in an end-to-end manner. We validate TPN on multiple benchmark datasets, on which it largely outperforms existing few-shot learning approaches and achieves the state-of-the-art results.

Authors (7)
  1. Yanbin Liu (18 papers)
  2. Juho Lee (106 papers)
  3. Minseop Park (5 papers)
  4. Saehoon Kim (19 papers)
  5. Eunho Yang (89 papers)
  6. Sung Ju Hwang (178 papers)
  7. Yi Yang (856 papers)
Citations (641)

Summary

Transductive Propagation Networks for Few-shot Learning

The paper "Learning to Propagate Labels: Transductive Propagation Network for Few-shot Learning" presents a novel approach to few-shot learning, where the goal is to develop classifiers that generalize from a limited number of training examples. Its focus is the Transductive Propagation Network (TPN), a meta-learning framework designed for transductive inference, which addresses the persistent low-data problem by classifying the entire test set concurrently rather than one instance at a time.

Few-shot learning traditionally relies on meta-learning strategies that use episodic training across diverse tasks to generalize from limited data. However, these approaches remain vulnerable to overfitting on the scarce data available for a new task. This motivates the proposed TPN framework, which adopts a transductive strategy: it leverages the relationships within the entire query set to improve classification.
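The episodic training regime mentioned above can be sketched as follows. This is a minimal illustration, not the paper's code: the dataset format (a dict from class label to examples) and the function name `sample_episode` are assumptions for the sake of the example, and the N-way/K-shot defaults follow common benchmark settings.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_query=15):
    """Sample one few-shot episode (task) from a class-indexed dataset.

    dataset: dict mapping class label -> list of examples (hypothetical format).
    Returns a labeled support set and a query set. An inductive method
    classifies each query item independently; a transductive method like
    TPN classifies the whole query set jointly.
    """
    classes = random.sample(list(dataset), n_way)
    support, query = [], []
    for idx, c in enumerate(classes):
        # Draw k_shot support and q_query query examples without overlap.
        examples = random.sample(dataset[c], k_shot + q_query)
        support += [(x, idx) for x in examples[:k_shot]]
        query += [(x, idx) for x in examples[k_shot:]]
    return support, query

# Toy usage: 10 classes with 30 items each gives 5 support, 75 query items.
toy = {c: [f"img_{c}_{i}" for i in range(30)] for c in range(10)}
support, query = sample_episode(toy)
```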

Key Contributions

  1. Transductive Inference Framework: TPN represents an advance in few-shot learning by implementing transductive inference, predicting labels for the test set collectively instead of individually. This approach exploits the manifold structure of data, suggesting improved performance through the propagation of labels across interconnected test instances.
  2. Graph Construction and Label Propagation: Central to TPN is the novel graph construction strategy, which adapts the label propagation network through episodic meta-learning. This consists of mapping inputs to an embedding space, constructing a graph that captures the data's intrinsic manifold characteristics, and iteratively propagating labels from a support set to a query set, producing a closed-form solution that optimizes the decision boundary in few-shot scenarios.
  3. Strong Numerical Results: TPN achieved superior performance compared to state-of-the-art few-shot learning methods on benchmark datasets. It demonstrated remarkable improvement in classification accuracy, particularly in settings with fewer training shots, where the benefits of transductive inference are most pronounced.
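The graph construction and label propagation steps listed above can be sketched in NumPy. This is a simplified illustration under stated assumptions: the paper learns a per-example length-scale with a small network, whereas here `sigma` is a fixed scalar, and the function name and defaults are hypothetical. The closed-form step F* = (I - alpha*S)^(-1) Y is the standard label-propagation solution the summary refers to.

```python
import numpy as np

def label_propagation(emb_support, y_support, emb_query, n_way,
                      alpha=0.99, sigma=1.0, k=20):
    """Closed-form label propagation over a Gaussian-similarity k-NN graph.

    emb_support, emb_query: embedded support/query points (rows).
    y_support: integer class labels for the support rows.
    Returns predicted class indices for the query rows.
    """
    X = np.vstack([emb_support, emb_query])  # all graph nodes
    n = X.shape[0]
    # Gaussian similarity W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    # Keep only the k strongest edges per row, then symmetrize.
    if k < n - 1:
        drop = np.argsort(-W, axis=1)[:, k:]
        for i in range(n):
            W[i, drop[i]] = 0
        W = np.maximum(W, W.T)
    # Normalized adjacency S = D^{-1/2} W D^{-1/2}.
    d = W.sum(1)
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot labels for support rows, zeros for query rows.
    Y = np.zeros((n, n_way))
    Y[np.arange(len(y_support)), y_support] = 1
    # Closed-form propagation: F* = (I - alpha S)^{-1} Y.
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F[len(y_support):].argmax(1)
```

In TPN both the embedding network and the graph parameters are trained end-to-end through this closed-form solve, which is differentiable, so the episodic loss on the query predictions shapes the graph itself.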

Practical and Theoretical Implications

Practically, TPN provides a robust mechanism for few-shot classification tasks, which can be particularly useful in domains where acquiring labeled data is expensive or impractical. Theoretically, it offers a pathway for exploring transductive methods within the few-shot paradigm, challenging the conventional wisdom of inductive inference strategies. This contribution extends the applicability of graph-based learning to scenarios where rapid adaptation to new tasks is necessitated by minimal data availability.

Future Developments

Future research directions might include optimizing the computational efficiency of the transductive inference process—especially concerning the graph construction and label propagation steps. Investigating alternative or adaptive distance metrics within the episodic framework could further enhance the adaptability and accuracy of TPN in diverse application areas. Additionally, expanding transductive inference techniques to broader AI tasks could uncover further performance improvements and novel applications.

In conclusion, the TPN framework advances the domain of few-shot learning by integrating transductive inference, illustrating compelling performance gains and fostering rich opportunities for future exploration in efficient and effective learning from minimal data.