
Learning to Propagate for Graph Meta-Learning (1909.05024v2)

Published 11 Sep 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Meta-learning extracts common knowledge from learning different tasks and uses it for unseen tasks. It can significantly improve tasks that suffer from insufficient training data, e.g., few shot learning. In most meta-learning methods, tasks are implicitly related by sharing parameters or optimizer. In this paper, we show that a meta-learner that explicitly relates tasks on a graph describing the relations of their output dimensions (e.g., classes) can significantly improve few shot learning. The graph's structure is usually free or cheap to obtain but has rarely been explored in previous works. We develop a novel meta-learner of this type for prototype-based classification, in which a prototype is generated for each class, such that the nearest neighbor search among the prototypes produces an accurate classification. The meta-learner, called "Gated Propagation Network (GPN)", learns to propagate messages between prototypes of different classes on the graph, so that learning the prototype of each class benefits from the data of other related classes. In GPN, an attention mechanism aggregates messages from neighboring classes of each class, with a gate choosing between the aggregated message and the message from the class itself. We train GPN on a sequence of tasks from many-shot to few shot generated by subgraph sampling. During training, it is able to reuse and update previously achieved prototypes from the memory in a life-long learning cycle. In experiments, under different training-test discrepancy and test task generation settings, GPN outperforms recent meta-learning methods on two benchmark datasets. The code of GPN and dataset generation is available at https://github.com/liulu112601/Gated-Propagation-Net.

Overview of "Learning to Propagate for Graph Meta-Learning"

The paper "Learning to Propagate for Graph Meta-Learning" introduces a novel approach to enhance meta-learning by leveraging graph structures that encapsulate relationships among classes. The proposed strategy is aimed at improving the performance of few-shot learning which often suffers from insufficient training samples. The key contribution of this work is the introduction of the Gated Propagation Network (GPN), which explicitly incorporates graph-based relationships into meta-learning processes, facilitating the transfer of knowledge across related tasks.

Core Concepts and Architecture

Meta-learning, or "learning to learn," addresses the challenge of training models with limited data by extracting shared knowledge from related tasks. The authors propose that tasks can be explicitly connected through graph structures that define inter-class relationships. This approach extends beyond traditional meta-learning methodologies that primarily rely on shared parameters or optimizers. The GPN is designed to work with prototype-based classification, where each class is represented by a prototype, and classification is achieved through nearest-neighbor search among prototype representations.
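
To ground the prototype-based setup, here is a minimal sketch of the recipe GPN builds on (as in Prototypical Networks): embed the support set, average each class's embeddings to form its prototype, and label each query by its nearest prototype. The `encoder` and all tensor shapes are illustrative assumptions rather than the authors' code.

```python
import torch

def classify_by_prototypes(encoder, support_x, support_y, query_x, n_classes):
    """Classify queries by nearest class prototype in embedding space.

    support_x: (n_support, ...) support examples
    support_y: (n_support,) integer class labels in [0, n_classes)
    query_x:   (n_query, ...) query examples
    """
    z_support = encoder(support_x)                 # (n_support, d)
    z_query = encoder(query_x)                     # (n_query, d)

    # Prototype = mean embedding of each class's support examples.
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_classes)
    ])                                             # (n_classes, d)

    # Nearest-neighbor search: negative squared Euclidean distance as score.
    dists = torch.cdist(z_query, prototypes) ** 2  # (n_query, n_classes)
    return (-dists).argmax(dim=1)                  # predicted class per query
```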

GPN performs message passing across the graph: each class (node) shares its prototype with its neighbors, so that learning each class's prototype benefits from the data of related classes. During propagation, an attention mechanism weighs the messages received from neighboring nodes, and a gate decides between the aggregated neighbor message and the node's own message. Notably, GPN maintains and updates prototypes in a memory across tasks, supporting lifelong learning.
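
The sketch below illustrates a single propagation step in the spirit of GPN: scaled dot-product attention aggregates messages from neighboring classes, and a learned gate mixes the aggregated message with the node's own prototype. The particular attention and gate parameterizations here are simplified assumptions; the exact architecture is in the authors' repository.

```python
import torch
import torch.nn.functional as F

def gated_propagation_step(prototypes, adjacency, W_msg, gate_mlp):
    """One attention-plus-gate propagation round over the class graph.

    prototypes: (n_classes, d) current class prototypes (one per node)
    adjacency:  (n_classes, n_classes) 0/1 relation mask; every class is
                assumed to have at least one neighbor
    W_msg:      (d, d) learnable message transform
    gate_mlp:   module mapping (n_classes, 2*d) -> (n_classes, 1) gate logits
    """
    d = prototypes.shape[1]
    messages = prototypes @ W_msg                    # transformed prototypes to send
    scores = (prototypes @ messages.t()) / d ** 0.5  # scaled dot-product attention
    scores = scores.masked_fill(adjacency == 0, float('-inf'))
    attn = F.softmax(scores, dim=1)                  # weights over each node's neighbors
    neighbor_msg = attn @ messages                   # aggregated message from neighbors

    # The gate chooses between the aggregated neighbor message and the
    # node's own prototype.
    g = torch.sigmoid(gate_mlp(torch.cat([prototypes, neighbor_msg], dim=1)))
    return g * neighbor_msg + (1 - g) * prototypes
```

In the full model, several such steps are stacked, and the propagated prototypes replace the raw class means used in the nearest-neighbor classification above.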

Experimental Evaluation

The efficacy of GPN was evaluated in multiple settings on benchmark datasets derived from tieredImageNet that reflect varying distances between training and test classes. Under both random and snowball sampling of test tasks, GPN outperforms existing meta-learning models, including Prototypical Networks and Graph Neural Networks. The results show that GPN's gains in classification accuracy are largest when the graph closely relates training classes to unseen test classes, highlighting the value of weakly supervised graph information in meta-learning.
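
As an illustration of how related test tasks might be generated, the following is a hedged sketch of snowball sampling on the class graph: starting from seed classes, the sampled set grows by repeatedly pulling in graph neighbors, so the resulting classes are closely related. The function and its parameters are hypothetical, not the paper's exact procedure.

```python
import random

def snowball_sample(neighbors, n_classes, n_seeds=1, hop_size=5):
    """Sample a set of related classes by expanding along graph edges.

    neighbors: dict mapping each class ID to a list of related class IDs
    """
    sampled = set(random.sample(list(neighbors), n_seeds))  # random seed classes
    frontier = set(sampled)
    while len(sampled) < n_classes and frontier:
        # Unvisited neighbors of the current frontier become candidates.
        candidates = {v for u in frontier for v in neighbors[u]} - sampled
        if not candidates:
            break  # graph component exhausted before reaching n_classes
        step = random.sample(list(candidates),
                             min(hop_size, len(candidates), n_classes - len(sampled)))
        sampled.update(step)
        frontier = set(step)
    return sampled
```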

Implications and Future Directions

The integration of graph structures into meta-learning frameworks, as realized in GPN, represents a notable advance with potential applications in domains such as biological taxonomy, disease classification, and e-commerce item categorization. The paper sets the stage for further exploration in several directions:

  1. Graph Structure Utilization: Future research could broaden graph meta-learning by integrating more complex graph structures or multi-modal graphs in which nodes and edges carry different types and weights.
  2. Scalability: Investigating GPN's computational requirements on large graphs with many nodes would be valuable.
  3. Transfer Learning: Exploring how GPN can support transfer learning across tasks drawn from significantly different distributions is another open question.
  4. Cross-Domain Graph Meta-Learning: Because the methodology is domain-agnostic, extending GPN to challenges in other fields is a promising avenue for research.

Overall, the paper clearly demonstrates the necessity and advantage of explicitly modeling task relationships with graph structures in meta-learning, providing a solid foundation for future developments in AI and machine learning.

Authors (5)
  1. Lu Liu (464 papers)
  2. Tianyi Zhou (172 papers)
  3. Guodong Long (115 papers)
  4. Jing Jiang (192 papers)
  5. Chengqi Zhang (74 papers)
Citations (94)