
Automated Curriculum Learning for Neural Networks (1704.03003v1)

Published 10 Apr 2017 in cs.NE

Abstract: We introduce a method for automatically selecting the path, or syllabus, that a neural network follows through a curriculum so as to maximise learning efficiency. A measure of the amount that the network learns from each data sample is provided as a reward signal to a nonstationary multi-armed bandit algorithm, which then determines a stochastic syllabus. We consider a range of signals derived from two distinct indicators of learning progress: rate of increase in prediction accuracy, and rate of increase in network complexity. Experimental results for LSTM networks on three curricula demonstrate that our approach can significantly accelerate learning, in some cases halving the time required to attain a satisfactory performance level.

Citations (497)

Summary

  • The paper introduces a framework that models curriculum learning as a multi-armed bandit problem with adaptive reward scaling.
  • The paper defines precise learning progress metrics, including prediction gain and variational complexity gain, to track accuracy and network complexity improvements.
  • The paper demonstrates through LSTM experiments on synthetic language modeling, repeat copy, and bAbI tasks that the approach can markedly accelerate learning, in some cases halving training time relative to uniform sampling.

Automated Curriculum Learning for Neural Networks

The paper presents a methodology for automating curriculum learning (CL) in neural networks to enhance learning efficiency. Rather than relying on a hand-designed task ordering, the approach derives a stochastic syllabus through the curriculum by feeding a learning-progress signal as reward into a nonstationary multi-armed bandit algorithm. The reward reflects two distinct indicators of learning progress: the rate of increase in prediction accuracy and the rate of increase in network complexity.
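
For concreteness, the two signal families can be written as follows. This is a sketch consistent with the paper's definitions, with notation assumed here: θ and θ′ denote the network parameters before and after a gradient step on sample x, L is the prediction loss, and Q, P are the variational posterior and prior over network weights in the MDL view.

```latex
% Prediction gain: reduction in loss on x after one parameter update
\nu_{\mathrm{PG}} = L(x, \theta) - L(x, \theta')

% Variational complexity gain: growth of the KL (description-length)
% complexity term of the variational posterior
\nu_{\mathrm{VCG}} = \mathrm{KL}\left(Q_{\theta'} \,\middle\|\, P\right)
                   - \mathrm{KL}\left(Q_{\theta} \,\middle\|\, P\right)
```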

Key Contributions

The authors present a framework that models a curriculum as a multi-armed bandit problem with an adaptive policy that maximizes learning progress. Through this lens, the paper connects CL with the intrinsic motivation literature, leveraging concepts such as prediction gain and complexity gain.

  1. Curriculum as a Multi-Armed Bandit: The paper models the selection of tasks in a curriculum as a multi-armed bandit problem, adapting the Exp3.S algorithm, which is known for its effectiveness in adversarial (and hence nonstationary) settings; a code sketch follows this list.
  2. Learning Progress Indicators: A major focus is on defining metrics for learning progress, including:
    • Prediction Gain (PG) and variants (self, target, and mean prediction gains).
    • Complexity-driven measures such as Variational Complexity Gain (VCG) based on the Minimum Description Length (MDL) principle.
  3. Adaptive Reward Scaling: The approach employs adaptive reward scaling to normalize progress signals, enhancing the robustness across diverse task settings.
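
To make the control loop concrete, here is a minimal Python sketch of an Exp3.S-style syllabus selector with quantile-based adaptive reward scaling. It is an illustration under stated assumptions: the hyperparameter values, the 20th/80th-percentile rescaling window, the simplified weight-sharing term, and the `model`/`tasks` interfaces in the usage loop are hypothetical, not the paper's exact implementation.

```python
import numpy as np

class Exp3S:
    """Sketch of an Exp3.S-style nonstationary bandit for stochastic
    syllabus selection. Hyperparameters and the rescaling quantiles are
    illustrative assumptions, not the paper's exact settings."""

    def __init__(self, n_tasks, eta=1e-2, eps=0.05, alpha=1e-3,
                 history_len=5000):
        self.n = n_tasks
        self.eta = eta            # step size for the weight update
        self.eps = eps            # uniform mixing keeps every task explored
        self.alpha = alpha        # weight sharing, for nonstationarity
        self.history_len = history_len
        self.log_w = np.zeros(n_tasks)   # log arm weights (uniform start)
        self.history = []                # recent raw rewards, for rescaling

    def policy(self):
        # Softmax over log weights, mixed with a uniform distribution.
        w = np.exp(self.log_w - self.log_w.max())
        return (1.0 - self.eps) * w / w.sum() + self.eps / self.n

    def sample_task(self):
        return np.random.choice(self.n, p=self.policy())

    def _rescale(self, r):
        # Adaptive reward scaling: map raw progress into [-1, 1] using
        # low/high quantiles of the recent reward history.
        self.history = (self.history + [r])[-self.history_len:]
        lo, hi = np.percentile(self.history, [20, 80])
        if hi <= lo:
            return 0.0
        return float(np.clip(2.0 * (r - lo) / (hi - lo) - 1.0, -1.0, 1.0))

    def update(self, task, raw_reward):
        probs = self.policy()
        r_hat = np.zeros(self.n)
        # Importance-weighted reward estimate for the chosen arm only.
        r_hat[task] = self._rescale(raw_reward) / probs[task]
        v = self.log_w + self.eta * r_hat
        # Exp3.S-style update in log space: exponential reweighting mixed
        # with a small uniform share of the total weight (simplified here
        # to include all arms), so the policy can track drifting rewards.
        m = v.max()
        log_mean = m + np.log(np.exp(v - m).sum() / self.n)
        self.log_w = np.logaddexp(np.log1p(-self.alpha) + v,
                                  np.log(self.alpha) + log_mean)

# Usage sketch: `tasks` and `model` are hypothetical stand-ins. Prediction
# gain (the drop in loss on a batch after one gradient step on it) is the
# raw reward fed back to the bandit.
# bandit = Exp3S(n_tasks=len(tasks))
# for step in range(total_steps):
#     k = bandit.sample_task()
#     batch = tasks[k].sample_batch()
#     loss_before = model.loss(batch)
#     model.train_step(batch)                 # one gradient update
#     bandit.update(k, loss_before - model.loss(batch))
```

Keeping the weights in log space avoids overflow, and the uniform weight-sharing term is what distinguishes Exp3.S from plain Exp3: it lets the policy keep tracking tasks whose learning progress resurges later in training.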

Experimental Results

Experiments were conducted with LSTM networks on three curricula: synthetic language modeling, repeat copy tasks, and the bAbI question-answering dataset. The results show that bandit-selected syllabi can substantially accelerate learning relative to uniform sampling, in some cases halving the time required to reach a satisfactory performance level.

  • Synthetic Language Modeling: Complexity-gain signals effectively prioritized the more structurally complex languages, fostering more efficient learning.
  • Repeat Copy Task: The framework substantially accelerated progress toward the target task relative to uniform sampling, particularly with gradient-based complexity gains.
  • bAbI Tasks: Prediction gain notably improved both the speed and the number of tasks completed compared to uniform sampling, indicating an ability to discover implicit task orderings.

Implications and Future Directions

The proposed automated curriculum learning framework has substantial implications:

  • Scalability and Efficiency: By automating curriculum discovery, this approach can be pivotal in scaling learning algorithms to complex tasks without manual task ordering.
  • Intrinsic Motivation in AI: The integration with intrinsic motivation theories suggests potential paths for developing AI systems that learn more naturally and efficiently.

Future research could explore further refinement of progress signals, their applicability to broader types of neural architectures, and integration with reinforcement learning frameworks. Moreover, comparing the robustness and adaptability of this framework across varied domains would yield insights into its generality and potential for practical deployment.

This paper lays a noteworthy foundation for automated curriculum learning, aligning with contemporary demands in artificial intelligence for more adaptive and efficient learning paradigms.
