
A Comprehensive Overview and Survey of Recent Advances in Meta-Learning (2004.11149v7)

Published 17 Apr 2020 in cs.LG and stat.ML

Abstract: This article reviews meta-learning, also known as learning-to-learn, which seeks rapid and accurate model adaptation to unseen tasks, with applications in highly automated AI, few-shot learning, natural language processing, and robotics. Unlike deep learning, meta-learning can be applied to few-shot, high-dimensional datasets and further improves model generalization to unseen tasks. Deep learning focuses on in-sample prediction, whereas meta-learning concerns model adaptation for out-of-sample prediction. Meta-learning can continually perform self-improvement toward highly autonomous AI, and it may serve as an additional generalization block complementary to the original deep learning model. It seeks to adapt machine learning models to unseen tasks that differ substantially from the training tasks, and meta-learning with coevolution between agent and environment provides solutions for complex tasks unsolvable by training from scratch. Meta-learning methodology spans a wide range of approaches. We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning, and the Bayesian meta-learning framework. Recent applications concentrate on integrating meta-learning with other machine learning frameworks to provide feasible, integrated problem solutions. We briefly present recent meta-learning advances and discuss potential future research directions.
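
To make the adaptation idea concrete, below is a minimal sketch of the layered (optimization-based) meta-learning family the survey covers, in the style of MAML, written in JAX. The sine-regression task distribution, network size, and learning rates are illustrative assumptions, not details from the paper.

```python
# Minimal MAML-style layered meta-learning sketch (illustrative, not the
# paper's implementation). Tasks are sine regressions with random
# amplitude and phase; the meta-learner finds an initialization that
# adapts to a new task in one inner gradient step.
import jax
import jax.numpy as jnp

def init_params(key):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (1, 40)) * 0.1, "b1": jnp.zeros(40),
        "w2": jax.random.normal(k2, (40, 1)) * 0.1, "b2": jnp.zeros(1),
    }

def forward(params, x):
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def loss(params, x, y):
    return jnp.mean((forward(params, x) - y) ** 2)

def inner_adapt(params, x, y, lr=0.01):
    # Inner loop: one gradient step of task-specific adaptation.
    grads = jax.grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

def meta_loss(params, x_tr, y_tr, x_te, y_te):
    # Outer objective: post-adaptation loss on held-out task data.
    adapted = inner_adapt(params, x_tr, y_tr)
    return loss(adapted, x_te, y_te)

key = jax.random.PRNGKey(0)
params = init_params(key)
meta_lr = 0.001
for step in range(1000):
    key, k_amp, k_ph, k_x = jax.random.split(key, 4)
    amp = jax.random.uniform(k_amp, (), minval=0.5, maxval=2.0)
    phase = jax.random.uniform(k_ph, (), minval=0.0, maxval=jnp.pi)
    x = jax.random.uniform(k_x, (20, 1), minval=-5.0, maxval=5.0)
    y = amp * jnp.sin(x + phase)
    x_tr, y_tr, x_te, y_te = x[:10], y[:10], x[10:], y[10:]
    # Outer loop: differentiate through the inner adaptation step.
    grads = jax.grad(meta_loss)(params, x_tr, y_tr, x_te, y_te)
    params = jax.tree_util.tree_map(
        lambda p, g: p - meta_lr * g, params, grads)
```

The defining design choice is that the outer gradient differentiates through the inner adaptation step, so the learned initialization is one that adapts well after a few task-specific updates rather than one that fits any single task.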

Authors (1)
  1. Huimin Peng (9 papers)
Citations (30)
