
From Learning to Meta-Learning: Reduced Training Overhead and Complexity for Communication Systems (2001.01227v1)

Published 5 Jan 2020 in cs.LG, cs.IT, math.IT, and stat.ML

Abstract: Machine learning methods adapt the parameters of a model, constrained to lie in a given model class, by using a fixed learning procedure based on data or active observations. Adaptation is done on a per-task basis, and retraining is needed when the system configuration changes. The resulting inefficiency in terms of data and training time requirements can be mitigated, if domain knowledge is available, by selecting a suitable model class and learning procedure, collectively known as an inductive bias. However, it is generally difficult to encode prior knowledge into an inductive bias, particularly with black-box model classes such as neural networks. Meta-learning provides a way to automate the selection of an inductive bias. It leverages data or active observations from tasks that are expected to be related to future, and a priori unknown, tasks of interest. With a meta-trained inductive bias, training of a machine learning model can potentially be carried out with reduced training data and/or time complexity. This paper provides a high-level introduction to meta-learning with applications to communication systems.
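
To make the idea of meta-learning an inductive bias concrete, below is a minimal sketch of a Reptile-style meta-learning loop (Nichol et al., 2018) on a toy family of sinusoid-regression tasks. This is an illustration of learning a shared model initialization from related tasks, not the paper's specific algorithm; the random-feature model, task family, and all hyperparameters are illustrative assumptions.

```python
# Sketch: Reptile-style meta-learning of a shared initialization.
# Not the paper's method; toy sinusoid tasks and hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Fixed random Fourier features, so the per-task model is linear in w:
#   f(x; w) = w . phi(x),  phi(x) = cos(omega * x + b)
D = 40
omega = rng.normal(0.0, 2.0, D)
b = rng.uniform(0.0, 2.0 * np.pi, D)

def phi(x):
    return np.cos(np.outer(x, omega) + b)  # shape (n, D)

def sample_task():
    """One 'task' = a sinusoid with random amplitude and phase."""
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def sgd_adapt(w, f, n_shots=10, steps=5, lr=0.05):
    """Inner loop: adapt w to one task from a few labeled examples."""
    x = rng.uniform(-4.0, 4.0, n_shots)
    X, y = phi(x), f(x)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n_shots  # gradient of 0.5 * MSE
        w = w - lr * grad
    return w

# Outer (meta) loop: nudge the shared initialization toward each
# task-adapted solution, so that new tasks need little data/training.
w_meta = np.zeros(D)
for _ in range(2000):
    w_task = sgd_adapt(w_meta.copy(), sample_task())
    w_meta += 0.1 * (w_task - w_meta)  # Reptile meta-update

# With the meta-learned initialization, a new task adapts from few shots:
f_new = sample_task()
w_adapted = sgd_adapt(w_meta.copy(), f_new)
x_test = np.linspace(-4.0, 4.0, 200)
mse = np.mean((phi(x_test) @ w_adapted - f_new(x_test)) ** 2)
print(f"few-shot test MSE after adaptation: {mse:.3f}")
```

The same structure carries over to the communication settings the paper discusses, where each "task" might be a new channel realization or device configuration: the meta-learned initialization plays the role of the inductive bias, and the inner loop is the fast per-task retraining.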

Authors (3)
  1. Osvaldo Simeone (326 papers)
  2. Sangwoo Park (73 papers)
  3. Joonhyuk Kang (59 papers)
Citations (58)
