Scalable Meta-Learning with Gaussian Processes (2312.00742v1)

Published 1 Dec 2023 in stat.ML, cs.AI, and cs.LG

Abstract: Meta-learning is a powerful approach that exploits historical data to quickly solve new tasks from the same distribution. In the low-data regime, methods based on the closed-form posterior of Gaussian processes (GP) together with Bayesian optimization have achieved high performance. However, these methods are either computationally expensive or introduce assumptions that hinder a principled propagation of uncertainty between task models. This may disrupt the balance between exploration and exploitation during optimization. In this paper, we develop ScaML-GP, a modular GP model for meta-learning that is scalable in the number of tasks. Our core contribution is a carefully designed multi-task kernel that enables hierarchical training and task scalability. Conditioning ScaML-GP on the meta-data exposes its modular nature, yielding a test-task prior that combines the posteriors of meta-task GPs. In synthetic and real-world meta-learning experiments, we demonstrate that ScaML-GP can learn efficiently with both few and many meta-tasks.
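The abstract's central structural claim is that conditioning on the meta-data yields a test-task prior that combines the posteriors of independently trained meta-task GPs. The sketch below illustrates that idea only; it is not the authors' implementation. It assumes an additive multi-task structure with hypothetical task weights `w` and a standard RBF kernel, and all function names (`rbf_kernel`, `gp_posterior`) are illustrative.

```python
# Minimal sketch (not the paper's code) of a test-task GP prior built by
# combining the posteriors of independently trained meta-task GPs.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between two sets of inputs.
    d2 = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2.0 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    # Closed-form GP posterior mean and covariance at the test inputs.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    V = np.linalg.solve(L, K_s)
    return K_s.T @ alpha, K_ss - V.T @ V

rng = np.random.default_rng(0)

# Each meta-task GP is conditioned only on its own data, so the meta-task
# models can be fit independently (hierarchical training) and the cost
# scales linearly in the number of tasks.
meta_tasks = []
for _ in range(5):
    X = rng.uniform(-3.0, 3.0, size=(15, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(15)
    meta_tasks.append((X, y))

X_test = np.linspace(-3.0, 3.0, 50)[:, None]

# Hypothetical task weights; in a full model these would be learned.
w = np.full(len(meta_tasks), 1.0 / len(meta_tasks))

# Test-task prior: a weighted combination of meta-task posteriors plus a
# residual test-task kernel for what the meta-tasks cannot explain.
prior_mean = np.zeros(len(X_test))
prior_cov = rbf_kernel(X_test, X_test)  # residual test-task component
for w_i, (X, y) in zip(w, meta_tasks):
    mu_i, cov_i = gp_posterior(X, y, X_test)
    prior_mean += w_i * mu_i
    prior_cov += w_i**2 * cov_i

print(prior_mean.shape, prior_cov.shape)  # (50,) (50, 50)
```

The weighted-sum-of-posteriors form mirrors the modular prior described in the abstract, but the paper's actual multi-task kernel and weight parameterization should be taken from the paper itself.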

Authors (7)
  1. Petru Tighineanu (11 papers)
  2. Lukas Grossberger (1 paper)
  3. Paul Baireuther (5 papers)
  4. Kathrin Skubch (8 papers)
  5. Stefan Falkner (14 papers)
  6. Julia Vinogradska (9 papers)
  7. Felix Berkenkamp (29 papers)
Citations (3)
