
On the Subspace Structure of Gradient-Based Meta-Learning (2207.03804v2)

Published 8 Jul 2022 in cs.LG

Abstract: In this work we provide an analysis of the distribution of the post-adaptation parameters of Gradient-Based Meta-Learning (GBML) methods. Previous work has observed that, in the case of image classification, this adaptation takes place only in the last layers of the network. We propose the more general notion that parameters are updated over a low-dimensional *subspace* of the same dimensionality as the task space, and show that this holds for regression as well. Furthermore, the induced subspace structure provides a method to estimate the intrinsic dimension of the space of tasks of common few-shot learning datasets.
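
To make the subspace claim concrete, below is a minimal, self-contained sketch (not the authors' code or exact procedure) of the kind of analysis the abstract describes: adapt a shared initialization to many tasks with a few inner-loop gradient steps, collect the post-adaptation parameter updates, and inspect their principal components. For a toy sinusoid-regression family with a 2-dimensional task space (amplitude and phase), the explained variance should concentrate in roughly two directions. The task family, the random-feature model, the shared support grid, and the untrained initialization are all illustrative assumptions.

```python
# Hedged sketch: PCA/SVD on post-adaptation parameter updates across tasks,
# in the spirit of the paper's subspace analysis. The toy sinusoid task
# family and random-feature regressor are illustrative, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy task family: y = a * sin(x + b); the task space is 2-dimensional (a, b).
def sample_task():
    a = rng.uniform(0.5, 2.0)        # amplitude
    b = rng.uniform(0.0, 2 * np.pi)  # phase
    return a, b

# Fixed random features so the regression model is linear in its weights w.
d = 64                               # parameter dimension
omega = rng.normal(size=d)
phase = rng.uniform(0, 2 * np.pi, size=d)
def features(x):                     # x: (n,) -> (n, d)
    return np.cos(np.outer(x, omega) + phase)

x_support = np.linspace(-5, 5, 50)   # shared support grid (simplification)
Phi = features(x_support)

w0 = rng.normal(scale=0.1, size=d)   # shared initialization (not meta-trained here)
lr, n_steps, n_tasks = 0.01, 5, 500

deltas = []
for _ in range(n_tasks):
    a, b = sample_task()
    y = a * np.sin(x_support + b)
    w = w0.copy()
    for _ in range(n_steps):         # inner-loop gradient adaptation (MAML-style)
        grad = Phi.T @ (Phi @ w - y) / len(x_support)
        w -= lr * grad
    deltas.append(w - w0)            # post-adaptation parameter update

D = np.array(deltas)
D -= D.mean(axis=0)                  # center before PCA
sv = np.linalg.svd(D, compute_uv=False)
var = sv**2 / np.sum(sv**2)

# Variance should concentrate in ~2 directions, matching the 2-D task space.
print("explained variance of top 5 components:", np.round(var[:5], 4))
print("estimated subspace dim (95% variance):",
      int(np.searchsorted(np.cumsum(var), 0.95) + 1))
```

In this toy setting the updates lie (up to centering) in the span of two fixed directions determined by the sine and cosine components of the tasks, so the spectrum collapses after two components; the same kind of spectral cutoff is what motivates using the subspace structure as an intrinsic-dimension estimate for the task space.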

Authors (5)
  1. Gustaf Tegnér (3 papers)
  2. Alfredo Reichlin (8 papers)
  3. Hang Yin (77 papers)
  4. Mårten Björkman (49 papers)
  5. Danica Kragic (126 papers)
