
Decoder Choice Network for Meta-Learning (1909.11446v2)

Published 25 Sep 2019 in cs.LG and stat.ML

Abstract: Meta-learning has been widely used for few-shot learning and fast model adaptation. One family of meta-learning methods learns to control the gradient-descent process so that gradient-based learning achieves both high speed and good generalization. This work proposes a method that controls the gradient descent of a neural network's model parameters by constraining them to a low-dimensional latent space. The main challenge of this idea is that a decoder with a very large number of parameters would normally be required. This work designs a decoder with a typical structure and shares part of the weights within the decoder to reduce the number of required parameters. In addition, ensemble learning is combined with the proposed approach to further improve performance. The results show the superior performance of the proposed approach on the Omniglot and miniImageNet classification tasks.
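The central idea of the abstract can be sketched as follows: rather than running gradient descent on a network's full parameter vector W, keep a low-dimensional latent code z and a decoder D with W = D(z), and descend in z-space via the chain rule. This is a minimal toy illustration only; the decoder here is linear, and all names (`latent_dim`, `decode`, `grad_z`) are illustrative assumptions, not the paper's actual architecture or weight-sharing scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, n_params = 4, 32  # z is much smaller than W
# Shared decoder weights (a stand-in for the paper's weight-shared decoder)
A = rng.standard_normal((n_params, latent_dim)) * 0.1

def decode(z):
    """Linear decoder: maps latent code z to the full parameter vector W."""
    return A @ z

def loss(W, x, y):
    """Toy squared-error loss of a linear model with parameters W."""
    return 0.5 * np.sum((x @ W - y) ** 2)

def grad_z(z, x, y):
    """Chain rule: dL/dz = A^T (dL/dW), so the update stays in z-space."""
    W = decode(z)
    dW = x.T @ (x @ W - y)
    return A.T @ dW

# Toy regression task
x = rng.standard_normal((16, n_params))
y = rng.standard_normal(16)

z = np.zeros(latent_dim)
initial_loss = loss(decode(z), x, y)
for _ in range(100):            # gradient descent on z, never on W directly
    z -= 0.01 * grad_z(z, x, y)
final_loss = loss(decode(z), x, y)

print(final_loss < initial_loss)
```

Because the search happens in a 4-dimensional space instead of a 32-dimensional one, the adaptation has far fewer degrees of freedom, which is the mechanism the paper exploits for fast, regularized adaptation.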

Authors (5)
  1. Jialin Liu (97 papers)
  2. Fei Chao (53 papers)
  3. Longzhi Yang (10 papers)
  4. Chih-Min Lin (6 papers)
  5. Qiang Shen (19 papers)
Citations (9)
