
Prior-Knowledge and Attention-based Meta-Learning for Few-Shot Learning (1812.04955v5)

Published 11 Dec 2018 in cs.CV and cs.LG

Abstract: Recently, meta-learning has been shown to be a promising way to solve few-shot learning. In this paper, inspired by the human cognition process, which utilizes both prior knowledge and visual attention when learning new knowledge, we present a novel meta-learning paradigm with three developments that introduce an attention mechanism and prior knowledge into meta-learning. In our approach, prior knowledge helps the meta-learner express the input data in a high-level representation space, and the attention mechanism enables the meta-learner to focus on key features of the data in that space. Compared with existing meta-learning approaches, which pay little attention to prior knowledge and visual attention, our approach alleviates the meta-learner's few-shot cognition burden. Furthermore, we identify a Task-Over-Fitting (TOF) problem, in which the meta-learner generalizes poorly across different K-shot learning tasks, and propose a Cross-Entropy across Tasks (CET) metric to model and address it. Extensive experiments demonstrate that our meta-learner achieves state-of-the-art performance on several few-shot learning benchmarks while greatly alleviating the TOF problem.
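The core idea in the abstract — a prior-knowledge encoder mapping inputs into a high-level representation space, with an attention mechanism re-weighting key features before few-shot classification — can be illustrated with a minimal sketch. This is not the authors' method or code; the encoder, attention form, and nearest-prototype classifier below are hypothetical stand-ins chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_encoder(x):
    # Stand-in for a pretrained feature extractor (the "prior knowledge");
    # here just a fixed nonlinear projection into an 8-dim representation space.
    W = np.linspace(-1.0, 1.0, x.shape[-1] * 8).reshape(x.shape[-1], 8)
    return np.tanh(x @ W)

def attend(features, attn_logits):
    # Channel-wise attention: softmax weights emphasize key feature dimensions.
    w = np.exp(attn_logits - attn_logits.max())
    w /= w.sum()
    return features * w

# Toy 2-way 1-shot episode: one support example per class, one query drawn
# as a small perturbation of the class-1 support example.
support = rng.normal(size=(2, 4))                  # [n_classes, input_dim]
query = support[1] + 0.01 * rng.normal(size=4)

attn_logits = rng.normal(size=8)                   # learned by the meta-learner in practice
protos = attend(prior_encoder(support), attn_logits)  # attended class prototypes
q = attend(prior_encoder(query), attn_logits)

# Classify the query by nearest prototype in the attended representation space.
pred = int(np.argmin(((protos - q) ** 2).sum(axis=1)))
print(pred)
```

Because the query is a tiny perturbation of the class-1 support example and the encoder is smooth, the nearest attended prototype is class 1. In the paper's setting, the attention weights would be meta-learned across episodes rather than drawn at random.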

Authors (8)
  1. Yunxiao Qin (22 papers)
  2. Chenxu Zhao (29 papers)
  3. Zezheng Wang (14 papers)
  4. Xiangyu Zhu (85 papers)
  5. Guojun Qi (15 papers)
  6. Jingping Shi (3 papers)
  7. Zhen Lei (205 papers)
  8. WeiGuo Zhang (11 papers)
Citations (23)
