
Learning Primitive-aware Discriminative Representations for Few-shot Learning (2208.09717v2)

Published 20 Aug 2022 in cs.CV

Abstract: Few-shot learning (FSL) aims to learn a classifier that can be easily adapted to recognize novel classes with only a few labeled examples. Some recent work on FSL has yielded promising classification performance, where image-level features are used to calculate the similarity among samples for classification. However, image-level features ignore the abundant fine-grained and structural information of objects that may be transferable and consistent between seen and unseen classes. How can humans easily identify novel classes from only a few samples? Research in cognitive science argues that humans can recognize novel categories through primitives: although base and novel categories are non-overlapping, they can share some primitives in common. Inspired by this research, we propose a Primitive Mining and Reasoning Network (PMRN) to learn primitive-aware representations on top of a metric-based FSL model. Concretely, we first add a Self-supervision Jigsaw task (SSJ) in parallel with the feature extractor, guiding the model to encode visual patterns corresponding to object parts into feature channels. To further mine discriminative representations, an Adaptive Channel Grouping (ACG) method is applied to cluster and weight spatially and semantically related visual patterns, generating a group of visual primitives. To further enhance the discriminability and transferability of the primitives, we propose a visual primitive Correlation Reasoning Network (CRN), based on a graph convolutional network, to learn rich structural information and internal correlations among primitives. Finally, a primitive-level metric is used for classification within each meta-task under an episodic training strategy. Extensive experiments show that our method achieves state-of-the-art results on six standard benchmarks.
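The abstract names several components (the SSJ jigsaw pretext task, Adaptive Channel Grouping, the GCN-based Correlation Reasoning Network, and a primitive-level metric) without detailing them. The PyTorch sketch below is only a loose interpretation of the last three ideas, not the paper's implementation: the soft channel-to-primitive assignment, the similarity-based adjacency used for one round of graph propagation, the averaged cosine metric, and all module names and shapes are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PrimitiveGrouping(nn.Module):
    """Toy channel grouping: softly assign C feature channels to K primitives."""

    def __init__(self, in_channels: int, num_primitives: int):
        super().__init__()
        # Learnable channel-to-primitive assignment logits, shape (K, C).
        self.assign_logits = nn.Parameter(torch.randn(num_primitives, in_channels))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) backbone features; pool spatially to descriptors (B, C).
        desc = feat.mean(dim=(2, 3))
        # Soft channel weights per primitive, rows sum to 1, shape (K, C).
        assign = F.softmax(self.assign_logits, dim=1)
        # Primitive embeddings: channels reweighted per primitive -> (B, K, C).
        return desc.unsqueeze(1) * assign.unsqueeze(0)


class PrimitiveReasoning(nn.Module):
    """One layer of graph-style message passing among primitives (rough CRN stand-in)."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Linear(channels, channels)

    def forward(self, prim: torch.Tensor) -> torch.Tensor:
        # prim: (B, K, C); adjacency from pairwise cosine similarity of primitives.
        normed = F.normalize(prim, dim=-1)
        adj = F.softmax(normed @ normed.transpose(1, 2), dim=-1)  # (B, K, K)
        # Propagate and residually refine each primitive with its neighbours.
        return prim + F.relu(self.proj(adj @ prim))


def primitive_metric_logits(query: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Primitive-level metric: mean cosine similarity over matching primitives.

    query:  (Q, K, C) query primitive embeddings
    protos: (N, K, C) per-class prototypes (support primitives averaged per class)
    returns (Q, N) classification logits.
    """
    q = F.normalize(query, dim=-1)
    p = F.normalize(protos, dim=-1)
    sims = torch.einsum("qkc,nkc->qnk", q, p)  # similarity per (query, class, primitive)
    return sims.mean(dim=-1)


if __name__ == "__main__":
    # Shapes for a toy 5-way 3-shot episode with a 64-channel backbone.
    Q, C, H, W, K, N, S = 8, 64, 5, 5, 6, 5, 3
    group, reason = PrimitiveGrouping(C, K), PrimitiveReasoning(C)
    query_feat = torch.randn(Q, C, H, W)
    support_feat = torch.randn(N, S, C, H, W)
    support_prim = reason(group(support_feat.flatten(0, 1))).view(N, S, K, C)
    protos = support_prim.mean(dim=1)  # (N, K, C)
    logits = primitive_metric_logits(reason(group(query_feat)), protos)
    print(logits.shape)  # torch.Size([8, 5])

The jigsaw self-supervision and the episodic training loop are omitted here; in the paper those are what push feature channels toward part-like patterns before grouping.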

Citations (1)
