
Multi-level Semantic Feature Augmentation for One-shot Learning (1804.05298v4)

Published 15 Apr 2018 in cs.CV

Abstract: The ability to quickly recognize and learn new visual concepts from limited samples enables humans to swiftly adapt to new environments. This ability is enabled by semantic associations of novel concepts with those that have already been learned and stored in memory. Computers can start to ascertain similar abilities by utilizing a semantic concept space. A concept space is a high-dimensional semantic space in which similar abstract concepts appear close and dissimilar ones far apart. In this paper, we propose a novel approach to one-shot learning that builds on this idea. Our approach learns to map a novel sample instance to a concept, relates that concept to the existing ones in the concept space and generates new instances, by interpolating among the concepts, to help learning. Instead of synthesizing new image instances, we propose to directly synthesize instance features by leveraging semantics using a novel auto-encoder network we call dual TriNet. The encoder part of the TriNet learns to map multi-layer visual features of deep CNNs, that is, multi-level concepts, to a semantic vector. In semantic space, we search for related concepts, which are then projected back into the image feature spaces by the decoder portion of the TriNet. Two strategies in the semantic space are explored. Notably, this seemingly simple strategy results in complex augmented feature distributions in the image feature space, leading to substantially better performance.

Multi-level Semantic Feature Augmentation for One-shot Learning

The paper, "Multi-level Semantic Feature Augmentation for One-shot Learning," by Zitian Chen et al., introduces an innovative method for enhancing one-shot learning through feature augmentation in the semantic space. The research addresses the significant challenge of data scarcity in few-shot learning scenarios, where traditional approaches require extensive labeled datasets. By leveraging semantic relationships, this work proposes a novel dual TriNet architecture to generate new instance features aimed at improving classification performance with minimal data.

The core contribution of this work lies in the dual TriNet's design, which consists of an encoder-decoder network structure. The encoder TriNet maps multi-layer visual features obtained from convolutional neural networks (CNNs) into a high-dimensional semantic space. Once mapped, this semantic representation allows for the augmentation of data by introducing Gaussian noise or utilizing semantic neighborhood retrieval. The decoder TriNet then projects these semantically augmented vectors back into the image feature space, ultimately producing augmented instance features.
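The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the trained encoder and decoder TriNets are replaced here by hypothetical random linear maps, and the feature and semantic dimensions (512 and 300) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: CNN feature size and semantic (e.g. word2vec-like) size.
D_VIS, D_SEM = 512, 300

# Stand-ins for the trained encoder/decoder TriNets: random linear maps.
W_enc = rng.normal(scale=0.05, size=(D_SEM, D_VIS))
W_dec = rng.normal(scale=0.05, size=(D_VIS, D_SEM))

def encode(feat):
    """Map a visual feature vector into the semantic space."""
    return W_enc @ feat

def decode(sem):
    """Project a semantic vector back into the visual feature space."""
    return W_dec @ sem

def augment_gaussian(feat, n_aug=4, sigma=0.1):
    """Strategy 1: perturb the semantic embedding with Gaussian noise,
    then decode each perturbed vector into a synthetic visual feature."""
    z = encode(feat)
    noisy = z + rng.normal(scale=sigma, size=(n_aug, D_SEM))
    return np.stack([decode(v) for v in noisy])

def augment_neighbors(feat, vocab, k=3):
    """Strategy 2: retrieve the k nearest semantic neighbours from a
    vocabulary of embeddings (a (V, D_SEM) matrix) and decode them."""
    z = encode(feat)
    dists = np.linalg.norm(vocab - z, axis=1)
    nearest = vocab[np.argsort(dists)[:k]]
    return np.stack([decode(v) for v in nearest])

# One support example yields several synthetic features for the classifier.
support_feat = rng.normal(size=D_VIS)
vocab = rng.normal(size=(1000, D_SEM))  # toy semantic vocabulary
aug_noise = augment_gaussian(support_feat)       # shape (4, 512)
aug_neigh = augment_neighbors(support_feat, vocab)  # shape (3, 512)
```

The synthetic features would then be pooled with the original support features to train the few-shot classifier; in the paper this is applied independently at multiple CNN layers.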

Through rigorous evaluation, the authors demonstrate that their method effectively augments visual features at multiple network layers, improving classification performance across several datasets, including miniImageNet, CIFAR-100, CUB-200, and Caltech-256. On miniImageNet, for instance, the dual TriNet reaches 58.12% accuracy in the one-shot setting, significantly outperforming the ResNet-18 baseline. Comparable gains are observed on the other datasets, underscoring the method's effectiveness.

The research also carries theoretical implications and suggests applications beyond standard image classification. The framework capitalizes on pre-trained semantic spaces such as word2vec, indicating that the continuity of well-structured semantic spaces can support powerful feature augmentation. The dual TriNet's ability to exploit latent semantic relationships between visual elements may likewise prove beneficial in other domains, such as image segmentation and object detection.

Future work may explore integrating this approach with more complex semantic representations or leveraging different neural architectures. Additionally, the dual TriNet could be adapted for dynamic environments where novel classes continuously emerge, representing a step towards more generalized AI systems capable of learning with minimal supervision.

Authors (6)
  1. Zitian Chen (10 papers)
  2. Yanwei Fu (199 papers)
  3. Yinda Zhang (68 papers)
  4. Yu-Gang Jiang (223 papers)
  5. Xiangyang Xue (169 papers)
  6. Leonid Sigal (102 papers)
Citations (216)