Learning Aligned Cross-Modal Representation for Generalized Zero-Shot Classification (2112.12927v1)

Published 24 Dec 2021 in cs.CV

Abstract: Learning a common latent embedding by aligning the latent spaces of cross-modal autoencoders is an effective strategy for Generalized Zero-Shot Classification (GZSC). However, due to the lack of fine-grained instance-wise annotations, this strategy still easily suffers from the domain shift problem caused by the discrepancy between the visual representation of diversified images and the semantic representation of fixed attributes. In this paper, we propose an innovative autoencoder network that learns Aligned Cross-Modal Representations (dubbed ACMR) for GZSC. Specifically, we propose a novel Vision-Semantic Alignment (VSA) method to strengthen the alignment of cross-modal latent features on the latent subspaces guided by a learned classifier. In addition, we propose a novel Information Enhancement Module (IEM) to reduce the possibility of latent variable collapse while encouraging the discriminative ability of latent variables. Extensive experiments on publicly available datasets demonstrate the state-of-the-art performance of our method.
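
As a rough illustration of the alignment strategy the abstract describes (and not the authors' actual ACMR implementation), the sketch below shows two modality-specific autoencoders, one for visual features and one for class attributes, whose latent codes are pulled toward each other and additionally supervised by a shared classifier on the latent space. All module names, dimensions, and the specific loss terms are assumptions chosen for illustration.

# Minimal sketch of classifier-guided cross-modal latent alignment.
# Assumptions: plain MLP autoencoders, an L2 latent-alignment term, and a
# shared linear classifier; the real ACMR/VSA/IEM details differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityAutoencoder(nn.Module):
    def __init__(self, in_dim, latent_dim=64, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim))

    def forward(self, x):
        z = self.encoder(x)          # latent code for this modality
        return z, self.decoder(z)    # code and reconstruction

class CrossModalAlignmentSketch(nn.Module):
    """Hypothetical model: visual and semantic autoencoders share a latent
    space, and a classifier on that space supplies a discriminative
    alignment signal (loosely analogous to a learned-classifier guide)."""
    def __init__(self, visual_dim, semantic_dim, latent_dim, num_classes):
        super().__init__()
        self.visual_ae = ModalityAutoencoder(visual_dim, latent_dim)
        self.semantic_ae = ModalityAutoencoder(semantic_dim, latent_dim)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x_vis, x_sem, labels):
        z_v, rec_v = self.visual_ae(x_vis)
        z_s, rec_s = self.semantic_ae(x_sem)
        loss_rec = F.mse_loss(rec_v, x_vis) + F.mse_loss(rec_s, x_sem)
        loss_align = F.mse_loss(z_v, z_s)                    # pull latents together
        loss_cls = (F.cross_entropy(self.classifier(z_v), labels) +
                    F.cross_entropy(self.classifier(z_s), labels))
        return loss_rec + loss_align + loss_cls

# Example usage with random data (dimensions are placeholders):
model = CrossModalAlignmentSketch(visual_dim=2048, semantic_dim=85,
                                  latent_dim=64, num_classes=50)
loss = model(torch.randn(8, 2048), torch.randn(8, 85),
             torch.randint(0, 50, (8,)))
loss.backward()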

Authors (6)
  1. Zhiyu Fang (5 papers)
  2. Xiaobin Zhu (21 papers)
  3. Chun Yang (45 papers)
  4. Zheng Han (31 papers)
  5. Jingyan Qin (4 papers)
  6. Xu-Cheng Yin (35 papers)
Citations (14)
