
PVLR: Prompt-driven Visual-Linguistic Representation Learning for Multi-Label Image Recognition (2401.17881v1)

Published 31 Jan 2024 in cs.CV

Abstract: Multi-label image recognition is a fundamental task in computer vision. Recently, vision-language models have made notable advancements in this area. However, previous methods often failed to effectively leverage the rich knowledge within language models and instead incorporated label semantics into visual features in a unidirectional manner. In this paper, we propose a Prompt-driven Visual-Linguistic Representation Learning (PVLR) framework to better leverage the capabilities of the linguistic modality. In PVLR, we first introduce a dual-prompting strategy comprising Knowledge-Aware Prompting (KAP) and Context-Aware Prompting (CAP). KAP utilizes fixed prompts to capture the intrinsic semantic knowledge and relationships across all labels, while CAP employs learnable prompts to capture context-aware label semantics and relationships. We then propose an Interaction and Fusion Module (IFM) to interact and fuse the representations obtained from KAP and CAP. In contrast to the unidirectional fusion in previous works, we introduce a Dual-Modal Attention (DMA) that enables bidirectional interaction between textual and visual features, yielding context-aware label representations and semantic-related visual representations, which are subsequently used to calculate similarities and generate final predictions for all labels. Extensive experiments on three popular datasets, MS-COCO, Pascal VOC 2007, and NUS-WIDE, demonstrate the superiority of PVLR.
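The bidirectional interaction described above can be sketched as two cross-attention passes, one from labels to visual tokens and one from visual tokens to labels, followed by similarity-based scoring. This is a minimal illustrative sketch, not the authors' implementation: the module name, head count, residual connections, and mean-pooled similarity scoring are assumptions for demonstration.

```python
# Hedged sketch of a Dual-Modal-Attention-style module: all names, shapes,
# and design choices here are illustrative assumptions, not PVLR's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualModalAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # text -> vision: label embeddings attend over visual tokens
        self.t2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # vision -> text: visual tokens attend over label embeddings
        self.v2t = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, labels: torch.Tensor, visual: torch.Tensor):
        # labels: (B, C, D) label embeddings; visual: (B, N, D) patch features
        ctx_labels, _ = self.t2v(labels, visual, visual)  # context-aware labels
        sem_visual, _ = self.v2t(visual, labels, labels)  # semantic-related visuals
        return labels + ctx_labels, visual + sem_visual   # residual fusion

def label_logits(labels: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
    # Score each label by cosine similarity with the pooled visual feature
    # (one plausible way to "calculate similarities" per the abstract).
    pooled = F.normalize(visual.mean(dim=1), dim=-1)      # (B, D)
    labels = F.normalize(labels, dim=-1)                  # (B, C, D)
    return torch.einsum("bcd,bd->bc", labels, pooled)     # (B, C)

B, C, N, D = 2, 80, 49, 64  # batch, num labels, visual tokens, feature dim
dma = DualModalAttention(D)
lab, vis = dma(torch.randn(B, C, D), torch.randn(B, N, D))
logits = label_logits(lab, vis)
print(logits.shape)  # torch.Size([2, 80])
```

Each of the C per-label logits would then be thresholded (or passed through a sigmoid) for multi-label prediction; the bidirectional passes ensure both modalities are updated, in contrast to the unidirectional fusion the paper critiques.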

Authors (5)
  1. Hao Tan (80 papers)
  2. Zichang Tan (25 papers)
  3. Jun Li (778 papers)
  4. Jun Wan (79 papers)
  5. Zhen Lei (205 papers)