
Fine-Grained Scene Image Classification with Modality-Agnostic Adapter (2407.02769v1)

Published 3 Jul 2024 in cs.CV

Abstract: When dealing with fine-grained scene image classification, most previous works place heavy emphasis on global visual features during multi-modal feature fusion; in other words, models are deliberately designed around prior intuitions about the importance of different modalities. In this paper, we present a new multi-modal feature fusion approach named MAA (Modality-Agnostic Adapter), which lets the model adaptively learn the importance of different modalities in different cases, without encoding a prior setting into the model architecture. More specifically, we eliminate the distributional differences between modalities and then use a modality-agnostic Transformer encoder for semantic-level feature fusion. Our experiments demonstrate that MAA achieves state-of-the-art results on benchmarks while using the same modalities as previous methods. Moreover, new modalities can easily be added when using MAA, further boosting performance. Code is available at https://github.com/quniLcs/MAA.
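
The abstract describes the fusion idea at a high level: per-modality adapters remove distribution differences, and a single shared Transformer encoder then fuses all tokens without a modality-specific prior. The sketch below is a minimal illustration of that idea, not the authors' implementation; the class name, dimensions, and the choice of a linear projection plus LayerNorm adapter are assumptions for illustration only.

```python
# Hypothetical sketch: per-modality adapters map features into a shared space,
# then one modality-agnostic Transformer encoder fuses the concatenated tokens.
import torch
import torch.nn as nn


class ModalityAgnosticFusion(nn.Module):
    def __init__(self, modality_dims, d_model=256, n_heads=8, n_layers=2, n_classes=100):
        super().__init__()
        # One small adapter per modality: project to a shared width and
        # normalize so the modalities have comparable distributions.
        self.adapters = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, d_model), nn.LayerNorm(d_model))
            for name, dim in modality_dims.items()
        })
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # The encoder itself is shared across modalities ("modality-agnostic").
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, features):
        # features: dict mapping modality name -> (batch, tokens, dim) tensor.
        tokens = [self.adapters[name](x) for name, x in features.items()]
        fused = self.encoder(torch.cat(tokens, dim=1))  # concatenate along the token axis
        return self.classifier(fused.mean(dim=1))       # pool over tokens and classify


# Example with two hypothetical modalities (global visual and text/OCR features).
model = ModalityAgnosticFusion({"visual": 2048, "text": 768})
logits = model({"visual": torch.randn(4, 49, 2048), "text": torch.randn(4, 16, 768)})
```

In a design of this shape, adding a new modality only requires registering another adapter entry, which is consistent with the abstract's claim that new modalities can be added easily.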

Authors (6)
  1. Yiqun Wang (31 papers)
  2. Zhao Zhou (9 papers)
  3. Xiangcheng Du (11 papers)
  4. Xingjiao Wu (26 papers)
  5. Yingbin Zheng (18 papers)
  6. Cheng Jin (76 papers)
