Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning (2303.05952v1)

Published 10 Mar 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Contrastive loss has been increasingly used in learning representations from multiple modalities. In the limit, the nature of the contrastive loss encourages modalities to exactly match each other in the latent space. Yet it remains an open question how the modality alignment affects the downstream task performance. In this paper, based on an information-theoretic argument, we first prove that exact modality alignment is sub-optimal in general for downstream prediction tasks. Hence we advocate that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment. To this end, we propose three general approaches to construct latent modality structures. Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization. Extensive experiments are conducted on two popular multi-modal representation learning frameworks: the CLIP-based two-tower model and the ALBEF-based fusion model. We test our model on a variety of tasks including zero/few-shot image classification, image-text retrieval, visual question answering, visual reasoning, and visual entailment. Our method achieves consistent improvements over existing methods, demonstrating the effectiveness and generalizability of our proposed approach to latent modality structure regularization.
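
The abstract only names the three regularizers, so the sketch below is a minimal PyTorch illustration of how such losses might be combined with a CLIP-style contrastive objective. The specific functional forms (an orthogonality penalty for feature separation, a bridge interpolant for the Brownian-bridge term, similarity-matrix matching for geometric consistency) and the loss weights are illustrative assumptions, not the paper's exact formulations.

```python
# Hedged sketch: a CLIP-style contrastive loss plus the three regularizers
# named in the abstract. Loss forms and weights are assumptions for
# illustration, not the paper's exact definitions.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def feature_separation_loss(shared, private):
    """Intra-modality regularizer (assumed form): encourage a 'private'
    feature head to be orthogonal to the shared/aligned features."""
    shared = F.normalize(shared, dim=-1)
    private = F.normalize(private, dim=-1)
    return (shared * private).sum(dim=-1).pow(2).mean()

def brownian_bridge_loss(img_emb, mid_emb, txt_emb, t=0.5):
    """Inter-modality regularizer (assumed form): pull an intermediate
    representation (e.g. of an augmented view) toward the Brownian-bridge
    interpolant between the image and text endpoints."""
    bridge_mean = (1 - t) * img_emb + t * txt_emb
    return F.mse_loss(mid_emb, bridge_mean)

def geometric_consistency_loss(img_emb, txt_emb):
    """Intra-/inter-modality regularizer (assumed form): match the in-batch
    pairwise-similarity geometry of the two modalities."""
    img_n = F.normalize(img_emb, dim=-1)
    txt_n = F.normalize(txt_emb, dim=-1)
    return F.mse_loss(img_n @ img_n.t(), txt_n @ txt_n.t())

if __name__ == "__main__":
    # Toy usage with random embeddings; weights 0.1 are placeholders.
    B, D = 8, 256
    img = torch.randn(B, D, requires_grad=True)
    txt = torch.randn(B, D, requires_grad=True)
    img_private = torch.randn(B, D, requires_grad=True)
    mid_view = torch.randn(B, D, requires_grad=True)
    total = (clip_contrastive_loss(img, txt)
             + 0.1 * feature_separation_loss(img, img_private)
             + 0.1 * brownian_bridge_loss(img, mid_view, txt)
             + 0.1 * geometric_consistency_loss(img, txt))
    total.backward()
    print(float(total))
```

In this reading, the contrastive term still aligns modalities while the three regularizers shape the latent structure rather than forcing exact alignment, which is the distinction the abstract draws.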

Authors (9)
  1. Qian Jiang (12 papers)
  2. Changyou Chen (108 papers)
  3. Han Zhao (159 papers)
  4. Liqun Chen (42 papers)
  5. Qing Ping (13 papers)
  6. Son Dinh Tran (2 papers)
  7. Yi Xu (302 papers)
  8. Belinda Zeng (16 papers)
  9. Trishul Chilimbi (22 papers)
Citations (29)