
WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training (2103.06561v6)

Published 11 Mar 2021 in cs.CV and cs.IR

Abstract: Multi-modal pre-training models have been intensively explored to bridge vision and language in recent years. However, most of them explicitly model the cross-modal interaction between image-text pairs, by assuming that there exists strong semantic correlation between the text and image modalities. Since this strong assumption is often invalid in real-world scenarios, we choose to implicitly model the cross-modal correlation for large-scale multi-modal pre-training, which is the focus of the Chinese project `WenLan' led by our team. Specifically, with the weak correlation assumption over image-text pairs, we propose a two-tower pre-training model called BriVL within the cross-modal contrastive learning framework. Unlike OpenAI CLIP that adopts a simple contrastive learning method, we devise a more advanced algorithm by adapting the latest method MoCo into the cross-modal scenario. By building a large queue-based dictionary, our BriVL can incorporate more negative samples in limited GPU resources. We further construct a large Chinese multi-source image-text dataset called RUC-CAS-WenLan for pre-training our BriVL model. Extensive experiments demonstrate that the pre-trained BriVL model outperforms both UNITER and OpenAI CLIP on various downstream tasks.

An Analysis of WenLan for Bridging Vision and Language through Multi-Modal Pre-Training

Multi-modal pre-training models have seen growing interest, particularly for modeling the interaction between vision and language. The paper "WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training" introduces a new approach to multi-modal pre-training that questions the common assumption of a strong semantic correlation between each image and its paired text. It instead adopts a weak correlation assumption, which aligns more closely with the complexity and diversity of real-world image-text pairs.

Conceptual Framework

The paper introduces BriVL, a two-tower pre-training model built on cross-modal contrastive learning. Unlike OpenAI's CLIP, which relies on a simple in-batch contrastive objective, BriVL adapts Momentum Contrast (MoCo) to the cross-modal setting and maintains a large queue-based dictionary of negative samples. This lets BriVL contrast each image-text pair against many more negatives than a single batch can hold, even under limited GPU resources.
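
The core training signal can be illustrated as a MoCo-style InfoNCE loss in which queries come from one modality and the keys (one positive plus a queue of negatives) come from the other. The snippet below is a minimal PyTorch sketch, not the paper's implementation; the encoder modules and the `txt_queue` buffer are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def cross_modal_moco_loss(img_enc, txt_enc_momentum, images, texts,
                          txt_queue, temperature=0.07):
    """InfoNCE loss for image->text matching with a queue of negative text keys.

    img_enc, txt_enc_momentum : assumed encoder modules (query / momentum key)
    txt_queue                 : [K, dim] buffer of previously seen text embeddings
    """
    q = F.normalize(img_enc(images), dim=-1)               # queries: [B, dim]
    with torch.no_grad():                                   # keys come from the momentum encoder
        k = F.normalize(txt_enc_momentum(texts), dim=-1)    # positive keys: [B, dim]

    l_pos = torch.einsum("bd,bd->b", q, k).unsqueeze(-1)    # [B, 1] positive logits
    l_neg = q @ txt_queue.t()                               # [B, K] logits against queued negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive is index 0
    return F.cross_entropy(logits, labels)
```

After each step, the positive keys would be enqueued and the oldest entries dequeued, so the negative pool stays large while the memory cost stays fixed.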

The WenLan project, which includes BriVL, is supported by a purpose-built corpus called RUC-CAS-WenLan: roughly 30 million image-text pairs curated from Chinese web content spanning domains such as news, sports, and entertainment. This dataset serves as the pre-training corpus for BriVL.

Experimental Evaluation

The paper reports extensive evaluations indicating that BriVL outperforms existing models such as UNITER and OpenAI CLIP across diverse downstream tasks. Notably, results on the AIC-ICC validation set show BriVL's advantage in both the image-to-text and text-to-image retrieval subtasks, measured by Recall@1, Recall@5, and Recall@10.
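
For reference, retrieval recall at rank K is typically computed by ranking all candidates for each query and checking whether the ground-truth match appears in the top K. Below is a minimal sketch, assuming the i-th image and i-th text form a matched pair and both embedding matrices are L2-normalized; it is an illustration of the metric, not the paper's evaluation code.

```python
import torch

def recall_at_k(image_emb, text_emb, ks=(1, 5, 10)):
    """Image-to-text Recall@K for paired, L2-normalized embeddings [N, dim]."""
    sims = image_emb @ text_emb.t()                  # [N, N] cosine similarity matrix
    ranks = sims.argsort(dim=1, descending=True)     # ranked text indices per image query
    targets = torch.arange(sims.size(0)).unsqueeze(1)
    hits = (ranks == targets)                        # True where the ground-truth text appears
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}
```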

BriVL excels by learning representations from a larger and more diverse pool of negative image-text pairings than models that rely on strong correlation assumptions and in-batch negatives alone. Because the two towers can be run independently, its embeddings can be precomputed, which facilitates practical deployment across applications with inherent cross-modal complexity, from text-to-image retrieval to broader vision-language tasks.

Methodological Insights

BriVL introduces a robust pre-training paradigm by integrating advanced cross-modal contrastive learning strategies. The key methodological contributions include:

  1. Two-Tower Architecture: Unlike single-tower (cross-attention) architectures, this structure keeps the image and text computations separate, preserving efficiency, letting each tower scale independently, and allowing embeddings to be precomputed for retrieval (see the sketch after this list).
  2. Enhanced Contrastive Learning: Adapting MoCo provides a large, dynamically updated dictionary of negatives within constrained hardware budgets, avoiding the very large batches that purely in-batch contrastive methods require and improving the quality of the learned multi-modal representations.
  3. Scalable Pre-trained Embeddings: BriVL integrates readily with various downstream tasks, supplying pre-trained image and text embeddings that augment domain-specific models with minimal additional computational overhead.
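
The structural idea behind points 1 and 2 can be sketched as follows. The tower modules here are assumed, generic encoders rather than the paper's exact networks, and the momentum update follows the standard MoCo recipe; this is an illustrative sketch, not BriVL's released code.

```python
import copy
import torch

class TwoTowerModel(torch.nn.Module):
    """Two independent encoders plus momentum copies that supply contrastive keys."""

    def __init__(self, image_tower, text_tower, momentum=0.99):
        super().__init__()
        self.image_tower = image_tower
        self.text_tower = text_tower
        # Momentum (key) encoders start as frozen copies of the query encoders.
        self.image_tower_m = copy.deepcopy(image_tower)
        self.text_tower_m = copy.deepcopy(text_tower)
        for p in list(self.image_tower_m.parameters()) + list(self.text_tower_m.parameters()):
            p.requires_grad = False
        self.momentum = momentum

    @torch.no_grad()
    def update_momentum_encoders(self):
        # Exponential moving average: theta_k <- m * theta_k + (1 - m) * theta_q
        for q, k in [(self.image_tower, self.image_tower_m),
                     (self.text_tower, self.text_tower_m)]:
            for p_q, p_k in zip(q.parameters(), k.parameters()):
                p_k.data.mul_(self.momentum).add_(p_q.data, alpha=1 - self.momentum)

    def forward(self, images, texts):
        # The towers never attend to each other; fusion happens only through
        # the contrastive objective applied to their output embeddings.
        return self.image_tower(images), self.text_tower(texts)
```

Keeping the momentum copies out of gradient updates keeps the queued key embeddings consistent across iterations, which is what makes a large negative dictionary useful in the MoCo framework.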

Conclusion and Outlook

The paper presents WenLan and its BriVL model as a significant step toward handling the complexities of integrating vision and language. BriVL's ability to outperform contemporaries such as UNITER and CLIP underlines the strength of its architecture and training method. As research continues, future work could expand the pre-training dataset to 500 million pairs and scale the model to more parameters, targeting broader applications including enriched text-to-image synthesis and deployment in real-world multi-modal interaction interfaces. This progression marks a step toward more universally applicable AI systems that connect vision and language more effectively.

Authors (35)
  1. Yuqi Huo (19 papers)
  2. Manli Zhang (3 papers)
  3. Guangzhen Liu (3 papers)
  4. Haoyu Lu (24 papers)
  5. Yizhao Gao (19 papers)
  6. Guoxing Yang (11 papers)
  7. Jingyuan Wen (5 papers)
  8. Heng Zhang (93 papers)
  9. Baogui Xu (1 paper)
  10. Weihao Zheng (8 papers)
  11. Zongzheng Xi (1 paper)
  12. Yueqian Yang (1 paper)
  13. Anwen Hu (22 papers)
  14. Jinming Zhao (26 papers)
  15. Ruichen Li (19 papers)
  16. Yida Zhao (12 papers)
  17. Liang Zhang (357 papers)
  18. Yuqing Song (13 papers)
  19. Xin Hong (22 papers)
  20. Wanqing Cui (7 papers)
Citations (122)