Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities (2401.14405v2)

Published 25 Jan 2024 in cs.CV, cs.AI, and cs.LG

Abstract: We propose to improve transformers of a specific modality with irrelevant data from other modalities, e.g., improve an ImageNet model with audio or point cloud datasets. We would like to highlight that the data samples of the target modality are irrelevant to the other modalities, which distinguishes our method from other works utilizing paired (e.g., CLIP) or interleaved data of different modalities. We propose a methodology named Multimodal Pathway - given a target modality and a transformer designed for it, we use an auxiliary transformer trained with data of another modality and construct pathways to connect components of the two models so that data of the target modality can be processed by both models. In this way, we utilize the universal sequence-to-sequence modeling abilities of transformers obtained from two modalities. As a concrete implementation, we use a modality-specific tokenizer and task-specific head as usual but utilize the transformer blocks of the auxiliary model via a proposed method named Cross-Modal Re-parameterization, which exploits the auxiliary weights without any inference costs. On the image, point cloud, video, and audio recognition tasks, we observe significant and consistent performance improvements with irrelevant data from other modalities. The code and models are available at https://github.com/AILab-CVC/M2PT.

Introduction

Transformers have demonstrated their prowess across a variety of tasks and modalities, owing to their universal sequence-to-sequence modeling capabilities. However, contemporary multimodal methods typically rely on relevant data, such as paired datasets. In "Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities," the authors propose an unconventional approach that leverages data from modalities that are not explicitly related to the target task.

Leveraging Irrelevant Data

The work is grounded in the ability of transformers to embed data from disparate modalities into a common form: whether the input is an image, an audio clip, or a point cloud, a modality-specific tokenizer converts it into a sequence of tokens that the same transformer blocks can process. It breaks new ground by suggesting that data irrelevance, usually treated as a limitation, can be harnessed to improve transformer performance. The method thus departs from the norm of using well-aligned paired data (e.g., image-text pairs) and explores the untapped potential of unrelated datasets, as illustrated by the sketch below.
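To make the shared-token view concrete, here is a minimal sketch (not the authors' code) of how two unrelated modalities can be tokenized into sequences of the same embedding width, so that identical transformer blocks apply to both; the class names, patch size, and embedding dimension are illustrative.

```python
import torch
import torch.nn as nn

# Sketch: modality-specific tokenizers map raw inputs to token sequences
# of a shared width D, so the same transformer blocks can process either.

class ImageTokenizer(nn.Module):
    """Embeds non-overlapping image patches (ViT-style)."""
    def __init__(self, embed_dim=768, patch=16, in_ch=3):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):  # x: (B, 3, H, W)
        return self.proj(x).flatten(2).transpose(1, 2)  # (B, N_tokens, D)

class AudioTokenizer(nn.Module):
    """Embeds patches of a single-channel spectrogram the same way."""
    def __init__(self, embed_dim=768, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(1, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):  # x: (B, 1, F, T) spectrogram
        return self.proj(x).flatten(2).transpose(1, 2)  # (B, N_tokens, D)

image_tokens = ImageTokenizer()(torch.randn(2, 3, 224, 224))  # (2, 196, 768)
audio_tokens = AudioTokenizer()(torch.randn(2, 1, 128, 128))  # (2, 64, 768)
```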

Multimodal Pathway Transformer (M2PT)

The core of the approach lies in constructing pathways that allow components trained on the auxiliary modality to participate in processing data from the target modality. Crucially, a technique named Cross-Modal Re-parameterization is introduced, which integrates weights from an auxiliary model trained on a different modality into the target transformer without incurring any inference cost. The resulting model, M2PT, taps the universal sequence-to-sequence modeling abilities of both transformers by connecting components of the two modality-specific models, as sketched below.
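Concretely, the paper re-parameterizes each linear weight of the target model as W + λW', where W' is the frozen weight of the corresponding auxiliary-model layer and λ is a learnable scalar; after training, the sum can be merged into a single weight, so the deployed model has the same structure and cost as the original. Below is a minimal PyTorch-style sketch of this idea; the class name CrossModalLinear, the zero initialization of λ, and the merge helper are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CrossModalLinear(nn.Module):
    """Sketch of Cross-Modal Re-parameterization for one linear layer:
    forward uses the equivalent weight W_target + lambda * W_aux."""
    def __init__(self, target_linear: nn.Linear, aux_linear: nn.Linear,
                 init_scale: float = 0.0):  # init value is an assumption
        super().__init__()
        self.target = target_linear
        # The auxiliary weight stays fixed; only the scalar scale is trained.
        self.aux_weight = nn.Parameter(aux_linear.weight.detach().clone(),
                                       requires_grad=False)
        self.scale = nn.Parameter(torch.tensor(init_scale))

    def forward(self, x):
        w = self.target.weight + self.scale * self.aux_weight
        return nn.functional.linear(x, w, self.target.bias)

    def merge(self) -> nn.Linear:
        """Fold the auxiliary branch into a plain Linear for deployment,
        so inference pays no extra cost."""
        merged = nn.Linear(self.target.in_features, self.target.out_features,
                           bias=self.target.bias is not None)
        with torch.no_grad():
            merged.weight.copy_(self.target.weight + self.scale * self.aux_weight)
            if self.target.bias is not None:
                merged.bias.copy_(self.target.bias)
        return merged
```

Wrapping every linear layer of the target transformer this way during training, then calling merge() on each, recovers a plain transformer with the original inference graph.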

Experimental Results and Observations

M2PT delivers considerable and consistent gains across multiple modalities and tasks, including image, point cloud, video, and audio recognition. One key insight is that the observed improvements are not purely due to the increase in parameter count. The results suggest that the modality-complementary knowledge gained from training on disparate datasets may relate to the transformer's ability to process hierarchical representations, a universal feature across different modalities.

Conclusion and Future Directions

Despite the empirical successes, the research uncovers a field with ample room for theoretical exploration. The precise mechanisms behind the performance boosts remain an area ripe for deeper investigation, potentially requiring a more profound understanding of neural network internals. The "Multimodal Pathway" approach encourages a reimagination of how unrelated data can cross-pollinate and enrich models designed for specific modalities, marking a promising avenue for future research.

Authors (6)
  1. Yiyuan Zhang (21 papers)
  2. Xiaohan Ding (41 papers)
  3. Kaixiong Gong (12 papers)
  4. Yixiao Ge (99 papers)
  5. Ying Shan (252 papers)
  6. Xiangyu Yue (93 papers)