
Model Composition for Multimodal Large Language Models (2402.12750v2)

Published 20 Feb 2024 in cs.CV, cs.AI, and cs.CL

Abstract: Recent developments in Multimodal Large Language Models (MLLMs) have shown rapid progress, moving towards the goal of creating versatile MLLMs that understand inputs from various modalities. However, existing methods typically rely on joint training with paired multimodal instruction data, which is resource-intensive and challenging to extend to new modalities. In this paper, we propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model. Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters. Furthermore, we introduce DAMC to address parameter interference and mismatch issues during the merging process, thereby enhancing the model performance. To facilitate research in this area, we propose MCUB, a benchmark for assessing the ability of MLLMs to understand inputs from diverse modalities. Experiments on this benchmark and four other multimodal understanding tasks show significant improvements over baselines, proving that model composition can create a versatile model capable of processing inputs from multiple modalities.
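To make the composition idea concrete, below is a minimal sketch of NaiveMC-style parameter merging, assuming both MLLMs fine-tune the same base LLM architecture so their shared parameters align one-to-one. The function name `naive_merge` and the interpolation coefficient `alpha` are illustrative assumptions, not the authors' released code; DAMC's handling of parameter interference is only hinted at in a comment rather than reimplemented.

```python
import torch

def naive_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """NaiveMC-style merge of two MLLM state dicts (illustrative sketch).

    Parameters shared by both models (same name and shape) are linearly
    interpolated; anything unique to one model -- e.g. a modality-specific
    encoder or projector -- is carried over unchanged, so the composed
    model keeps every modality pathway.
    """
    merged = {}
    for name in state_a.keys() | state_b.keys():
        a, b = state_a.get(name), state_b.get(name)
        if a is not None and b is not None and a.shape == b.shape:
            # Shared LLM weights: simple interpolation. DAMC would instead
            # decouple and rescale parameters to mitigate interference
            # (a hedged reading of the abstract, not a faithful port).
            merged[name] = alpha * a + (1.0 - alpha) * b
        else:
            # Modality-specific module present in only one model: reuse as-is.
            merged[name] = (a if a is not None else b).clone()
    return merged

# Tiny usage example with toy tensors standing in for real checkpoints.
vision_mllm = {"llm.w": torch.ones(2, 2), "vision_encoder.w": torch.ones(2)}
audio_mllm = {"llm.w": torch.zeros(2, 2), "audio_encoder.w": torch.ones(2)}
composed = naive_merge(vision_mllm, audio_mllm)
print(sorted(composed))   # ['audio_encoder.w', 'llm.w', 'vision_encoder.w']
print(composed["llm.w"])  # averaged shared LLM weights (all 0.5)
```

The sketch reflects the paradigm's appeal: composing an image-text and an audio-text model this way requires no joint training on paired multimodal data, only checkpoint arithmetic plus reuse of each model's encoder.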

Authors (11)
  1. Chi Chen (62 papers)
  2. Yiyang Du (5 papers)
  3. Zheng Fang (104 papers)
  4. Ziyue Wang (75 papers)
  5. Fuwen Luo (14 papers)
  6. Peng Li (390 papers)
  7. Ming Yan (190 papers)
  8. Ji Zhang (177 papers)
  9. Fei Huang (410 papers)
  10. Maosong Sun (337 papers)
  11. Yang Liu (2256 papers)
Citations (2)
