InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model (2401.16420v1)

Published 29 Jan 2024 in cs.CV and cs.CL

Abstract: We introduce InternLM-XComposer2, a cutting-edge vision-language large model excelling in free-form text-image composition and comprehension. This model goes beyond conventional vision-language understanding, adeptly crafting interleaved text-image content from diverse inputs like outlines, detailed textual specifications, and reference images, enabling highly customizable content creation. InternLM-XComposer2 proposes a Partial LoRA (PLoRA) approach that applies additional LoRA parameters exclusively to image tokens to preserve the integrity of pre-trained language knowledge, striking a balance between precise vision understanding and text composition with literary talent. Experimental results demonstrate the superiority of InternLM-XComposer2 based on InternLM2-7B in producing high-quality long-text multi-modal content and its exceptional vision-language understanding performance across various benchmarks, where it not only significantly outperforms existing multimodal models but also matches or even surpasses GPT-4V and Gemini Pro in certain assessments. This highlights its remarkable proficiency in the realm of multimodal understanding. The InternLM-XComposer2 model series with 7B parameters are publicly available at https://github.com/InternLM/InternLM-XComposer.

Introduction to InternLM-XComposer2

InternLM-XComposer2 represents a significant advancement in the field of vision-language models (VLMs). It excels both at comprehending visual elements and at composing interleaved text-image content, offering highly customizable content creation across a wide spectrum of application contexts.
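For orientation, here is a minimal usage sketch assuming the released 7B weights follow the standard Hugging Face trust_remote_code loading pattern. The checkpoint id, the `<ImageHere>` placeholder, and the `chat` interface are taken from the project's README at the time of writing and may differ between releases; consult the linked repository for the authoritative API.

```python
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# Load the released weights; trust_remote_code pulls in the model's
# custom implementation from the checkpoint repository.
ckpt = 'internlm/internlm-xcomposer2-vl-7b'
model = AutoModel.from_pretrained(ckpt, trust_remote_code=True).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)

# '<ImageHere>' marks where the image is interleaved into the prompt.
query = '<ImageHere>Please describe this image in detail.'
response, _ = model.chat(tokenizer, query=query, image='./example.jpg',
                         history=[], do_sample=False)
print(response)
```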

Partial LoRA and Data Foundation

The model's capabilities rest on two critical design elements. The first is Partial LoRA (PLoRA), which applies additional LoRA parameters exclusively to image tokens, preserving the integrity of the pre-trained language knowledge while harmonizing composition and comprehension; a minimal sketch follows. The second is a high-quality, diverse data foundation: the training data is expertly curated, rich in complexity, and multifaceted, ranging from simple instruction adherence to content customization drawing on a wide range of materials.
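To make the mechanism concrete, the following is a minimal sketch of a PLoRA-style linear layer. The class name, rank, and boolean-mask convention are illustrative assumptions, not the paper's exact implementation; the key idea, per the abstract, is that the low-rank update touches only image-token positions while text tokens pass through the frozen pre-trained weights.

```python
import torch
import torch.nn as nn

class PLoRALinear(nn.Module):
    """Sketch of a Partial LoRA (PLoRA) linear layer: the frozen base weight
    serves every token; the low-rank update is added only for image tokens."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)   # keep language knowledge intact
        self.lora_a = nn.Linear(in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)       # update starts as a no-op

    def forward(self, x: torch.Tensor, image_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_features)
        # image_mask: (batch, seq_len) bool, True for tokens from the vision encoder
        out = self.base(x)
        update = self.lora_b(self.lora_a(x))
        return out + update * image_mask.unsqueeze(-1).to(x.dtype)

# Hypothetical usage: 6 text tokens followed by 4 image tokens.
layer = PLoRALinear(in_features=32, out_features=32)
x = torch.randn(1, 10, 32)
mask = torch.tensor([[False] * 6 + [True] * 4])
y = layer(x, mask)  # text positions see only the frozen weights
```

Gating the update by token origin is what distinguishes PLoRA from vanilla LoRA, which would perturb the language pathway for every token.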

Performance Benchmarks and Advances

InternLM-XComposer2’s performance across various benchmarks is noteworthy. It not only significantly surpasses existing open-source multimodal LLMs but also competes with closed models such as GPT-4V and Gemini Pro. It particularly excels at free-form text-image composition, as demonstrated on the OpenCompass platform's benchmark for evaluating the creative writing of LLMs.

The Future of Vision-Language Understanding

The sophistication of InternLM-XComposer2, combined with robust methodologies such as Partial LoRA and a rich data foundation, holds promise for the future of multimodal understanding. Its proficiency in nuanced perception, intricate reasoning, and knowledge integration places it at the forefront of VLM advancements, with potential applications ranging from content generation to AI-augmented creative endeavors.

Authors (23)
  1. Xiaoyi Dong (73 papers)
  2. Pan Zhang (153 papers)
  3. Yuhang Zang (54 papers)
  4. Yuhang Cao (41 papers)
  5. Bin Wang (750 papers)
  6. Linke Ouyang (12 papers)
  7. Xilin Wei (6 papers)
  8. Songyang Zhang (116 papers)
  9. Haodong Duan (55 papers)
  10. Maosong Cao (9 papers)
  11. Wenwei Zhang (77 papers)
  12. Yining Li (29 papers)
  13. Hang Yan (86 papers)
  14. Yang Gao (761 papers)
  15. Xinyue Zhang (63 papers)
  16. Wei Li (1121 papers)
  17. Jingwen Li (29 papers)
  18. Kai Chen (512 papers)
  19. Conghui He (114 papers)
  20. Xingcheng Zhang (29 papers)
Citations (174)