
Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models (2308.13437v2)

Published 25 Aug 2023 in cs.CV

Abstract: Recently, Multimodal LLMs (MLLMs) that enable LLMs to interpret images through visual instruction tuning have achieved significant success. However, existing visual instruction tuning methods only utilize image-language instruction data to align the language and image modalities, lacking a more fine-grained cross-modal alignment. In this paper, we propose Position-enhanced Visual Instruction Tuning (PVIT), which extends the functionality of MLLMs by integrating an additional region-level vision encoder. This integration promotes a more detailed comprehension of images for the MLLM. In addition, to efficiently achieve a fine-grained alignment between the vision modules and the LLM, we design multiple data generation strategies to construct an image-region-language instruction dataset. Finally, we present both quantitative experiments and qualitative analysis that demonstrate the superiority of the proposed model. Code and data will be released at https://github.com/PVIT-official/PVIT.
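The abstract describes injecting features from an additional region-level vision encoder into the LLM for fine-grained, region-aware alignment. A minimal sketch of that idea, assuming a pooled region crop projected into the LLM embedding space and prepended to the global image tokens (all names, dimensions, and the pooling/projection choices here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def encode_region(image_feats, box, W_proj):
    """Pool encoder features inside a box and project to the LLM space.

    image_feats: (H, W, d_vis) feature map from a region-level encoder
    box: (x1, y1, x2, y2) integer region coordinates
    W_proj: (d_vis, d_llm) projection matrix (random stand-in here)
    """
    x1, y1, x2, y2 = box
    region = image_feats[y1:y2, x1:x2]    # crop the referenced region
    pooled = region.mean(axis=(0, 1))     # average-pool to a single vector
    return pooled @ W_proj                # map into the LLM embedding space

rng = np.random.default_rng(0)
H, W, d_vis, d_llm = 16, 16, 32, 64
image_feats = rng.normal(size=(H, W, d_vis))
W_proj = rng.normal(size=(d_vis, d_llm))

region_token = encode_region(image_feats, (2, 2, 10, 10), W_proj)

# Prepend the region token to the usual image-token sequence so the LLM
# attends to both global and region-level visual features.
image_tokens = rng.normal(size=(256, d_llm))  # global image tokens
llm_input = np.concatenate([region_token[None, :], image_tokens], axis=0)
print(llm_input.shape)  # (257, 64)
```

In the actual model the projection would be learned during the instruction-tuning stages, and the region features would come from a dedicated region encoder rather than a crop of the global feature map; this sketch only shows the shape of the fusion.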

Authors (7)
  1. Chi Chen (62 papers)
  2. Ruoyu Qin (3 papers)
  3. Fuwen Luo (14 papers)
  4. Xiaoyue Mi (9 papers)
  5. Peng Li (390 papers)
  6. Maosong Sun (337 papers)
  7. Yang Liu (2253 papers)
Citations (41)