
Visual Perception by Large Language Model's Weights (2405.20339v1)

Published 30 May 2024 in cs.CV

Abstract: Existing Multimodal LLMs (MLLMs) follow the paradigm of perceiving visual information by aligning visual features with the input space of LLMs and concatenating visual tokens with text tokens to form a unified input sequence. These methods demonstrate promising results on various vision-language tasks but are limited by high computational cost, since visual tokens substantially extend the input sequence. In this paper, instead of input-space alignment, we propose a novel parameter-space alignment paradigm that represents visual information as model weights. For each input image, we use a vision encoder to extract visual features, convert the features into perceptual weights, and merge the perceptual weights with the LLM's weights. In this way, the LLM's input requires no visual tokens, which shortens the input sequence and greatly improves efficiency. Following this paradigm, we propose VLoRA with a perceptual weights generator, which converts visual features into perceptual weights with a low-rank property, exhibiting a form similar to LoRA. Experimental results show that VLoRA achieves comparable performance on various MLLM benchmarks while significantly reducing computational costs for both training and inference. The code and models will be made open-source.
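The parameter-space alignment idea can be sketched in a few lines of NumPy. The sketch below is an assumption-laden illustration, not the paper's actual architecture: the "generator" is a pair of made-up linear maps, and all shapes (`d_model`, `rank`, `d_vis`) are arbitrary. It only shows the core mechanic — visual features are mapped to low-rank factors whose product is merged into a frozen LLM weight matrix, LoRA-style, so no visual tokens enter the input sequence.

```python
import numpy as np

# Hypothetical dimensions (not from the paper).
d_model, rank, d_vis = 64, 4, 32

rng = np.random.default_rng(0)
W = rng.standard_normal((d_model, d_model))   # one frozen LLM weight matrix
visual_features = rng.standard_normal(d_vis)  # output of a vision encoder

# Illustrative "perceptual weights generator": two linear maps that turn
# the visual features into low-rank factors A (d_model x rank) and
# B (rank x d_model). The real generator is a learned module.
G_A = rng.standard_normal((d_vis, d_model * rank)) * 0.01
G_B = rng.standard_normal((d_vis, rank * d_model)) * 0.01
A = (visual_features @ G_A).reshape(d_model, rank)
B = (visual_features @ G_B).reshape(rank, d_model)

# LoRA-style merge: the image-dependent update has rank at most `rank`,
# so merging is cheap and the LLM's input sequence stays text-only.
delta_W = A @ B
W_merged = W + delta_W
print(delta_W.shape, np.linalg.matrix_rank(delta_W))
```

Because `delta_W` factors as a tall-times-wide product, its rank is bounded by `rank`, which is what makes generating and merging per-image weights tractable compared to concatenating hundreds of visual tokens.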

Authors (10)
  1. Feipeng Ma (8 papers)
  2. Hongwei Xue (10 papers)
  3. Guangting Wang (11 papers)
  4. Yizhou Zhou (29 papers)
  5. Fengyun Rao (25 papers)
  6. Shilin Yan (20 papers)
  7. Yueyi Zhang (28 papers)
  8. Siying Wu (5 papers)
  9. Mike Zheng Shou (165 papers)
  10. Xiaoyan Sun (46 papers)
Citations (3)