
LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token (2501.03895v1)

Published 7 Jan 2025 in cs.CV, cs.AI, and cs.CL

Abstract: The advent of real-time large multimodal models (LMMs) like GPT-4o has sparked considerable interest in efficient LMMs. LMM frameworks typically encode visual inputs into vision tokens (continuous representations) and integrate them and textual instructions into the context of LLMs, where large-scale parameters and numerous context tokens (predominantly vision tokens) result in substantial computational overhead. Previous efforts towards efficient LMMs always focus on replacing the LLM backbone with smaller models, while neglecting the crucial issue of token quantity. In this paper, we introduce LLaVA-Mini, an efficient LMM with minimal vision tokens. To achieve a high compression ratio of vision tokens while preserving visual information, we first analyze how LMMs understand vision tokens and find that most vision tokens only play a crucial role in the early layers of LLM backbone, where they mainly fuse visual information into text tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to fuse visual information into text tokens in advance, thereby facilitating the extreme compression of vision tokens fed to LLM backbone into one token. LLaVA-Mini is a unified large multimodal model that can support the understanding of images, high-resolution images, and videos in an efficient manner. Experiments across 11 image-based and 7 video-based benchmarks demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 with just 1 vision token instead of 576. Efficiency analyses reveal that LLaVA-Mini can reduce FLOPs by 77%, deliver low-latency responses within 40 milliseconds, and process over 10,000 frames of video on the GPU hardware with 24GB of memory.

Overview of LLaVA-Mini: Efficient Large Multimodal Models with Minimal Vision Tokens

The paper introduces LLaVA-Mini, a large multimodal model (LMM) designed to handle vision and language inputs efficiently. Conventional LMMs encode each image or video frame into a large number of vision tokens (576 per image in LLaVA-v1.5), which yields strong performance but incurs substantial computational overhead and latency, a serious obstacle for real-time applications. LLaVA-Mini addresses this by drastically reducing the number of vision tokens fed to the LLM backbone while maintaining competitive performance on vision-language tasks.

Key Contributions

  1. Vision Token Compression: LLaVA-Mini introduces a query-based compression module that reduces the number of vision tokens before they enter the LLM backbone. Learnable queries cross-attend over all vision tokens and produce a small compressed set, as few as a single token, that retains the essential visual information (see the sketch after this list). The authors report that this compression is more efficient than the token-merging techniques used in earlier models.
  2. Modality Pre-fusion: To compensate for the visual information lost through such aggressive compression, LLaVA-Mini adds a modality pre-fusion step. The motivating analysis finds that most vision tokens matter mainly in the early layers of the LLM backbone, where they fuse visual information into the text tokens; LLaVA-Mini therefore performs this fusion ahead of the LLM, using a small stack of Transformer blocks that mix visual information into the text token embeddings. The paper shows that this preserves high-quality visual understanding despite the minimal number of vision tokens.
  3. High-Resolution and Video Processing: LLaVA-Mini handles high-resolution images and long video sequences efficiently by representing each image or video frame with a minimal number of vision tokens. This makes it suitable for applications requiring low latency and modest memory consumption, and it surpasses traditional methods on long-video understanding tasks.
  4. Enhanced Computational Efficiency: Compared with LLaVA-v1.5, LLaVA-Mini reduces FLOPs by 77%, lowers response latency to under 40 ms, and cuts memory usage, allowing it to process over 10,000 video frames on a GPU with 24 GB of memory, a substantial advance in LMM efficiency.
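
To make the data flow concrete, below is a minimal PyTorch sketch of the two modules described in items 1 and 2. The module names, layer counts, and sizes (d_model=4096, 576 incoming vision tokens, 8 attention heads) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of query-based compression and modality pre-fusion.
# All dimensions and layer counts are hypothetical.
import torch
import torch.nn as nn


class QueryCompression(nn.Module):
    """Compress N vision tokens into C tokens via learnable queries (C=1 in LLaVA-Mini)."""

    def __init__(self, d_model: int = 4096, num_queries: int = 1, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, d_model) * 0.02)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, vision_tokens: torch.Tensor) -> torch.Tensor:  # (B, 576, d)
        q = self.queries.unsqueeze(0).expand(vision_tokens.size(0), -1, -1)  # (B, C, d)
        compressed, _ = self.cross_attn(q, vision_tokens, vision_tokens)
        return compressed  # (B, C, d)


class ModalityPreFusion(nn.Module):
    """Fuse visual information into the text token embeddings before the LLM backbone."""

    def __init__(self, d_model: int = 4096, num_layers: int = 4, num_heads: int = 8):
        super().__init__()
        block = nn.TransformerEncoderLayer(
            d_model, num_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(block, num_layers)

    def forward(self, vision_tokens: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        fused = self.blocks(torch.cat([vision_tokens, text_embeds], dim=1))
        return fused[:, vision_tokens.size(1):]  # keep only the now vision-aware text positions


# The LLM backbone then receives [compressed vision token(s) ; fused text tokens],
# i.e. 1 + T context tokens instead of 576 + T.
```

Since the LLM's prefill cost scales roughly with the number of context tokens, shrinking the visual context from 576 tokens to 1 (at the price of a small pre-fusion stack) is the main driver of the reported 77% FLOPs reduction.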

Experimental Results

Experiments across 11 image-based and 7 video-based benchmarks demonstrate the effectiveness of LLaVA-Mini. It outperforms LLaVA-v1.5 on a range of tasks while using a single vision token instead of 576, and it does so at a fraction of the computational cost.

For high-resolution inputs, LLaVA-Mini-HD, a variant tailored to high-resolution images, achieves superior performance with modest additional computational overhead. In video understanding, LLaVA-Mini can sample frames at 1 fps, whereas baseline models must process far fewer frames because each frame costs them hundreds of vision tokens.
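
The memory claim can be sanity-checked with a back-of-envelope calculation (an estimate of mine, not a figure from the paper). Assuming a Vicuna-7B-style backbone (32 layers, hidden size 4096) with fp16 weights and KV cache, and counting only weights plus KV cache, one vision token per frame leaves room for well over 10,000 frames on a 24 GB GPU, while 576 tokens per frame would allow only a few dozen:

```python
# Back-of-envelope memory budget; all constants are assumptions, not paper values.
LAYERS, HIDDEN, FP16_BYTES = 32, 4096, 2
KV_PER_TOKEN = 2 * LAYERS * HIDDEN * FP16_BYTES  # K + V per context token, ~0.5 MiB
WEIGHT_BYTES = 7e9 * FP16_BYTES                  # ~14 GB of fp16 parameters


def max_frames(tokens_per_frame: int, gpu_bytes: float = 24e9) -> int:
    """Frames whose KV cache fits alongside the model weights on one GPU."""
    return int((gpu_bytes - WEIGHT_BYTES) / (KV_PER_TOKEN * tokens_per_frame))


print(max_frames(1))    # ~19,000 frames with 1 vision token per frame (LLaVA-Mini)
print(max_frames(576))  # ~33 frames with 576 tokens per frame (LLaVA-v1.5 style)
```

Real deployments also pay for activations, the vision encoder, and allocator overhead, but the scaling argument, roughly 0.5 MiB of KV cache per context token in this setup, explains why one token per frame is what makes 10,000+ frames feasible within 24 GB.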

Implications and Future Work

LLaVA-Mini contributes to the development of efficient LMMs by offering a viable path to reducing computational demands without sacrificing performance. The implications are significant for real-time, AI-driven interfaces where speed and resource constraints are critical.

Future research could extend this work by exploring adaptive fusion techniques that dynamically adjust compression levels based on the complexity of visual input. Additionally, there is potential to apply similar efficient multimodal processing techniques in other domains, such as augmented reality and autonomous vehicles, where processing large volumes of visual and textual data swiftly is imperative.

Overall, LLaVA-Mini marks a shift in multimodal modeling toward weighing efficiency alongside performance, setting a strong baseline for future work in the field.

Authors (4)
  1. Shaolei Zhang (36 papers)
  2. Qingkai Fang (19 papers)
  3. Zhe Yang (60 papers)
  4. Yang Feng (230 papers)