
LLaVA-OneVision: Easy Visual Task Transfer (2408.03326v3)

Published 6 Aug 2024 in cs.CV, cs.AI, and cs.CL

Abstract: We present LLaVA-OneVision, a family of open large multimodal models (LMMs) developed by consolidating our insights into data, models, and visual representations in the LLaVA-NeXT blog series. Our experimental results demonstrate that LLaVA-OneVision is the first single model that can simultaneously push the performance boundaries of open LMMs in three important computer vision scenarios: single-image, multi-image, and video scenarios. Importantly, the design of LLaVA-OneVision allows strong transfer learning across different modalities/scenarios, yielding new emerging capabilities. In particular, strong video understanding and cross-scenario capabilities are demonstrated through task transfer from images to videos.

LLaVA-OneVision: Easy Visual Task Transfer

The research paper titled "LLaVA-OneVision: Easy Visual Task Transfer" explores the development of large multimodal models (LMMs) that can operate effectively across single-image, multi-image, and video scenarios. The authors present LLaVA-OneVision, a new family of open LMMs characterized by their versatility and ability to perform task transfer across multiple visual modalities.

Overview

LLaVA-OneVision consolidates insights and techniques from the LLaVA-NeXT blog series, aiming to push the performance boundaries of open LMMs through a unified approach to data curation, modeling, and visual representation. The architecture connects a vision encoder to an LLM through a minimalist connection module, which facilitates strong transfer learning across different modalities.

Contributions

The paper makes several noteworthy contributions:

  1. Development of Large Multimodal Models: The authors develop LLaVA-OneVision, which pushes the performance boundaries of open LMMs in single-image, multi-image, and video scenarios.
  2. Emerging Capabilities with Task Transfer: The design enables strong task transfer across scenarios, demonstrated most notably by video understanding that emerges from transferring image-trained capabilities to video.
  3. Open-source Efforts: To support community efforts, the authors release the generated multimodal instruction data, the codebase, model checkpoints, and a visual chat demo.

Model Architecture

LLaVA-OneVision employs Qwen-2 as the LLM due to its strong language capabilities, SigLIP as the vision encoder, and a 2-layer MLP as the projector to map visual features into the word embedding space. The model processes a variety of visual inputs, including single images, multiple images, and video sequences, with strategies to balance computational resources and performance.
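To make the connection module concrete, the sketch below shows a 2-layer MLP projector mapping SigLIP patch features into the LLM embedding space, in the spirit of the paper's minimalist design. The dimensions used here (1152 for a SigLIP-SO400M-style encoder, 3584 for a Qwen2-7B-scale LLM) are plausible assumptions for illustration, not values verified against the released configuration.

```python
# Minimal sketch of the connection module: a 2-layer MLP that projects
# SigLIP patch features into the LLM's word-embedding space.
# Dimensions are illustrative assumptions, not the paper's exact configs.
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    def __init__(self, vision_dim: int = 1152, llm_dim: int = 3584):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim) from the vision encoder
        # returns visual tokens in the LLM embedding space: (batch, num_patches, llm_dim)
        return self.mlp(patch_features)

# Example: 729 patches (a 27x27 grid) from one image crop
visual_tokens = VisionProjector()(torch.randn(1, 729, 1152))
print(visual_tokens.shape)  # torch.Size([1, 729, 3584])
```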

Visual Representations

A key innovation is the AnyRes strategy, which adapts the input resolution and the number of visual tokens to the given scenario: many tokens are allocated to a single high-resolution image, while multi-image and video inputs use fewer tokens per image or frame so that long sequences still fit the context window. This balances representation quality against compute across the three scenarios.
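The schematic below illustrates this kind of token budgeting. The per-crop and per-frame token counts are illustrative assumptions in the spirit of AnyRes, not the exact numbers used by the released models.

```python
# Schematic token-budget calculation for an AnyRes-style strategy.
# Numbers are illustrative assumptions, not the paper's exact settings.

BASE_TOKENS = 729  # tokens per 384x384 crop with a 27x27 patch grid

def single_image_tokens(grid_rows: int, grid_cols: int) -> int:
    # High-resolution image: a grid of crops plus one downsampled base view.
    return (grid_rows * grid_cols + 1) * BASE_TOKENS

def video_tokens(num_frames: int, tokens_per_frame: int = 196) -> int:
    # Video frames are pooled to far fewer tokens per frame so that many
    # frames fit within the LLM's context window.
    return num_frames * tokens_per_frame

print(single_image_tokens(2, 2))  # 3645 tokens for a 2x2 crop grid plus base view
print(video_tokens(32))           # 6272 tokens for 32 sampled frames
```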

Data Curation

The paper emphasizes the importance of high-quality knowledge data and visual instruction tuning data. The authors curate large datasets from multiple sources while prioritizing quality over quantity. The high-quality knowledge data includes re-captioned detailed descriptions and OCR data, while the visual instruction tuning data spans single-image, multi-image, and video scenarios.
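As a rough illustration of how the three scenarios can share one instruction-tuning format, the sketch below represents samples as an image list plus a conversation with <image> placeholders. The field names and example contents are hypothetical, not the paper's released schema.

```python
# Illustrative unified format for single-image, multi-image, and video
# instruction data. Field names and contents are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VisualInstructionSample:
    # Zero or more image paths (or sampled video frames), referenced by
    # <image> placeholders inside the conversation text.
    images: List[str] = field(default_factory=list)
    conversation: List[Dict[str, str]] = field(default_factory=list)

single_image = VisualInstructionSample(
    images=["chart.png"],
    conversation=[
        {"role": "user", "content": "<image>\nWhat trend does this chart show?"},
        {"role": "assistant", "content": "Revenue grows steadily after 2021."},
    ],
)

video = VisualInstructionSample(
    images=[f"frame_{i:03d}.jpg" for i in range(32)],  # uniformly sampled frames
    conversation=[
        {"role": "user", "content": "<image>\nDescribe what happens in the clip."},
        {"role": "assistant", "content": "A person assembles a wooden chair."},
    ],
)
```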

Training Strategies

The training process is divided into stages:

  1. Language-Image Alignment: Aligns the visual features with the word embedding space of the LLM.
  2. High-Quality Knowledge Learning: Integrates new, high-quality data into the LMM.
  3. Visual Instruction Tuning: Teaches the model to perform a diverse set of visual tasks through instruction tuning.
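A minimal sketch of this staged curriculum is shown below, indicating which modules are updated at each stage. The trainable-parameter split per stage follows the common LLaVA recipe (projector-only alignment first, then full-model training) and is an assumption rather than a verbatim copy of the paper's configurations.

```python
# Sketch of the staged training curriculum: which modules are trainable
# at each stage. The per-stage split is an assumed recipe, not the
# paper's exact configuration.

STAGES = {
    "1_language_image_alignment":  {"train": ["projector"]},
    "2_high_quality_knowledge":    {"train": ["vision_encoder", "projector", "llm"]},
    "3_visual_instruction_tuning": {"train": ["vision_encoder", "projector", "llm"]},
}

def set_trainable(model_parts: dict, stage: str) -> None:
    """Freeze all modules, then unfreeze those listed for the given stage."""
    trainable = set(STAGES[stage]["train"])
    for name, module in model_parts.items():
        for p in module.parameters():
            p.requires_grad = name in trainable

# Usage (model_parts maps {"vision_encoder": ..., "projector": ..., "llm": ...}):
# set_trainable(model_parts, "1_language_image_alignment")
```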

Experimental Results

Evaluations using LMMs-Eval demonstrate that LLaVA-OneVision achieves strong performance across a wide array of single-image, multi-image, and video benchmarks. The largest model variant (72B parameters) is competitive with or superior to open-source models and to proprietary models such as GPT-4V, particularly on complex tasks that require visual reasoning.

Conclusions and Future Directions

LLaVA-OneVision represents a significant step toward versatile LMMs capable of effective task transfer across visual modalities. The integration of high-quality data, adaptive visual representation strategies, and a minimalist architecture enables strong performance on varied tasks. Looking forward, the authors suggest further gains from scaling data and model size and from adopting stronger LLMs. The open-source release of data, code, and checkpoints should also facilitate future developments and applications in the broader AI community.

Authors (11)
  1. Bo Li (1107 papers)
  2. Yuanhan Zhang (29 papers)
  3. Dong Guo (46 papers)
  4. Renrui Zhang (100 papers)
  5. Feng Li (286 papers)
  6. Hao Zhang (947 papers)
  7. Kaichen Zhang (11 papers)
  8. Yanwei Li (36 papers)
  9. Ziwei Liu (368 papers)
  10. Chunyuan Li (122 papers)
  11. Peiyuan Zhang (24 papers)
Citations (168)