InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks (2312.14238v3)

Published 21 Dec 2023 in cs.CV

Abstract: The exponential growth of LLMs has opened up numerous possibilities for multimodal AGI systems. However, the progress in vision and vision-language foundation models, which are also critical elements of multi-modal AGI, has not kept pace with LLMs. In this work, we design a large-scale vision-language foundation model (InternVL), which scales up the vision foundation model to 6 billion parameters and progressively aligns it with the LLM, using web-scale image-text data from various sources. This model can be broadly applied to and achieve state-of-the-art performance on 32 generic visual-linguistic benchmarks including visual perception tasks such as image-level or pixel-level recognition, vision-language tasks such as zero-shot image/video classification, zero-shot image/video-text retrieval, and link with LLMs to create multi-modal dialogue systems. It has powerful visual capabilities and can be a good alternative to the ViT-22B. We hope that our research could contribute to the development of multi-modal large models. Code and models are available at https://github.com/OpenGVLab/InternVL.

Essay: InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks

The exponential rise of LLMs has driven significant advances toward multi-modal artificial general intelligence (AGI) systems, yet vision and vision-language foundation models have lagged behind. The InternVL paper addresses this disparity with a scalable vision-language foundation model that is progressively aligned with LLMs using an extensive corpus of web-scale image-text data.

InternVL pairs a vision encoder, InternViT-6B, scaled up to six billion parameters, with QLLaMA, a language middleware initialized from a multilingual LLaMA. Together they support a wide range of visual-linguistic tasks, including image and video classification, image-text and video-text retrieval, and image captioning, and achieve state-of-the-art performance across 32 established benchmarks.
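
To make this composition concrete, the sketch below shows how a large patch-based vision encoder, a query-based language middleware, and an LLM-facing token interface could fit together. It is a schematic illustration only: the module names, dimensions, number of queries, and the single cross-attention layer are placeholders chosen for exposition, not the authors' InternViT-6B/QLLaMA implementation.

```python
# Schematic sketch (not the authors' code) of how a large patch-based vision
# encoder, a query-based language middleware, and an LLM-facing token interface
# could be composed. All sizes are illustrative placeholders, far smaller than
# InternViT-6B / QLLaMA.
import torch
import torch.nn as nn

class VisionEncoderStub(nn.Module):
    """Stands in for InternViT-6B: images -> a grid of patch-token features."""
    def __init__(self, vis_dim=1024, patch=14):
        super().__init__()
        self.patchify = nn.Conv2d(3, vis_dim, kernel_size=patch, stride=patch)

    def forward(self, images):                       # (B, 3, H, W)
        x = self.patchify(images)                    # (B, vis_dim, H/14, W/14)
        return x.flatten(2).transpose(1, 2)          # (B, num_patches, vis_dim)

class QueryMiddlewareStub(nn.Module):
    """Stands in for QLLaMA: learnable queries cross-attend to image tokens
    and emit a fixed number of language-space tokens for a downstream LLM."""
    def __init__(self, vis_dim=1024, lm_dim=2048, num_queries=96):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, lm_dim) * 0.02)
        self.vis_proj = nn.Linear(vis_dim, lm_dim)
        self.cross_attn = nn.MultiheadAttention(lm_dim, num_heads=8, batch_first=True)

    def forward(self, vis_tokens):                   # (B, N, vis_dim)
        kv = self.vis_proj(vis_tokens)               # (B, N, lm_dim)
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
        out, _ = self.cross_attn(q, kv, kv)          # (B, num_queries, lm_dim)
        return out                                   # visual tokens handed to the LLM

vision = VisionEncoderStub()
middleware = QueryMiddlewareStub()
visual_tokens = middleware(vision(torch.randn(2, 3, 224, 224)))
print(visual_tokens.shape)                           # torch.Size([2, 96, 2048])
```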

The paper highlights several key aspects:

  1. Vision-Language Alignment: InternVL uses a progressive training strategy to align the vision encoder with LLMs: an initial contrastive learning phase on massive web-scale data, followed by a generative learning phase on refined data. This bridges the large gap in parameter scale and feature representation between vision encoders and LLMs (a minimal contrastive-loss sketch follows this list).
  2. Scalable Architecture: The vision encoder, InternViT-6B, scales to six billion parameters with a design tuned for parameter efficiency. In linear evaluation (probing frozen features with a linear classifier) on image classification, it delivers superior results to preceding state-of-the-art vision encoders.
  3. Multilingual Capabilities: The pre-trained multilingual LLaMA initialization of QLLaMA offers robust multilingual support, an advantageous feature for global applications involving various languages, as exhibited in several multilingual benchmarks.
  4. Efficacy in Zero-Shot Learning: InternVL performs strongly in zero-shot settings, notably image/video classification and image/video-text retrieval (see the zero-shot classification sketch after the results paragraph below). This generalization to unseen categories and datasets stems from training on comprehensive, diverse web-scale image-text data.
  5. Compatibility with LLMs: The well-aligned feature space makes it straightforward to connect InternVL to existing LLMs such as LLaMA, Vicuna, and InternLM to build multi-modal dialogue systems, while the model also posts strong results on visual perception tasks such as semantic segmentation and other pixel-level benchmarks.
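
As a concrete illustration of the contrastive stage mentioned in point 1, the sketch below implements a generic CLIP-style symmetric InfoNCE loss over paired image and text embeddings. It is a minimal sketch of that family of objectives, not InternVL's exact loss, temperature, or training configuration.

```python
# Generic symmetric image-text contrastive (InfoNCE) loss; an illustration of
# the contrastive alignment stage, not InternVL's exact objective.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric image-text contrastive loss over a batch of paired embeddings."""
    img = F.normalize(img_emb, dim=-1)               # (B, D) unit-norm image embeddings
    txt = F.normalize(txt_emb, dim=-1)               # (B, D) unit-norm text embeddings
    logits = img @ txt.t() / temperature             # (B, B) pairwise similarities
    targets = torch.arange(img.size(0), device=img.device)
    # Matched pairs lie on the diagonal; the loss pushes them above mismatches
    # in both the image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random embeddings standing in for encoder outputs.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```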

In quantitative terms, InternVL attains top-tier results on image classification with consistent performance across benchmarks, and it is notably strong on zero-shot video classification and image/video-text retrieval. Compared with EVA-02-CLIP-E+, for instance, InternVL reports higher accuracy on ImageNet and its derived variant test sets, underscoring the robustness and adaptability of its architecture.
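
The zero-shot behavior referenced in point 4 and in the comparison above reduces to scoring image embeddings against text embeddings of class prompts in the shared, aligned embedding space. The sketch below shows that mechanism with random stand-in embeddings; the logit scale and the idea of one prompt embedding per class are generic CLIP-style assumptions, not the paper's specific setup.

```python
# CLIP-style zero-shot classification in an aligned image/text embedding space.
# Encoders are replaced by random stand-in embeddings for illustration.
import torch
import torch.nn.functional as F

def zero_shot_classify(image_embs, class_text_embs):
    """Assign each image to the class whose prompt embedding is most similar."""
    img = F.normalize(image_embs, dim=-1)            # (B, D)
    txt = F.normalize(class_text_embs, dim=-1)       # (C, D) one embedding per class prompt
    probs = (img @ txt.t() * 100.0).softmax(dim=-1)  # (B, C) class probabilities
    return probs.argmax(dim=-1), probs

# Toy usage: 4 images and 3 classes with 512-dim stand-in embeddings.
preds, probs = zero_shot_classify(torch.randn(4, 512), torch.randn(3, 512))
print(preds, probs.sum(dim=-1))                      # predictions; each row of probs sums to 1
```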

The implications of these findings are substantial for the advancement of vision LLMs (VLLMs). By demonstrating that a scaled-up, parameter-efficient vision model can be aligned with LLMs and deliver robust performance across diverse tasks, InternVL sets a precedent for further exploration in AGI systems. Moving forward, extending such alignment methods to additional modalities and improving integration efficiency will be pivotal for progressing toward fully multi-modal AGI frameworks.

In conclusion, InternVL offers significant insight into scaling and aligning vision models with LLMs, effectively narrowing the gap between vision foundation models and LLM capabilities. It provides a stepping stone for leveraging web-scale data to build comprehensive visual-linguistic models, with implications for both theoretical research and practical AI applications.

Authors (15)
  1. Zhe Chen (237 papers)
  2. Jiannan Wu (12 papers)
  3. Wenhai Wang (123 papers)
  4. Weijie Su (37 papers)
  5. Guo Chen (107 papers)
  6. Sen Xing (6 papers)
  7. Qinglong Zhang (16 papers)
  8. Xizhou Zhu (73 papers)
  9. Lewei Lu (55 papers)
  10. Bin Li (514 papers)
  11. Ping Luo (340 papers)
  12. Tong Lu (85 papers)
  13. Yu Qiao (563 papers)
  14. Jifeng Dai (131 papers)
  15. Muyan Zhong (3 papers)
Citations (460)