Visual Large Language Models

Updated 21 November 2025
  • Visual Large Language Models are multimodal systems that combine visual encoders with large language models to process images, videos, and text in a unified framework.
  • They employ cross-modal connectors such as adapters to map visual features into language token space, enabling open-ended generation and complex reasoning across diverse tasks.
  • Advanced training paradigms, including interleaved pretraining and parameter-efficient adaptation, drive state-of-the-art performance in tasks like VQA, captioning, and visual dialogue.

Visual LLMs (VLLMs) are multimodal foundation models that integrate visual encoders and LLMs via cross-modal connectors. This architecture enables unified processing of images, video, and text for open-ended generation, comprehension, and reasoning across generalized and specialized domains. Unlike classical vision-language models limited to captioning or VQA, VLLMs are instantiated as autoregressive transformers consuming both vision-derived tokens and text, facilitating complex multimodal understanding, high compositionality, and broad task coverage. The canonical VLLM framework comprises a vision encoder $f_v(\cdot)$, a language decoder $f_l(\cdot)$, and an adapter $g(\cdot)$ mapping visual representations into the LLM’s token space, supporting end-to-end gradient-based optimization under generative and alignment objectives (Li et al., 6 Jan 2025).

1. Formal Architecture and Training Paradigm

VLLMs consist of three principal modules: (i) a vision encoder—typically a large ViT or ResNet, possibly pre-trained with contrastive or masked-modeling objectives; (ii) an adapter (or connector)—often a linear projector, MLP, Q-Former, or resampler—which transforms vision features into language-compatible embeddings; and (iii) a causal LLM, such as LLaMA, Vicuna, or GPT derivatives, with its transformer layers extended to attend to visual tokens (Li et al., 6 Jan 2025, Lin et al., 2023).

Given an input visual instance $x_v$ (image/frame/video) and an optional text sequence $x_l$, the vision encoder outputs a 2-D array of patch/region features $\mathbf{v} = f_v(x_v) \in \mathbb{R}^{N \times d}$. The adapter maps these to $\widetilde{\mathbf{v}} = g(\mathbf{v}) \in \mathbb{R}^{M \times d}$, which are then concatenated or injected via cross-attention into the LLM. The generative likelihood factorizes as:

$$p(y \mid x_v, x_l) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, \widetilde{\mathbf{v}}, x_l)$$

During pretraining and instruction tuning, loss functions include standard next-token cross-entropy as well as contrastive alignment (e.g., CLIP-style objectives), with parameter-efficient adaptation (e.g., LoRA) commonly employed for scalable fine-tuning (Umeike et al., 26 Jan 2025, Lin et al., 2023, Li et al., 6 Jan 2025).
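
The following minimal PyTorch sketch illustrates this factorization: a stub vision encoder produces patch features, a linear adapter projects them into the LLM embedding space, and a toy causal decoder is trained with next-token cross-entropy over the text positions. All module choices, dimensions, and names are illustrative assumptions, not the implementation of any cited model.

```python
# Minimal sketch of the canonical VLLM forward pass described above.
# Module choices (stub ViT patch embed, linear adapter, toy decoder) are
# illustrative assumptions, not a specific published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVLLM(nn.Module):
    def __init__(self, vis_dim=768, lm_dim=1024, vocab=32000):
        super().__init__()
        # (i) vision encoder f_v: a stand-in for a ViT patch embedder
        self.vision_encoder = nn.Linear(3 * 16 * 16, vis_dim)
        # (ii) adapter g: maps N x d visual features into the LLM token space
        self.adapter = nn.Linear(vis_dim, lm_dim)
        # (iii) causal LLM f_l: a toy decoder (real systems use LLaMA/Vicuna backbones)
        self.token_emb = nn.Embedding(vocab, lm_dim)
        layer = nn.TransformerEncoderLayer(lm_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(lm_dim, vocab)

    def forward(self, patches, text_ids):
        # patches: (B, N, 3*16*16) flattened image patches; text_ids: (B, T)
        v = self.vision_encoder(patches)              # v = f_v(x_v): (B, N, d)
        v_tilde = self.adapter(v)                     # g(v): (B, N, d_lm)
        t = self.token_emb(text_ids)                  # text embeddings: (B, T, d_lm)
        h = torch.cat([v_tilde, t], dim=1)            # prepend visual tokens to text
        mask = nn.Transformer.generate_square_subsequent_mask(h.size(1))
        h = self.decoder(h, mask=mask)                # causal attention over [visual; text]
        return self.lm_head(h[:, v_tilde.size(1):])   # logits over text positions only

# Next-token cross-entropy: predict y_t from y_<t and the adapted visual tokens.
model = ToyVLLM()
patches = torch.randn(2, 64, 3 * 16 * 16)
text_ids = torch.randint(0, 32000, (2, 16))
logits = model(patches, text_ids)
loss = F.cross_entropy(logits[:, :-1].reshape(-1, logits.size(-1)),
                       text_ids[:, 1:].reshape(-1))
```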

2. Functional Taxonomy and Application Classes

VLLMs are functionally categorized into “generalized” and “specialized” types (Li et al., 6 Jan 2025). Generalized VLLMs (Flamingo, LLaVA, MiniGPT-4, mPLUG-Owl) aim for broad task coverage: image/video captioning, VQA, REC/RES, OCR, visual dialogue, and open-ended multimodal generation. Specialized variants target domains such as medical imaging, autonomous driving, remote sensing, embodied AI, or chart/text document understanding—often leveraging domain-specific pretraining or fine-tuning on curated datasets (Li et al., 6 Jan 2025, Umeike et al., 26 Jan 2025).

In the vision-to-text track, VLLMs process static images or dynamic videos to perform natural language generation, retrieval, and grounded reasoning. In the vision-to-action domain, VLLMs operate as cognitive control modules or planning agents, ingesting multimodal spatial inputs and producing policies or plans (e.g., DriveVLM, DriveGPT4, VLMPlanner) (Tang et al., 27 Jul 2025). For embodied agents, VLLMs fuse RGB, depth, and point-cloud modalities with instruction-following (Li et al., 6 Jan 2025). Tool-augmented VLLMs (MM-REACT, HuggingGPT, ViperGPT) orchestrate external vision/text APIs within a language-agentic interface. Text-to-vision VLLMs (e.g., GILL, Emu, DiffusionGPT) generate images, 3D, or video content from prompts via cross-modal autoregression (Li et al., 6 Jan 2025).

3. Methodologies for Training, Pretraining, and Adaptation

VLLM development typically follows a two-stage pipeline (Lin et al., 2023). Stage 1: large-scale pretraining aligns modalities using a blend of interleaved image–text corpora (e.g., MMC4, LAION, COYO) with contrastive and generative losses. Evidence shows that interleaved data—where text and images appear in naturally co-occurring context—drives better vision–language alignment and preserves text-only proficiency compared to pure caption-based pretraining (Lin et al., 2023). Stage 2: supervised fine-tuning (instruction tuning, or SFT) on high-quality, human-annotated instruction data (e.g., LLaVA-Instruct, GRIT, domain-specific datasets) endows task specificity and further unifies multimodal and text-only skills (Umeike et al., 26 Jan 2025, Lin et al., 2023).
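
As a concrete illustration of the contrastive component of Stage-1 alignment, the sketch below implements a CLIP-style symmetric InfoNCE loss over pooled image and text embeddings; the pooling choice and temperature value are illustrative assumptions rather than the recipe of any cited model.

```python
# CLIP-style symmetric contrastive loss: a minimal sketch of the Stage-1
# alignment objective. Temperature and pooling are illustrative assumptions.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """img_emb, txt_emb: (B, d) pooled embeddings for B paired image-text examples."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # symmetric InfoNCE: match each image to its caption and vice versa
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```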

Full backbone fine-tuning is critical for few-shot and in-context learning; freezing the LLM during pretraining yields competitive zero-shot scores but destroys in-context capabilities. Instruction SFT that re-blends pure-text samples into image–text batches yields simultaneous gains in multimodal and text-only accuracy, preventing catastrophic forgetting (Lin et al., 2023).
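
A minimal way to express this re-blending is to sample from two instruction pools at a fixed ratio, as sketched below; the 20% text-only ratio and pool names are hypothetical values used only for illustration.

```python
# Sketch of re-blending pure-text instructions into image-text SFT batches;
# the 20% text-only ratio is a hypothetical value, not a published recipe.
import random

def blended_sft_stream(image_text_pool, text_only_pool, text_only_ratio=0.2):
    """Infinite stream of SFT samples mixing the two pools at a fixed ratio,
    so text-only proficiency is rehearsed during multimodal instruction tuning."""
    while True:
        if random.random() < text_only_ratio:
            yield random.choice(text_only_pool)    # no visual tokens in this sample
        else:
            yield random.choice(image_text_pool)   # standard multimodal instruction
```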

Parameter-efficient adaptation (e.g., LoRA, ReLoRA) is ubiquitous, enabling adaptation of large backbones with manageable compute. Specialized adapters, token-resampling modules (e.g., Q-Former, Perceiver), and token reduction/projection schemes (e.g., Window Token Concatenation, FCoT-VL, B-VLLM) address context-window and efficiency constraints (Li et al., 5 Apr 2025, Li et al., 22 Feb 2025, Lu et al., 13 Dec 2024).
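
The sketch below shows a minimal LoRA-style wrapper around a frozen linear layer, assuming illustrative rank and scaling values; production systems typically rely on a dedicated PEFT library rather than hand-rolled modules.

```python
# Minimal LoRA sketch: freeze the base projection W and learn a low-rank update
# added to its output. Rank and scaling values are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False               # frozen pretrained weights
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)        # update starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_B(self.lora_A(x))

# Usage: wrap, e.g., a query projection; only lora_A/lora_B receive gradients.
proj = LoRALinear(nn.Linear(1024, 1024))
out = proj(torch.randn(2, 16, 1024))
```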

4. Evaluation Protocols, Benchmarks, and Empirical Performance

VLLMs are evaluated on tasks spanning captioning (COCO BLEU/METEOR/CIDEr/SPICE), VQA (VQAv2, GQA, POPE, ScienceQA), open-domain visual dialogue (MMBench), referring expression comprehension and segmentation (REC/RES), OCR (TextVQA), video QA (MSVD-QA, ActivityNet-QA), action/planning (NuScenes-QA, nuPlan), chart/document QA (DocVQA, MathVista), and multi-image reasoning (MM-Vet, MM-Bench-CN) (Umeike et al., 26 Jan 2025, Lin et al., 2023, Lu et al., 13 Dec 2024, Tang et al., 27 Jul 2025).

Domain-adapted VLLMs fine-tuned on highly curated corpora outperform generalist models in their target domains: e.g., biomedical VLLMs achieve higher factuality, lower hallucination rates, and greater detail recall in LDRT VQA than base LLaVA checkpoints (Umeike et al., 26 Jan 2025). Specialized driving VLLMs (VLMPlanner) achieve SOTA on closed-loop planning and rare-scene robustness by coupling vision-language perception with latent plan injection (Tang et al., 27 Jul 2025). Efficient VLLM variants utilizing aggressive visual token reduction or adaptive sampling maintain or exceed original model performance at a fraction of inference cost (Li et al., 5 Apr 2025, Li et al., 22 Feb 2025, Lu et al., 13 Dec 2024).

Quantitative highlights include consistent gains from full VL pretraining (e.g., VILA: VQAv2 79.9% vs LLaVA-7B 78.5%, TextVQA 64.4% vs 58.2%), resolution-aware fine-tuning for adaptive task granularity (LLaVA-7B Adaptive: TextVQA 60.3% at task-picked resolution) (Lin et al., 2023, Luo et al., 10 Oct 2025), and near-human parity in cognitive set-shifting tasks under prompt-engineered chain-of-thought, as evidenced by WCST benchmarks (Hao et al., 28 May 2025).

5. Methodological Innovations and Efficiency Strategies

Recent VLLMs demonstrate several key architectural and methodological advances:

  • Spectral Dictionary Mixing: SDict-VLM eliminates both convolutions and self-attention, using a shared learnable frequency basis to achieve O(L log L) complexity and match transformer baselines on captioning/VQA with ≥2× fewer parameters and faster inference (Kiruluta et al., 22 Jun 2025).
  • Visual Token Compression: Window-based token concatenation (WiCo), self-distillation compression (FCoT-VL), and spatio-temporal adaptive selection (B-VLLM) permit efficient scaling to high-resolution or long-sequence video without sacrificing performance; e.g., B-VLLM achieves >8× reduction in visual tokens while improving video QA accuracy (Lu et al., 13 Dec 2024, Li et al., 5 Apr 2025, Li et al., 22 Feb 2025). A window-concatenation sketch follows this list.
  • Prompting and Reasoning Enhancements: Set-of-Vision prompting and common-sense-generated descriptions augment in-context emotion recognition in natural scenes, while chain-of-thought prompting enables VLLMs to exhibit human-comparable cognitive flexibility and simulate neuropsychological deficits (Zhang et al., 3 Oct 2024, Xenos et al., 10 Apr 2024, Hao et al., 28 May 2025).
  • Knowledge Boundary Modeling: Sampling-based inference and lightweight boundary adapters allow VLLMs to dynamically gate expensive retrieval-augmented generation, maintaining or improving accuracy while reducing retrieval calls by 50% (Chen et al., 25 Feb 2025).
  • Modular Hybrid Systems: Lightweight LLM routers select optimal specialist models per query, outperforming monolithic VLLMs on object recognition while deferring to VLLM-enhanced reasoning when required (Cooper et al., 3 Oct 2024).
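
To make the token-compression idea concrete, the sketch below groups adjacent visual tokens in a k×k window, concatenates them along the channel dimension, and projects them back to the model width, cutting the token count by k². This is an assumption-laden reading of window-based token concatenation; the WiCo method in the cited paper may differ in its fusion and training details.

```python
# Illustrative window-based visual token concatenation: each k x k window of
# patch tokens becomes one channel-concatenated token, projected back to dim d.
# This is an assumed mechanism for illustration, not the exact WiCo algorithm.
import torch
import torch.nn as nn

class WindowTokenConcat(nn.Module):
    def __init__(self, dim=1024, window=2):
        super().__init__()
        self.window = window
        self.proj = nn.Linear(dim * window * window, dim)

    def forward(self, tokens):
        # tokens: (B, H*W, d) visual tokens laid out on an H x W patch grid
        B, N, d = tokens.shape
        H = W = int(N ** 0.5)
        k = self.window
        x = tokens.view(B, H, W, d)
        # group each k x k window into one token along the channel dimension
        x = x.view(B, H // k, k, W // k, k, d).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(B, (H // k) * (W // k), k * k * d)
        return self.proj(x)                       # (B, N/k^2, d): k^2-fold reduction

# Example: 576 ViT tokens (24 x 24 grid) reduced to 144 before entering the LLM.
reducer = WindowTokenConcat(dim=1024, window=2)
compressed = reducer(torch.randn(1, 576, 1024))
```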

6. Technical Challenges, Limitations, and Ethical Considerations

Despite rapid advances, VLLMs face substantial hurdles: hallucination and factuality gaps relative to curated domain corpora, context-window and compute constraints for high-resolution images and long videos, limited interpretability of cross-modal attention and tool use, and privacy, security, and accountability concerns in sensitive deployment domains such as medicine and driving.

7. Outlook and Future Directions

The field is converging on several frontiers:

  • Unified Multimodal Pretraining: Jointly scaling VLLMs to handle arbitrary numbers of images, videos, modalities (depth, point-cloud, 3D) with robust in-context learning and world knowledge (Lin et al., 2023, Li et al., 6 Jan 2025).
  • Hierarchical and Modular Reasoning: Architectures with dual-stream cross-modal attention, persistent visual memory, and explicit visual reasoning steps (e.g., visual chain-of-thought) to close the gap in causal and compositional tasks (Li et al., 15 Aug 2024).
  • Domain Expansion: Systematic development of VLLMs for face analysis, anomaly detection, scientific/industrial domains, and embodied control, requiring new datasets, expert priors, and security solutions (Li et al., 6 Jan 2025).
  • Interpretability and Accountability: Integrating attribution mapping, model introspection, and transparent RLHF or tool-usage modules.
  • Efficient Deployment: Pushing toward edge, real-time, and privacy-sensitive infrastructure via adaptive visual token budgets, encrypted inference, and dataset distillation (Li et al., 5 Apr 2025, Lu et al., 13 Dec 2024).

These advances collectively position VLLMs as a foundational technology for future multimodal AI, spanning language, vision, action, and beyond (Li et al., 6 Jan 2025, Lin et al., 2023, Umeike et al., 26 Jan 2025).
