Towards Interpreting Visual Information Processing in Vision-Language Models (2410.07149v1)

Published 9 Oct 2024 in cs.CV and cs.LG

Abstract: Vision-Language Models (VLMs) are powerful tools for processing and understanding text and images. We study the processing of visual tokens in the language model component of LLaVA, a prominent VLM. Our approach focuses on analyzing the localization of object information, the evolution of visual token representations across layers, and the mechanism of integrating visual information for predictions. Through ablation studies, we demonstrated that object identification accuracy drops by over 70% when object-specific tokens are removed. We observed that visual token representations become increasingly interpretable in the vocabulary space across layers, suggesting an alignment with textual tokens corresponding to image content. Finally, we found that the model extracts object information from these refined representations at the last token position for prediction, mirroring the process in text-only LLMs for factual association tasks. These findings provide crucial insights into how VLMs process and integrate visual information, bridging the gap between our understanding of language and vision models, and paving the way for more interpretable and controllable multimodal systems.

This paper investigates how Vision-Language Models (VLMs), specifically the LLaVA architecture, process visual information. LLaVA combines a pre-trained image encoder (CLIP), a pre-trained LLM (Vicuna), and an adapter network that maps image features into "visual tokens" fed into the LM. While VLMs are powerful, their internal workings, particularly how the LM handles these visual tokens, are not well understood.
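
To make this pipeline concrete, here is a minimal sketch of the image-to-visual-token path under stated assumptions; the module names (`clip_encoder`, `adapter`, `language_model`, `embed_tokens`) are illustrative placeholders, not the actual LLaVA API, and prepending the visual tokens is a simplification of how they are spliced into the prompt at an image placeholder position.

```python
import torch

def llava_style_forward(clip_encoder, adapter, language_model, embed_tokens,
                        pixel_values, input_ids):
    # 1. Encode the image into patch features with the CLIP vision tower.
    patch_feats = clip_encoder(pixel_values)              # (1, N_patches, d_vision)
    # 2. Project patch features into the LM's embedding space ("visual tokens").
    visual_tokens = adapter(patch_feats)                   # (1, N_patches, d_model)
    # 3. Embed the text prompt and prepend the visual tokens (simplification).
    text_embeds = embed_tokens(input_ids)                  # (1, T, d_model)
    inputs_embeds = torch.cat([visual_tokens, text_embeds], dim=1)
    # 4. Run the language model on the combined sequence as usual.
    return language_model(inputs_embeds=inputs_embeds)
```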

The paper addresses key questions about the localization of object information within visual tokens and the mechanisms by which the LM processes this information for predictions. The visual tokens are soft prompts, meaning they don't directly correspond to words in the vocabulary and their meaning is initially unclear.

The researchers employed several practical interpretability techniques:

  1. Ablation Studies for Localization: To determine where object information resides, they created a dataset using COCO images, filtered to focus on simpler scenes and ensure the model relies on visual evidence rather than hallucination. They then ablated (replaced with a mean embedding) subsets of visual tokens. The subsets were chosen based on:

    • Corresponding to the object's spatial location ("Object Tokens")
    • Object tokens plus neighbors ("Object Tokens with Buffer")
    • Tokens with high norms ("Register Tokens"), hypothesized to encode global features
    • Random tokens (baseline)
    • Tokens identified as important by Integrated Gradients (stronger baseline)

    They evaluated the impact of ablation on the model's ability to identify the target object using three tasks: generative image description, binary "Yes/No" polling, and visual question answering. The results showed that ablating object tokens, especially with a small buffer of neighboring tokens, caused a significantly larger drop in object identification accuracy (over 70% in some cases) compared to ablating register, random, or even high-gradient tokens. This strongly suggests that object-specific information is primarily localized to the visual tokens corresponding to the object's position in the image, rather than being diffused across global tokens.

    The ablation involves creating a modified set of visual embeddings $E_A'$ from the original $E_A = \{e_1, \ldots, e_N\}$:

    $e_i' = \begin{cases} \bar{e} & \text{if } i \in S \\ e_i & \text{otherwise} \end{cases}$

    where $S$ is the set of indices of tokens to ablate and $\bar{e}$ is a mean visual-token embedding. A minimal code sketch of this ablation is given after this list.

  2. Logit Lens for Representation Evolution: To understand how visual token representations change through the LM layers, they applied the logit lens technique. This involves projecting the hidden state $h_i^l$ of each token $i$ at each layer $l$ into the model's vocabulary space using the unembedding matrix $W_U$ and observing the top predicted tokens:

    $\text{logits}_i^l = h_i^l W_U^T$

    Surprisingly, they found that in later layers the representations of visual tokens align with interpretable text tokens describing the content of the original image patch. This included specific details (e.g., "diam" for a diamond pattern on a sweater) and sometimes even non-English terms. This indicates that the LM, despite being only fine-tuned, not pre-trained, on next-token prediction for visual inputs, refines visual information towards a language-interpretable space. However, some global features (such as object counts) sometimes appeared in background tokens, suggesting artifacts of the LM's text processing. This finding is significant because it suggests that the hypothesis that transformer layers iteratively refine representations towards vocabulary concepts may generalize to multimodal fine-tuning. Practical uses include deriving coarse segmentation maps from logit-lens activations and improving methods for reducing hallucination by directing attention. A minimal logit-lens sketch is given after this list.

  3. Attention Knockout for Information Flow: To trace how information flows from visual tokens to the final prediction, they used attention knockout. This technique involves setting attention weights between specific token groups to $-\infty$ in the attention mask $M$ at certain layers to block information flow (a code sketch of this masking is given after this list):

    $M^{\ell+1,\,j}_{rc} = -\infty$

    where $r$ is the target position, $c$ is the source position whose contribution is blocked, and $j$ indexes the attention head at layer $\ell+1$.

    They blocked attention in windows of layers between different token groups:

    • From object tokens (with/without buffer) to the Last Token Position (where the model generates the answer).
    • From non-object tokens to the Last Token Position.
    • Among visual tokens themselves (e.g., non-last row to last row of visual tokens), testing a hypothesis that information is summarized in a subset of visual tokens.

    The results showed that blocking attention from object tokens to the Last Token Position in mid to late layers noticeably degraded performance. This suggests that the model directly extracts object-specific information from these localized visual tokens in later processing stages. Blocking attention from non-object tokens in early layers also impacted performance, indicating the early integration of broader contextual information. Crucially, blocking attention among visual tokens themselves had minimal impact, suggesting the model does not rely on summarizing visual information within a specific subset of visual tokens before using it for the final prediction.
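
The following is a minimal sketch of the token ablation from item 1, assuming the visual embeddings are available as an (N, d) tensor (N = 576 for LLaVA-1.5's 24x24 grid of visual tokens) and that a mean embedding has been precomputed over many images; the bounding-box helper and its grid assumption are illustrative rather than the paper's exact implementation.

```python
import torch

def ablate_visual_tokens(visual_embeds: torch.Tensor,
                         ablate_idx: torch.Tensor,
                         mean_embedding: torch.Tensor) -> torch.Tensor:
    """Build E_A' from E_A: replace e_i with the mean embedding for i in S."""
    ablated = visual_embeds.clone()           # (N, d)
    ablated[ablate_idx] = mean_embedding      # e_i' = e_bar if i in S, else e_i
    return ablated

def bbox_to_token_indices(bbox_xyxy, image_size, grid=24):
    """Map an object's pixel bounding box to indices on the visual-token grid.
    Hypothetical helper: assumes the image maps onto a `grid` x `grid` token layout."""
    x0, y0, x1, y1 = bbox_xyxy
    w, h = image_size
    c0, c1 = int(x0 / w * grid), min(int(x1 / w * grid), grid - 1)
    r0, r1 = int(y0 / h * grid), min(int(y1 / h * grid), grid - 1)
    return torch.tensor([r * grid + c
                         for r in range(r0, r1 + 1)
                         for c in range(c0, c1 + 1)])
```

The ablated embeddings are then fed to the LM in place of the original visual tokens, and the three identification tasks are re-run to measure the drop in accuracy.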
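
Next, a minimal logit-lens sketch for item 2. It assumes hidden states were collected with `output_hidden_states=True`, and that the final normalization layer and unembedding (LM head) of the underlying language model are passed in explicitly, since attribute paths differ across model classes.

```python
import torch

@torch.no_grad()
def logit_lens(hidden_state, final_norm, lm_head, tokenizer, top_k=5):
    """Project one token's layer-l hidden state h_i^l into vocabulary space."""
    h = final_norm(hidden_state)   # applying the final norm before W_U is the usual convention
    logits = lm_head(h)            # logits = h_i^l W_U^T
    top = torch.topk(logits, top_k)
    return [(tokenizer.decode(int(idx)), val.item())
            for idx, val in zip(top.indices, top.values)]
```

Calling this on the hidden state of visual token i at layer l (e.g., `outputs.hidden_states[l][0, i]`) returns the nearest vocabulary tokens; sweeping over all visual tokens and layers reproduces the kind of increasingly interpretable predictions described above.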
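
Finally, a minimal sketch of the attention-knockout mask from item 3. It constructs an additive mask that blocks a target position (e.g., the last token) from attending to a chosen set of source positions; applying it only within a window of layers generally requires patching those layers' attention computations, which is left abstract here.

```python
import torch

def knockout_mask(seq_len, blocked_src, target_pos, dtype=torch.float32):
    """Additive attention mask with entries set to -inf for blocked (target, source) pairs."""
    mask = torch.zeros(seq_len, seq_len, dtype=dtype)
    mask[target_pos, blocked_src] = float("-inf")   # block target r from attending to sources c
    return mask

# Hypothetical usage: block the last token position from attending to the object
# tokens in layers [l_start, l_start + window); `patch_layer_attention` stands in
# for whatever hook or monkey-patch adds `extra_mask` to that layer's attention scores.
# object_idx = bbox_to_token_indices(bbox, image_size)
# for layer in range(l_start, l_start + window):
#     patch_layer_attention(model, layer,
#                           extra_mask=knockout_mask(seq_len, object_idx, target_pos=seq_len - 1))
```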

In conclusion, the paper provides evidence that in LLaVA: object information is localized to specific visual tokens, these visual tokens become interpretable as language concepts through the layers, and the model extracts information directly from relevant visual tokens in later layers for prediction. These findings are foundational for building more interpretable, controllable, and robust multimodal systems. The techniques used (ablation, logit lens, attention knockout) are practical methods for probing VLM internals and can be applied to further research into hallucination reduction and model editing. The code for the experiments is publicly available, enabling practitioners to replicate and extend these analyses.

Authors (6)
  1. Clement Neo (9 papers)
  2. Luke Ong (23 papers)
  3. Philip Torr (172 papers)
  4. Mor Geva (58 papers)
  5. David Krueger (75 papers)
  6. Fazl Barez (42 papers)
Citations (2)