ACT-ViT: Vision Transformer for Hallucination Detection

Updated 4 October 2025
  • The paper introduces ACT-ViT, which leverages full activation tensors and a ViT backbone to detect hallucinations in LLM outputs with improved accuracy.
  • ACT-ViT processes pooled activation tensors to capture global cross-layer and token dependencies, enabling efficient cross-model training and robust zero-shot generalization.
  • Empirical results show ACT-ViT achieving 1 to 7 point AUC improvements over static probes with rapid inference suitable for real-time deployment.

ACT-ViT is a Vision Transformer–inspired architecture designed for efficient, accurate, and transferable hallucination detection in LLMs. Unlike traditional static probes, which classify hallucinations based on local, model-specific features (such as individual layer–token representations), ACT-ViT processes the full activation tensor—jointly capturing interactions across all layers and all generated tokens. This enables robust cross-LLM training, strong zero-shot generalization, highly efficient inference, and state-of-the-art detection performance on multiple LLM–dataset combinations (Bar-Shalom et al., 30 Sep 2025).

1. Motivation and Conceptual Overview

LLMs often generate erroneous or fabricated content, known as hallucinations, which can vary in locus and expression across different architectures, outputs, and datasets. Conventional detection methods—primarily static token probes operating on isolated layer–token pairs—suffer from the inability to aggregate cues distributed across the model’s internal state, and from overfitting to individual LLM idiosyncrasies.

ACT-ViT addresses these limitations by (i) leveraging the sequential structure of hidden activations across both layers and tokens, (ii) treating the full activation tensor (AT) as a spatial entity analogous to an image, and (iii) employing a Vision Transformer–based backbone to model global dependencies and patterns. The approach is agnostic to the specific LLM and supports efficient, multi-model, and multi-dataset training and adaptation.

2. ACT-ViT Model Architecture

The ACT-ViT framework comprises three functional modules, each of which processes and adapts the activation tensor for cross-model classification:

| Module | Function | Output shape |
|---|---|---|
| Pooling layer | Downsamples the activation tensor (AT) | $L_p \times N_p \times D_m$ |
| Linear adapter (LA) | Projects model-specific activations to a shared dimension $D'$ | $L_p \times N_p \times D'$ |
| ViT backbone | Extracts features from the (layer, token) “image” | Global hallucination score |

Given an LLM’s output activation tensor $\mathcal{A} \in \mathbb{R}^{L_m \times N \times D_m}$ (where $L_m$ is the number of layers, $N$ the number of generated tokens, and $D_m$ the hidden dimension), ACT-ViT applies a pooling layer (typically max-pooling) to resize it to a fixed spatial grid ($L_p \times N_p$) regardless of sequence length or model depth. The per-LLM linear adapter $\text{LA}_M: \mathbb{R}^{D_m} \to \mathbb{R}^{D'}$ aligns features from different models into a unified space. The resulting tensor is partitioned into spatial patches and flattened, with positional encodings added, before being processed by a standard ViT backbone (multi-layer, multi-head self-attention followed by MLP blocks). The final classification is obtained from the ViT head after aggregation.
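
A minimal PyTorch sketch of this pipeline is given below, under assumed, illustrative hyperparameters (a 16×16 pooled grid, 4×4 patches, a small Transformer encoder standing in for the ViT backbone); the class name, dimensions, and depth are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ACTViTSketch(nn.Module):
    """Sketch of the ACT-ViT pipeline: pool -> per-model linear adapter -> ViT -> score.
    Grid size, patch size, D', and depth are illustrative placeholders."""

    def __init__(self, model_dims, d_shared=256, grid=(16, 16), patch=4,
                 depth=4, heads=4):
        super().__init__()
        self.grid, self.patch = grid, patch
        # One lightweight linear adapter per LLM: projects D_m -> shared D'.
        self.adapters = nn.ModuleDict(
            {name: nn.Linear(d_m, d_shared) for name, d_m in model_dims.items()})
        n_patches = (grid[0] // patch) * (grid[1] // patch)
        self.patch_proj = nn.Linear(d_shared * patch * patch, d_shared)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_shared))  # positional encodings
        layer = nn.TransformerEncoderLayer(d_model=d_shared, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)  # stands in for the ViT backbone
        self.head = nn.Linear(d_shared, 1)  # global hallucination logit

    def forward(self, acts, model_name):
        # acts: (B, L_m, N, D_m) full activation tensor from one LLM.
        b, p = acts.shape[0], self.patch
        x = acts.permute(0, 3, 1, 2)                          # (B, D_m, L_m, N)
        x = nn.functional.adaptive_max_pool2d(x, self.grid)   # pool to a fixed (L_p, N_p) grid
        x = x.permute(0, 2, 3, 1)                             # (B, L_p, N_p, D_m)
        x = self.adapters[model_name](x)                      # (B, L_p, N_p, D')
        # Partition the (layer, token) grid into non-overlapping patches and flatten them.
        x = x.unfold(1, p, p).unfold(2, p, p)                 # (B, L_p/p, N_p/p, D', p, p)
        x = x.contiguous().view(b, -1, x.shape[3] * p * p)
        x = self.patch_proj(x) + self.pos                     # patch embeddings + positions
        x = self.encoder(x)
        return self.head(x.mean(dim=1)).squeeze(-1)           # one logit per example
```

For example, `ACTViTSketch({"mistral-7b": 4096, "llama3-8b": 4096})` would create one adapter per LLM while sharing the pooling, backbone, and classification head.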

This architecture exploits the analogy between (layer, token) axes in the activation tensor and (height, width) axes in images, while the channel dimension corresponds to the feature space—enabling the ViT to learn distributed, spatial predictors of hallucination cues.

3. Role of Activation Tensors

The central data structure in ACT-ViT is the activation tensor: the stack of all hidden representations across every transformer layer and every token in the generated sequence. Formally, for a model $M$, this tensor is $\mathcal{A}_M \in \mathbb{R}^{L_m \times N \times D_m}$.

The efficacy of hallucination detection depends critically on the ability to aggregate information spread across both depth (layers) and position (tokens), since key signals may surface nonlocally and will differ among architectures and tasks. By “pooling” the activation tensor to a fixed size and treating layer–token axes as spatial, ACT-ViT enables pattern recognition methods originally derived for vision to capture latent interactions and positional dependencies.
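
As a concrete illustration of how such a tensor can be obtained, the sketch below stacks the per-layer hidden states returned by Hugging Face Transformers; the model name and prompt are placeholders, not the paper's exact extraction setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model and prompt; any causal LM that exposes hidden states works.
name = "mistralai/Mistral-7B-Instruct-v0.2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

text = "Q: Who wrote 'The Old Man and the Sea'? A: Ernest Hemingway."
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of L_m + 1 tensors (embedding output plus one per
# transformer layer), each of shape (1, N, D_m). Dropping the embedding entry and
# stacking the rest yields the activation tensor A_M of shape (L_m, N, D_m).
acts = torch.stack(out.hidden_states[1:], dim=0).squeeze(1)
print(acts.shape)  # (32, N, 4096) for Mistral-7B
```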

A further advantage of this approach is support for datasets from multiple models: because only the per-model LA is responsible for projecting each LLM's activations into the shared dimension $D'$, the ViT backbone learns reusable features shared across LLMs.

4. Training Regime and Computational Efficiency

ACT-ViT is jointly trained on labeled datasets from multiple LLMs and tasks, exploiting shared features for hallucination prediction. The LA modules are model-specific but lightweight; the ViT backbone and pooling layers are shared.

Key properties:

  • Multi-LLM training: All activation tensors are pooled and projected to a shared shape and space, enabling the ViT backbone to learn to detect hallucinations in a model-independent way (a minimal training-loop sketch follows this list).
  • Fine-tuning: Adapting ACT-ViT to an unseen LLM requires only updating the LA; the ViT backbone remains fixed.
  • Data efficiency: The shared backbone and pooling make it possible to transfer to new domains or LLMs using limited data.
  • Computational efficiency: End-to-end training on 15 model–dataset pairs completes in under three hours on a single GPU. Inference per instance takes $\sim 10^{-5}$ seconds, enabling real-time deployment.
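
A minimal sketch of this joint training regime, assuming the hypothetical `ACTViTSketch` module above and per-LLM data loaders yielding (activation tensor, label) batches:

```python
import torch
import torch.nn.functional as F

# Hypothetical joint training loop: `probe` is the ACTViTSketch above, and
# `loaders` maps each LLM name to a DataLoader of (activation tensor, label) batches.
def train_multi_llm(probe, loaders, epochs=3, lr=1e-4):
    opt = torch.optim.AdamW(probe.parameters(), lr=lr)   # adapters + shared backbone
    for _ in range(epochs):
        for name, loader in loaders.items():             # iterate over LLM-dataset pairs
            for acts, labels in loader:                   # acts: (B, L_m, N, D_m)
                logits = probe(acts, model_name=name)
                loss = F.binary_cross_entropy_with_logits(logits, labels.float())
                opt.zero_grad()
                loss.backward()
                opt.step()
    return probe
```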

5. Empirical Performance

Comprehensive experiments were conducted across 15 combinations of LLMs and datasets, including models such as Mistral-7B-Instruct, Llama-3-8B-Instruct, and Qwen-7B, spanning question answering, sentiment analysis, and factual retrieval scenarios. Performance was primarily measured via area under the ROC curve (AUC), comparing with traditional static probes and probability-based methods.

ACT-ViT consistently achieved higher AUC scores, reporting improvements of 1 to 7 points over the best baselines in various tasks. Layer–token heatmap analyses revealed that predictive hallucination cues were distributed differently across datasets and models, justifying the global approach. In leave-one-dataset-out (zero-shot) evaluation, ACT-ViT maintained strong detection accuracy on unseen data, highlighting robust generalization.
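
For reference, detection quality is scored as in any binary classification setting; the snippet below computes AUC with scikit-learn on toy labels and probe scores (the numbers are illustrative, not taken from the paper).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy example: 1 = hallucinated response, 0 = faithful response.
labels = np.array([0, 1, 1, 0, 1, 0])
scores = np.array([0.12, 0.81, 0.64, 0.33, 0.92, 0.05])  # probe outputs

print(roc_auc_score(labels, scores))  # area under the ROC curve
```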

6. Transferability and Adaptation

A notable attribute is the framework’s ability to generalize to both new datasets and new LLMs:

  • Zero-shot: Training on 14 of 15 LLM–dataset pairs, ACT-ViT achieves strong detection on the remaining pair with no retraining.
  • New-model adaptation: When faced with a novel LLM, only the lightweight LA requires fine-tuning; all shared parameters (including the ViT backbone) remain fixed, resulting in rapid deployment and sample efficiency (see the adaptation sketch after this list).
  • Few-shot: Even when only a small subset of annotated activations is available, transfer learning via the LA module is effective; the paper shows detection performance is competitive or superior to static probes.
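
A sketch of this adaptation step, again assuming the hypothetical `ACTViTSketch` module defined earlier: a fresh adapter is registered for the unseen LLM and trained while every shared parameter stays frozen.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adapt_to_new_llm(probe, new_name, new_hidden_dim, loader, epochs=3, lr=1e-3):
    # Register a fresh linear adapter mapping the new model's D_m to the shared D'.
    d_shared = probe.pos.shape[-1]
    probe.adapters[new_name] = nn.Linear(new_hidden_dim, d_shared)
    # Freeze everything, then re-enable gradients only for the new adapter.
    for p in probe.parameters():
        p.requires_grad = False
    for p in probe.adapters[new_name].parameters():
        p.requires_grad = True
    opt = torch.optim.AdamW(probe.adapters[new_name].parameters(), lr=lr)
    for _ in range(epochs):
        for acts, labels in loader:  # acts: (B, L_m, N, D_m) from the new LLM
            loss = F.binary_cross_entropy_with_logits(
                probe(acts, model_name=new_name), labels.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```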

7. Practical Use and Future Directions

ACT-ViT’s low inference latency ($\approx 10^{-5}$ seconds) and broad LLM compatibility make it suitable for live LLM output monitoring. The approach dramatically reduces the burden of per-model reannotation, as the backbone learns cross-LLM features.

Current limitations include potential information loss at the pooling stage—a tradeoff for computational speed—and handling hidden dimension permutation symmetries, which currently necessitates the use of per-model LAs.

Potential avenues for further research highlighted by the authors include:

  • Enhanced pooling strategies that preserve more activation detail while controlling inference cost,
  • Architecture modifications to achieve invariance to neuron permutations, possibly obviating model-specific adapters,
  • Application of the framework to other LLM error types (such as data contamination or output verification).

8. Significance and Impact

ACT-ViT demonstrates that full-tensor, vision-inspired modeling of LLM activations provides substantial gains in error detection over local, model-specific probes. Its performance, efficiency, and generalization properties recommend it as a standard method for hallucination detection at scale, addressing a critical need for safe and reliable LLM deployment (Bar-Shalom et al., 30 Sep 2025).

References

Bar-Shalom et al., 30 Sep 2025.