Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions (2305.10614v2)

Published 17 May 2023 in cs.CL and cs.AI

Abstract: While there is much recent interest in studying why Transformer-based LLMs make predictions the way they do, the complex computations performed within each layer have made their behavior somewhat opaque. To mitigate this opacity, this work presents a linear decomposition of final hidden states from autoregressive LLMs based on each initial input token, which is exact for virtually all contemporary Transformer architectures. This decomposition allows the definition of probability distributions that ablate the contribution of specific input tokens, which can be used to analyze their influence on model probabilities over a sequence of upcoming words with only one forward pass from the model. Using the change in next-word probability as a measure of importance, this work first examines which context words make the biggest contribution to LLM predictions. Regression experiments suggest that Transformer-based LLMs rely primarily on collocational associations, followed by linguistic factors such as syntactic dependencies and coreference relationships in making next-word predictions. Additionally, analyses using these measures to predict syntactic dependencies and coreferent mention spans show that collocational association and repetitions of the same token largely explain the LLMs' predictions on these tasks.
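Below is a minimal, hypothetical sketch (not the authors' released code) of how a token-wise linear decomposition of the final hidden state could be used as the abstract describes: ablate a single input token's contribution before the output projection and measure its importance as the change in next-word probability. The `contributions` tensor, the stand-in `unembedding` matrix, and the helper `next_word_probs` are illustrative assumptions; the paper's exact decomposition procedure is not reproduced here.

```python
# Sketch: using a per-token decomposition of the final hidden state to
# ablate individual input tokens and measure their influence on the
# next-word distribution (assumed setup, not the paper's implementation).

import torch

torch.manual_seed(0)

seq_len, hidden_dim, vocab_size = 8, 16, 100

# Assumption: per-token contribution vectors whose sum over the sequence
# axis equals the final hidden state at the prediction position.
contributions = torch.randn(seq_len, hidden_dim)
final_hidden = contributions.sum(dim=0)

# Assumption: stand-in output projection (unembedding) to vocabulary logits.
unembedding = torch.randn(hidden_dim, vocab_size)

def next_word_probs(hidden: torch.Tensor) -> torch.Tensor:
    """Project a hidden state to a next-word probability distribution."""
    return torch.softmax(hidden @ unembedding, dim=-1)

full_probs = next_word_probs(final_hidden)
next_word_id = int(full_probs.argmax())  # word whose probability we track

# Ablate each input token by subtracting its contribution vector, then
# score its importance as the drop in the tracked word's probability.
for t in range(seq_len):
    ablated_hidden = final_hidden - contributions[t]
    ablated_probs = next_word_probs(ablated_hidden)
    importance = (full_probs[next_word_id] - ablated_probs[next_word_id]).item()
    print(f"token {t}: change in p(next word) = {importance:+.4f}")
```

Because the decomposition is linear and exact, all such ablated distributions can be obtained from a single forward pass, as the abstract notes.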

Authors (2)
  1. Byung-Doh Oh (9 papers)
  2. William Schuler (15 papers)
Citations (2)
