Semantic Information Theory for LLMs

Updated 5 November 2025
  • The paper introduces a token-centric framework that redefines rate-distortion and directed information to quantify semantic flow in LLMs.
  • It employs semantic embeddings and optimal vectorization techniques, using tools such as Johnson-Lindenstrauss projections to compress representations while preserving predictive power.
  • The work provides a unified toolkit for LLM evaluation, architecture design, and diagnostics, highlighting tradeoffs between compression and semantic fidelity.

Semantic Information Theory for LLMs encompasses a formal, information-theoretic investigation into how LLMs encode, process, compress, and transmit semantically meaningful information, with tokens (not bits) as the primary units. Recent advances in this area redefine classical information-theoretic concepts such as rate-distortion and mutual and directed information at the token-sequence level, providing a comprehensive mathematical framework for interpreting, analyzing, and guiding both model architecture and practical deployment.

1. Token-Level Foundations of Semantic Information in LLMs

Traditional information theory is bit-centric, but bits are semantically opaque in natural language contexts. Semantic information theory for LLMs, as established in (Bai, 3 Nov 2025), reorients the field: the token—that is, a word or subword unit—is the atomic, interpretable unit of information. LLMs are formalized as discrete-time, feedback-enabled channels operating on token sequences, permitting semantic information flow to be tracked and quantified over individual tokens rather than undifferentiated bitstreams.

The token-level probabilistic process underlying LLMs is:

  • Input sequence: $X_{1:n}$
  • Semantic embedding function: $f(X_{1:n}) = S_{1:n}$
  • LLM next-token prediction: $P(U_t \mid U_{n+1:t-1}, S_{1:n}; \Phi)$
  • Output decoding back to tokens: $Y_t = \varphi(U_t)$

Such formalism is architecture-agnostic, unifying Transformers, Mamba architectures, and large language diffusion models (LLaDA) under a common information-theoretic abstraction.
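
As a deliberately minimal illustration of this token-level pipeline, the sketch below wires the four steps together over a toy vocabulary. The lookup-table embedding $f$, the linear readout standing in for $\Phi$, and the identity decoder $\varphi$ are illustrative assumptions, not the constructions used in (Bai, 3 Nov 2025).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and a hypothetical context X_{1:n} (token IDs).
vocab = ["the", "cat", "sat", "on", "mat", "."]
V = len(vocab)
x = np.array([0, 1, 2])            # X_{1:n} = "the cat sat"

# Semantic embedding f(X_{1:n}) = S_{1:n}: a random lookup table
# standing in for a learned embedding.
d = 8
E = rng.normal(size=(V, d))
S = E[x]                           # S_{1:n}, shape (n, d)

# Toy "LLM" parameters Phi: a single linear readout over the mean of
# the embedded context plus the previously generated tokens.
Phi = rng.normal(size=(d, V))

def next_token_dist(S_ctx, generated):
    """P(U_t | U_{n+1:t-1}, S_{1:n}; Phi) as a softmax over the vocabulary."""
    h = np.vstack([S_ctx] + [E[u][None, :] for u in generated]).mean(axis=0)
    logits = h @ Phi
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Autoregressive generation U_{n+1:T} and decoding Y_t = phi(U_t).
generated = []
for _ in range(4):
    p = next_token_dist(S, generated)
    u_t = rng.choice(V, p=p)       # sample U_t
    generated.append(u_t)

decoded = [vocab[u] for u in generated]   # Y_t = phi(U_t), here a lookup
print("generated tokens:", decoded)
```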

2. Redefining Information-Theoretic Quantities: Directed Rate-Distortion, Reward, and Semantic Flow

Classical measures—rate-distortion, mutual information—are redefined over tokens:

  • Directed Rate-Distortion (Pre-training):

$$R_{\mathrm{pre}}(D) = \frac{1}{T} \inf_{\Phi:\, \frac{1}{T}\sum_{t=n+1}^{T} D_{\mathrm{KL}}(P_t^{\hbar} \Vert Q_t^{\Phi}) < D} I(S_{1:n} \rightarrow U_{n+1:T}; \Phi)$$

where $I(\cdot \rightarrow \cdot)$ is directed information, quantifying semantic information flow from context to generation.

  • Directed Rate-Reward (Post-training/RLHF):

$$R_{\mathrm{post}}(W) = \frac{1}{T} \inf_{\Phi^{\hbar}:\, w(S_{1:n}, U_{n+1:T}) > W} I(S_{1:n} \rightarrow U_{n+1:T}; \Phi^{\hbar})$$

incorporating reward signals such as human preference.

  • Semantic Information Flow (Inference):

$$\imath(S_{1:n} \rightarrow U_{n+1:t}; \Phi^{\hbar+}) = \sum_{\tau=n+1}^{t} \imath(S_{1:n}; U_\tau \mid U_{n+1:\tau-1}; \Phi^{\hbar+})$$

defines the realization-wise information density, central to prompt analysis and inference diagnostics; a minimal computational proxy is sketched after this list.

  • Granger causality operationalizes the causal role of context in sequence prediction, capturing the capacity of an LLM to emulate human-like semantic reasoning under sequential inference.
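
One way to make the realization-wise information density concrete is to approximate it as the per-token log-ratio between the model's next-token probability with the semantic context present and with the context removed, accumulated over the generated sequence. The sketch below implements this proxy with an off-the-shelf causal LM (gpt2 is an arbitrary small choice); treating "context removed" as conditioning only on a BOS token is an assumption of this sketch, not the construction in (Bai, 3 Nov 2025).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Proxy for the realization-wise information density: accumulate, token by
# token, the log-ratio between the model's next-token probability given the
# semantic context (here: the prompt) and its probability without it.
model_name = "gpt2"  # arbitrary small causal LM; any AutoModelForCausalLM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

context = "The capital of France is"
continuation = " Paris, a major European city."

ctx_ids = tok(context, return_tensors="pt").input_ids[0]
gen_ids = tok(continuation, return_tensors="pt").input_ids[0]

def token_logprobs(prefix_ids, target_ids):
    """log P(target_t | prefix, target_{<t}) for each target token."""
    ids = torch.cat([prefix_ids, target_ids]).unsqueeze(0)
    with torch.no_grad():
        logits = model(ids).logits[0]
    logps = torch.log_softmax(logits, dim=-1)
    offset = len(prefix_ids)
    # Logits at position p predict the token at position p + 1.
    return [logps[offset + t - 1, tgt].item() for t, tgt in enumerate(target_ids)]

with_ctx = token_logprobs(ctx_ids, gen_ids)
# "Context removed": condition only on a BOS token plus the continuation's own
# prefix -- an assumption of this sketch, not a quantity from the paper.
bos = torch.tensor([tok.eos_token_id])
without_ctx = token_logprobs(bos, gen_ids)

per_token_density = [a - b for a, b in zip(with_ctx, without_ctx)]
print("per-token density (nats):", [round(v, 3) for v in per_token_density])
print("cumulative semantic information flow (nats):", round(sum(per_token_density), 3))
```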

3. Token-Level Semantic Embeddings and Optimal Vectorization

Semantic information theory provides rigorous tools to analyze and construct vector representations for tokens:

  • Semantic Embedding Spaces:

Formalized as probabilistic inner product spaces $(\mathbb{S}^{N-1}, \mathscr{F}, \mu, \langle\cdot,\cdot\rangle)$, where each token's meaning is embedded as a high-dimensional vector.

  • Dimensionality Reduction via Johnson-Lindenstrauss:

Provides bounds on semantic compression, i.e., how many dimensions can be discarded without incurring excessive semantic distortion; a random-projection sketch illustrating this appears at the end of this section.

  • Optimal Semantic Embeddings for Prediction:

The ideal embedding maximizes backward directed information:

$$\max_{S_t = f(X_{1:t})} I(X_{t+1:n}; S_t \mid S_{1:t-1})$$

yielding vectorizations supporting maximal predictive fidelity with minimal redundancy.

  • Connections to Contrastive Predictive Coding (CPC):

CPC can be interpreted as an upper-bound maximizer of this information objective, making it a practical though theoretically suboptimal approach.
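
Returning to the Johnson-Lindenstrauss bullet above, the sketch below runs a random-projection experiment: compress synthetic token embeddings from 768 to 64 dimensions and measure how well pairwise distances, a rough proxy for semantic distortion, are preserved. The embeddings are random placeholders rather than vectors from any particular LLM.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for token embeddings: 1000 "tokens" in 768 dimensions.
n_tokens, d_orig, d_low = 1000, 768, 64
X = rng.normal(size=(n_tokens, d_orig))

# Johnson-Lindenstrauss style projection: a Gaussian random matrix scaled by
# 1/sqrt(d_low) approximately preserves pairwise Euclidean distances.
R = rng.normal(size=(d_orig, d_low)) / np.sqrt(d_low)
Y = X @ R

# Sample random token pairs and compare distances before and after projection.
idx_a = rng.integers(0, n_tokens, size=2000)
idx_b = rng.integers(0, n_tokens, size=2000)
mask = idx_a != idx_b                      # avoid zero-distance self-pairs
d_hi = np.linalg.norm(X[idx_a[mask]] - X[idx_b[mask]], axis=1)
d_lo = np.linalg.norm(Y[idx_a[mask]] - Y[idx_b[mask]], axis=1)

ratio = d_lo / d_hi
print(f"distance ratio after projection: mean={ratio.mean():.3f}, "
      f"5th pct={np.percentile(ratio, 5):.3f}, 95th pct={np.percentile(ratio, 95):.3f}")
```

With settings like these, the distance ratios typically concentrate near 1, illustrating that a substantial fraction of dimensions can be discarded with bounded distortion.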

4. Compression, Semantic Fidelity, and the Human-LLM Tradeoff

Recent frameworks (Shani et al., 21 May 2025) extend semantic information theory to analyze the tradeoff LLMs make between compression (minimizing representation redundancy) and semantic fidelity (preserving meaning):

  • Objective Function for Compression-Fidelity Tradeoff:

$$\mathcal{L}(X, C; \beta) = \mathrm{Complexity}(X, C) + \beta \cdot \mathrm{Distortion}(X, C)$$

with complexity quantifying representational cost (the mutual information between items and clusters) and distortion measuring the loss of semantic nuance; a toy computation of this objective appears at the end of this section.

  • Empirical Results:

LLMs efficiently compress semantic space, clustering concepts as humans do for broad categories, but fail to capture the rich typicality gradients and context-adaptive richness that characterize human conceptual organization.

  • Architecture Impact:

Encoder-only (BERT-like) models align more closely with human semantic categorization than larger decoder-only models do, underscoring the importance of architectural bias and pretraining objective.
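
To ground the objective above, the toy computation below assigns synthetic concept vectors to hard clusters, takes complexity as the mutual information $I(X; C)$ (which reduces to the cluster entropy under a uniform item prior and deterministic assignment) and distortion as the mean squared distance to cluster centroids. Both choices are illustrative simplifications; the definitions in (Shani et al., 21 May 2025) may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy concept embeddings and a hard clustering into k categories.
n_items, d, k = 200, 16, 5
X = rng.normal(size=(n_items, d))
assign = rng.integers(0, k, size=n_items)          # cluster labels C

def complexity(assign, k):
    """I(X; C) under a uniform item prior and a deterministic assignment.

    With p(x) = 1/n and p(c|x) a point mass, I(X; C) reduces to H(C) in bits."""
    counts = np.bincount(assign, minlength=k).astype(float)
    p_c = counts / counts.sum()
    p_c = p_c[p_c > 0]
    return float(-(p_c * np.log2(p_c)).sum())

def distortion(X, assign, k):
    """Mean squared distance of each item to its cluster centroid."""
    total = 0.0
    for c in range(k):
        members = X[assign == c]
        if len(members):
            total += ((members - members.mean(axis=0)) ** 2).sum()
    return total / len(X)

beta = 0.1
C_val, D_val = complexity(assign, k), distortion(X, assign, k)
print(f"complexity={C_val:.3f} bits, distortion={D_val:.3f}, "
      f"objective L={C_val + beta * D_val:.3f}")
```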

5. Quantifying Semantic Emergence and Preservation

Measuring semantic information emergence and retention layer-by-layer reveals where and how LLMs develop and lose meaning:

$$E(l) = \mathrm{MI}(h^{ma}_{l+1}, h^{ma}_{l}) - \frac{1}{T}\sum_{t=0}^{T-1} \mathrm{MI}(h^{mi_t}_{l+1}, h^{mi_t}_{l})$$

where the first term measures macro-level (semantic) mutual information and the second term averages micro-level (token-wise) information propagation across positions. Positive emergence reflects the transformer’s ability to aggregate local information into holistic meaning.

Sentence-level and token-level MI are tightly coupled, with Fano’s inequality providing practical ways to measure semantic preservation via token recoverability from hidden states. Encoder-only architectures maintain higher recoverability, while decoder-only models exhibit pronounced late-layer “forgetting,” especially for longer inputs.
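
A minimal numerical sketch of the emergence measure is given below: synthetic hidden states for two adjacent layers are summarized as scalars (vector norms), and a histogram-based estimator approximates the macro-level and averaged micro-level mutual information terms. Both the scalar summaries and the binned estimator are simplifying assumptions of this sketch; the original work (Chen et al., 21 May 2024) uses its own estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

def binned_mi(a, b, bins=16):
    """Histogram-based MI estimate (in nats) between two scalar samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())

# Synthetic hidden states: (n samples, T tokens, d dims) for layers l and l+1,
# with layer l+1 partially a function of layer l to create dependence.
n, T, d = 2000, 8, 32
h_l = rng.normal(size=(n, T, d))
h_lp1 = 0.8 * h_l + 0.2 * rng.normal(size=(n, T, d))

# Scalar summaries: macro = norm of the mean-pooled sequence representation,
# micro = per-token norms.
macro_l = np.linalg.norm(h_l.mean(axis=1), axis=-1)
macro_lp1 = np.linalg.norm(h_lp1.mean(axis=1), axis=-1)
mi_macro = binned_mi(macro_lp1, macro_l)

mi_micro = np.mean([
    binned_mi(np.linalg.norm(h_lp1[:, t], axis=-1),
              np.linalg.norm(h_l[:, t], axis=-1))
    for t in range(T)
])

E_l = mi_macro - mi_micro
print(f"macro MI={mi_macro:.3f}, mean micro MI={mi_micro:.3f}, emergence E(l)={E_l:.3f}")
```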

6. Applications and Implications for LLM Design and Evaluation

Semantic information theory provides operational metrics and design criteria applicable across the LLM lifecycle:

  • LLM Evaluation and Comparison:

Recoverability, emergence metrics, and directed information enable comparison of “semantic understanding” in architecture-agnostic ways, informing both competitive benchmarking and model improvement.

  • Model Training and Embedding Design:

Semantic compression bounds, optimal vectorization, and semantic flow measures point to principled approaches for parameter allocation, tokenization, and even new architecture design (e.g., for quantum-inspired or multimodal models (Laine, 13 Apr 2025, Tao et al., 24 Oct 2024)).

  • Trust, Uncertainty, and Privacy:

Semantic information theory informs uncertainty quantification (e.g., via semantic cluster consistency (Ao et al., 5 Jun 2024)) and privacy leak detection (layer-wise semantic coherence (He et al., 24 Jun 2025)), with high semantic certainty serving as an indicator of memorized or private data; a toy clustering-based sketch of the uncertainty measure appears at the end of this section.

  • Semantic Communication:

LLMs as semantic encoders/decoders enable new paradigms for communication-theoretic system design, exploiting their internal knowledge base for optimal error correction and meaning transmission (Wang et al., 19 Jul 2024).

  • Node and Graph Semantics:

LLM-guided semantic augmentation strengthens graph-based reasoning and node importance estimation by fusing LLM-extracted knowledge with structured ontologies (Lin et al., 30 Nov 2024).
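
As an illustration of the uncertainty-quantification point above, the sketch below computes a crude form of semantic-cluster consistency: sample several answers to the same prompt, group them into semantic equivalence clusters, and take the entropy of the cluster distribution, with low entropy indicating high semantic certainty. The word-overlap equivalence test is a deliberately simple stand-in for the semantic equivalence checks such methods use in practice.

```python
import numpy as np
from collections import Counter

def crude_equivalent(a: str, b: str) -> bool:
    """Toy stand-in for a semantic equivalence check based on word overlap."""
    clean = lambda s: set(s.lower().replace(".", "").replace(",", "").split())
    wa, wb = clean(a), clean(b)
    return len(wa & wb) / max(len(wa | wb), 1) > 0.6

def cluster_answers(answers):
    """Greedy clustering: each answer joins the first cluster it is equivalent to."""
    reps, labels = [], []
    for ans in answers:
        for i, rep in enumerate(reps):
            if crude_equivalent(ans, rep):
                labels.append(i)
                break
        else:
            reps.append(ans)
            labels.append(len(reps) - 1)
    return labels

def semantic_entropy(answers):
    """Entropy (nats) of the distribution over semantic clusters."""
    counts = np.array(list(Counter(cluster_answers(answers)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical sampled answers to the same prompt.
confident = ["Paris is the capital of France."] * 4 + ["The capital of France is Paris."]
uncertain = ["Paris", "Lyon is the capital", "It is Marseille", "Paris, I think", "Toulouse"]

print("entropy (consistent answers):", round(semantic_entropy(confident), 3))
print("entropy (inconsistent answers):", round(semantic_entropy(uncertain), 3))
```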

7. Future Directions, Limitations, and Open Questions

Semantic information theory as applied to LLMs is an emerging field with several ongoing debates and research frontiers:

  • Architecture-independent Principles:

The shift to token-level, structure-agnostic measures permits comparison and analysis across diverse LLM variants, but the mapping from formal definitions to practical metrics is sometimes indirect and dataset-dependent.

  • Limitations of Granger Causality and Directed Information:

The current theory is closely tied to first-order (system-1) causal inference; true counterfactual reasoning and higher-order logic in LLMs remain open problems.

  • Tradeoff between Compression and Semantic Richness:

Evidence indicates that human semantic representations prioritize richness and flexibility over compactness, while present-day LLMs are biased towards efficient, but sometimes over-compressed, abstractions. Developing “useful inefficiency” in models is a prospective direction.

  • Quantum-inspired and Multimodal Extensions:

Quantum formalism offers an expanded mathematical palette for semantic representation (Hilbert spaces, gauge fields, interference), with potential for architectures encoding contextuality and meaning in fundamentally new ways (Laine, 13 Apr 2025).

  • Layerwise Understanding and Diagnostic Tools:

Information emergence and recoverability measures provide layer-by-layer diagnostics, supporting targeted interventions to mitigate forgetting or enhance reasoning chains (Ton et al., 18 Nov 2024, Tan et al., 1 Feb 2024).

Summary Table: Principal Constructs in Semantic Information Theory for LLMs

| Classical Concept | Token-Level Semantic Analog | Key Reference |
|---|---|---|
| Bit entropy / mutual information | Token-sequence entropy / directed information | (Bai, 3 Nov 2025) |
| Rate-distortion | Directed rate-distortion over token/channel pairs | (Bai, 3 Nov 2025) |
| Coding / compression | Semantic compression tradeoff: complexity vs. semantic fidelity | (Shani et al., 21 May 2025) |
| Emergence measure | Macro vs. micro mutual information (IE metric) | (Chen et al., 21 May 2024) |
| Causal inference | Granger causality applied to token-sequence modeling | (Bai, 3 Nov 2025) |
| Embedding optimality | Maximize backward directed information for vectorization | (Bai, 3 Nov 2025) |
| Semantic flow (inference) | Token-by-token information density (realization-based Markov sub-martingale) | (Bai, 3 Nov 2025) |

Semantic information theory for LLMs establishes mathematically rigorous, empirically grounded methods to analyze, optimize, and interpret how models process and convey meaning, offering a foundation and a toolkit for principled LLM development, evaluation, and theoretical advancement.
