
Path Log-Likelihood in Sequential Modeling

Updated 10 February 2026
  • Path log-likelihood is the log-probability of an entire trajectory under a probabilistic model, decomposing contributions from transitions and emissions.
  • It underpins efficient inference and principled search in diverse domains including diffusion LLMs, hidden Markov models, polar code decoders, and network tomography.
  • Recursive decomposition and dynamic programming make Path LL tractable to compute and optimize, improving decoding performance, error detection, and anomaly scoring in VAEs.

Path log-likelihood (Path LL) is a fundamental concept for quantifying and optimizing the joint probability of sequential or structured assignments in temporal, graphical, and generative modeling. The term formally denotes the log-probability of an entire path or trajectory under a model, accumulating the contributions of transition and emission (or generative) kernels at each step. Path LL and its variants underpin tractable inference, principled search, and efficient estimation across diffusion LLMs, Markovian sequence decoding, polar code hardware, network loss tomography, and out-of-distribution (OOD) detection with variational autoencoders.

1. Formal Definitions Across Domains

The central object of the path log-likelihood is the log-joint probability of an assignment trajectory under a generative or probabilistic model, typically decomposed as a (conditional) sum over temporal or sequential indices:

  • Diffusion LLMs: Given an unmasking order $\tau = (Q_T,\ldots,Q_1)$, the Path LL of a final sequence $x$ is

$$\log p_D(x;\theta,\tau) = \sum_{t=1}^T \log p_D(x_{Q_t} \mid x_{O_t};\theta)$$

where $x_{O_t}$ is the set of already observed positions at step $t$ (Liu et al., 3 Feb 2026).

  • Hidden Markov Models (HMMs) / Convolutional Code Decoders: For a state path $u(D) = \{u[0],\ldots,u[N]\}$ and observations $r(D)$,

$$\text{Path LL}(u(D)) = \log P(u(D), r(D)) = \sum_{d=0}^{N} \left[\log f_o(r[d]\mid y[d]) + \log P_t(u[d] \mid u[d-1])\right]$$

where $y[d]$ are the encoder outputs and $f_o$ is the observation density (0711.3077).

  • Polar Code List Decoding: For a partial code path $u_1^i = (z_1,\ldots,z_i)$,

$$LL_L^{(i)}(z_1^i) = \sum_{j=1}^i \ln \Pr(u_j = z_j \mid y_1^n,\, u_1^{j-1} = z_1^{j-1})$$

often computed efficiently with log-likelihood ratios (LLRs) (Yuan et al., 2014).

  • Multicast Network Tomography: For a tree path to internal node $k$ (with children $d_k$),

$$\text{Path LL}(A_k) = n \left[ \ln(1 - \gamma_k/A_k) - \sum_{j \in d_k} \ln(1 - \gamma_j/A_k) \right]$$

where $A_k$ is the pass probability and the $\gamma_i$ are empirical pass rates (Zhu, 2010).

  • Variational Autoencoders (VAEs) (Likelihood Path Principle): For an input $x$ and a sampled latent $z$,

$$\text{PathLL}(x) = \left( \log q_\phi(z\mid x),\ \log p_\theta(x\mid z) \right)$$

or, equivalently, their minimal sufficient statistics in the exponential-family setting (Huang et al., 2024).
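To make the HMM-style decomposition above concrete, here is a minimal sketch in Python that accumulates transition and emission log-probabilities along a fixed state trajectory. The two-state model and all probabilities are illustrative placeholders, not taken from any cited paper.

```python
import math

# Minimal sketch: path log-likelihood of a fixed state trajectory under a toy
# two-state HMM, accumulating transition and emission log-probabilities as in
# the decomposition above. All probabilities are illustrative placeholders.
log_pi = [math.log(0.5), math.log(0.5)]            # log P(u[0])
log_Pt = [[math.log(0.7), math.log(0.3)],
          [math.log(0.4), math.log(0.6)]]          # log P_t(u[d] | u[d-1])
log_fo = [[math.log(0.9), math.log(0.1)],
          [math.log(0.2), math.log(0.8)]]          # log f_o(r[d] | u[d])

def path_ll(states, obs):
    """Sum transition + emission log-probs along the whole trajectory."""
    ll = log_pi[states[0]] + log_fo[states[0]][obs[0]]
    for d in range(1, len(states)):
        ll += log_Pt[states[d - 1]][states[d]] + log_fo[states[d]][obs[d]]
    return ll

print(path_ll([0, 0, 1], [0, 0, 1]))  # log P(path, observations)
```

Because the log turns the product of step probabilities into a sum, the whole-trajectory score is a single pass over the path.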

2. Incremental Decomposition and Dynamic Programming

Path LL admits recursive or incremental decomposition, which is exploited for both efficient inference and principled search:

  • In diffusion LLMs, the sum over blockwise log-probs along unmasking sequences supports fine-grained control over generative ordering, allowing for the design of lookahead policies such as POKE and search methods like POKE-SMC (Liu et al., 3 Feb 2026).
  • In Viterbi or list decoders for HMMs and polar codes, path log-likelihood propagates via dynamic programming along graph or tree structures, with the path metric accumulation aligned with max-log or min-sum rules on branch metrics (0711.3077, Yuan et al., 2014).
  • In network loss tomography, Path LL enables efficient estimation via polynomial moment equations reflecting empirical multi-terminal probe outcomes (Zhu, 2010).

3. Path LL as an Optimization and Ranking Objective

Path LL notably serves as a globally consistent, trajectory- and context-sensitive inference objective:

  • Diffusion LLMs: Path LL strongly correlates with downstream accuracy, outperforming local uncertainty metrics and entropy proxies. High Path LL aligns with chains of thought and output consistency, making it a superior criterion for unmasking path selection (Liu et al., 3 Feb 2026).
  • OOD Detection with VAEs: The likelihood path principle advocates using PathLL statistics (e.g., encoder/decoder mean and variance vectors) rather than the marginal $\log p_\theta(x)$ for anomaly scoring, with provable non-asymptotic separation bounds between IID and OOD distributions (Huang et al., 2024).
  • List Decoding: In LLR-based SCL algorithms, maintaining the log-metric along each path allows prioritization of most probable codeword candidates for optimal or near-optimal error performance (Yuan et al., 2014).
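As a sketch of Path LL as a ranking objective, the following scores candidate sequences by the sum of their per-step conditional log-probabilities and sorts them, Best-of-$N$ style. The candidate names and values are hypothetical; in practice the per-step log-probs would come from a model.

```python
import math

# Sketch: Path LL as a ranking objective. Candidates are scored by the sum
# of their per-step conditional log-probs and sorted, as in Best-of-N
# reranking or list decoding. All values are hypothetical placeholders.
def rank_by_path_ll(candidates):
    """candidates: dict mapping candidate id -> list of per-step log-probs."""
    scores = {c: sum(lps) for c, lps in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)

candidates = {
    "A": [math.log(0.9), math.log(0.8)],   # joint probability 0.72
    "B": [math.log(0.6), math.log(0.95)],  # joint probability 0.57
    "C": [math.log(0.99), math.log(0.5)],  # joint probability 0.495
}
print(rank_by_path_ll(candidates))  # most probable path first
```

Note how candidate "C" has the single most confident step yet the lowest joint probability: ranking by the full path score captures exactly what local per-step confidence misses.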

4. Computational and Structural Properties

Path LL’s recursive structure enables efficient algorithmic and hardware implementation, but the specifics are model-dependent:

| Application Domain | Path LL Computation | Efficiency Implications |
| --- | --- | --- |
| Diffusion LLMs | POKE and SMC with blockwise lookahead | $\sim 4\times$ inference time, large accuracy gains (Liu et al., 3 Feb 2026) |
| Polar Codes (SCL) | LLR-based path metric updates | 50% hardware, 98% throughput/gate reduction (Yuan et al., 2014) |
| Convolutional Codes (SLL/NLL Tests) | Partial path bounds / local window tests | NLL tests freeze symbols in an $O(1)$ window; complexity per time $< 1+\varepsilon$ at high SNR (0711.3077) |
| Network Tomography | Closed-form polynomial or explicit quadratic MLE | Non-iterative, efficient for small- to medium-degree paths (Zhu, 2010) |

In several domains, direct use of future-sum bounds (SLL, best-case costs) in high-dimensional or long-sequence settings becomes ineffective as problem size increases, motivating localized alternatives (e.g., NLL tests freezing symbols via local evidence only in convolutional/HMM decoding) (0711.3077).
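A schematic version of such a localized test might freeze a decision once the path-metric margin over a short window exceeds a threshold. The rule below illustrates the idea only; it is not the exact NLL criterion of (0711.3077), and the window metrics and threshold are placeholders.

```python
# Schematic local freeze test: a symbol decision is frozen once the
# path-metric margin between the best and runner-up hypotheses, computed
# over a short local window, exceeds a threshold. The window contents and
# threshold rule are placeholders, not the cited paper's exact NLL test.
def freeze_decision(window_metrics, threshold):
    """window_metrics: path LL of each competing hypothesis over the window."""
    ranked = sorted(window_metrics, reverse=True)
    margin = ranked[0] - ranked[1]   # strength of the local evidence
    return margin > threshold

print(freeze_decision([-1.2, -5.0, -7.3], threshold=2.0))
```

Because the test consults only a fixed-size window, its cost per symbol stays constant as the sequence grows, in contrast to future-sum bounds over the whole remaining path.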

5. Estimation and Inference Strategies

Path LL frameworks yield distinct estimation procedures in each context:

  • Diffusion LLMs: POKE provides a blockwise optimistic lookahead estimator, upper-bounding total correlation via entropy and thereby supporting admissible search (Liu et al., 3 Feb 2026).
  • Network Tomography: Merging child subtrees into two groups enables closed-form quadratic solutions for end-to-end path pass rates that, unlike previous LLN-based methods, subsume all relevant correlation statistics (Zhu, 2010).
  • VAEs/OOD Detection: LPath distills minimal sufficient statistics along the encoder–decoder route, with efficient classical anomaly detectors fitted to these low-dimensional summaries (Huang et al., 2024).
  • Polar Code Decoding: Efficient log-domain recursions and max-log approximations accelerate metric evaluation and enable very large blocklength implementations in hardware (Yuan et al., 2014).
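For the polar-code case, the LLR-domain path-metric update commonly used in SC list decoding can be sketched as follows. The metric is a negative-log-likelihood-style penalty (lower is better); this is the generic textbook form, not necessarily the exact formulation of (Yuan et al., 2014).

```python
import math

# Sketch of the LLR-domain path-metric update used in SC list decoding.
# The metric is a penalty (negative-log-likelihood style, lower = better):
# the exact update adds ln(1 + exp(-(1-2u)L)); the max-log approximation
# adds |L| only when the hard decision u contradicts the LLR sign.
# Generic textbook form, not necessarily the cited paper's exact rule.
def pm_update_exact(pm, llr, bit):
    return pm + math.log1p(math.exp(-(1 - 2 * bit) * llr))

def pm_update_maxlog(pm, llr, bit):
    # Penalize only decisions that disagree with the channel evidence.
    return pm + (abs(llr) if (1 - 2 * bit) * llr < 0 else 0.0)
```

The max-log variant replaces the `log1p`/`exp` evaluation with a comparison and an addition, which is what makes the metric cheap to realize in hardware.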

6. Impact, Generalizations, and Empirical Insights

Broad empirical and theoretical evidence demonstrates the consistent superiority or efficiency of Path LL-centric methodologies:

  • Diffusion LLMs: Dynamic path-level search yields a 2–3% average accuracy improvement over strong baselines, with higher gains on arithmetic and strict reasoning datasets. Post-hoc Path LL reranking (Best-of-$N$) achieves only half the gain compared to dynamic lookahead (Liu et al., 3 Feb 2026).
  • VAEs/OOD Detection: LPath achieves state-of-the-art AUROC in challenging OOD settings, outperforming ELBO, DoSE, and large-flow models, leveraging the statistical minimality of Path LL features (Huang et al., 2024).
  • Convolutional Codes/HMMs: NLL tests freeze symbols locally, keeping decoder complexity essentially constant as sequence length grows, even with finite blocks and under high SNR (0711.3077).
  • Network Tomography: Closed-form explicit MLE via path LL achieves exact, efficient, and statistically superior performance at moderate sample sizes, subsuming previous LLN-only estimators (Zhu, 2010).

A recurring theme is that path log-likelihood encapsulates structured, context-aware statistical dependencies that local or post-hoc metrics miss. Optimization and representation strategies exploiting the full path LL, augmented by decomposition, lookahead, or closed-form techniques, consistently yield improvements in both theory and practical accuracy, latency, and efficiency.
