
Logarithmic Sobolev Inequality (LSI)

Updated 16 October 2025
  • LSI is a functional inequality that relates entropy-like integrals to gradient norms and plays a pivotal role in analysis, probability, and geometry.
  • It underpins key concepts such as hypercontractivity, concentration of measure, and rapid mixing in Markov processes with dimension-robust constants.
  • Recent advances extend LSI to manifold, quantum, and high-dimensional logconcave settings, emphasizing stability, sharpness, and optimal sampling techniques.

The logarithmic Sobolev inequality (LSI) is a fundamental functional inequality relating entropy-like integrals to gradient norms, playing a central role in analysis, probability, geometry, and statistical mechanics. The classical version, introduced by Gross, is remarkable for its dimension-free constants in Gaussian space, its connections to concentration of measure, hypercontractivity, mixing properties of Markov processes, and its deep relations to optimal transport and information-theoretic inequalities. Modern developments include sharp geometric inequalities, stability theories, high-dimensional probability, analysis on manifolds and submanifolds, quantum information, and applications in numerical methods for Markov processes and sampling.

1. Classical Formulation and Contexts

The classical LSI for a probability measure μ (notably the standard Gaussian measure dγ) states that for all smooth functions f with ∫ f² dμ = 1,

\int f^2 \log f^2\, d\mu \leq 2C \int |\nabla f|^2\, d\mu

for some constant C that is independent of the ambient dimension in the Gaussian case. For μ = γ (standard Gaussian), the optimal constant is C = 1, and equality is achieved exactly when log f is affine, i.e., for f of the form f(x) = c e^{⟨a,x⟩} (the family of optimizers). The entropy term on the left (relative to μ) is \operatorname{Ent}_\mu(f^2) := \int f^2 \log(f^2)\, d\mu - \left(\int f^2\, d\mu \right) \log \left(\int f^2\, d\mu \right), and the right-hand side involves the Dirichlet form (or Fisher information).
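As a concrete illustration, the following minimal sketch (assuming NumPy/SciPy; not drawn from any cited work) evaluates both sides of the one-dimensional Gaussian LSI by quadrature. For f = exp(ax/2), where log f is affine, the two sides coincide; a generic positive f yields a strict inequality.

```python
import numpy as np
from scipy.integrate import quad

def gaussian_pdf(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def lsi_sides(f, fprime):
    """Return (Ent_gamma(f^2), 2 * integral of |f'|^2 dgamma) for the standard Gaussian gamma."""
    mass = quad(lambda x: f(x)**2 * gaussian_pdf(x), -20, 20)[0]
    ent = quad(lambda x: f(x)**2 * np.log(f(x)**2) * gaussian_pdf(x), -20, 20)[0] - mass * np.log(mass)
    dirichlet = 2 * quad(lambda x: fprime(x)**2 * gaussian_pdf(x), -20, 20)[0]
    return ent, dirichlet

# Optimizer: f = exp(a x / 2), i.e. log f affine -- both sides agree.
a = 0.7
print(lsi_sides(lambda x: np.exp(a * x / 2), lambda x: (a / 2) * np.exp(a * x / 2)))

# Generic positive f -- strict inequality.
print(lsi_sides(lambda x: 1 + 0.3 * np.sin(x), lambda x: 0.3 * np.cos(x)))
```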

Gross’ foundational result and its extensions underlie much of modern analysis in both the Euclidean and manifold settings. The LSI is stronger than the Poincaré inequality, and its tensorization property (the constant does not degrade under products) makes it particularly powerful in high-dimensional probability.

2. LSI in the Gaussian and Weighted Sobolev Setting

Weighted Sobolev spaces with respect to the Gaussian measure inform the structure and optimal constants in the LSI. For a domain Ω ⊆ ℝⁿ equipped with the Gaussian measure γ, embedding theorems into weighted Zygmund spaces hold for all u ∈ W^{1,p}(Ω, γ), such as

uLp(logL)1/2(Ω,γ)CuW1,p(Ω,γ),\|u\|_{L^{p}(\log L)^{1/2}(\Omega, \gamma)} \leq C \|u\|_{W^{1,p}(\Omega,\gamma)},

where L^p(log L)^α are Zygmund spaces adapted to the geometry of γ (Feo et al., 2011). The crucial property is dimension-robustness, due to the uniform convexity of the exponential weight.

Moreover, the trace version of the LSI for regular domains establishes that

\int_{\partial \Omega} |u|^p \log^{2p}(2+|u|)\, dS \leq C\, \|u\|_{W^{1,p}(\Omega,\gamma)}^{p},

and this correction is sharp in the logarithmic exponent. These embedding and trace inequalities are essential in PDE analysis, particularly for degenerate or weighted equations.

3. Structural Stability, Deficit Estimates, and Instability

Stability of the LSI—quantifying proximity to optimizers when the deficit is small—is a major current topic.

Stability Theorems

For normalized densities f under a second-moment bound, quantitative stability in norms can be established: \|f - 1\|_{W^{1,1}(\mathbb{R},d\gamma)} \leq a_{\alpha} \left(\delta(f)^{1/4} + \delta(f)^{3/4}\right), where δ(f) = (1/2)I(f) − H(f) is the LSI deficit, with I(f) the Fisher information and H(f) the relative entropy (Indrei, 2 Jun 2024, Indrei et al., 2018, Brigati et al., 11 Apr 2025). In one-dimensional and product settings, such bounds control the W_1 Wasserstein distance and are equivalent to certain uncertainty principles.
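With these conventions (H the relative entropy of f dγ with respect to γ, I the corresponding Fisher information), the deficit δ(f) is directly computable. A minimal quadrature sketch, not taken from the cited papers: the translate f(x) = e^{ax − a²/2} is an optimizer with vanishing deficit, while a non-Gaussian perturbation has a strictly positive deficit.

```python
import numpy as np
from scipy.integrate import quad

def gaussian_pdf(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def lsi_deficit(logf, dlogf):
    """delta(f) = (1/2) I(f) - H(f) for a density f (relative to gamma) given through log f."""
    f = lambda x: np.exp(logf(x))
    Z = quad(lambda x: f(x) * gaussian_pdf(x), -20, 20)[0]                    # normalization
    H = quad(lambda x: f(x) * logf(x) * gaussian_pdf(x), -20, 20)[0] / Z - np.log(Z)
    I = quad(lambda x: dlogf(x)**2 * f(x) * gaussian_pdf(x), -20, 20)[0] / Z  # Fisher information
    return 0.5 * I - H

# Translated Gaussian f dgamma = N(a, 1): an LSI optimizer, so the deficit vanishes.
a = 0.8
print(lsi_deficit(lambda x: a * x - a**2 / 2, lambda x: a + 0 * x))

# A non-Gaussian perturbation: strictly positive deficit.
print(lsi_deficit(lambda x: 0.2 * np.cos(x), lambda x: -0.2 * np.sin(x)))
```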

Instability and Sharpness

However, sharp examples show that, in the absence of tight moment controls, the LSI can be unstable in strong norms: sequences with vanishing deficit may remain far from every optimizer in H¹ or W_1; thus, moment or exponential-decay assumptions are necessary for stability (Brigati et al., 11 Apr 2025). In spaces lacking such control, the deficit does not provide quantitative closeness to optimizers.

4. LSI on Manifolds, Submanifolds, and under Geometric Evolutions

Submanifolds and Extrinsic Geometry

LSIs extend to submanifolds and evolving manifolds, with sharp constants and the incorporation of mean curvature terms: \int_E f \log f \, d\mathrm{vol} - \left(\int_E f \, d\mathrm{vol}\right) \log \left(\int_E f \, d\mathrm{vol}\right) \leq \int_E f|\nabla^\Sigma\log f|^2\, d\mathrm{vol} + \int_E f|H|^2\, d\mathrm{vol}, where E ⊂ ℝ^{n+m} is a compact submanifold, ∇^Σ is the intrinsic gradient, and H is the mean curvature vector (1908.10360, Yi et al., 2021). In the presence of curvature, such inequalities precisely quantify the role of extrinsic geometry in controlling entropy.

Evolving Metrics and Flows

On evolving (possibly noncompact) manifolds with nonnegative sectional curvature, analogs of the LSI are established, incorporating curvature terms and yielding long-time non-collapsing or non-inflation results, which are fundamental in geometric flows like Ricci, Kähler-Ricci, or mean curvature flow (Fang et al., 2015).

5. LSI for Spin Systems, Product Spaces, and Markov Chains

Conservative Spin Systems

For canonical ensembles of noninteracting or weakly interacting spin systems, especially with super-quadratic single-site potentials ψ(x) = ψ_c(x) + δψ(x), where ψ''_c ≥ c > 0 and δψ is bounded, a uniform LSI holds with a constant independent of the system size N (Menz et al., 2013, Kwon et al., 2018). The proof adapts a two-scale coarse-graining, a hierarchical block decomposition, and an asymmetric Brascamp–Lieb inequality (handling super-quadratic growth without restrictive upper bounds on ψ'').

Finite Spin Systems and Markov Chains

For finite product measures, sufficient conditions for modified LSI (mLSI) are established via uniform lower bounds on conditional probabilities and bounded interaction strengths (interdependence matrix with spectral norm < 1), ensuring rapid mixing for the associated Glauber dynamics and concentration of measure for higher-order statistics (Sambale et al., 2018). For Markov chains, sum-of-squares (SOS) semidefinite programming hierarchies provide certified lower bounds on the LSI constant, converging to the true value, and these methods rigorously establish new concentration and mixing time properties (Faust et al., 2021).
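The SOS certification of (Faust et al., 2021) is beyond a short example, but for a small reversible chain the log-Sobolev constant α = inf_f E(f,f)/Ent_π(f²) can be estimated by direct numerical minimization. The sketch below uses a hypothetical lazy birth–death chain on four states and a multistart Nelder–Mead search; it produces an uncertified estimate, not the certified lower bounds of the SOS hierarchy.

```python
import numpy as np
from scipy.optimize import minimize

# Lazy birth-death chain on four states (hypothetical example); reversible w.r.t. pi.
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()                      # stationary distribution (1/6, 1/3, 1/3, 1/6)

def dirichlet(f):
    # E(f, f) = (1/2) * sum_{x,y} pi(x) P(x, y) (f(x) - f(y))^2
    diff = f[:, None] - f[None, :]
    return 0.5 * np.sum(pi[:, None] * P * diff**2)

def entropy(f):
    # Ent_pi(f^2) = sum_x pi(x) f(x)^2 log(f(x)^2 / ||f||_{L^2(pi)}^2)
    f2 = np.maximum(f**2, 1e-300)
    return np.sum(pi * f2 * np.log(f2 / np.sum(pi * f2)))

def ratio(f):
    ent = entropy(f)
    return dirichlet(f) / ent if ent > 1e-12 else np.inf

# Multistart minimization of the Rayleigh-type ratio; an estimate, not a certificate.
rng = np.random.default_rng(0)
best = min(
    minimize(ratio, rng.standard_normal(4) + 2.0, method="Nelder-Mead").fun
    for _ in range(50)
)
print("estimated log-Sobolev constant:", best)
```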

6. LSI in High-Dimensional, Logconcave, and Quantum Settings

High-Dimensional Logconcave Measures

Groundbreaking progress using stochastic localization with Stieltjes-type barrier functions achieved the tight, dimension-robust bound

\rho_{p} = \Omega(1/D),

for the log-Sobolev constant of isotropic logconcave densities of diameter D, improving upon previous Ω(1/D²) bounds (Lee et al., 2017). This result optimizes mixing times for MCMC (ball walk) from any starting point and yields new concentration bounds for Lipschitz observables.
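As an illustration of the sampler these bounds concern, here is a minimal ball-walk sketch for the uniform distribution on a convex body specified by a membership oracle; the function names, step size, and burn-in are illustrative choices rather than prescriptions from (Lee et al., 2017).

```python
import numpy as np

def ball_walk(in_body, x0, step, n_steps, rng=None):
    """Ball walk targeting the uniform distribution on a convex body.

    in_body: membership oracle for the body; x0: starting point inside it;
    step: radius of the proposal ball (theory suggests step ~ 1/sqrt(d)).
    """
    rng = np.random.default_rng() if rng is None else rng
    d, x = len(x0), np.array(x0, dtype=float)
    samples = np.empty((n_steps, d))
    for t in range(n_steps):
        # Uniform proposal in the ball of radius `step` around x.
        direction = rng.standard_normal(d)
        direction /= np.linalg.norm(direction)
        radius = step * rng.random() ** (1.0 / d)
        y = x + radius * direction
        if in_body(y):          # stay put if the proposal leaves the body
            x = y
        samples[t] = x
    return samples

# Example: uniform samples from the unit ball in R^10.
d = 10
samples = ball_walk(lambda y: np.linalg.norm(y) <= 1.0, np.zeros(d), 1.0 / np.sqrt(d), 20000)
print(samples[5000:].mean(axis=0))   # close to the origin after burn-in
```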

Quantum Markov Semigroups

In quantum settings (non-primitive QMS), LSI and hypercontractivity are characterized via amalgamated Lᵖ norms, capturing convergence not to a point but to a decoherence-free algebra. Weak LSI constants govern decoherence rates, and “tensorized” completely bounded versions retain explicit structural constants (Bardet et al., 2018).

7. Dimension-Reduction, Applications, and Information-Theoretic Interconnections

Dimension-Aware LSI and Model Reduction

Dimensional refinements of the LSI explicitly incorporate the ambient and reduced dimensions, yielding sharper majorants for KL approximation errors and guiding the detection of low-dimensional structure in high-dimensional measures (Li et al., 18 Jun 2024). For a probability measure π = f dμ on ℝ^d, with μ the standard Gaussian reference measure, the dimensional Gaussian LSI can be written as

\operatorname{Ent}_\mu(f) \le \frac{1}{2} \int \|x\|^2 f\,d\mu - \frac{d}{2} + \frac{1}{2} \log\det \int \left(\nabla\log f(x) - x\right)^{\otimes 2} f(x)\, d\mu,

yielding tight upper and lower bounds for KL errors and reflecting the active subspace structure through the Fisher information.
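To make the bound concrete, the following Monte Carlo sketch (illustrative, with a hypothetical test covariance) evaluates the right-hand side for a centered Gaussian target π = N(0, Σ) relative to μ = γ; there the bound is tight and the matrix under the log-determinant reduces to the Fisher information Σ⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
Sigma = np.diag([4.0, 2.0, 1.0, 1.0, 1.0])     # hypothetical test covariance
Sigma_inv = np.linalg.inv(Sigma)

# Exact KL(pi || gamma) between the Gaussians N(0, Sigma) and N(0, I).
kl_exact = 0.5 * (np.trace(Sigma) - d - np.log(np.linalg.det(Sigma)))

# Monte Carlo evaluation of the right-hand side of the dimensional Gaussian LSI.
x = rng.multivariate_normal(np.zeros(d), Sigma, size=200_000)
score = -x @ Sigma_inv                          # = grad log f(x) - x, the score of pi
second_moment = np.mean(np.sum(x**2, axis=1))
fisher = score.T @ score / len(x)               # Fisher information matrix of pi
rhs = 0.5 * second_moment - d / 2 + 0.5 * np.log(np.linalg.det(fisher))

print(f"KL(pi || gamma) = {kl_exact:.4f}, dimensional-LSI bound = {rhs:.4f}")
# The two agree for a Gaussian target (up to Monte Carlo error); in general KL <= bound.
```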

Convolution, Hypercontractivity, and Central Limit Theorems

Dimensional and convolutional properties of LSI relate entropy, Fisher information, and mixing via dimension-free inequalities interpolating the classical LSI, Fisher information inequalities (FII), and entropy power inequalities (EPI) (Courtade, 2016). The deficit in LSI satisfies a convolution inequality, controls speed of convergence in central limit theorems for entropy and Fisher information, and ties to Nelson’s hypercontractivity theorem, which itself is equivalent to LSI in Gaussian space.
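A small numerical illustration of this entropic central limit behaviour (not drawn from the cited works): the relative entropy of the standardized sum of n i.i.d. uniform variables to the standard Gaussian, computed by convolving densities on a grid, decreases in n.

```python
import numpy as np

# Symmetric grid with 0 as a grid point; density of one standardized uniform (variance 1).
x = np.linspace(-12.0, 12.0, 2401)
dx = x[1] - x[0]
p1 = np.where(np.abs(x) <= np.sqrt(3), 1.0 / (2.0 * np.sqrt(3)), 0.0)

def kl_to_standard_normal(density):
    """KL(density || N(0,1)) by Riemann sum, skipping points where the density vanishes."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    mask = density > 0
    return np.sum(density[mask] * np.log(density[mask] / phi[mask])) * dx

p_sum = p1.copy()
for n in range(1, 6):
    if n > 1:
        p_sum = np.convolve(p_sum, p1, mode="same") * dx    # density of the n-fold sum
    p_n = np.interp(x * np.sqrt(n), x, p_sum) * np.sqrt(n)  # density of the sum rescaled by 1/sqrt(n)
    print(f"n = {n}: relative entropy to the Gaussian = {kl_to_standard_normal(p_n):.5f}")
```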

Algorithmic Impact and EM Analysis

Extended LSIs for spaces combining parameters and distributions (e.g., ℝⁿ × Wasserstein spaces) provide key descent inequalities for the EM algorithm and its variants. Under an “xLSI” (an extended log-Sobolev inequality in product space), one obtains finite-sample, exponential convergence guarantees for the EM algorithm via entropy gap and Fisher information control, unified under optimal transport and Wasserstein gradient flow frameworks (Caprio et al., 25 Jul 2024).
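The xLSI machinery itself does not fit in a short example, but the object it analyzes is the plain EM iteration. A minimal sketch for a two-component Gaussian mixture with unit component variances (synthetic data, illustrative parameter names) exhibits the monotone log-likelihood ascent that the descent inequalities quantify.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a two-component Gaussian mixture with unit component variances.
true_means, true_weights = np.array([-2.0, 3.0]), np.array([0.4, 0.6])
z = rng.choice(2, size=2000, p=true_weights)
data = rng.normal(true_means[z], 1.0)

def log_likelihood(x, means, w):
    comp = np.exp(-0.5 * (x[:, None] - means[None, :])**2) / np.sqrt(2 * np.pi)
    return np.sum(np.log(comp @ w))

means, w = np.array([-1.0, 1.0]), np.array([0.5, 0.5])   # crude initialization
for it in range(15):
    # E-step: posterior responsibility of each component for each data point.
    comp = np.exp(-0.5 * (data[:, None] - means[None, :])**2) / np.sqrt(2 * np.pi)
    resp = comp * w
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate the mixture weights and component means.
    w = resp.mean(axis=0)
    means = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)
    print(f"iter {it:2d}  log-likelihood = {log_likelihood(data, means, w):.2f}")
```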


Overall, the logarithmic Sobolev inequality synthesizes advanced concepts in analysis, geometry, probability, and information theory. Recent advances have clarified its sharpness, stability, and instability, extended its reach to geometric and quantum contexts, and powered the development of both new theoretical tools (e.g., block decomposition, stochastic localization, sum-of-squares certificates) and practical applications (optimal sampling, fast mixing, robust inference in high-dimensional models).
