Shannon's Entropy Power Inequality

Updated 18 September 2025
  • Shannon’s Entropy Power Inequality is a fundamental result in information theory that establishes a lower bound on the differential entropy of the sum of independent random variables.
  • It gives precise conditions for equality, namely that the variables are Gaussian with identical variances, and its analysis employs de Bruijn’s identity even under weak moment assumptions.
  • Generalizations to discrete and quantum settings extend its applications to coding, signal processing, robust statistics, and modern communications.

Shannon's Entropy Power Inequality (EPI) is a foundational result in information theory, establishing a lower bound on the differential entropy of the sum of independent random variables. Its implications extend across probability, statistics, communication theory, signal processing, and beyond. The EPI has also served as a template for numerous generalizations, including quantum, discrete, and Rényi entropy settings. The following entry provides a rigorous overview of the EPI, the conditions for equality, the analytic framework including de Bruijn’s identity, technical subtleties such as the finiteness of entropy for sums, and both qualitative and quantitative stability results in continuous and discrete domains.

1. Fundamental Inequality and Conditions for Equality

Let $X$ and $Y$ be independent real-valued random variables with finite differential entropies $h(X)$ and $h(Y)$. In one dimension, the Shannon EPI typically reads

$$\exp\big(2h(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y)\big) \;\geq\; \lambda \exp(2h(X)) + (1-\lambda)\exp(2h(Y))$$

for any $\lambda \in (0,1)$. The entropy power of a random variable $X$ is $N(X) := \exp(2h(X))$, so the inequality is linear in the entropy powers.

The sharp equality condition is that $X$ and $Y$ must be Gaussian random variables with identical variances:

$$h(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y) = \lambda h(X) + (1-\lambda) h(Y) \quad \implies \quad X,\,Y \ \text{are Gaussian with}\ \mathrm{Var}(X) = \mathrm{Var}(Y).$$

No further conditions are necessary beyond the existence (finiteness) of the differential entropies; this is justified at the minimal regularity level in (Gavalakis et al., 17 Sep 2025).
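For concreteness, the following minimal Python sketch (standard library only) evaluates the deficit in the entropy form of the inequality for independent Gaussian inputs, where everything is available in closed form; it vanishes exactly when the variances coincide and is strictly positive otherwise. The helper names `gaussian_entropy` and `epi_deficit` and the default choice $\lambda = 1/2$ are illustrative, not taken from the cited work.

```python
import math

def gaussian_entropy(var):
    """Differential entropy (nats) of a one-dimensional Gaussian with variance var."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def epi_deficit(var_x, var_y, lam=0.5):
    """h(sqrt(lam) X + sqrt(1-lam) Y) - [lam h(X) + (1-lam) h(Y)] for independent
    Gaussians X ~ N(0, var_x), Y ~ N(0, var_y); the combination is Gaussian with
    variance lam*var_x + (1-lam)*var_y."""
    lhs = gaussian_entropy(lam * var_x + (1 - lam) * var_y)
    rhs = lam * gaussian_entropy(var_x) + (1 - lam) * gaussian_entropy(var_y)
    return lhs - rhs

print(epi_deficit(1.0, 1.0))  # 0.0      -> equality at identical variances
print(epi_deficit(1.0, 4.0))  # ~0.1116  -> strictly positive otherwise
```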

2. De Bruijn’s Identity without Second Moment Assumptions

Classical proofs of the EPI rely on de Bruijn’s identity, which relates the evolution of entropy under Gaussian noise to Fisher information:

$$\frac{d}{dt}\, h(X + \sqrt{t}\,Z) = \frac{1}{2}\, I(X + \sqrt{t}\,Z),$$

where $Z$ is an independent standard normal and $I(\cdot)$ denotes the Fisher information. The classical derivation assumes finite second moments to justify differentiation under the integral in the convolution representation.

However, (Gavalakis et al., 17 Sep 2025) provides a justification under strictly weaker conditions. Specifically, if $h(X + \sqrt{t_0}\,Z)$ is finite for some $t_0 > 0$, or equivalently, if there exists an independent $Y$ with finite $h(Y)$ such that $h(X+Y) < \infty$, then the identity above remains valid. The proof utilizes truncation methods and dominated convergence, leveraging uniform integrability of $|\partial_t g_t \log g_t|$, where $g_t$ is the density of $X + \sqrt{t}\,Z$. Thus, de Bruijn’s identity is established absent a finite second moment assumption, providing broader applicability for EPI proofs.
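The identity itself can be sanity-checked numerically. The sketch below (assuming NumPy and SciPy are available; all function names are ad hoc) compares a finite-difference derivative of $t \mapsto h(X + \sqrt{t}\,Z)$ with $\tfrac12 I(X + \sqrt{t}\,Z)$ for a symmetric two-point $X$, which has no density but a finite second moment, so this only exercises the classical hypotheses, not the weakened ones of the cited work.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def density(x, t):
    """Density of X + sqrt(t) Z with P(X = 1) = P(X = -1) = 1/2 and Z ~ N(0, 1)."""
    s = np.sqrt(t)
    return 0.5 * (norm.pdf(x, loc=1.0, scale=s) + norm.pdf(x, loc=-1.0, scale=s))

def density_prime(x, t):
    """Spatial derivative of the Gaussian-mixture density above (closed form)."""
    s = np.sqrt(t)
    return 0.5 * (-(x - 1.0) / t * norm.pdf(x, loc=1.0, scale=s)
                  - (x + 1.0) / t * norm.pdf(x, loc=-1.0, scale=s))

def entropy(t, lim=12.0):
    integrand = lambda x: (-density(x, t) * np.log(density(x, t))
                           if density(x, t) > 0 else 0.0)
    return quad(integrand, -lim, lim, limit=200)[0]

def fisher_information(t, lim=12.0):
    integrand = lambda x: (density_prime(x, t) ** 2 / density(x, t)
                           if density(x, t) > 0 else 0.0)
    return quad(integrand, -lim, lim, limit=200)[0]

t, eps = 1.0, 1e-4
lhs = (entropy(t + eps) - entropy(t - eps)) / (2 * eps)  # finite-difference d/dt h
rhs = 0.5 * fisher_information(t)
print(lhs, rhs)  # the two values agree to several decimal places
```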

3. Pathological Behavior: Sums of Random Variables with Infinite Entropy

The work of Bobkov and Chistyakov demonstrates a critical technical caveat: there exist random variables $X$ with finite differential entropy for which $h(X+Y) = \infty$ for every independent $Y$ with finite entropy. This pathology arises from the possible nonlocal increase of entropy under convolution and highlights the need for care, particularly in degenerate or heavy-tailed settings (Gavalakis et al., 17 Sep 2025). The EPI and its proof mechanisms demand that at least one of the random variables avoids this class, i.e., that $h(X+Y)$ is finite for some suitable $Y$.

4. Continuity and Stability of the EPI

a) Continuity of Entropy under Gaussian Perturbation

Given a random variable $X$ with $h(X) < \infty$ and an independent standard normal $Z$, the function $t \mapsto h(X + \sqrt{t}\,Z)$ is right-continuous at $t = 0$. The rigorous proof in (Gavalakis et al., 17 Sep 2025) relies on majorization via truncated approximations and dominated convergence. Thus,

$$\lim_{t \to 0^+} h(X + \sqrt{t}\,Z) = h(X)$$

whenever $h(X+Y)$ is finite for some independent $Y$.
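A quick numerical illustration of this right-continuity, under the same ad hoc conventions as the previous sketch: for $X \sim \mathrm{Uniform}[0,1]$ (so $h(X) = 0$), the smoothed entropies computed below decrease toward $0$ as $t \downarrow 0$. The integration range and break points are chosen only for numerical convenience.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def smoothed_density(x, t):
    """Density of Uniform[0,1] + sqrt(t) Z at x, via a difference of normal CDFs."""
    s = np.sqrt(t)
    return norm.cdf(x / s) - norm.cdf((x - 1.0) / s)

def smoothed_entropy(t):
    s = np.sqrt(t)
    integrand = lambda x: (-smoothed_density(x, t) * np.log(smoothed_density(x, t))
                           if smoothed_density(x, t) > 0 else 0.0)
    # Integrate over a range carrying essentially all of the mass; break points at
    # 0 and 1 help the adaptive rule resolve the narrow boundary layers for small t.
    return quad(integrand, -10.0 * s, 1.0 + 10.0 * s, points=[0.0, 1.0], limit=400)[0]

for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(t, smoothed_entropy(t))  # values decrease toward h(Uniform[0,1]) = 0
```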

b) Qualitative and Quantitative Stability

Qualitative stability: If the EPI deficit

$$\delta_{\mathrm{EPI}} = h(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y) - \big[\lambda h(X) + (1-\lambda) h(Y)\big]$$

is small, then $X$ and $Y$ must be close (in a weak topology, e.g., the Lévy metric) to Gaussian distributions of the same variance, possibly with quantitative rates under higher moment assumptions. This type of stability is proven in (Gavalakis et al., 17 Sep 2025) using compactness arguments and is a general property of the EPI.
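To give a sense of scale, the following sketch evaluates the deficit for inputs far from Gaussian, namely two independent $\mathrm{Uniform}[0,1]$ variables at $\lambda = 1/2$: the exact value $\tfrac12 - \tfrac12 \log 2 \approx 0.1534$ is recovered by numerical integration of the triangular density of the sum. The example and helper names are illustrative only.

```python
import math
from scipy.integrate import quad

def triangular_density(s):
    """Density of X + Y for independent Uniform[0,1] variables (triangle on [0, 2])."""
    return max(0.0, 1.0 - abs(s - 1.0))

# h(X + Y) by numerical integration of -f log f over [0, 2].
h_sum = quad(lambda s: (-triangular_density(s) * math.log(triangular_density(s))
                        if triangular_density(s) > 0 else 0.0), 0.0, 2.0)[0]

# Rescale with h(aS) = h(S) + log a, a = 1/sqrt(2); since h(X) = h(Y) = 0,
# the rescaled entropy equals the EPI deficit at lambda = 1/2.
deficit = h_sum + math.log(1.0 / math.sqrt(2.0))
print(deficit, 0.5 - 0.5 * math.log(2.0))  # both approximately 0.1534
```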

Quantitative stability in the discrete case: For Tao's discrete entropy power inequality (for random variables on torsion-free groups), a sharp quantitative stability estimate is proven under log-concavity of the discrete distribution. Let $X$ be discrete log-concave with variance $\sigma^2$, and let $U \sim \mathrm{Uniform}[0,1]$ be independent of $X$. Then, after smoothing, the Poincaré constant is controlled:

$$C_P(X + U) \leq 438244\,\sigma^2,$$

leading to explicit bounds in relative entropy between the smoothed $X$ (or its sum) and discretized Gaussians.
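As a simplified illustration of the kind of comparison involved (not the cited argument itself), the sketch below smooths a log-concave law, here $\mathrm{Binomial}(n, 1/2)$, by an independent $\mathrm{Uniform}[0,1]$ variable and computes its relative entropy to a continuous Gaussian of matching mean and variance; the cited result instead compares against discretized Gaussians with explicit constants, and the choice of reference Gaussian here is merely convenient.

```python
import math
from scipy.integrate import quad
from scipy.stats import binom, norm

n = 40
pmf = [binom.pmf(k, n, 0.5) for k in range(n + 1)]
mean = n * 0.5 + 0.5                 # E[X + U]
var = n * 0.25 + 1.0 / 12.0          # Var(X + U) = Var(X) + Var(U)
scale = math.sqrt(var)

# X + U has the piecewise-constant density p_k on [k, k+1), so the relative
# entropy to a continuous Gaussian reduces to a sum of one-dimensional integrals.
rel_entropy = 0.0
for k, p in enumerate(pmf):
    if p == 0.0:
        continue
    rel_entropy += quad(lambda x, p=p: p * math.log(p / norm.pdf(x, mean, scale)),
                        k, k + 1)[0]
print(rel_entropy)  # small, and it shrinks further as n grows
```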

5. Discrete and Quantum Generalizations

The EPI admits both discrete and quantum analogues, each with domain-specific technicalities. In the discrete case, analogues often require log-concavity or ultra log-concavity for sharp results and are closely connected with sumset theory and combinatorics. For quantum systems, the EPI is generalized via von Neumann entropy and additive, beamsplitter-like operations, with equality and optimality conditions often stricter or more subtle due to noncommutativity (Koenig et al., 2012, Palma et al., 2014).

6. Summary Table of EPI Properties

| Statement | Condition for Equality | Stability/Continuity |
|---|---|---|
| Shannon EPI, continuous case | $X$, $Y$ Gaussian with the same variance | Stable in weak convergence (Gavalakis et al., 17 Sep 2025) |
| de Bruijn’s identity | See finiteness condition above | Valid under minimal entropy regularity (Gavalakis et al., 17 Sep 2025) |
| Tao’s discrete EPI, log-concave case | n/a (no Gaussian in the discrete setting) | Quantitative in relative entropy (Gavalakis et al., 17 Sep 2025) |
| Generalizations (quantum, discrete) | Model-specific | Requires further structure/regularity |

7. Implications and Applications

  • Coding and Channel Capacity: The EPI provides the analytic backbone for Gaussian channel coding theorems and converse results.
  • Robust Statistics and Signal Processing: Stability and continuity ensure robustness of entropy-based performance metrics under model perturbations.
  • Discrete Information Theory: Quantitative stability results for log-concave discrete laws underpin new developments in discrete analogues of classical theorems, including capacity and central limit phenomena.
  • Functional Inequalities: The EPI, via de Bruijn’s identity and connections with Fisher information, feeds into sharp inequalities (e.g., Nash, Sobolev, log-Sobolev).

In conclusion, the entropy power inequality remains central to modern information theory, with its equality, continuity, and stability properties now rigorously characterized under minimal regularity. These advances have extended the reach of the EPI from theoretical domains to practical areas—ranging from communications and cryptography to high-dimensional data analysis—characterizing not only the optimality conditions but also the sensitivity and robustness of entropy-based performance measures (Gavalakis et al., 17 Sep 2025).
