Log-Concave Random Variables
- Log-concave random variables are defined by densities or mass functions expressible as exp(-V(x)) with convex V, ensuring exponential tail decay and unimodality.
- They maintain structural stability under convolution and affine maps, with extremal cases (exponential/geometric) providing sharp reverse entropy bounds.
- Their applications in information theory, convex geometry, and statistical modeling yield key concentration results and optimal bounds for additive noise channels.
A log-concave random variable is one whose probability density function (in the continuous case) or probability mass function (in the discrete case) is log-concave, i.e., takes the form $e^{-V(x)}$ where $V$ is convex. This class includes distributions fundamental in probability and geometric analysis, such as the Gaussian, uniform on convex sets, exponential, binomial, and geometric laws. Log-concavity confers strong concentration, stability under convolution, and unimodality, properties that are critical across probability theory, convex geometry, information theory, and functional analysis.
1. Definitions, Characterizations, and Structural Properties
A (real) random variable $X$ is log-concave if its density $f$ on $\mathbb{R}$ satisfies $f(\lambda x + (1-\lambda)y) \ge f(x)^{\lambda} f(y)^{1-\lambda}$ for all $x, y \in \mathbb{R}$ and $\lambda \in [0,1]$, or equivalently $f = e^{-V}$ for some convex function $V$. In higher dimensions, the same definition holds for densities on $\mathbb{R}^n$. For integer-valued random variables, log-concavity of the probability mass function $p$ is defined by $p(k)^2 \ge p(k-1)\,p(k+1)$ for all $k$ in the support (with contiguous support).
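As a concrete check of the discrete definition, the following minimal sketch (illustrative only, not from the cited papers) verifies $p(k)^2 \ge p(k-1)\,p(k+1)$ for binomial and geometric pmfs; the tolerance guards against floating-point ties:

```python
# Illustrative check of discrete log-concavity: p(k)^2 >= p(k-1) * p(k+1).
from math import comb

def is_log_concave(pmf, rtol=1e-12):
    # Check the condition on the interior of a contiguous support,
    # with a small relative tolerance for floating-point equality cases.
    return all(pmf[k]**2 >= pmf[k - 1] * pmf[k + 1] * (1 - rtol)
               for k in range(1, len(pmf) - 1))

n, p = 20, 0.3
binomial = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

q = 0.6  # geometric pmf (1 - q) * q^k is log-affine, hence log-concave
geometric = [(1 - q) * q**k for k in range(200)]

print(is_log_concave(binomial), is_log_concave(geometric))  # True True
```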
Key properties:
- The class is closed under convolution, affine maps, and marginalization.
- Tails are at least exponentially decaying; concentration inequalities such as Poincaré and log-Sobolev inequalities are frequently sharp within this class.
- Common families: Gaussian, (one-sided and two-sided) exponential, uniform on an interval or convex body, geometric, binomial, and discrete uniform.
For product measures, log-concavity passes to joint laws, a feature critical for sums and limit theorems.
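A quick numerical illustration of closure under convolution (a sketch under the discrete definition above; the choice of binomial and uniform factors is arbitrary):

```python
# Illustrative check: the pmf of an independent sum of two log-concave laws,
# computed by convolution, is again log-concave.
import numpy as np
from math import comb

def is_log_concave(p, rtol=1e-9):
    p = np.asarray(p, dtype=float)
    return bool(np.all(p[1:-1]**2 >= p[:-2] * p[2:] * (1 - rtol)))

n, prob = 15, 0.4
binomial = np.array([comb(n, k) * prob**k * (1 - prob)**(n - k)
                     for k in range(n + 1)])
uniform = np.full(8, 1 / 8)            # discrete uniform on {0, ..., 7}

conv = np.convolve(binomial, uniform)  # pmf of the independent sum
print(is_log_concave(binomial), is_log_concave(uniform), is_log_concave(conv))
# -> True True True
```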
2. Reverse Entropy Power Inequalities and Sharp Extremal Laws
The entropy power inequality (EPI) for the Shannon differential entropy $h$ asserts that for independent $X$, $Y$ on $\mathbb{R}^n$,
$$e^{2h(X+Y)/n} \ge e^{2h(X)/n} + e^{2h(Y)/n}.$$
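For orientation, the Gaussian case saturates the EPI (a standard computation, included here for completeness): if $X \sim N(0, \sigma_X^2 I_n)$ and $Y \sim N(0, \sigma_Y^2 I_n)$ are independent, then $h(X) = \tfrac{n}{2}\log(2\pi e \sigma_X^2)$, so
$$e^{2h(X+Y)/n} = 2\pi e\,(\sigma_X^2 + \sigma_Y^2) = e^{2h(X)/n} + e^{2h(Y)/n}.$$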
In the absence of restrictive assumptions, the reverse inequality (bounding $e^{2h(X+Y)/n}$ above by a constant multiple of $e^{2h(X)/n} + e^{2h(Y)/n}$, or similar) fails. However, for log-concave variables, and especially for Rényi entropies of order $\infty$, sharp reverse inequalities emerge:
- Rényi Entropy of Order Infinity: For independent identically distributed (i.i.d.) log-concave random variables $X$, $Y$,
$$h_\infty(X+Y) \le h_\infty(X) + 1,$$
where $h_\infty(X) = -\log \|f\|_\infty$ and $f$ is the density of $X$. The maximal "entropy jump" of one nat is attained when $X$ is exponential (verified numerically after this list). For integer-valued monotone log-concave laws, the same sharp constant holds:
$$H_\infty(X+Y) < H_\infty(X) + 1,$$
with $H_\infty(X) = -\log \max_k \mathbb{P}(X = k)$. The extremal law is the geometric distribution. These results quantify an intrinsic anti-concentration: convolution "spreads" the mass in a maximal sense precisely for exponential/geometric laws (Fu et al., 10 Oct 2025).
- Other Rényi Orders: For a range of orders $\alpha$, the supremum is realized by the exponential law as well; in the symmetric setting the uniform law is extremal for certain orders, while two-sided exponential (Laplace) laws dominate for others, with the optimal constant determined by $\alpha$ (Białobrzeski et al., 2021).
These results are sharp analogues of the classical Young convolution inequality in $L^p$ (and $L^\infty$) norms, with explicit constants and optimal maximizers.
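The extremal role of the exponential law can be checked numerically. The sketch below (illustrative; the grid discretization is an arbitrary choice) approximates $\|f * f\|_\infty = \|f\|_\infty / e$, i.e., an entropy jump of one nat:

```python
# Illustrative: for i.i.d. Exp(lam), h_inf(X+Y) - h_inf(X) = 1 nat, since the
# density of X + Y is lam^2 * x * exp(-lam * x), with maximum lam/e at x = 1/lam.
import numpy as np

lam, dx = 1.0, 1e-3
x = np.arange(0.0, 15.0, dx)
f = lam * np.exp(-lam * x)                 # Exp(lam) density; sup equals lam at x = 0

ff = np.convolve(f, f) * dx                # numerical density of X + Y
jump = np.log(f.max()) - np.log(ff.max())  # h_inf(X+Y) - h_inf(X)
print(round(jump, 2))                      # ~ 1.0
```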
3. Methods: Rearrangement, Majorization, Convexity
The central arguments rely on functional rearrangements and majorization:
- Decreasing Rearrangement: For any density $f$, one defines its decreasing rearrangement $f^*$: a symmetric, monotone function equimeasurable with $f$, whose level sets are intervals of the same measure as those of $f$. For log-concave $f$, the rearrangement preserves log-concavity, facilitating extremal comparisons (a toy discrete analogue appears after this list).
- Majorization and Convex Order: Via Karamata's inequality, convex combinations of log-concave densities inherit orderings under convolution; this reduces the general problem to an explicit comparison with exponential laws, e.g., comparing the convolution $f * g$ with $f^* * e$, where $e$ is an exponential (log-affine) density and $f^*$ is a rearranged version of $f$ or is majorized by it.
- Explicit Calculation and Transport: For exponential (resp. geometric) laws, the convolution has a calculable supremum norm, making entropy increments explicit: $\|f * f\|_\infty = \|f\|_\infty / e$ when $X$, $Y$ are identical exponentials with density $f(x) = \lambda e^{-\lambda x}$, $x \ge 0$.
For monotone discrete laws, the argument translates to a comparison with a geometric law (whose pmf is log-affine), again via majorization and convexity.
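As a toy discrete analogue of the rearrangement step (an illustration, not the continuous construction used in the proofs), sorting a pmf into nonincreasing order preserves its value distribution, hence its maximum and all $\ell^p$ norms:

```python
# Toy decreasing rearrangement of a pmf: sort the masses in nonincreasing
# order; the result is equimeasurable with the original pmf.
import numpy as np

rng = np.random.default_rng(0)
pmf = rng.dirichlet(np.ones(12))   # an arbitrary pmf on {0, ..., 11}
rearranged = np.sort(pmf)[::-1]    # monotone nonincreasing rearrangement

assert np.isclose(rearranged.sum(), 1.0)   # same total mass
assert rearranged.max() == pmf.max()       # same sup norm (hence same H_inf)
print(np.round(rearranged, 3))
```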
4. Discrete Log-Concave Laws: Integer-Valued Counterparts
For $X$, $Y$ i.i.d. log-concave integer-valued variables with monotone pmf:
- The maximum possible increase in $H_\infty$ after convolution is strictly less than one nat:
$$H_\infty(X+Y) - H_\infty(X) < 1,$$
and this bound is tight in the limit along geometric laws (the jump approaches 1 as the success probability of the geometric law tends to zero).
- The result extends (with different constants) to $\ell^p$ norms and to Rényi entropies of other orders.
The technical strategy is identical in spirit: identify extremal log-concave discrete laws, reduce to cases where explicit supremum computations are tractable (log-affine or geometric pmfs), and synthesize via convex ordering (Fu et al., 10 Oct 2025).
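A numerical sketch of this extremality (illustrative; the truncation horizon is an arbitrary choice) shows the discrete jump staying below one nat and approaching it along geometric laws:

```python
# Illustrative: H_inf(X+Y) - H_inf(X) for geometric laws stays below 1 nat
# and approaches it as the pmf (1 - q) * q^k flattens (q -> 1).
import numpy as np

for q in (0.5, 0.9, 0.99, 0.999):
    k = np.arange(10000, dtype=float)
    pmf = (1 - q) * q**k                  # geometric pmf, truncated deep in the tail
    conv = np.convolve(pmf, pmf)          # pmf of the sum of two i.i.d. copies
    jump = np.log(pmf.max()) - np.log(conv.max())
    print(q, round(float(jump), 4))       # jumps increase toward, but stay below, 1
```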
5. Applications and Broader Impact
These reverse entropy inequalities yield fundamental anti-concentration controls:
- In information theory, they concretely limit the "peakiness" of a distribution after convolution, a key quantity in robust coding and in the analysis of additive-noise channels, where the exponential/geometric law realizes the worst-case decrease of the essential supremum under noise aggregation.
- In convex geometry, they link to reverse Brunn–Minkowski and Rogers–Shephard inequalities, and provide optimal bounds for the spread of measures with given density maxima—these are closely tied to covering estimates and isoperimetric-type problems.
- For statistical and probabilistic modeling, understanding when the supremum norm of a convolution is controlled sharpens the design of estimators and of concentration inequalities for sums.
- In the discrete setting, sharp inequalities mediate bounds for the concentration function and underpin combinatorial extremal problems, especially in matroid theory and the related enumeration of independent sets (Alqasem et al., 2022, Fradelizi et al., 27 Jan 2024).
Overall, these results expose a sharp "boundary": among all log-concave distributions with fixed essential supremum, the exponential (continuous) or geometric (discrete) law exhibits maximal anti-concentration under summation.
6. Connections to Other Reverse Inequalities and Open Directions
The sharp reverse EPI at order infinity complements more general reverse inequalities at other Rényi orders and the reverse norm-type relations for the entropy power (Ball et al., 2015). In contrast to the direct EPI, where only Gaussians saturate equality, the reverse inequalities select exponentials as the critical cases within the log-concave class. Current research aims at identifying optimal constants for other Rényi orders, extending the results to the multivariate setting, and clarifying connections to geometric functional inequalities. Structural characterization of extremal log-concave laws in higher dimensions and for higher-order convolutions remains of interest.
Key formulas:
| Setting | Inequality | Extremal Law |
|---|---|---|
| Continuous, $\alpha = \infty$ | $h_\infty(X+Y) \le h_\infty(X) + 1$ (i.i.d.) | Exponential |
| Discrete, $\alpha = \infty$ | $H_\infty(X+Y) < H_\infty(X) + 1$ (i.i.d., monotone pmf) | Geometric |
| General $\alpha$ | $h_\alpha(X+Y) \le h_\alpha(X) + c_\alpha$, with sharp $c_\alpha$ depending on the order | Exponential (or geometric) |
In summary, the entropy jump in the reverse entropy power inequality of order infinity for i.i.d. log-concave random variables is sharply maximized by the exponential law, providing a universal upper bound for the increase under convolution in both continuous and (with suitable monotonicity) discrete settings. These findings delineate the structure of log-concave measures under addition, with immediate implications in information theory, convex geometry, and probabilistic analysis.