Log-Concave Random Vectors
- Log-concave random vectors are defined by densities whose logarithms are concave, ensuring stability under convolution and affine transformations.
- They exhibit strong concentration of measure with sharp entropic bounds and nearly dimension-free central limit theorems that underpin key functional and geometric inequalities.
- Their applications span convex geometry, order statistics, and efficient high-dimensional statistical estimation, making them central in modern probability theory.
A log-concave random vector is a random vector in Euclidean space whose probability law possesses the log-concavity property: its density (with respect to Lebesgue measure, if it exists) is log-concave, i.e., the logarithm of the density function is concave on its support. This class is fundamentally important in high-dimensional probability, convex geometry, and information theory, encompassing Gaussian distributions, uniform distributions on convex bodies, and product exponential distributions, and serving as a canonical setting for sharp functional and geometric inequalities.
1. Definition and Structural Properties
A random vector $X$ in $\mathbb{R}^n$ is log-concave if it has a density $f$ such that for all $x, y \in \mathbb{R}^n$ and $\lambda \in [0,1]$,
$$f(\lambda x + (1-\lambda) y) \;\ge\; f(x)^{\lambda} f(y)^{1-\lambda},$$
or equivalently, $-\log f$ is convex. If no density exists, the notion can be extended to log-concave measures. Important subclasses include isotropic log-concave vectors (zero mean, identity covariance) and distributions with additional symmetries such as unconditionality.
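A minimal numerical illustration of this definition, assuming NumPy and SciPy are available, is the following sketch: it checks the pointwise inequality $f(\lambda x + (1-\lambda) y) \ge f(x)^{\lambda} f(y)^{1-\lambda}$ at random pairs for a standard Gaussian density (log-concave, so no violations) and for a well-separated Gaussian mixture (not log-concave, so violations appear).

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
dim, lam = 2, 0.5

gauss = multivariate_normal(mean=np.zeros(dim)).pdf            # log-concave
mix = lambda x: 0.5 * multivariate_normal(mean=[-4, 0]).pdf(x) \
              + 0.5 * multivariate_normal(mean=[+4, 0]).pdf(x)  # not log-concave

def violations(f, n_pairs=10_000):
    """Count pairs (x, y) where f(lam*x + (1-lam)*y) < f(x)^lam * f(y)^(1-lam)."""
    x = rng.uniform(-6, 6, size=(n_pairs, dim))
    y = rng.uniform(-6, 6, size=(n_pairs, dim))
    lhs = f(lam * x + (1 - lam) * y)
    rhs = f(x) ** lam * f(y) ** (1 - lam)
    return int(np.sum(lhs < rhs * (1 - 1e-12)))  # small tolerance for rounding

print("Gaussian violations :", violations(gauss))  # expected: 0
print("Mixture  violations :", violations(mix))    # expected: many
```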
Structural features of log-concave vectors include:
- Stability under convolution: The convolution of log-concave densities is log-concave (a numerical check appears after this list).
- Closure under affine transformations: Any affine image of a log-concave vector is log-concave.
- Strong concentration of measure: Log-concave vectors satisfy functional inequalities such as Poincaré (and, under additional tail assumptions, log-Sobolev), with constants of rich geometric and analytic significance that are conjecturally dimension-free in the isotropic case (Ball et al., 2012).
- Extremal distributions: Among log-concave densities with a given maximum, the uniform distribution on a convex body attains the minimal entropy and the product exponential distribution the maximal entropy, while the Gaussian sits exactly in between (Bobkov et al., 2010).
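A minimal check of the convolution property above, under the assumption that sums of independent Gamma variables with a common scale are again Gamma: the sketch convolves the $\mathrm{Exp}(1)$ and $\mathrm{Gamma}(2,1)$ densities on a grid, compares the result with the $\mathrm{Gamma}(3,1)$ density, and verifies that the log of the convolved values has nonpositive second differences, i.e., that log-concavity survives the convolution.

```python
import numpy as np
from scipy.stats import gamma

dx = 2e-3
x = np.arange(0.0, 20.0, dx)

f1 = gamma(a=1).pdf(x)   # Exp(1): log-concave
f2 = gamma(a=2).pdf(x)   # Gamma(2,1): log-concave (shape >= 1)

# Discrete approximation of the continuous convolution, truncated to the grid.
conv = np.convolve(f1, f2)[: len(x)] * dx

# Sums of independent Gamma variables with the same scale add their shape parameters.
print("max |conv - Gamma(3,1) pdf| =", np.max(np.abs(conv - gamma(a=3).pdf(x))))
# expected: small (discretization error)

# Log-concavity check: second differences of the log-density should be <= 0.
mask = conv > 1e-12                       # drop the single zero value at x = 0
logs = np.log(conv[mask])
second_diff = logs[2:] - 2 * logs[1:-1] + logs[:-2]
print("max second difference of log:", second_diff.max())  # expected: <= ~0
```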
2. Entropic and Information-Theoretic Constraints
The Shannon entropy $h(X)$ of a log-concave vector $X$ is tightly pinned between lower and upper bounds determined by the supremum $\|f\|_\infty$ of its density $f$:
$$\log \frac{1}{\|f\|_\infty} \;\le\; h(X) \;\le\; \log \frac{1}{\|f\|_\infty} + n.$$
This constraint is universal across dimension and is sharply attained by different archetypes: the uniform law on a convex body saturates the lower bound, the product exponential law the upper, and the normal distribution sits exactly in the middle. The entropy per coordinate is thus confined to a narrow interval, differing from the Gaussian value by at most $1/2$ per coordinate (Bobkov et al., 2010).
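A closed-form check of these bounds for the three archetypes (entropies in nats; only the standard formulas for uniform, exponential, and Gaussian entropies are assumed):

```python
import numpy as np

n = 10  # dimension

def report(name, entropy, log_inv_sup_density):
    lower, upper = log_inv_sup_density, log_inv_sup_density + n
    print(f"{name:22s} h = {entropy:7.3f}, allowed interval [{lower:7.3f}, {upper:7.3f}]")

# Uniform on [0,1]^n: density 1, entropy 0  -> saturates the lower bound.
report("uniform on [0,1]^n", 0.0, -np.log(1.0))

# Product Exp(1): sup density 1, entropy n  -> saturates the upper bound.
report("product exponential", float(n), -np.log(1.0))

# Standard Gaussian: sup density (2*pi)^(-n/2), entropy (n/2)*log(2*pi*e)
# -> sits exactly in the middle of the interval (offset n/2).
report("standard Gaussian", 0.5 * n * np.log(2 * np.pi * np.e),
       0.5 * n * np.log(2 * np.pi))
```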
Beyond Shannon entropy, the varentropy (the variance of the information content $-\log f(X)$) is optimally bounded by the dimension: $\operatorname{Var}(-\log f(X)) \le n$, with equality precisely for product exponential laws (Fradelizi et al., 2015). The information content enjoys sharp sub-Gaussian concentration around the entropy, with fluctuations at scale $\sqrt{n}$ and exponential tail decay (Bobkov et al., 2010).
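A short Monte Carlo sketch of the varentropy bound, using only the explicit densities of the product exponential and standard Gaussian laws: the sample variance of $-\log f(X)$ comes out near $n$ in the extremal exponential case and near $n/2$ in the Gaussian case.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 200_000  # dimension, Monte Carlo sample size

# Product Exp(1): f(x) = exp(-sum(x)), so -log f(X) = sum(X); Var = n (equality case).
exp_samples = rng.exponential(size=(m, n))
info_exp = exp_samples.sum(axis=1)
print("exponential varentropy ~", info_exp.var(), "(bound:", n, ")")

# Standard Gaussian: -log f(X) = (n/2) log(2*pi) + ||X||^2 / 2; Var = n/2.
gauss_samples = rng.standard_normal(size=(m, n))
info_gauss = 0.5 * n * np.log(2 * np.pi) + 0.5 * (gauss_samples ** 2).sum(axis=1)
print("Gaussian    varentropy ~", info_gauss.var(), "(bound:", n, ")")
```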
These features underwrite quantitative forms of the entropy power inequality (EPI), including reverse EPI statements for log-concave laws, and enable close estimates of rate-distortion functions and channel capacities: for any log-concave additive noise, channel capacity is within $1$ bit of the corresponding Gaussian case (Marsiglietti et al., 2017), and the difference between the true rate-distortion function and the Shannon lower bound is bounded by a universal constant number of bits, regardless of the distortion criterion.
3. Central Limit Theorems and Dimension-Dependence
The normalized sum $S_N = N^{-1/2}\sum_{i=1}^{N} X_i$ of i.i.d. centered log-concave random vectors in $\mathbb{R}^n$ converges in law to the Gaussian with matching covariance. Quantitative central limit theorems control, uniformly over the dimension $n$ and sample size $N$, the discrepancy
$$\sup_{R \in \mathcal{R}} \bigl| \mathbb{P}(S_N \in R) - \mathbb{P}(Z \in R) \bigr|,$$
where $\mathcal{R}$ denotes the class of axis-aligned rectangles and $Z$ the limiting Gaussian, by quantities decaying polynomially in $N$ with only polylogarithmic dependence on $n$. If the Kannan–Lovász–Simonovits (KLS) spectral gap conjecture holds, the bound improves further and becomes optimal over a wide range of $(n, N)$ (2207.14536). Similar nearly dimension-free rates hold for Wasserstein distances, together with Cramér-type moderate deviation results. These bounds reflect the pivotal role of log-concavity in taming high-dimensional dependences.
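The $1/\sqrt{N}$ flavor of such rates can already be seen in a one-dimensional toy computation (a simplified illustration, not the multivariate rectangle bound of the cited work): with centered $\mathrm{Exp}(1)$ summands the partial sums are Gamma distributed, so the Kolmogorov distance to the standard normal can be evaluated exactly on a grid.

```python
import numpy as np
from scipy.stats import gamma, norm

t = np.linspace(-6, 6, 4001)  # evaluation grid for the Kolmogorov distance

for N in (4, 16, 64, 256):
    # S_N = (X_1 + ... + X_N - N) / sqrt(N) with X_i ~ Exp(1),
    # so X_1 + ... + X_N ~ Gamma(N, 1) and the CDF of S_N is exact.
    cdf_sum = gamma(a=N).cdf(N + t * np.sqrt(N))
    kolmogorov = np.max(np.abs(cdf_sum - norm.cdf(t)))
    print(f"N = {N:4d}   sup_t |P(S_N <= t) - Phi(t)| = {kolmogorov:.4f}")
# The distance roughly halves each time N quadruples, consistent with a 1/sqrt(N) rate.
```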
4. Functional and Geometric Inequalities
Log-concave random vectors form the setting for numerous sharp inequalities that link convex geometry, analysis, and probability:
- Poincaré and weighted Poincaré inequalities: Quantitative variance bounds for smooth functions, with weighted forms that account for local conditional variances (Cordero-Erausquin et al., 2014).
- Thin-shell and concentration estimates: For an isotropic log-concave vector, the variance of the Euclidean norm is conjecturally bounded by a universal constant (the "thin-shell" conjecture), with the best known general bounds growing only as a small power of the dimension; improved thin-shell bounds imply improved estimates on the isotropic constant and the slicing conjecture (Eldan et al., 2013). The easy product case is illustrated after this list.
- Operator norms and tail bounds: Maximal operator norms of random matrices with log-concave rows, as well as coordinate projections and submatrix norms, admit exponential tail bounds—crucial for applications in compressive sensing (e.g., verifying the Restricted Isometry Property) (Adamczak et al., 2011).
- Weak and strong moments: For any norm, the $p$-th "strong" moment $(\mathbb{E}\|X\|^p)^{1/p}$ is universally controlled, up to an absolute constant, by the first moment $\mathbb{E}\|X\|$ plus the corresponding "weak" moment, i.e., the largest $p$-th moment of a one-dimensional marginal $\langle z, X\rangle$ over functionals $z$ of dual norm at most one (Latała, 2010).
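The thin-shell phenomenon mentioned above is immediate in the product case; the sketch below samples isotropic vectors with independent centered exponential coordinates and shows $\operatorname{Var}(\|X\|_2)$ staying near a constant while $\mathbb{E}\|X\|_2$ grows like $\sqrt{n}$ (the conjecture asserts such uniformity over all isotropic log-concave laws, not just products).

```python
import numpy as np

rng = np.random.default_rng(2)
m = 20_000  # samples per dimension

for n in (10, 100, 1000):
    # Isotropic log-concave vector: i.i.d. coordinates Exp(1) - 1 (mean 0, variance 1).
    x = rng.exponential(size=(m, n)) - 1.0
    norms = np.linalg.norm(x, axis=1)
    print(f"n = {n:5d}   E||X|| ~ {norms.mean():8.2f}   Var(||X||) ~ {norms.var():.3f}")
# E||X|| grows like sqrt(n) while Var(||X||) hovers near a constant (about 2 here).
```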
5. Applications in Convex Geometry and High-dimensional Analysis
Log-concave vectors serve as a bridge between probabilistic and geometric phenomena:
- Slicing (Hyperplane) Conjecture: Upper bounds for the isotropic constant of convex bodies can be reformulated and analyzed in entropic terms, with log-concave bounds encoding analytic information about geometric sections (Bobkov et al., 2010, Ball et al., 2012).
- Norms of sums and random polytopes: Integral inequalities for sums of independent (or weighted) log-concave vectors underlie the analysis of random polytopes, with sharp estimates for volume, mean width, and quermassintegrals across the relevant regimes of the number of points relative to the dimension (Chasapis et al., 2019, Giannopoulos et al., 2016, Skarmogiannis, 2022).
- Order statistics and extreme value analysis: Two-sided estimates for expectations of order statistics (e.g., the $k$-th maxima of coordinate moduli) hold with dimension-independent constants in both unconditional and isotropic settings (Latała et al., 2019); the exponential benchmark case is illustrated below.
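As a benchmark for such order-statistics estimates, the following sketch compares Monte Carlo averages of the $k$-th largest coordinate of a vector with i.i.d. $\mathrm{Exp}(1)$ coordinates to the exact harmonic-sum value $\sum_{j=k}^{n} 1/j \approx \log(n/k)$; the cited results extend two-sided estimates of this flavor to general unconditional and isotropic log-concave vectors.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 1000, 20_000  # dimension, Monte Carlo replicates

samples = rng.exponential(size=(m, n))
sorted_desc = -np.sort(-samples, axis=1)   # coordinates sorted in decreasing order

for k in (1, 10, 100):
    empirical = sorted_desc[:, k - 1].mean()
    exact = np.sum(1.0 / np.arange(k, n + 1))   # E[k-th maximum] for Exp(1) coordinates
    print(f"k = {k:4d}   Monte Carlo {empirical:6.3f}   harmonic sum {exact:6.3f}"
          f"   log(n/k) {np.log(n / k):6.3f}")
```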
6. Recent Algorithmic and Statistical Developments
- Density estimation with independent components: When a log-concave density on $\mathbb{R}^d$ admits an orthogonal transformation splitting it into independent components, efficient two-stage estimators reduce the $d$-dimensional problem to $d$ univariate estimations (using the log-concave MLE for each), leading to near-optimal sample complexity in squared Hellinger distance and dramatically improving computational and statistical tractability (Kubal et al., 3 Jan 2024); a structural sketch of such a pipeline appears after this list.
- Discrete analogues and entropy monotonicity: Discrete entropy-power-like inequalities extend to log-concave random vectors on the integer lattice $\mathbb{Z}^d$, with precise monotonicity increments up to negligible errors as the entropy grows, leveraging deep connections between discrete and continuous notions of isotropy, convexity, and entropy (Fradelizi et al., 27 Jan 2024).
- Reverse entropy power inequalities and norm structure: For symmetric log-concave vectors, suitably normalized entropy functionals of weighted sums yield a $1/5$-seminorm structure, suggesting new perspectives on the stability and geometry of entropy under linear operations (Ball et al., 2015).
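To convey the shape of the two-stage pipeline mentioned in the first item above, the sketch below substitutes PCA-style decorrelation for the component-recovery step and a Gaussian kernel density estimate for the univariate log-concave MLE (neither substitution is the cited estimator; both are stand-ins chosen so the example runs with SciPy alone): once an orthogonal transform approximately splitting the density is found, the $d$-dimensional estimation problem factorizes into $d$ univariate ones.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
d, m = 3, 5000

# Synthetic data: independent log-concave components (Laplace, logistic, Gaussian)
# mixed by a fixed rotation, so an orthogonal transform splits the density.
components = np.column_stack([
    rng.laplace(size=m),
    rng.logistic(size=m),
    rng.standard_normal(size=m),
])
q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # unknown mixing rotation
data = components @ q.T

# Stage 1 (stand-in): estimate an orthogonal transform from the sample covariance.
# The cited estimator uses a dedicated component-recovery step instead.
_, eigvecs = np.linalg.eigh(np.cov(data, rowvar=False))
rotated = data @ eigvecs

# Stage 2 (stand-in): fit each rotated coordinate separately.
# The cited estimator uses the univariate log-concave MLE here.
marginals = [gaussian_kde(rotated[:, j]) for j in range(d)]

def density_estimate(x):
    """Product of univariate fits, evaluated in the rotated coordinates."""
    z = np.atleast_2d(x) @ eigvecs   # orthogonal change of variables (|det| = 1)
    vals = np.ones(z.shape[0])
    for j, kde in enumerate(marginals):
        vals *= kde(z[:, j])
    return vals

# True value for this synthetic example is 0.5 * 0.25 / sqrt(2*pi) ~ 0.050.
print("estimated density at the origin:", density_estimate(np.zeros(d))[0])
```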
7. Future Directions and Open Problems
- Thin-shell and KLS spectral gap conjectures: These two conjectures continue to drive research, with deep implications for concentration, geometric inequalities, and optimality of high-dimensional CLTs (Eldan et al., 2013, Ball et al., 2012, 2207.14536).
- Behavior under convolution and discrete extensions: Understanding the preservation and propagation of log-concavity, both in the continuous and discrete (lattice) world, remains a delicate area, especially concerning discretization, convolution powers, and the precise interplay with isotropy (Fradelizi et al., 27 Jan 2024).
- Sharp constants for functional inequalities: Further refinement in the constants for operator norm, moment, concentration, and entropy inequalities remains an active field, particularly as techniques from optimal transport, stochastic localization, and geometric measure theory are further developed.
- Algorithmic applications: Efficient high-dimensional methods for estimation, learning, and inference under log-concavity assumptions—including mixtures, projections, and latent structure—represent a fertile ground for both theory and application (Kubal et al., 3 Jan 2024).
Log-concave random vectors form the backbone of modern high-dimensional probability, providing robust analytic structure, deep geometric insights, and a unifying context for inequalities central to mathematical analysis, data science, and convex geometry.