Bernoulli f-Divergence Inequality
- Bernoulli f-Divergence Inequality is a framework defining sharp, explicit bounds linking f-divergences to total variation in Bernoulli distributions through convex generating functions.
- Its methodology leverages reduction to two-point supports and precise extremal conditions, thereby generalizing classical results like Pinsker’s inequality to quantum contexts.
- The inequality underpins applications in statistical decision making and information theory, offering actionable insights for hypothesis testing, risk minimization, and quantum divergence analysis.
The Bernoulli $f$-divergence inequality provides sharp, explicit relations between various $f$-divergences (of the Csiszár type) for Bernoulli distributions, frequently parameterized in terms of the total variation distance. These inequalities subsume and generalize classical results such as Pinsker’s, and form a kernel for both classical and quantum information-theoretic bounds. The foundational results revolve around convexity properties of the generating function $f$ and leverage reduction arguments to two-point supports.
1. Definition and Principal Formulation
Let $f : (0,\infty) \to \mathbb{R}$ be convex with $f(1) = 0$. For probability measures $P$ and $Q$, the $f$-divergence is defined by
$D_f(P\|Q) = \int_{q>0} f\left(\frac{p}{q}\right) dQ + f'(\infty) P\{ q=0 \}$
where $p = dP/d\mu$ and $q = dQ/d\mu$ are densities under any dominating measure $\mu$, and $f'(\infty) = \lim_{t\to\infty} f(t)/t$. For Bernoulli distributions $P = \mathrm{Ber}(p)$, $Q = \mathrm{Ber}(q)$, this reduces to the two-point formula
$D_f(P\|Q) = q\, f\!\left(\frac{p}{q}\right) + (1-q)\, f\!\left(\frac{1-p}{1-q}\right)$
(Guntuboyina et al., 2013, 0903.1765, Bongole et al., 17 Jan 2026, Lanier et al., 24 Jan 2025).
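The two-point formula is straightforward to compute directly. The following is a minimal Python sketch, not code from the cited papers; the names `binary_f_divergence` and `kl_gen` are illustrative:

```python
import math

def binary_f_divergence(p, q, f, f_prime_inf=float("inf")):
    """D_f(Ber(p) || Ber(q)) via the two-point formula.

    Atoms where q vanishes contribute f'(inf) * P{q = 0}.
    """
    total = 0.0
    for pi, qi in ((p, q), (1 - p, 1 - q)):
        if qi > 0:
            total += qi * f(pi / qi)
        elif pi > 0:                    # Q-null atom carrying P-mass
            total += f_prime_inf * pi   # f'(inf) * P{q = 0}
    return total

# KL generator f(t) = t log t, extended by f(0) = 0 (continuity).
kl_gen = lambda t: t * math.log(t) if t > 0 else 0.0

print(binary_f_divergence(0.6, 0.4, kl_gen))  # 0.6*log(1.5) + 0.4*log(2/3)
```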
2. Sharp Lower Bounds via Total Variation
The central inequalities relate $D_f(P\|Q)$ to the total variation distance $V(P,Q)$, which for a Bernoulli pair equals $|p - q|$:
- Bröcker’s monotonic lower bound (0903.1765): $D_f(P\|Q) \ge L_f(V)$, where $L_f(V) = \min\{ D_f(\mathrm{Ber}(p)\|\mathrm{Ber}(q)) : |p-q| = V \}$.
This is tight for Bernoulli variables. The bounding function $L_f$ is strictly increasing in $V$ under mild regularity assumptions on $f$.
- Sharp minimization via support reduction (Guntuboyina et al., 2013): for the minimum at fixed $V$, reduction to two-point supports yields
$\min\{ D_f(P\|Q) : V(P,Q) = V \} = \min_{|p-q| = V} \left[ q\, f\!\left(\frac{p}{q}\right) + (1-q)\, f\!\left(\frac{1-p}{1-q}\right) \right],$
attained when $p = \frac{1+V}{2}$, $q = \frac{1-V}{2}$, i.e., at symmetric pairs (exactly extremal when the divergence is symmetric in its arguments).
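A quick numerical sanity check of the support-reduction claim, here for the symmetric squared-Hellinger generator $f(t) = (\sqrt{t}-1)^2$. This is an illustrative sketch, with a grid search standing in for the exact minimization:

```python
import math

def binary_f_div(p, q, f):
    return q * f(p / q) + (1 - q) * f((1 - p) / (1 - q))

hellinger_gen = lambda t: (math.sqrt(t) - 1.0) ** 2   # symmetric generator

V = 0.3
# Grid-minimize D_f over Bernoulli pairs with |p - q| = V ...
qs = [i / 10000 for i in range(1, 10000) if 0 < i / 10000 + V < 1]
grid_min = min(binary_f_div(q + V, q, hellinger_gen) for q in qs)

# ... and compare with the symmetric pair p = (1+V)/2, q = (1-V)/2.
sym_val = binary_f_div((1 + V) / 2, (1 - V) / 2, hellinger_gen)
print(grid_min, sym_val)   # agree to grid resolution: 2(1 - sqrt(1 - V^2))
```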
3. Best-Possible Generalized Pinsker Inequalities
The framework in (0906.1244) gives integral representations and tight “Pinsker-type” lower bounds for arbitrary $f$ in terms of total variation: $D_f(P\|Q) = \int_0^1 \Delta L(\pi)\, \gamma_f(\pi)\, d\pi$, where $\Delta L(\pi)$ is the statistical information (Bayes-risk gap) of the binary experiment at prior $\pi$, and the weight $\gamma_f$ is determined by the curvature $f''$ for twice-differentiable $f$.
The minimizing, or extremal, Bernoulli pairs for fixed $V$ have success probabilities $p = \frac{1+V}{2}$, $q = \frac{1-V}{2}$.
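Substituting this extremal pair into the two-point formula gives the bounding function in closed form; a short derivation in the notation above:

```latex
% Evaluating D_f at the extremal pair p = (1+V)/2, q = (1-V)/2:
\begin{align*}
D_f\!\left(\mathrm{Ber}\!\left(\tfrac{1+V}{2}\right) \,\Big\|\, \mathrm{Ber}\!\left(\tfrac{1-V}{2}\right)\right)
  &= \frac{1-V}{2}\, f\!\left(\frac{1+V}{1-V}\right)
   + \frac{1+V}{2}\, f\!\left(\frac{1-V}{1+V}\right).
\end{align*}
% For symmetric f this equals the bounding function L_f(V).
% Example: f(t) = t log t (KL) evaluates to V log[(1+V)/(1-V)],
% and the value at V = 0 is f(1) = 0, consistent with D_f(P||P) = 0.
```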
4. Explicit Algebraic and Sandwich Inequalities
The “binary $f$-divergence inequality” (Lanier et al., 24 Jan 2025, Sason, 2015) provides sharp algebraic sandwich bounds between any two Bernoulli $f$-divergences, with formulas involving ratios of the generating functions and the $\chi^2$ divergence, where for a Bernoulli pair the total variation and $\chi^2$ divergence are
$V(P,Q) = |p - q|, \qquad \chi^2(P\|Q) = \frac{(p-q)^2}{q(1-q)}.$
This inequality gives explicit control of the $f$-divergence in terms of basic symmetric functions of $p$ and $q$ (Sason, 2015).
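One immediate consequence of these formulas, since $q(1-q) \le 1/4$, is the bound $\chi^2(P\|Q) \ge 4V(P,Q)^2$ for Bernoulli pairs. A brief numerical check (illustrative code, not from the cited works):

```python
# Check chi^2 >= 4 V^2 for Bernoulli pairs (q(1-q) <= 1/4 makes it immediate).
def tv(p, q):
    return abs(p - q)

def chi2(p, q):
    return (p - q) ** 2 / (q * (1 - q))

pairs = [(i / 50, j / 50) for i in range(1, 50) for j in range(1, 50)]
assert all(chi2(p, q) >= 4 * tv(p, q) ** 2 for p, q in pairs)
# Equality requires q = 1/2, where the denominator q(1-q) is maximized.
print("chi2 >= 4*V^2 verified on", len(pairs), "pairs")
```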
5. Optimality, Tightness, and Equality Conditions
The reductions above are maximally tight for Bernoulli laws. Tightness follows from the fact that the relevant functions (Bayes-risk curve, data processing contractions, etc.) achieve their extrema for binary distributions. Equality is attained precisely when the likelihood ratio $dP/dQ$ takes only two values and $f$ is affine over the critical support points involved in the inequalities.
Cases of equality in the sandwich bound occur only in degenerate cases (i.e., $P = Q$ or $f$ affine) or for the aforementioned symmetric extremal pairs.
6. Instantiations and Special Cases
The Bernoulli $f$-divergence inequalities specialize to classical divergences:
| $f(t)$ | Bernoulli $D_f(\mathrm{Ber}(p)\,\Vert\,\mathrm{Ber}(q))$ | Lower bound example |
|---|---|---|
| $t \log t$ (KL) | $p \log\frac{p}{q} + (1-p)\log\frac{1-p}{1-q}$ | $D_{\mathrm{KL}} \ge 2V^2$ (Pinsker) |
| $(\sqrt{t}-1)^2$ (Hellinger) | $(\sqrt{p}-\sqrt{q})^2 + (\sqrt{1-p}-\sqrt{1-q})^2$ | $H^2 \ge 2\bigl(1-\sqrt{1-V^2}\bigr)$ |
| $(t-1)^2$ ($\chi^2$) | $\frac{(p-q)^2}{q(1-q)}$ | $\chi^2 \ge 4V^2$ |
All these bounds encode sharp relationships, with the extremal Bernoulli pairs attaining or approaching equality (Guntuboyina et al., 2013, Lanier et al., 24 Jan 2025, 0903.1765, 0906.1244).
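The three rows of the table can be verified numerically. The following sketch (with illustrative function names) checks each lower bound on random Bernoulli pairs:

```python
import math, random

# Numerically check the three lower-bound rows of the table.
def kl(p, q):
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def hellinger_sq(p, q):
    return (math.sqrt(p) - math.sqrt(q)) ** 2 + \
           (math.sqrt(1 - p) - math.sqrt(1 - q)) ** 2

def chi2(p, q):
    return (p - q) ** 2 / (q * (1 - q))

random.seed(0)
for _ in range(100_000):
    p, q = random.uniform(0.01, 0.99), random.uniform(0.01, 0.99)
    V = abs(p - q)
    assert kl(p, q) >= 2 * V ** 2 - 1e-12                         # Pinsker
    assert hellinger_sq(p, q) >= 2 * (1 - math.sqrt(1 - V ** 2)) - 1e-12
    assert chi2(p, q) >= 4 * V ** 2 - 1e-12
print("all table bounds verified")
```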
7. Applications and Extensions
The Bernoulli $f$-divergence inequality underpins several advanced methods:
- Interactive statistical decision making: inverting the Bernoulli bounds yields two-sided intervals for monotone transforms of risk (e.g., prior-predictive CVaR and quantile lower bounds), as sketched after this list (Bongole et al., 17 Jan 2026).
- Transfer to quantum divergences: The inequalities lift directly to quantum settings by reduction to classical analogues on two-point supports, sidestepping complex matrix analysis (Lanier et al., 24 Jan 2025).
- Information-theoretic converse bounds: Generalization of Fano’s inequality and derivation of tight explicit bounds for loss probabilities, exponential moments, and tail risks.
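As an illustration of the inversion step in the first bullet, the following hedged sketch inverts the binary KL divergence to obtain a two-sided interval for an unknown Bernoulli parameter $q$ given an empirical rate $\hat p$ and a divergence budget $\varepsilon$; `binary_kl` and `kl_interval` are illustrative names, not functions from the cited work:

```python
import math

def binary_kl(p, q):
    """KL divergence between Ber(p) and Ber(q), with 0 log 0 = 0."""
    terms = []
    if p > 0:
        terms.append(p * math.log(p / q))
    if p < 1:
        terms.append((1 - p) * math.log((1 - p) / (1 - q)))
    return sum(terms)

def kl_interval(p_hat, eps, tol=1e-10):
    """Two-sided interval {q : KL(Ber(p_hat) || Ber(q)) <= eps} by bisection,
    using that q -> KL is decreasing below p_hat and increasing above it."""
    def solve(lo, hi, increasing):
        for _ in range(200):
            mid = (lo + hi) / 2
            if (binary_kl(p_hat, mid) > eps) == increasing:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    lower = solve(tol, p_hat, increasing=False)
    upper = solve(p_hat, 1 - tol, increasing=True)
    return lower, upper

print(kl_interval(0.3, 0.05))  # e.g. roughly (0.16, 0.47)
```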
The Bernoulli $f$-divergence inequality is thus a foundational tool for optimally relating statistical divergences under minimal informativeness constraints, with broad implications for hypothesis testing, risk minimization, and quantum information theory.