
Fourier Entropy of Boolean Functions

Updated 7 December 2025
  • Fourier entropy of Boolean functions is defined as the Shannon entropy of the squared Fourier coefficients, capturing the distribution of spectral mass.
  • The FEI framework connects the spectral spread to total influence, indicating that functions with higher entropy exhibit greater sensitivity and complexity.
  • Recent advances extend FEI principles to p-biased measures, establish tight entropy bounds for low-degree functions, and highlight open challenges in removing logarithmic terms.

A Boolean function on the discrete hypercube admits a unique expansion in terms of Fourier characters, and the distribution of its squared coefficients—the Fourier spectrum—captures essential analytic and combinatorial properties. The Fourier entropy of a Boolean function quantifies the spectral spread of its Fourier–Walsh expansion, while the total influence measures the average sensitivity to coordinate flips. The intricate relationship between these two measures, formalized in the influential Fourier Entropy–Influence (FEI) conjecture, has driven a rich interplay among analysis, combinatorics, and theoretical computer science.

1. Fundamental Definitions and Spectral Framework

Let $f:\{0,1\}^n\to\{-1,1\}$ be a Boolean function equipped with the product measure $\mu_p$ on the discrete cube; for $p=1/2$ this is the uniform measure. The function $f$ admits a unique Fourier–Walsh expansion $$f(x) = \sum_{S\subseteq [n]} \hat{f}(S)\, u_S(x),$$ where $u_S(x) = \prod_{i\in S} (x_i-p)/\sqrt{p(1-p)}$ are the normalized Walsh characters under $\mu_p$ and $\hat{f}(S)=\mathbb{E}_{\mu_p}[f(x)\,u_S(x)]$ are the Fourier coefficients.

The (spectral or Fourier) entropy of $f$ is

$$\mathrm{Ent}_p(f) = -\sum_{S\subseteq[n]} \hat{f}(S)^2 \log\bigl(\hat{f}(S)^2\bigr),$$

which is the Shannon entropy of the spectral distribution $\{\hat{f}(S)^2\}_{S\subseteq[n]}$. The influence of coordinate $i$ is

$$\mathrm{Inf}_i^{(p)}(f) = \Pr_{x\sim\mu_p}\bigl[f(x)\neq f(x^{\oplus i})\bigr],$$

with $x^{\oplus i}$ denoting $x$ with its $i$th bit flipped. The total influence is

$$I_p(f) = \sum_{i=1}^n \mathrm{Inf}_i^{(p)}(f) = \frac{1}{4p(1-p)}\sum_{S\subseteq[n]} |S|\,\hat{f}(S)^2.$$

Parseval's identity ensures $\sum_S \hat{f}(S)^2 = 1$, so the spectrum forms a probability distribution.
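For the uniform measure ($p=1/2$) these quantities can be computed by brute force for small $n$. The following sketch (function and variable names are illustrative; entropy taken in nats) checks Parseval's identity and evaluates 3-bit majority, whose spectrum puts weight $1/4$ on each of the three singletons and on the full set:

```python
import itertools
import math

def fourier_coeffs(f, n):
    """Uniform-measure (p = 1/2) Fourier-Walsh coefficients f^(S).
    f maps tuples in {0,1}^n to {-1,+1}; subsets S are frozensets."""
    pts = list(itertools.product((0, 1), repeat=n))
    subsets = [frozenset(S) for k in range(n + 1)
               for S in itertools.combinations(range(n), k)]
    # chi_S(x) = prod_{i in S} (-1)^{x_i};  f^(S) = E[f(x) chi_S(x)]
    return {S: sum(f(x) * (-1) ** sum(x[i] for i in S) for x in pts) / len(pts)
            for S in subsets}

def spectral_entropy(coeffs):
    """Shannon entropy (nats) of the spectral distribution {f^(S)^2}."""
    return -sum(c * c * math.log(c * c) for c in coeffs.values() if c)

def total_influence(coeffs):
    """Total influence I(f) = sum_S |S| f^(S)^2 at p = 1/2."""
    return sum(len(S) * c * c for S, c in coeffs.items())

# 3-bit majority in the {-1,+1} convention
maj3 = lambda x: 1 if sum(x) >= 2 else -1
c = fourier_coeffs(maj3, 3)
assert abs(sum(v * v for v in c.values()) - 1) < 1e-9  # Parseval
print(spectral_entropy(c), total_influence(c))         # ln 4 and 3/2
```

For majority this gives $\mathrm{Ent}_{1/2}(\mathrm{maj}_3)=\ln 4\approx 1.386$ and $I_{1/2}(\mathrm{maj}_3)=3/2$, consistent with the FEI inequality for any $C\geq 1$.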

2. The Entropy/Influence Conjecture and Its Extensions

The FEI conjecture, formulated by Friedgut and Kalai (1996), asserts the existence of a universal constant $C>0$ such that for all Boolean $f$,

$$\mathrm{Ent}_{1/2}(f) \leq C\,I_{1/2}(f).$$

Keller, Mossel, and Schlank extend this to the $\mu_p$-biased product measure: $$\mathrm{Ent}_p(f) \leq C\,p\,|\log p|\,I_p(f).$$ This scaling accommodates the higher "compression" of the Fourier spectrum under highly imbalanced biases. The conjecture captures the intuition that functions with spectrally "spread out" Fourier weight are necessarily sensitive (high influence), and hence structurally complex, while functions with highly concentrated spectra (e.g., juntas) are stable to perturbations (Keller et al., 2011).
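On very small cubes the conjecture can be checked exhaustively. The sketch below (uniform measure, entropy in nats, illustrative names) computes the worst-case ratio $\mathrm{Ent}_{1/2}(f)/I_{1/2}(f)$ over all non-constant functions on 3 bits:

```python
import itertools
import math

def max_entropy_influence_ratio(n):
    """Worst-case Ent(f)/I(f) over all non-constant f: {0,1}^n -> {-1,1}
    under the uniform measure, with entropy in nats."""
    pts = list(itertools.product((0, 1), repeat=n))
    subsets = [frozenset(S) for k in range(n + 1)
               for S in itertools.combinations(range(n), k)]
    best = 0.0
    for vals in itertools.product((-1, 1), repeat=len(pts)):
        if len(set(vals)) == 1:
            continue  # constants have Ent = I = 0
        f = dict(zip(pts, vals))
        coeffs = [sum(f[x] * (-1) ** sum(x[i] for i in S) for x in pts) / len(pts)
                  for S in subsets]
        ent = -sum(c * c * math.log(c * c) for c in coeffs if c)
        inf = sum(len(S) * c * c for S, c in zip(subsets, coeffs))
        best = max(best, ent / inf)
    return best

print(max_entropy_influence_ratio(3))
```

Such exhaustive searches over tiny cubes cannot certify a universal $C$, but they illustrate the quantity the conjecture bounds; the extremal ratio grows with $n$ via compositions, as discussed in Section 5.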

3. Structural Results and Proven Bounds

While the general FEI conjecture remains unproven, multiple significant results establish its validity in restricted regimes and motivate sharp upper and lower bounds.

A. Functions with Low High-Level Fourier Weight

If all Fourier weight of $f$ lies on levels $|S|\leq k$, then

$$\mathrm{Ent}_{1/2}(f) \leq 2k.$$

Moreover, if the mass above level $t$ decays exponentially, i.e., $\sum_{|S|>t}\hat{f}(S)^2 < e^{-c_0 k}\,e^{-a t}$ for all $t\geq k$, sharp support and entropy bounds follow: $\mathrm{Ent}_{1/2}(f) = O(k)$. For these classes, the Fourier spectrum is both support- and entropy-concentrated, corroborating the principle that higher-level mass inflates spectral entropy (Keller et al., 2011).

B. Sharp Inequality Incorporating Coordinate-Wise Terms

Recent advances demonstrate that for every Boolean or real-valued $f$ with $\mathbb{E}[f^2]=1$,

$$\mathrm{Ent}(f) \leq I(f) + \sum_{i=1}^n \mathrm{Inf}_i(f)\,\ln \frac{1}{\mathrm{Inf}_i(f)},$$

where the sum encodes the contribution of the individual influences. This sharp result tightens earlier "mixed" bounds with suboptimal constants, and shows that the entropy excess over $I(f)$ is controlled by a sum that only becomes large when $f$ is highly unbalanced in variable sensitivity (Li et al., 2 Dec 2025).
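Because this bound is unconditional, it can be verified exhaustively on a small cube. A minimal sketch (uniform measure, natural log, illustrative names; the convention $0\cdot\ln(1/0)=0$ is used, and for Boolean $f$ the Fourier influence $\mathrm{Inf}_i(f)=\sum_{S\ni i}\hat{f}(S)^2$ coincides with the flip probability):

```python
import itertools
import math

def check_mixed_bound(n):
    """Check Ent(f) <= I(f) + sum_i Inf_i(f) ln(1/Inf_i(f)) for every
    f: {0,1}^n -> {-1,1}, with Inf_i(f) = sum_{S : i in S} f^(S)^2."""
    pts = list(itertools.product((0, 1), repeat=n))
    subsets = [frozenset(S) for k in range(n + 1)
               for S in itertools.combinations(range(n), k)]
    for vals in itertools.product((-1, 1), repeat=len(pts)):
        f = dict(zip(pts, vals))
        coeffs = {S: sum(f[x] * (-1) ** sum(x[i] for i in S) for x in pts) / len(pts)
                  for S in subsets}
        ent = -sum(c * c * math.log(c * c) for c in coeffs.values() if c)
        inf = [sum(c * c for S, c in coeffs.items() if i in S) for i in range(n)]
        bound = sum(inf) + sum(v * math.log(1 / v) for v in inf if v)
        assert ent <= bound + 1e-9
    return True

print(check_mixed_bound(3))
```

Running this over all $2^{2^3}=256$ functions on 3 bits exercises both the balanced extremes (parity, where $\mathrm{Ent}=0$) and spread-spectrum cases such as AND-type functions.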

4. Key Lemmas and Reduction Principles

Friedgut–Kalai's reduction mechanism allows transference of FEI-type statements between biased and unbiased settings. For $g=\mathrm{Red}(f)$, a suitable encoding of a $p$-biased function as a uniform one, one has

$$I_{1/2}(g) \leq 6\,p\,|\log p|\,I_p(f),$$

with spectral mass preserved in a block-wise sense (Keller et al., 2011). This tensorization approach is reinforced by information-theoretic chain rules: by progressively revealing variables and controlling spectral moments at each step, both spectral entropy and influence can be tracked via conditional expectations, yielding the chain-sum decomposition

$$H(f) = \sum_{i=1}^n H(S_i \mid S_{i-1}).$$

Inequalities such as hypercontractivity (Bonami–Beckner) play a critical role in bounding high-level Fourier mass along the reduction flow.

5. Explicit Examples, Lower Bounds, and Tightness

Sharpness and extremal behavior are studied via explicit constructions. For instance, lexicographic functions and iterated composition yield sequences with

$$\frac{H[f]}{I[f]} \approx 6.45,$$

suggesting that the universal constant $C$ in FEI, if it exists, must satisfy $C\geq 6.45$ for arbitrary Boolean functions (Hod, 2017). Lower bounds in the biased hypercube, such as

$$\mathrm{Ent}_p(f) \geq 4p(1-p)(2p-1)^2 \sum_{i=1}^n \mathrm{Inf}_i^{(p)}(f)^2,$$

provide a complementary constraint, with equality achieved (up to logarithmic factors) by dictators and parity functions (Chang, 11 Nov 2025).
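The biased lower bound can be checked numerically for concrete functions. A sketch (illustrative names) for a dictator at $p=0.3$, using the $\mu_p$-normalized characters from Section 1 and the flip-probability definition of influence:

```python
import itertools
import math

def biased_spectrum(f, n, p):
    """mu_p-biased Fourier coefficients via the normalized characters
    u_S(x) = prod_{i in S} (x_i - p)/sqrt(p(1-p)), with x_i in {0,1}."""
    pts = list(itertools.product((0, 1), repeat=n))
    mu = {x: math.prod(p if b else 1 - p for b in x) for x in pts}
    s = math.sqrt(p * (1 - p))
    coeffs = {}
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            coeffs[S] = sum(mu[x] * f(x) *
                            math.prod((x[i] - p) / s for i in S) for x in pts)
    return coeffs, mu

def entropy_lower_bound_holds(f, n, p):
    """Check Ent_p(f) >= 4p(1-p)(2p-1)^2 sum_i Inf_i^{(p)}(f)^2, where
    Inf_i^{(p)} is the mu_p-probability that flipping bit i changes f."""
    coeffs, mu = biased_spectrum(f, n, p)
    ent = -sum(c * c * math.log(c * c) for c in coeffs.values() if c)
    flip = lambda x, i: x[:i] + (1 - x[i],) + x[i + 1:]
    inf = [sum(w for x, w in mu.items() if f(x) != f(flip(x, i)))
           for i in range(n)]
    rhs = 4 * p * (1 - p) * (2 * p - 1) ** 2 * sum(v * v for v in inf)
    return ent >= rhs - 1e-9

dictator = lambda x: 1 if x[0] else -1
print(entropy_lower_bound_holds(dictator, 3, 0.3))  # True
```

For the dictator, $\mathrm{Ent}_p$ reduces to the binary entropy of $\{(2p-1)^2,\,4p(1-p)\}$, which at $p=0.3$ is about $0.44$ nats against a right-hand side of about $0.13$, showing the logarithmic slack mentioned above.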

For functions restricted to low-degree Fourier support or with exponentially decaying tails, upper and lower bounds are nearly tight, but the conjecture remains open for general functions without such spectral regularity.

6. Philosophical and Open Problems

Several central open directions emerge:

  • Generalization of Bourgain–Kalai Bounds: Existing arguments for entropy concentration under exponential tail decay have not yet been extended to handle merely polynomial decay, which would likely bridge the gap to a full proof of the conjecture (Keller et al., 2011).
  • Necessity of the $\sum_i \mathrm{Inf}_i(f)\ln(1/\mathrm{Inf}_i(f))$ Term: The best current general upper bound is

$$\mathrm{Ent}(f) \leq I(f) + \sum_{i=1}^n \mathrm{Inf}_i(f)\,\ln \frac{1}{\mathrm{Inf}_i(f)}.$$

Removing the second term in favor of $O(I(f))$ is tantamount to resolving the FEI conjecture in its strict form (Li et al., 2 Dec 2025).

  • Combinatorial or Hypercontractive Approaches: The search for a "purely combinatorial" or "energy-based" proof covering all Boolean functions—without the chain-rule or hypercontractive machinery that introduces logarithmic losses—remains open.
  • Product Measures and Biased Extensions: For general $p$-biased product measures, FEI-type statements with the optimal dependence on $p$ (i.e., $O(p\log(1/p)\,I_p(f))$) remain conjectural except in restricted settings.

7. Connections to Broader Theory

The FEI framework interfaces deeply with:

  • Threshold phenomena and random graph properties: For certain monotone properties, the entropy/influence ratio governs the sharpness of random thresholds (Keller et al., 2011).
  • Mansour's Conjecture: If FEI holds, it implies that any Boolean function with small influence is well-approximated by a Fourier polynomial of bounded sparsity, yielding major consequences for learning theory and DNF-approximation (Keller et al., 2011).
  • Junta theory and learning: Functions with concentrated spectral entropy generalize junta-type behavior, centrally important for agnostic learning algorithms.

The current corpus establishes the FEI conjecture for numerous structured function classes—read-once formulas, symmetric functions, bounded-depth decision trees, random linear threshold functions—but the full generality for arbitrary Boolean functions remains an open frontier. The trajectory of research continues to blend combinatorial analysis, information theory, and spectral methods to elucidate the delicate balance between Fourier entropy and sensitivity.
