Fourier-Entropy-Influence Conjecture

Updated 18 November 2025
  • The Fourier-Entropy-Influence Conjecture is a central topic in Boolean function analysis, asserting a universal linear relation between spectral entropy and total influence.
  • Research demonstrates that structured classes such as symmetric functions, read-once formulas, and polynomial-size DNFs satisfy the conjecture via explicit bounds.
  • Recent progress includes improved upper bounds such as $O(I \log I)$ and communication-protocol approaches, yet challenges remain in bridging worst-case and average-case behavior.

The Fourier-Entropy-Influence (FEI) Conjecture is a central open problem in the analysis of Boolean functions, positing a universal linear relation between the spectral entropy and the total influence (average sensitivity) of a Boolean function. Stemming from the work of Friedgut and Kalai (1996), the conjecture asserts deep connections between the “spread” of the Fourier spectrum and the combinatorial sensitivity to input perturbations. The FEI conjecture has significant implications for complexity theory, learning theory, and combinatorics.

1. Formal Statement and Definitions

Let $f:\{-1,1\}^n \to \{-1,1\}$ denote a Boolean function. Its unique Fourier expansion is

$$f(x) = \sum_{S\subseteq[n]} \widehat{f}(S)\,\chi_S(x),$$

where $\chi_S(x) = \prod_{i\in S} x_i$, and Parseval's identity ensures $\sum_S \widehat{f}(S)^2 = 1$, so $\{\widehat{f}(S)^2\}$ is a probability distribution over the $2^n$ subsets of $[n]$. The spectral entropy, encoding the “spread” of this distribution, is

$$H(f) = -\sum_{S} \widehat{f}(S)^2 \log \widehat{f}(S)^2.$$

The (total) influence of $f$ is

$$I(f) = \sum_{i=1}^n \mathrm{Inf}_i(f) = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)^2,$$

with $\mathrm{Inf}_i(f) = \mathbb{P}_x[f(x) \neq f(x^{\oplus i})]$, where $x^{\oplus i}$ denotes $x$ with its $i$-th coordinate flipped.

The Fourier-Entropy-Influence Conjecture states (Das et al., 2011, Wan et al., 2013, Hod, 2017, O'Donnell et al., 2013):

There exists a universal constant $C>0$ such that for all Boolean functions $f:\{-1,1\}^n\to\{-1,1\}$,
$$H(f) \leq C \cdot I(f).$$
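
For concreteness, these quantities can be computed exactly by brute force at small $n$. The following Python sketch (illustrative helper names, base-2 logarithms; the choice of logarithm base only rescales the constant $C$) evaluates the Fourier coefficients, spectral entropy, and total influence of two standard examples and reports the ratio $H(f)/I(f)$:

```python
import itertools
import math

def fourier_coefficients(f, n):
    """Brute-force Fourier coefficients of f: {-1,1}^n -> {-1,1}.

    Returns a dict mapping each subset S (as a frozenset) to
    f_hat(S) = E_x[ f(x) * chi_S(x) ], averaged over all 2^n inputs.
    """
    points = list(itertools.product([-1, 1], repeat=n))
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1))
    coeffs = {}
    for S in subsets:
        total = sum(f(x) * math.prod(x[i] for i in S) for x in points)
        coeffs[frozenset(S)] = total / 2 ** n
    return coeffs

def spectral_entropy(coeffs):
    """H(f) = -sum_S f_hat(S)^2 log2 f_hat(S)^2 (zero-weight terms skipped)."""
    return -sum(c * c * math.log2(c * c) for c in coeffs.values() if c != 0)

def total_influence(coeffs):
    """I(f) = sum_S |S| f_hat(S)^2."""
    return sum(len(S) * c * c for S, c in coeffs.items())

if __name__ == "__main__":
    n = 5
    examples = {
        "MAJ_5": lambda x: 1 if sum(x) > 0 else -1,   # majority on 5 inputs
        "PARITY_5": lambda x: math.prod(x),           # H = 0, I = n: FEI holds trivially
    }
    for name, f in examples.items():
        c = fourier_coefficients(f, n)
        H, I = spectral_entropy(c), total_influence(c)
        ratio = H / I if I > 0 else 0.0
        print(f"{name}: H(f) = {H:.4f}, I(f) = {I:.4f}, H/I = {ratio:.4f}")
```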

2. Proven Cases and Explicit Lower Bounds

Significant progress has been achieved for specific function classes:

  • Symmetric and $d$-part symmetric functions: O'Donnell, Wright, and Zhou established FEI for all symmetric functions with $C\approx 4.6$, and the result extends to $d$-part symmetric families for constant $d$ (Das et al., 2011); a small numerical check appears after this list.
  • Read-once formulas and decision trees: The FEI conjecture holds for read-once formulas of bounded arity, as shown using a composition theorem (O'Donnell et al., 2013).
  • Polynomial-size DNF: A large fraction of polynomial-sized DNF formulas satisfy FEI, contingent on Mansour's conjecture (Das et al., 2011).
  • Extremal classes: Explicit bounds are known for extremely low-influence and high-entropy functions. For $I[f] \le 2^{-cn}$, $H[f] \leq 4\frac{c+1}{c}\, I[f]$, and for $H[f]\geq cn$, $H[f] \leq \frac{1+c}{h^{-1}(c^2)}\, I[f]$ (Shalev, 2018).
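
As referenced above, the symmetric-function case can be sanity-checked numerically. The sketch below (an illustration, not the argument of (Das et al., 2011)) uses a fast Walsh-Hadamard transform over the full truth table, with base-2 logarithms, and confirms that the ratio $H/I$ for small majority functions stays well below the $C \approx 4.6$ proved for the symmetric class:

```python
import math

def wht_weights(tt):
    """Squared Fourier coefficients of a +/-1 truth table of length 2^n.

    Index S is a bitmask naming a subset; only squares are used, so the sign
    convention of the transform does not affect H(f) or I(f).
    """
    a = list(tt)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return [(v / len(a)) ** 2 for v in a]

for n in (3, 5, 7, 9):
    # Truth table of MAJ_n; bit i of the index x encodes x_i = +1.
    tt = [1 if 2 * bin(x).count("1") > n else -1 for x in range(2 ** n)]
    w = wht_weights(tt)
    H = -sum(p * math.log2(p) for p in w if p > 0)
    I = sum(bin(S).count("1") * p for S, p in enumerate(w))
    print(f"MAJ_{n}: H = {H:.3f}, I = {I:.3f}, H/I = {H / I:.3f} (symmetric-class bound ~ 4.6)")
```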

Sharp lower bounds on the optimal constant $C$ in the conjecture have been obtained from explicit constructions:

  • Lexicographic and composition constructions: The largest known ratio, $C = H(f)/I(f) \geq 6.454784$, is achieved by monotone lexicographic functions using recursive and composition-based techniques (Hod, 2017).
  • Stability: Spectral entropy and influence are stable under perturbing a single output; in particular, $|H(f) - H(g)| \leq 12n / \sqrt{2^n}$ if $f$ and $g$ differ at a single input (Hod, 2017).
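
The stability estimate can also be checked empirically. The following sketch (assuming base-2 spectral entropy; if (Hod, 2017) uses a different logarithm base, the bound changes only by a constant factor) flips one output of a random function at $n = 10$ and compares the entropy change against $12n/\sqrt{2^n}$:

```python
import math
import random

def spectral_entropy(tt):
    """Base-2 spectral entropy of a +/-1 truth table via the Walsh-Hadamard transform."""
    a = list(tt)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return -sum((v / len(a)) ** 2 * math.log2((v / len(a)) ** 2) for v in a if v != 0)

random.seed(0)
n = 10
tt = [random.choice([-1, 1]) for _ in range(2 ** n)]
H_f = spectral_entropy(tt)
tt[random.randrange(2 ** n)] *= -1          # g differs from f at exactly one input
H_g = spectral_entropy(tt)
bound = 12 * n / math.sqrt(2 ** n)
print(f"|H(f) - H(g)| = {abs(H_f - H_g):.5f} <= 12n/sqrt(2^n) = {bound:.5f}: "
      f"{abs(H_f - H_g) <= bound}")
```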

A summary of FEI status in explicit classes:

Class | Provable $C$ | Notes
Symmetric functions | $\approx 4.6$ | (Das et al., 2011)
$d$-part symmetric | $\approx 4$–$5$, $d$ constant | (Das et al., 2011)
Monotone, read-once | $\leq 10$ | upper bound (Hod, 2017)
Lexicographic functions | $\geq 6.45$ | lower bound; extremal constructions

3. Average-Case, Random, and Structured Boolean Functions

For a random Boolean function $f:\{-1,1\}^n \to \{-1,1\}$, the FEI conjecture holds with overwhelming probability and with essentially optimal constant $C = 2+\delta$ for any fixed $\delta > 0$ as $n\to\infty$. This is proven by calculating the mean and variance of $I(f)$, showing sharp concentration around $n/2$, and using the trivial entropy bound $H(f)\leq n$ (Das et al., 2011). This indicates that counterexamples, if they exist, are extremely rare; most functions satisfy FEI with ratio $H(f)/I(f)\approx 2$.
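
A quick Monte Carlo experiment illustrates this average-case behavior at small $n$ (a sketch with illustrative parameters; base-2 logarithms):

```python
import math
import random

def entropy_and_influence(tt):
    """Spectral entropy (base 2) and total influence of a +/-1 truth table."""
    a = list(tt)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    w = [(v / len(a)) ** 2 for v in a]
    H = -sum(p * math.log2(p) for p in w if p > 0)
    I = sum(bin(S).count("1") * p for S, p in enumerate(w))
    return H, I

random.seed(1)
n, trials = 10, 20
ratios, influences = [], []
for _ in range(trials):
    tt = [random.choice([-1, 1]) for _ in range(2 ** n)]
    H, I = entropy_and_influence(tt)
    ratios.append(H / I)
    influences.append(I)
print(f"n = {n}: mean I(f) = {sum(influences) / trials:.3f} (compare n/2 = {n / 2}), "
      f"mean H/I = {sum(ratios) / trials:.3f} (concentrates near 2)")
```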

However, structured functions arising in applications (e.g., DNFs, decision trees, threshold functions) do not resemble random functions. For these, the best known upper bounds are substantially larger, and the challenge is to bridge the gap between worst-case and average-case behavior.

4. Improved General Upper Bounds and Technical Progress

Recent work has pushed the best general upper bounds closer to the conjectured linear regime:

  • $O(I\log I)$ bound: A weakened form, $H(f) \leq C'\, I(f) \log(1+I(f))$ for a universal constant $C'$, holds for all Boolean $f$ (Kelman et al., 2019), improving over earlier bounds that were quadratic in the influence.
  • $O\bigl(I(f) + \sum_{k} I_k(f)\log(1/I_k(f))\bigr)$ bound: Han has established

$$H(f) \leq C_1\, I(f) + C_2 \sum_{k=1}^n I_k(f)\log \frac{1}{I_k(f)}$$

for universal constants $C_1, C_2$, where $I_k(f) = \mathrm{Inf}_k(f)$ is the influence of coordinate $k$ (Han, 2023). This bound interpolates between the conjectured linear regime and $O(I\log n)$: it is $O(I)$ when all influences are comparable and at most $O(I\log I)$ in the worst case.

These results draw on moment-methods applied to the restricted spectrum, random restrictions, and combinatorial lemmas tracking the incremental change in entropy as more variables are fixed.
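
To see how these quantities relate on a concrete structured function, the sketch below evaluates $H(f)$, $I(f)$, the Kelman et al. shape $I(f)\log_2(1+I(f))$, and Han's extra term $\sum_k \mathrm{Inf}_k(f)\log_2(1/\mathrm{Inf}_k(f))$ for a small Tribes-style read-once DNF. The Tribes parameters and base-2 logarithms are illustrative assumptions, and the unspecified universal constants $C'$, $C_1$, $C_2$ are not applied:

```python
import math

def wht_weights(tt):
    """Squared Fourier coefficients of a +/-1 truth table, indexed by subset bitmask."""
    a = list(tt)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return [(v / len(a)) ** 2 for v in a]

# Tribes-style read-once DNF: an OR of 3 disjoint ANDs of width 3 (n = 9); an illustrative choice.
width, tribes = 3, 3
n = width * tribes

def f(x):  # x is a bitmask; bit i set means variable i is True
    for t in range(tribes):
        block = (x >> (t * width)) & ((1 << width) - 1)
        if block == (1 << width) - 1:   # every literal in this tribe is True
            return 1
    return -1

w = wht_weights([f(x) for x in range(2 ** n)])
H = -sum(p * math.log2(p) for p in w if p > 0)
I = sum(bin(S).count("1") * p for S, p in enumerate(w))
inf_k = [sum(p for S, p in enumerate(w) if (S >> k) & 1) for k in range(n)]
han_term = sum(p * math.log2(1 / p) for p in inf_k if p > 0)
print(f"H(f)                        = {H:.3f}")
print(f"I(f)                        = {I:.3f}")
print(f"I(f) * log2(1 + I(f))       = {I * math.log2(1 + I):.3f}   # Kelman et al. shape")
print(f"sum_k Inf_k * log2(1/Inf_k) = {han_term:.3f}   # Han's additional term")
```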

5. Communication and Coding Interpretations

The FEI conjecture is equivalent to the existence of efficient communication protocols: given access to a sample $S$ drawn from the Fourier spectral distribution, there is a prefix-free protocol that transmits $S$ using at most $C\cdot I(f)$ expected symbols over a constant-sized alphabet (Wan et al., 2013). This aligns the FEI conjecture with source coding in information theory and provides a route to proving upper bounds on entropy via the construction of communication schemes with cost proportional to total influence.
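
The entropy side of this correspondence is classical source coding: assigning each subset $S$ a codeword of length $\lceil \log_2(1/\widehat{f}(S)^2)\rceil$ satisfies Kraft's inequality, so a prefix-free binary code exists whose expected length is within one bit of $H(f)$; the nontrivial content of the equivalence is constructing protocols whose expected cost is instead bounded by $C\cdot I(f)$. The sketch below illustrates only the entropy side on a small majority function (illustrative code, not a protocol from (Wan et al., 2013)):

```python
import math

def wht_weights(tt):
    """Squared Fourier coefficients (the spectral distribution) of a +/-1 truth table."""
    a = list(tt)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return [(v / len(a)) ** 2 for v in a]

n = 7
tt = [1 if 2 * bin(x).count("1") > n else -1 for x in range(2 ** n)]   # MAJ_7
support = [p for p in wht_weights(tt) if p > 0]

H = -sum(p * math.log2(p) for p in support)
lengths = [math.ceil(math.log2(1 / p)) for p in support]   # Kraft: sum 2^-len <= sum p = 1
expected_length = sum(p * L for p, L in zip(support, lengths))
print(f"H(f) = {H:.3f} bits;  E[Shannon code length] = {expected_length:.3f} bits (within 1 bit of H)")
```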

Additionally, these techniques underlie composition theorems showing that if the FEI inequality holds for constituent functions, it also holds for their composition, greatly enlarging the provable domain (e.g., read-once formulas) (O'Donnell et al., 2013).

6. Min-Entropy Variant, Structural Results, and Open Directions

A weaker version, the Fourier Min-Entropy/Influence (FMEI) conjecture, relates the min-entropy of the spectral distribution, $H_\infty(f) = -\log \max_S \widehat{f}(S)^2$, to the total influence. The current best-known lower bound on the universal constant is $D \geq 128/45 \approx 2.8444$, obtained via palindromic-extension and disjoint-composition techniques (Biswas et al., 2022).
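
Since $H_\infty(f)$ never exceeds $H(f)$, the FMEI conjecture is formally weaker than FEI. The sketch below (illustrative code, base-2 logarithms) computes $H_\infty(f)$, $H(f)$, and $I(f)$ for a small majority function:

```python
import math

def wht_weights(tt):
    """Squared Fourier coefficients of a +/-1 truth table, indexed by subset bitmask."""
    a = list(tt)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return [(v / len(a)) ** 2 for v in a]

n = 7
tt = [1 if 2 * bin(x).count("1") > n else -1 for x in range(2 ** n)]   # MAJ_7
w = wht_weights(tt)
H_min = math.log2(1 / max(w))                                  # Fourier min-entropy
H = -sum(p * math.log2(p) for p in w if p > 0)                 # Shannon spectral entropy
I = sum(bin(S).count("1") * p for S, p in enumerate(w))        # total influence
print(f"MAJ_{n}: H_min = {H_min:.3f} <= H = {H:.3f};  I = {I:.3f}")
```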

Open problems and structural questions include:

  • Tightening bounds in structured function classes, such as read-$k$ DNFs, where even in regular cases FEI is established only up to an $O(\log k)$ factor (Shalev, 2018, Arunachalam et al., 2018).
  • FEI for linear threshold functions: For random LTFs, FEI holds with high probability and explicit constants, but the case for all (not random) LTFs remains unresolved (Chakraborty et al., 2019).
  • Certificate- and composition-based methods: Improved entropy upper bounds in terms of unambiguous certificate complexity and minimum parity-certificate complexity have been obtained (Arunachalam et al., 2018). Toward proving FEI in full, finer structural analysis or new combinatorial/probabilistic methods may be required.

The main bottleneck remains finding explicit functions that saturate or violate FEI, especially functions with extremely imbalanced influence profiles, and closing the gap between the linear and almost-linear bounds in the entropy-influence relationship.

7. Implications, Limitations, and Future Work

FEI implies sharp results in learning theory, such as agnostic learnability of low-influence Boolean classes in time $2^{O(I\log I)}$ (Kelman et al., 2019). If FEI is resolved in the affirmative, it would refute the possibility of "overly flat" polynomial approximators (large-sparsity, low-degree) for Boolean functions, connecting to long-standing questions in circuit complexity and functional analysis (Arunachalam et al., 2018).

Limitations of current techniques include:

  • The best generic bounds are still off by logarithmic factors in $I(f)$.
  • Lexicographic functions appear extremal, with structural properties suggesting any proof or counterexample must engage with such "low-influence, high-entropy" constructions (Hod, 2017).

Ongoing directions:

  • Improving constants and extending FEI to broader function classes (e.g., higher read formulas, arbitrary LTFs).
  • Exploiting random restriction, high-order influence, and advanced isoperimetric/pseudorandomness tools.
  • Seeking functional-analytic or information-theoretic approaches that could circumvent known combinatorial bottlenecks.

The FEI conjecture continues to serve as a focal point for highly technical research in discrete Fourier analysis, Boolean complexity, and probabilistic combinatorics, with substantial progress made but a full resolution still elusive.
