Fourier-Entropy-Influence Conjecture
- The Fourier-Entropy-Influence Conjecture is a central topic in Boolean function analysis, asserting a universal linear relation between spectral entropy and total influence.
- Research demonstrates that structured functions such as symmetric functions, read-once formulas, and polynomial-size DNFs satisfy the conjecture, via explicit bounds.
- Recent progress includes improved upper bounds like O(I log I) and communication protocol approaches, yet challenges remain in bridging worst-case and average-case behaviors.
The Fourier-Entropy-Influence (FEI) Conjecture is a central open problem in the analysis of Boolean functions, positing a universal linear relation between the spectral entropy and the total influence (average sensitivity) of a Boolean function. Stemming from the work of Friedgut and Kalai (1996), the conjecture asserts deep connections between the “spread” of the Fourier spectrum and the combinatorial sensitivity to input perturbations. The FEI conjecture has significant implications for complexity theory, learning theory, and combinatorics.
1. Formal Statement and Definitions
Let $f : \{-1,1\}^n \to \{-1,1\}$ denote a Boolean function. Its unique Fourier expansion is
$$f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\, \chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i,$$
where $\hat{f}(S) = \mathbb{E}_x[f(x)\chi_S(x)]$, and Parseval's identity ensures $\sum_{S} \hat{f}(S)^2 = 1$, so $\{\hat{f}(S)^2\}_{S \subseteq [n]}$ is a probability distribution on subsets. The spectral entropy, encoding the “spread” of this distribution, is
$$H[\hat{f}^2] = \sum_{S \subseteq [n]} \hat{f}(S)^2 \log_2 \frac{1}{\hat{f}(S)^2}.$$
The (total) influence of $f$ is
$$I[f] = \sum_{i=1}^{n} \mathrm{Inf}_i[f] = \sum_{S \subseteq [n]} |S|\, \hat{f}(S)^2,$$
with $\mathrm{Inf}_i[f] = \Pr_x[f(x) \neq f(x^{\oplus i})]$, where $x^{\oplus i}$ denotes $x$ with the $i$-th coordinate flipped.
The Fourier-Entropy-Influence Conjecture states (Das et al., 2011, Wan et al., 2013, Hod, 2017, O'Donnell et al., 2013):
There exists a universal constant $C > 0$ such that for all Boolean functions $f$,
$$H[\hat{f}^2] \le C \cdot I[f].$$
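To make these quantities concrete, here is a minimal brute-force sketch (in Python; the helper names are illustrative and not drawn from any cited paper) that computes the Fourier coefficients, spectral entropy, and total influence of small example functions and reports the ratio $H/I$. It is exponential in $n$ and intended only for a handful of variables; later sketches in this article reuse these helpers.

```python
from itertools import product
import math

def fourier_coeffs(f, n):
    """hat f(S) = E_x[f(x) * prod_{i in S} x_i]; S encoded as a 0/1 indicator tuple."""
    pts = list(product((-1, 1), repeat=n))
    return {
        S: sum(f(x) * math.prod(x[i] for i in range(n) if S[i]) for x in pts) / len(pts)
        for S in product((0, 1), repeat=n)
    }

def spectral_entropy(coeffs):
    """H[f^2] = sum_S hat f(S)^2 * log2(1 / hat f(S)^2), skipping zero coefficients."""
    return sum(c * c * math.log2(1.0 / (c * c)) for c in coeffs.values() if c != 0.0)

def total_influence(coeffs):
    """I[f] = sum_S |S| * hat f(S)^2."""
    return sum(sum(S) * c * c for S, c in coeffs.items())

maj3 = lambda x: 1 if sum(x) > 0 else -1   # majority of 3 bits (H = 2, I = 1.5)
par4 = lambda x: math.prod(x)              # parity of 4 bits  (H = 0, I = 4)

for name, f, n in [("Maj3", maj3, 3), ("Parity4", par4, 4)]:
    c = fourier_coeffs(f, n)
    H, I = spectral_entropy(c), total_influence(c)
    print(f"{name}: H = {H:.3f}, I = {I:.3f}, H/I = {H / I:.3f}")
```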
2. Proven Cases and Explicit Lower Bounds
Significant progress has been achieved for specific function classes:
- Symmetric and $d$-part symmetric functions: O'Donnell, Wright, and Zhou established FEI for all symmetric functions with an explicit constant (see the table below), and the result extends to $d$-part symmetric families for constant $d$ (Das et al., 2011).
- Read-once formulas and decision trees: The FEI conjecture holds for read-once formulas of bounded arity, as shown using a composition theorem (O'Donnell et al., 2013).
- Polynomial-size DNF: A large fraction of polynomial-sized DNF formulas satisfy FEI, contingent on Mansour's conjecture (Das et al., 2011).
- Extremal classes: Explicit bounds are known for extremely low-influence and high-entropy functions, with quantitative trade-offs between spectral entropy and influence in these regimes (Shalev, 2018).
Sharp lower bounds on the optimal constant in the conjecture have been constructed via explicit functions:
- Lexicographic and composition constructions: The largest known entropy-to-influence ratio is achieved by monotone lexicographic functions, built using recursive and composition-based techniques (Hod, 2017); a numerical sketch of the composition idea follows this list.
- Stability: Spectral entropy and influence are stable under perturbing a single output; if two functions differ at a single input, their spectral entropies and total influences differ only by explicitly bounded amounts (Hod, 2017).
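As a rough numerical illustration of the composition-based technique (not Hod's actual lexicographic construction), the following sketch reuses the helpers from the Section 1 sketch to compare the entropy-to-influence ratio of an arbitrary small base function with that of its disjoint self-composition.

```python
# Compare H/I of a base function with that of its disjoint composition base(base, base, base).
# Reuses fourier_coeffs, spectral_entropy, total_influence from the Section 1 sketch;
# the base function is an arbitrary small example, not an extremal construction.
def compose(outer, inner, arity):
    """outer applied to `arity` independent copies of inner on disjoint input blocks."""
    def h(x):
        block = len(x) // arity
        return outer(tuple(inner(x[i * block:(i + 1) * block]) for i in range(arity)))
    return h

base = lambda x: 1 if (x[0] == 1 or (x[1] == 1 and x[2] == 1)) else -1  # x1 OR (x2 AND x3)

for name, f, n in [("base", base, 3), ("base(base,base,base)", compose(base, base, 3), 9)]:
    c = fourier_coeffs(f, n)
    print(f"{name}: H/I = {spectral_entropy(c) / total_influence(c):.3f}")
```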
A summary of FEI status in explicit classes:
| Class | Provable $C$ | Notes |
|---|---|---|
| Symmetric functions | $4.6$ | (Das et al., 2011) |
| $d$-part symmetric | $4-5$ | constant $d$ (Das et al., 2011) |
| Monotone, read-once | upper bound | (Hod, 2017) |
| Lexicographic functions | lower bound | extremal constructions (Hod, 2017) |
3. Average-Case, Random, and Structured Boolean Functions
For a uniformly random Boolean function $f$, the FEI conjecture holds with overwhelming probability and with essentially optimal constant as $n \to \infty$. This is proven by calculating the mean and variance of the total influence, showing sharp concentration around $n/2$, and using the trivial entropy bound $H[\hat{f}^2] \le n$ (Das et al., 2011). This indicates that counterexamples, if they exist, are extremely rare; most functions satisfy FEI with a small constant.
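A small Monte Carlo sketch, reusing the helpers from the Section 1 sketch, illustrates this concentration for uniformly random functions on a modest number of variables; the sample size and seed are arbitrary choices.

```python
# Uniformly random Boolean functions: total influence concentrates near n/2 while the
# spectral entropy is at most n, so the ratio H/I stays bounded (close to 2) with high
# probability. Reuses fourier_coeffs, spectral_entropy, total_influence from Section 1.
import random
from itertools import product

def random_function(n, rng):
    table = {x: rng.choice((-1, 1)) for x in product((-1, 1), repeat=n)}
    return lambda x: table[x]

rng, n, samples = random.Random(0), 8, 20
ratios = []
for _ in range(samples):
    c = fourier_coeffs(random_function(n, rng), n)
    ratios.append(spectral_entropy(c) / total_influence(c))
print(f"n = {n}: mean H/I = {sum(ratios) / len(ratios):.3f} over {samples} samples")
```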
However, structured functions arising in applications (e.g., DNFs, decision trees, threshold functions) do not resemble random functions. For these, the best known upper bounds are substantially larger, and the challenge is to bridge the gap between worst-case and average-case behavior.
4. Improved General Upper Bounds and Technical Progress
Recent work has pushed the best general upper bounds closer to the conjectured linear regime:
- $O(I \log I)$ bound: A weakened form, $H[\hat{f}^2] \le O\!\left(I[f] \cdot \log(1 + I[f])\right)$, is valid for all Boolean $f$ (Kelman et al., 2019). This improves over older bounds that were quadratic in the influence and applies to arbitrary $f$.
- Refined bound: Han has established a sharper upper bound on the spectral entropy in terms of the individual influences, with a universal constant (Han, 2023). This bound interpolates between the conjectured linear regime and $O(I \log I)$, becoming tight (linear in $I[f]$) when all influences are comparable, and is only as large as $O(I \log I)$ in the worst case.
These results draw on moment methods applied to the restricted spectrum, random restrictions, and combinatorial lemmas tracking the incremental change in entropy as more variables are fixed.
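For intuition on the remaining slack, the following sketch (reusing the helpers from the Section 1 sketch; the particular functions are arbitrary illustrations) compares $H[\hat{f}^2]$ with $I[f]$ and with $I[f]\log_2(1 + I[f])$ on a few small examples.

```python
# Numerical comparison of H[f^2] with I[f] and I[f]*log2(1 + I[f]) for a few small
# functions, illustrating the gap addressed by the O(I log I)-type bounds above.
# Reuses fourier_coeffs, spectral_entropy, total_influence from the Section 1 sketch.
import math

tribes_2x2 = lambda x: 1 if ((x[0] == 1 and x[1] == 1) or (x[2] == 1 and x[3] == 1)) else -1
maj5 = lambda x: 1 if sum(x) > 0 else -1
mux = lambda x: x[2] if x[0] == 1 else x[1]        # 1-bit multiplexer on 3 variables

for name, f, n in [("Tribes(2,2)", tribes_2x2, 4), ("Maj5", maj5, 5), ("Mux", mux, 3)]:
    c = fourier_coeffs(f, n)
    H, I = spectral_entropy(c), total_influence(c)
    print(f"{name}: H = {H:.2f}, I = {I:.2f}, I*log2(1+I) = {I * math.log2(1 + I):.2f}")
```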
5. Communication and Coding Interpretations
The FEI conjecture is equivalent to the existence of efficient communication protocols: Given access to a sample $S$ drawn from the Fourier spectral distribution $\{\hat{f}(S)^2\}$, there is a prefix-free protocol to transmit $S$ using at most $C \cdot I[f]$ expected symbols over a constant-sized alphabet (Wan et al., 2013). This aligns the FEI conjecture with source coding in information theory and provides a route to proving upper bounds on entropy via the construction of communication schemes with cost proportional to total influence.
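The source-coding direction can be illustrated concretely: a Huffman code over the spectral distribution attains expected length within one bit of $H[\hat{f}^2]$, so a protocol with expected cost $O(I[f])$ would certify the FEI inequality up to constants. The sketch below (reusing the Section 1 helpers and the Maj3 example; the heap-based Huffman build is a generic textbook construction, not the protocol of Wan et al.) compares the expected code length with the entropy and the influence.

```python
# Build a binary Huffman code over the spectral distribution {hat f(S)^2} and compare
# its expected codeword length with H[f^2] and I[f]. Reuses Section 1's helpers.
import heapq, itertools

def huffman_lengths(probs):
    """Return a dict outcome -> codeword length for a binary Huffman code of `probs`."""
    counter = itertools.count()                 # tie-breaker so the heap never compares dicts
    heap = [(p, next(counter), {s: 0}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

c = fourier_coeffs(maj3, 3)                     # maj3 from the Section 1 sketch
probs = {S: v * v for S, v in c.items() if v != 0.0}
lengths = huffman_lengths(probs)
avg_len = sum(probs[S] * lengths[S] for S in probs)
print(f"H = {spectral_entropy(c):.3f}, E[code length] = {avg_len:.3f}, I = {total_influence(c):.3f}")
```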
Additionally, these techniques underlie composition theorems showing that if the FEI inequality holds for constituent functions, it also holds for their composition, greatly enlarging the provable domain (e.g., read-once formulas) (O'Donnell et al., 2013).
6. Min-Entropy Variant, Structural Results, and Open Directions
A weaker version, the Fourier Min-Entropy/Influence (FMEI) conjecture, relates the min-entropy of the spectral distribution, $H_\infty[\hat{f}^2] = \min_S \log_2\!\big(1/\hat{f}(S)^2\big)$, to the total influence. The current best-known lower bound on its universal constant is obtained via palindromic extension and disjoint composition techniques (Biswas et al., 2022).
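A minimal sketch of the min-entropy quantity, reusing the Section 1 helpers and the example functions from Section 4 (the choices are illustrative only):

```python
# Fourier min-entropy H_min[f^2] = -log2 max_S hat f(S)^2; FMEI asks H_min[f^2] <= C * I[f].
# Reuses fourier_coeffs and total_influence from Section 1, maj3 from Section 1,
# and tribes_2x2 from the Section 4 sketch.
import math

def min_entropy(coeffs):
    return -math.log2(max(v * v for v in coeffs.values()))

for name, f, n in [("Maj3", maj3, 3), ("Tribes(2,2)", tribes_2x2, 4)]:
    c = fourier_coeffs(f, n)
    print(f"{name}: H_min = {min_entropy(c):.3f}, I = {total_influence(c):.3f}")
```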
Open problems and structural questions include:
- Tightening bounds in structured function classes, such as read-$k$ DNFs, where even in regular cases only violations up to certain multiplicative factors can be ruled out (Shalev, 2018; Arunachalam et al., 2018).
- FEI for linear threshold functions: For random LTFs, FEI holds with high probability and explicit constants, but the case for all (not random) LTFs remains unresolved (Chakraborty et al., 2019).
- Certificate- and composition-based methods: Improved entropy upper bounds in terms of unambiguous certificate complexity and minimum parity-certificate complexity have been obtained (Arunachalam et al., 2018). Toward proving FEI in full, finer structural analysis or new combinatorial/probabilistic methods may be required.
The main bottlenecks remain the search for explicit functions saturating or violating FEI, especially those with extremely imbalanced influence profiles, and the challenge of closing the gap between linear and almost-linear bounds in the entropy-influence relationship.
7. Implications, Limitations, and Future Work
FEI implies sharp results in learning theory, such as agnostic learnability of low-influence Boolean classes in time exponential in the influence (Kelman et al., 2019). If FEI is resolved in the affirmative, it would refute the possibility of "overly flat" polynomial approximators (large-sparsity, low-degree) for Boolean functions, connecting to long-standing questions in circuit complexity and functional analysis (Arunachalam et al., 2018).
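As a toy illustration of the underlying mechanism (sparse spectral concentration, not the actual learning algorithm of Kelman et al.), the following sketch reuses the Section 1 helpers and the Section 4 example functions to count how many Fourier coefficients are needed to capture a $(1-\epsilon)$ fraction of the spectral weight for small low-influence examples.

```python
# FEI-type bounds imply the Fourier weight of a low-influence function is concentrated
# on few sets, so keeping the largest squared coefficients until (1 - eps) of the weight
# is captured yields a sparse L2-approximator. Reuses fourier_coeffs and total_influence
# from Section 1, and maj5 / tribes_2x2 from the Section 4 sketch; eps is illustrative.
eps = 0.1
for name, f, n in [("Maj5", maj5, 5), ("Tribes(2,2)", tribes_2x2, 4)]:
    c = fourier_coeffs(f, n)
    weights = sorted((v * v for v in c.values()), reverse=True)
    captured, k = 0.0, 0
    for w in weights:
        if captured >= 1 - eps:
            break
        captured, k = captured + w, k + 1
    print(f"{name}: I = {total_influence(c):.2f}, sets for (1 - eps) of the weight: {k}")
```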
Limitations of current techniques include:
- The best generic bounds are still off by logarithmic factors in the total influence.
- Lexicographic functions appear extremal, with structural properties suggesting any proof or counterexample must engage with such "low-influence, high-entropy" constructions (Hod, 2017).
Ongoing directions:
- Improving constants and extending FEI to broader function classes (e.g., higher read formulas, arbitrary LTFs).
- Exploiting random restriction, high-order influence, and advanced isoperimetric/pseudorandomness tools.
- Seeking functional-analytic or information-theoretic approaches that could circumvent known combinatorial bottlenecks.
The FEI conjecture continues to serve as a focal point for highly technical research in discrete Fourier analysis, Boolean complexity, and probabilistic combinatorics, with substantial progress made but a full resolution still elusive.