
Cross-Entropy Quantum Threshold (XQUATH)

Updated 10 December 2025
  • XQUATH is a complexity-theoretic assumption asserting that no polynomial-time classical algorithm can significantly outperform a trivial cross-entropy estimator for random quantum circuits.
  • It underpins Linear XEB benchmarks by linking improved fidelity scores with the classical hardness of simulating random circuit outputs.
  • Empirical analyses show that XQUATH fails for shallow-depth circuits, prompting a re-evaluation of quantum verification protocols and the search for more robust benchmarks.

The Cross-Entropy Quantum Threshold (XQUATH) assumption occupies a central position in the theoretical foundation of quantum supremacy claims that employ cross-entropy metrics, specifically Linear Cross-Entropy Benchmarking (Linear XEB). XQUATH postulates the average-case hardness, for polynomial-time classical algorithms, of generating samples or estimates from random quantum circuits that exceed a minuscule baseline in cross-entropy fidelity. This assumption directly informs the asymptotic security of linear-XEB–based benchmarks, with implications extending into practical and theoretical quantum verification protocols.

1. Formal Definition and Mathematical Structure

XQUATH (Aaronson–Gunn 2019) is formalized for an $n$-qubit random quantum circuit $U$ of depth $d$, with output probability distribution $p(U,x) = |\langle x|U|0^n\rangle|^2$. Consider any classical estimator $q_C(U,0^n)$, and define the classical gain

$$XQ = 2^{2n} \left\{ \mathbb{E}_U\!\left[ \left(p(U,0^n) - 2^{-n}\right)^2 \right] - \mathbb{E}_U\!\left[ \left(p(U,0^n) - q_C(U,0^n)\right)^2 \right] \right\}$$

The XQUATH assertion is that no polynomial-time classical algorithm $C$ can, on input $(U,0^n)$, output an estimate $q_C(U,0^n)$ such that $XQ = \Omega(2^{-n})$ (Tanggara et al., 2024).

This gain is measured relative to the trivial estimator $q = 2^{-n}$, for which $XQ = 0$ by construction. The requirement is that any improvement over this baseline mean-squared error remain exponentially suppressed for efficient classical algorithms.
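
The baseline term in $XQ$ can be checked numerically at toy sizes: for Haar-random $U$, $p(U,0^n) = |U_{00}|^2$ has $\mathbb{E}[p] = 2^{-n}$ and $\mathbb{E}[p^2] = 2/(2^n(2^n+1))$, so the trivial estimator's mean-squared error is $(2^n-1)/(2^{2n}(2^n+1)) \approx 2^{-2n}$. A minimal Monte Carlo sketch (assuming only NumPy; the gate-level circuit is abstracted into a single Haar-random unitary):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim, rng):
    """Haar-random unitary via QR decomposition of a complex Ginibre matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the ensemble is Haar

n = 4          # toy size
N = 2 ** n
trials = 20000

# p(U, 0^n) = |<0^n|U|0^n>|^2 for each sampled circuit U
ps = np.array([np.abs(haar_unitary(N, rng)[0, 0]) ** 2 for _ in range(trials)])

mse_trivial = np.mean((ps - 1.0 / N) ** 2)   # E[(p - 2^{-n})^2], Monte Carlo
mse_exact = (N - 1) / (N**2 * (N + 1))       # closed form under the Haar measure
```

The baseline error is thus on the order of $2^{-2n}$; XQUATH forbids efficient classical estimators from shaving an additive $\Omega(2^{-3n})$ off it, i.e., achieving $XQ = \Omega(2^{-n})$ after the $2^{2n}$ rescaling.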

2. Motivational Context in Quantum Benchmarking

XQUATH is motivated by the complexity-theoretic gap asserted between classical and quantum sampling in Random Circuit Sampling (RCS) experiments. Specifically, the Linear XEB score is defined as

$$\mathrm{XEB}(U) = 2^n \sum_x p(U,x)\, p_{\mathrm{exp}}(U,x) - 1$$

where $p_{\mathrm{exp}}(U,x)$ is the empirically observed sample frequency (Tanggara et al., 2024). Quantum supremacy claims hinge on demonstrating that the observed fidelity, as measured by XEB, cannot be emulated by classical polynomial-time algorithms.
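
In practice the score is estimated from samples: draw $x_1,\dots,x_k$ from the device and average $2^n p(U,x_i) - 1$. A toy sketch (NumPy only; a Haar-random state stands in for the output state of a deep random circuit, whose probabilities follow the Porter–Thomas distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
N = 2 ** n

# Haar-random state: stand-in for a deep random circuit's output state
psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi /= np.linalg.norm(psi)
p = np.abs(psi) ** 2          # ideal output distribution p(U, x)

k = 50000
samples_quantum = rng.choice(N, size=k, p=p)   # ideal (noiseless) "quantum" sampler
samples_uniform = rng.integers(0, N, size=k)   # trivial classical spoofer

xeb_quantum = N * p[samples_quantum].mean() - 1   # concentrates near 1
xeb_uniform = N * p[samples_uniform].mean() - 1   # concentrates near 0
```

The ideal sampler scores close to 1 (its expectation is $2^n \sum_x p(x)^2 - 1 \approx 1$ for Porter–Thomas statistics), while uniform guessing scores close to 0 — the gap that XQUATH asserts no efficient classical algorithm can close.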

Aaronson and Gunn establish a two-step reduction: (a) high-XEB samplers enable solutions to the “Heavy Output Generation” (XHOG) problem; (b) XHOG is classically hard contingent upon XQUATH. Thus, XQUATH operates as a complexity-theoretic keystone for demonstrating the impossibility of classical spoofing in cross-entropy–based protocols (Tanggara et al., 2024).

3. Classical and Quantum Lower Bounds: Black-Box Models

In oracle or black-box models, rigorous bounds are established for both classical and quantum algorithms. Sampling uniformly from $\{0,1\}^n$ yields an average cross-entropy score $\mathbb{E}_z[X_C(z)] = 1/2^n$, whereas naive quantum sampling from the circuit distribution for Haar-random circuits exhibits $\mathbb{E}_{C,z}[X_C(z)] \approx 2/2^n$ (Kretschmer, 2020).
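
The $2/2^n$ figure is the average collision probability of a Haar-random output distribution, $\mathbb{E}_\psi[\sum_z p(z)^2] = 2/(2^n + 1) \approx 2/2^n$, which a quick numerical check confirms (NumPy only; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2 ** 6
trials = 2000

coll = []
for _ in range(trials):
    # Haar-random state of dimension N
    psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    psi /= np.linalg.norm(psi)
    p = np.abs(psi) ** 2
    coll.append(np.sum(p ** 2))   # E_{z~p}[p(z)] = sum_z p(z)^2

est = float(np.mean(coll))
exact = 2.0 / (N + 1)             # second moment of a Haar-random distribution
```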

The “Quantum Supremacy Tsirelson Inequality” establishes a lower bound on quantum algorithms: any quantum algorithm making $q$ queries to a black-box Haar-random circuit must satisfy

$$q \geq \Omega\!\left(2^{n/4} / \mathrm{poly}(n)\right)$$

to exceed an average cross-entropy score of $(2 + \varepsilon)/2^n$ for $\varepsilon \geq 1/\mathrm{poly}(n)$. A nearly matching $O(2^{n/3})$-query upper bound is achieved by quantum collision-finding protocols (Kretschmer, 2020). Thus, even with quantum black-box access, beating the linear-XEB score attained by naive quantum sampling requires exponential effort.

4. Empirical Spoofing Algorithms and Regimes of Validity

When the circuit depth $d$ is sublinear in $n$, classical algorithms can exploit the light-cone structure: the light-cone size is $L = O(d)$ in 1D and $L = O(d^2)$ in 2D. The spoofing algorithm for linear XEB selects $m \leq \lfloor n/L \rfloor$ output bits whose light cones are disjoint, samples them from their exact marginal distributions, and samples the remaining bits uniformly (Barak et al., 2020).
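
A toy end-to-end check of this idea is sketched below (NumPy only; for tractability the marginals are read off the full statevector rather than from light-cone-restricted simulations, and the brickwork layout, qubit choices, and sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(dim, rng):
    """Haar-random unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

n, N = 8, 2 ** 8
I2 = np.eye(2, dtype=complex)

# Depth-2 1-D brickwork: even layer on pairs (0,1),(2,3),..., odd layer on (1,2),(3,4),...
even = np.eye(1, dtype=complex)
for _ in range(n // 2):
    even = np.kron(even, haar_unitary(4, rng))
odd = I2.copy()
for _ in range((n - 2) // 2):
    odd = np.kron(odd, haar_unitary(4, rng))
odd = np.kron(odd, I2)

U = odd @ even
p = np.abs(U[:, 0]) ** 2               # p(U, x) = |<x|U|0^n>|^2
P = p.reshape([2] * n)                 # axis i = output bit i

# Bits 0 and n-1 have disjoint backward light cones at this depth; the spoofer
# samples them from their exact marginals and the remaining bits uniformly.
J = P.sum(axis=tuple(range(1, n - 1)))   # joint marginal of bits (0, n-1)
m0, m7 = J.sum(axis=1), J.sum(axis=0)    # single-bit marginals

# Expected linear XEB of the spoofed distribution q(x) = m0(x_0) m7(x_7) / 2^{n-2}
xeb_spoof = 4 * (J * np.outer(m0, m7)).sum() - 1
xeb_uniform = p.sum() - 1                # uniform sampling scores exactly 0
```

Because the two light cones are disjoint and the input is a product state, the joint marginal factorizes exactly ($J = m_0 \otimes m_7$), so the spoofed score is generically positive while uniform sampling scores exactly zero.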

The expected linear XEB fidelity for such samples satisfies

$$\mathbb{E}_C [\mathcal{F}_C(A_C)] \geq (1 + 15^{-d})^m - 1$$

Setting $m = \lfloor n / L \rfloor$ yields

$$\mathbb{E}_C [\mathcal{F}_C(A_C)] = \Omega\!\bigl( (n/L) \cdot 15^{-d} \bigr)$$

For shallow circuits (depth $O(\log n)$ in 1D or $O(\sqrt{\log n})$ in 2D), this expectation remains inverse-polynomially large, i.e., at least $1/\mathrm{poly}(n)$ (Barak et al., 2020). Therefore, XQUATH is falsified in these regimes: classical polynomial-time spoofing of linear XEB is effective.
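
Plugging in illustrative numbers shows the scale of the gap (the depth choice and light-cone constant below are assumptions for illustration, not values from the source):

```python
import math

n = 100
d = math.ceil(math.log2(n))        # shallow 1-D regime: depth O(log n)
L = 2 * d                          # 1-D light-cone size O(d); constant is illustrative
m = n // L

# Barak et al. lower bound on the expected linear XEB of the spoofer
bound = (1 + 15.0 ** (-d)) ** m - 1
# bound ~ m * 15^{-d}: inverse-polynomial in n, vastly larger than the
# 2^{-n} scale at which XQUATH forbids classical gains
```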

5. Pauli-Path Spoofing at Sublinear Depth

Empirical and theoretical results reveal that XQUATH fails for Haar-random two-qubit-gate circuits of sublinear depth $d = o(n)$: a Pauli-path algorithm can approximate $p(U,0^n)$ with mean-squared error better than the uniform baseline by $\Omega(2^{-3n})$. Explicitly,

$$XQ = 2^{2n}\left( \mathbb{E}\left[\left(p - 2^{-n}\right)^2\right] - \mathbb{E}\left[\left(p - q\right)^2\right] \right) = \Omega(2^{-n})$$

thereby disproving the assumption for shallow circuits (Tanggara et al., 2024). In parallel, the System Linear Cross-Entropy Quantum Threshold Assumption (sXQUATH) for sXES metrics in mQSVT circuits rests on fourth- and higher-moment analyses; sXQUATH is independently falsified for sublinear-depth circuits by a related class of Pauli-path spoofing algorithms (Tanggara et al., 2024).

The implication is that threshold-based benchmarking via XEB or sXES is fragile against classical simulators in shallow-depth or high-noise regimes.

6. Implications for Quantum Verification Protocols and Open Directions

The conceptual and practical consequences of XQUATH's fragility underscore the limitations of current random circuit sampling benchmarks for “quantum advantage” claims. Passing the cross-entropy threshold test is strictly easier than full-distribution simulation and is vulnerable at depths below the linear regime (Barak et al., 2020).

The need for new verification metrics is evident. A plausible implication is the requirement for benchmarks that go beyond single-sample correlators (e.g., multi-sample statistics or interactive proofs), or those based on stronger cryptographic assumptions such as learning-with-errors. The pursuit of average-case hardness for structured Hamiltonians and resistance to Pauli-path and tensor-network decompositions remains an open research direction (Tanggara et al., 2024).

7. Summary Table: XQUATH, Spoofing, and Benchmarks

| Assumption / Metric | Regime of Validity | Classical Spoofing Feasibility |
| --- | --- | --- |
| XQUATH (Linear XEB) | Deep, generic circuits | Not feasible (conjectured, for depth $\Omega(n)$) |
| XQUATH (Linear XEB) | Shallow ($d = o(n)$) | Efficient spoofing possible |
| sXQUATH (sXES) | Shallow ($d_U = o(n)$) | Efficient spoofing possible |

The XQUATH assumption, while providing a foundational complexity-theoretic framework for cross-entropy–based quantum verification, is now understood to fail generically in shallow circuit regimes. This suggests quantum supremacy proofs must rely on deeper circuits or new benchmarking methodologies that are demonstrably robust against efficient classical spoofing.
