Average Sequence Fidelity in Quantum Processes
- Average Sequence Fidelity (ASF) is an averaged measure of how closely a quantum process approximates its ideal behavior, evaluated over a complete set of input states or over random gate sequences.
- It enables efficient experimental benchmarking by employing techniques like two-designs and classical fidelities to bypass the cost of full quantum process tomography.
- ASF informs practical fault-tolerance assessments by linking average-case fidelity to worst-case error estimates and revealing the impact of non-Markovian noise on quantum operations.
Average Sequence Fidelity (ASF) is a central figure of merit quantifying the aggregate performance of a quantum process—typically a sequence of quantum gates—by averaging the fidelity over all possible input states or over random gate sequences. ASF and closely associated metrics inform both the theoretical analysis and experimental benchmarking of quantum protocols, especially in contexts such as randomized benchmarking, quantum error correction, and quantum process certification. ASF's rigorous quantification and optimal estimation strategies provide the foundation for scalable and reliable quantum device verification.
1. Mathematical Definition and Operational Significance
ASF is mathematically defined as the expected fidelity between the ideal and the experimentally realized quantum process, averaged over an ensemble of input states or gate sequences. For an ideal unitary $U$ and an actual (noisy) process $\mathcal{E}$, the standard state-dependent fidelity is
$$F(\psi) \;=\; \langle\psi|\, U^{\dagger}\, \mathcal{E}\big(|\psi\rangle\langle\psi|\big)\, U \,|\psi\rangle .$$
ASF generalizes this by taking an average (over input states, process realizations, or both), e.g.
$$\overline{F} \;=\; \int d\psi \; \langle\psi|\, U^{\dagger}\, \mathcal{E}\big(|\psi\rangle\langle\psi|\big)\, U \,|\psi\rangle ,$$
with the integral taken over the Haar measure on pure states.
- In gate benchmarking, ASF is the average probability of “survival” (return to the initial state) after applying a randomly chosen sequence of gates and its inverse, analyzed over many random sequences.
ASF provides a concise, scalable figure of merit that bypasses the prohibitive cost of full quantum process tomography, giving experimental access to the accuracy of quantum operations and serving as a critical indicator for the feasibility of fault-tolerant quantum computation (Reich et al., 2013, Lu et al., 2014).
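As a concrete illustration of the definition above, the following sketch estimates the average gate fidelity of a noisy single-qubit Hadamard by sampling Haar-random input states; the depolarizing noise model, its strength, and the sample count are illustrative choices rather than part of any cited protocol.

```python
# Minimal sketch: Monte Carlo estimate of the average gate fidelity
# F(psi) = <psi| U^dag E(|psi><psi|) U |psi>, averaged over Haar-random
# pure states, for a single-qubit Hadamard followed by depolarizing noise.
import numpy as np

rng = np.random.default_rng(0)
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # ideal gate

def noisy_gate(rho, p=0.02):
    """Ideal gate followed by depolarizing noise of strength p (illustrative)."""
    rho_out = U @ rho @ U.conj().T
    return (1 - p) * rho_out + p * np.eye(2) / 2

def haar_state(rng):
    """Haar-random single-qubit pure state via a normalized complex Gaussian."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

fids = []
for _ in range(20000):
    psi = haar_state(rng)
    rho_actual = noisy_gate(np.outer(psi, psi.conj()))
    psi_ideal = U @ psi
    fids.append(np.real(psi_ideal.conj() @ rho_actual @ psi_ideal))

print("Monte Carlo average fidelity:", np.mean(fids))
# For depolarizing noise of strength p the exact average fidelity is 1 - p/2
# (here 0.99), which the sampled mean should approach.
```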
2. Theoretical Frameworks for ASF Estimation
Multiple optimal strategies for estimating ASF have emerged, differing in resource requirements, experimental demands, and statistical properties.
Channel–State Isomorphism and Monte Carlo Methods
In the conventional approach, the average fidelity is related to the entanglement fidelity $F_e$ via
$$\overline{F} \;=\; \frac{d\, F_e + 1}{d + 1},$$
where $d$ is the Hilbert-space dimension.
By expanding the entanglement fidelity in an orthonormal operator basis $\{W_k\}$ (e.g., the Pauli basis), one constructs a relevance distribution $\Pr(k)$ weighted by the expansion coefficients of the ideal gate, enabling the average to be rewritten as
$$F_e \;=\; \sum_k \Pr(k)\, X_k ,$$
where each estimator $X_k$ is accessible from a modest number of measurement settings, so that Monte Carlo sampling over $\Pr(k)$ yields an unbiased estimate.
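As a minimal numerical check of the entanglement-fidelity relation above (assuming, purely for illustration, a single-qubit depolarizing error channel), the sketch below builds the Choi state of the error channel explicitly, reads off $F_e$ as its overlap with the maximally entangled state, and recovers $\overline{F} = (dF_e + 1)/(d + 1)$.

```python
# Sketch: verify F_avg = (d*F_e + 1)/(d + 1) for the single-qubit error
# channel Lambda(rho) = (1 - p) rho + p * Tr(rho) * I/2 (illustrative model).
import numpy as np

d, p = 2, 0.02

def error_channel(rho):
    return (1 - p) * rho + p * np.trace(rho) * np.eye(d) / d

# Maximally entangled state |Phi> = sum_i |ii> / sqrt(d).
phi = np.zeros(d * d, dtype=complex)
for i in range(d):
    phi[i * d + i] = 1 / np.sqrt(d)

# Choi state (I (x) Lambda)(|Phi><Phi|), built block by block.
choi = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E_ij = np.zeros((d, d), dtype=complex)
        E_ij[i, j] = 1.0
        choi[i * d:(i + 1) * d, j * d:(j + 1) * d] = error_channel(E_ij) / d

F_e = np.real(phi.conj() @ choi @ phi)     # entanglement fidelity
F_avg = (d * F_e + 1) / (d + 1)
print("F_e   =", F_e)    # 1 - 3p/4 = 0.985
print("F_avg =", F_avg)  # 1 - p/2  = 0.990
```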
Two-Designs and Classical Fidelity Approaches
More efficient protocols leverage unitary or state $2$-designs, or "classical fidelities" using two mutually unbiased bases:
- Two-design method: The ASF is averaged over a set of states forming a $2$-design, requiring fewer (possibly entangled) inputs.
- Classical fidelities: Requires only product states from two mutually unbiased bases, providing bounds for the ASF and reducing experimental overhead (Reich et al., 2013, Lu et al., 2014).
Monte Carlo sampling over these distributions yields statistically robust estimates with rigorous error bounds, where the number of required samples and repetitions is determined by inequalities such as Chebyshev’s and Hoeffding’s.
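As a single-qubit sketch of the two-design idea: the six Pauli eigenstates form a state $2$-design, so averaging the fidelity over just these six inputs reproduces the Haar average exactly; the gate and noise model below are the same illustrative choices as above.

```python
# Sketch: the six Pauli eigenstates (a single-qubit state 2-design) give the
# same average fidelity as the full Haar average.
import numpy as np

U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # ideal gate
p = 0.02                                                     # depolarizing strength

def noisy_gate(rho):
    rho_out = U @ rho @ U.conj().T
    return (1 - p) * rho_out + p * np.eye(2) / 2

two_design = [                                  # eigenstates of X, Y, Z
    np.array([1, 1]) / np.sqrt(2),  np.array([1, -1]) / np.sqrt(2),
    np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2),
    np.array([1, 0]),               np.array([0, 1]),
]

fids = []
for psi in two_design:
    psi = psi.astype(complex)
    rho_actual = noisy_gate(np.outer(psi, psi.conj()))
    psi_ideal = U @ psi
    fids.append(np.real(psi_ideal.conj() @ rho_actual @ psi_ideal))

print("2-design average fidelity:", np.mean(fids))  # equals Haar value 1 - p/2 = 0.99
```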
3. Resource Scaling and Experimental Complexity
Estimation protocols vary dramatically in their scaling behavior:
the channel–state Monte Carlo, two-design Monte Carlo, and classical-fidelity protocols differ in classical computational cost and in the number and type of input states they require (Reich et al., 2013).
The classic channel–state isomorphism protocol scales poorly with system size. In contrast, the two-design and classical-fidelity methods reduce both the number of input states and the number of experimental settings, significantly lowering experimental and classical computational overhead for arbitrary unitary gates (Reich et al., 2013).
For Clifford gates, owing to the stabilization property (eigenstates of Pauli operators are mapped to eigenstates of Pauli operators), the number of settings and experiments required for characterization becomes independent of system size, i.e., $\mathcal{O}(1)$ in both experimental and classical resources (Reich et al., 2013, Lu et al., 2014).
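A minimal sketch of this stabilization property, using the Hadamard gate as an arbitrary Clifford example: conjugation maps Pauli operators (and hence their eigenstates) onto one another.

```python
# Sketch: Clifford conjugation maps Pauli operators to Pauli operators,
# illustrated with the Hadamard gate H, for which H Z H^dag = X and H X H^dag = Z.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

print(np.allclose(H @ Z @ H.conj().T, X))  # True
print(np.allclose(H @ X @ H.conj().T, Z))  # True
```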
4. ASF and Non-Markovian Noise: Temporal Correlations and Robustness
ASF not only quantifies average gate fidelity but also encodes information regarding temporal noise correlations (non-Markovianity). For randomized benchmarking under non-Markovian noise, the ASF generalizes to
$$\mathcal{F}(m) \;=\; \mathbb{E}_{\mathrm{seq}}\,\mathrm{Tr}\!\left[\, E \,\big(\tilde{\mathcal{G}}_{m+1} \circ \tilde{\mathcal{G}}_{m} \circ \cdots \circ \tilde{\mathcal{G}}_{1}\big)(\rho)\,\right],$$
where $\tilde{\mathcal{G}}_i = \Lambda_i \circ \mathcal{G}_i$ represents the noisy channel sequence (each target gate $\mathcal{G}_i$ followed by its error channel $\Lambda_i$, which may be correlated across time steps), $\rho$ is the initial state, and $E$ the final measurement effect. In non-Markovian scenarios, the decay of the ASF may deviate markedly from a single exponential:
- For non-Markovian processes with classical memory (e.g., a classical common cause, CCC), the ASF is a sum of exponentials:
$$\mathcal{F}(m) \;=\; \sum_k A_k\, \lambda_k^{\,m}.$$
Rather than a single decay constant, one observes multi-exponential behavior; the weights $A_k$ and decay rates $\lambda_k$ provide information about memory effects (Srivastava et al., 15 Oct 2025, Figueroa-Romero et al., 2022).
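The sketch below shows how such behavior might be extracted in practice: a bi-exponential model is fitted to synthetic decay data with SciPy. The generating weights, rates, offset, and noise level are arbitrary illustrative values, not taken from the cited experiments.

```python
# Sketch: fit a bi-exponential ASF model F(m) = A1*l1^m + A2*l2^m + B to
# synthetic randomized-benchmarking-style data and inspect the decay rates.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(m, a1, l1, a2, l2, b):
    return a1 * l1**m + a2 * l2**m + b

m = np.arange(1, 101)
data = model(m, 0.3, 0.95, 0.2, 0.80, 0.5) + rng.normal(scale=0.002, size=m.size)

popt, _ = curve_fit(model, m, data, p0=[0.3, 0.9, 0.2, 0.7, 0.5])
print("fitted decay rates:", popt[1], popt[3])
# Two distinct rates (instead of one) signal temporally correlated noise;
# a fitted rate exceeding 1 would point to non-classical memory (see below).
```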
Operational criteria for witnessing genuine quantum memory include non-monotonic decay or fitted decay constants exceeding $1$ in the ASF; such features are direct indicators of non-classical temporal correlations. In contrast, strictly monotonic ASF curves with all decay rates $\lambda_k \le 1$ can be blind to temporal classical correlations, even when these alter the underlying worst-case error rates.
ASF remains stable under small gate-dependent perturbations, although the form of the perturbative corrections differs between the Markovian and non-Markovian regimes. In all cases, robustness relies on the gate-dependent deviations being suitably small (Figueroa-Romero et al., 2022).
5. ASF Versus Worst-Case Error: Fault-Tolerant Thresholds
ASF, while a meaningful average-case metric, does not directly dictate the worst-case error rate relevant for fault tolerance. In terms of the reported average gate fidelity $\overline{F}$, or equivalently the average error rate $r = 1 - \overline{F}$, and the system dimension $d$, the worst-case error rate $\epsilon$ (as measured by the diamond norm) is upper bounded by (Sanders et al., 2015)
$$\epsilon \;\le\; \sqrt{d(d+1)}\,\sqrt{r}\,.$$
For non-Pauli noise (e.g., coherent or unitary errors), this square-root scaling implies that the worst-case error rate can be orders of magnitude larger than naïvely inferred from $r$.
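A quick numerical illustration of this gap, assuming a single qubit and a reported average fidelity of 0.999 (both values chosen purely for illustration):

```python
# Sketch: worst-case (diamond-norm) bound implied by a reported average
# fidelity via the square-root relation discussed above.
import math

d = 2                    # single qubit
F_avg = 0.999            # reported average gate fidelity (illustrative)
r = 1 - F_avg            # average error rate

worst_case = math.sqrt(d * (d + 1)) * math.sqrt(r)
print(f"average error rate r = {r:.1e}")           # 1.0e-03
print(f"worst-case bound     = {worst_case:.1e}")  # ~7.7e-02
# For coherent (non-Pauli) errors the worst case may thus be nearly two
# orders of magnitude above the average error rate.
```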
To bridge this gap, a "Pauli distance" is introduced, quantifying how far the actual noise channel is from an ideal Pauli channel. Tighter bounds on the worst-case error rate can then be given in terms of the Pauli-twirled error rate together with this distance. Experimental estimation of the Pauli distance therefore enables more informative error assessments from ASF measurements, improving the credibility of claims regarding fault-tolerance thresholds (Sanders et al., 2015, Srivastava et al., 15 Oct 2025).
6. Experimental Protocols and Practical Applications
Protocols for estimating ASF span from scalable, twirling-based sampling (Reich et al., 2013, Lu et al., 2014) to Monte Carlo-based optimizations:
- Unitary 2-designs (notably the Clifford group) symmetrize the noise process and convert fidelity estimation into the measurement of a single parameter (the no-error probability of the resulting depolarizing channel), making scaling to many qubits feasible (Lu et al., 2014); see the twirling sketch after this list.
- Statistical error control is provided by Hoeffding’s or Chebyshev’s inequalities, which assign the number of samples required for a prescribed confidence level.
- Product state twirling and survival probability protocols allow for ancilla-free characterization of bipartite channels, reducing experimental complexity while enabling the extraction of selected process matrix elements (Huang et al., 2014).
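As a sketch of the twirling step referenced in the first bullet above, the snippet below generates the 24 single-qubit Cliffords by closure over H and S (an implementation choice), twirls a small coherent over-rotation error over them, and checks that the result is a depolarizing channel described by a single parameter.

```python
# Sketch: twirling a coherent error over the single-qubit Clifford group
# (a unitary 2-design) yields a depolarizing channel q*rho + (1-q)*I/2.
import numpy as np

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def same_up_to_phase(U, V):
    """For 2x2 unitaries, |Tr(U^dag V)| = 2 iff U and V agree up to a phase."""
    return abs(abs(np.trace(U.conj().T @ V)) - 2) < 1e-9

# Generate the Clifford group (modulo global phase) by closure under H and S.
cliffords, frontier = [I2], [I2]
while frontier:
    new = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            if not any(same_up_to_phase(V, W) for W in cliffords):
                cliffords.append(V)
                new.append(V)
    frontier = new
assert len(cliffords) == 24

theta = 0.1                                  # illustrative coherent over-rotation
R = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
              [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def noise(rho):
    return R @ rho @ R.conj().T

def twirled(rho):
    out = np.zeros((2, 2), dtype=complex)
    for C in cliffords:
        out += C.conj().T @ noise(C @ rho @ C.conj().T) @ C
    return out / len(cliffords)

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)   # test input
out = twirled(rho)
q = np.real((out - I2 / 2)[0, 0] / (rho - I2 / 2)[0, 0])
print("depolarizing parameter q:", q)
print("depolarizing form holds:", np.allclose(out, q * rho + (1 - q) * I2 / 2))
# Twirling leaves the average fidelity unchanged; for the depolarizing form
# it equals q + (1 - q)/d.
```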
In realistic device operation, these methods allow benchmarking with substantially fewer experiments, e.g., 1,656 experiments for 7-qubit Clifford gates at 99% confidence, compared with the far larger number of settings required for full process tomography (Lu et al., 2014).
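For orientation, a generic Hoeffding-style sample-count calculation is sketched below; it does not reproduce the protocol-specific count quoted above, and the precision and confidence targets are illustrative.

```python
# Sketch: number of repetitions N such that a [0, 1]-bounded fidelity estimate
# is within eps of its mean with probability at least 1 - delta (Hoeffding).
import math

def hoeffding_samples(eps, delta):
    """Smallest N with 2 * exp(-2 * N * eps**2) <= delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps**2))

print(hoeffding_samples(eps=0.05, delta=0.01))  # 99% confidence, +/-0.05 -> 1060
```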
7. Advanced Computational Methods for ASF Maximization
Recent algorithmic advances have yielded efficient tools for maximizing ASF over state ensembles, which is essential in Bayesian tomography and quantum mean-state inference. By reformulating ASF maximization as a semidefinite program (SDP) and deriving fast fixed-point algorithms, the optimal average state can be determined rapidly, even for large systems. Analytical upper and lower bounds, as well as near-optimal estimators for commuting ensembles, supplement these computational techniques (Afham et al., 2022).
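For intuition, the sketch below treats the simplified special case of a pure-state ensemble, where maximizing the average fidelity reduces to an eigenvalue problem; the general mixed-state problem requires the SDP and fixed-point methods of the cited work, and the ensemble here is randomly generated purely for illustration.

```python
# Sketch: for pure states, the average fidelity sum_i p_i <psi_i|sigma|psi_i>
# equals Tr[sigma * rho_bar] and is maximized by the projector onto the top
# eigenvector of the ensemble average rho_bar.
import numpy as np

rng = np.random.default_rng(2)

states = []
for _ in range(5):                            # random 2-qubit pure states
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    states.append(v / np.linalg.norm(v))
weights = np.full(5, 1 / 5)

rho_bar = sum(w * np.outer(s, s.conj()) for w, s in zip(weights, states))
eigvals, eigvecs = np.linalg.eigh(rho_bar)
best = eigvecs[:, -1]                         # eigenvector of the largest eigenvalue
sigma_opt = np.outer(best, best.conj())

avg_fid = np.real(sum(w * (s.conj() @ sigma_opt @ s) for w, s in zip(weights, states)))
print("maximal average fidelity:", avg_fid)   # equals the top eigenvalue
print("top eigenvalue of rho_bar:", eigvals[-1])
```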
Such algorithmic strategies enable not just theoretical studies of ASF, but practical, high-throughput implementation in a range of benchmarking and state estimation contexts.
ASF is a rigorously defined, scalable, and operationally significant metric at the intersection of quantum device benchmarking, noise characterization, and fault-tolerance assessment. Its evaluation has evolved through optimal protocols, robust probabilistic error bounds, and advanced computational algorithms, underpinning modern practices in quantum verification and certification. Critical distinctions between average-case (ASF) and worst-case error metrics highlight the importance of supplementary diagnostics, such as the Pauli distance, for credible fault-tolerant quantum computing benchmarks. Robustness of ASF in the presence of temporal noise correlations and gate-dependent errors cements its continued relevance as quantum processors scale to higher complexity.