Blind-spots of Randomized Benchmarking Under Temporal Correlations (2510.13051v1)
Abstract: Randomized benchmarking (RB) is a widely adopted protocol for estimating the average gate fidelity in quantum hardware. However, its standard formulation relies on the assumption of temporally uncorrelated noise, an assumption often violated in current devices. In this work, we derive analytic expressions for the average sequence fidelity (ASF) in the presence of temporally correlated (non-Markovian) noise with classical memory, including cases where such correlations originate from interactions with a quantum environment. We show how the ASF can be interpreted to extract meaningful benchmarking parameters under such noise, and we identify classes of interaction Hamiltonians that render temporal correlations completely invisible to RB. We further provide operational criteria for witnessing temporal correlations due to quantum memory through RB experiments. Importantly, while classical correlations may remain undetectable in the ASF data, they can nonetheless significantly affect worst-case errors quantified by the diamond norm, a metric central to fault-tolerant quantum computing. In particular, we demonstrate that temporal correlations may suppress worst-case errors, highlighting that temporal correlations do not always have detrimental effects on gate performance.
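For context, standard RB under temporally uncorrelated noise fits the ASF to the exponential decay model F(m) = A·p^m + B, where m is the sequence length and p yields the average gate fidelity. The sketch below is a minimal, self-contained illustration of this standard fitting step (not this paper's derivation); the noise parameters, the assumed offset B = 1/2, and the single-qubit dimension d = 2 are all illustrative assumptions.

```python
import math
import random

# Illustrative sketch: fit the standard RB decay model F(m) = A * p**m + B.
# All parameter values here are synthetic assumptions, not data from the paper.
random.seed(0)
p_true = 0.98                       # assumed per-gate depolarizing parameter
ms = list(range(1, 101))            # sequence lengths
# Synthetic ASF data with A = B = 1/2 and small Gaussian shot noise.
asf = [0.5 * p_true**m + 0.5 + random.gauss(0.0, 0.001) for m in ms]

# With the offset B = 1/2 assumed known, log(ASF - B) = log(A) + m*log(p),
# so p follows from an ordinary least-squares line fit in the log domain.
ys = [math.log(f - 0.5) for f in asf]
n = len(ms)
m_bar = sum(ms) / n
y_bar = sum(ys) / n
slope = sum((m - m_bar) * (y - y_bar) for m, y in zip(ms, ys)) \
    / sum((m - m_bar) ** 2 for m in ms)
p_est = math.exp(slope)

# Standard conversion for a qubit (d = 2): average error r = (1 - p)(d - 1)/d.
avg_gate_fidelity = 1 - (1 - p_est) / 2
print(f"estimated p = {p_est:.4f}, average gate fidelity = {avg_gate_fidelity:.4f}")
```

The paper's point is that temporally correlated noise can produce ASF data that this single-exponential model still fits well, so the extracted p can mask correlation effects that matter for diamond-norm worst-case errors.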