Benchmarking Quantum Processor Performance at Scale (2311.05933v1)

Published 10 Nov 2023 in quant-ph

Abstract: As quantum processors grow, new performance benchmarks are required to capture the full quality of the devices at scale. While quantum volume is an excellent benchmark, it focuses on the highest quality subset of the device and so is unable to indicate the average performance over a large number of connected qubits. Furthermore, it is a discrete pass/fail and so is not reflective of continuous improvements in hardware nor does it provide quantitative direction to large-scale algorithms. For example, there may be value in error mitigated Hamiltonian simulation at scale with devices unable to pass strict quantum volume tests. Here we discuss a scalable benchmark which measures the fidelity of a connecting set of two-qubit gates over $N$ qubits by measuring gate errors using simultaneous direct randomized benchmarking in disjoint layers. Our layer fidelity can be easily related to algorithmic run time via $\gamma$, defined in Ref. [berg2022probabilistic], which can be used to estimate the number of circuits required for error mitigation. The protocol is efficient and obtains all the pair rates in the layered structure. Compared to regular (isolated) RB, this approach is sensitive to crosstalk. As an example, we measure an $N=80~(100)$ qubit layer fidelity of 0.26 (0.19) on a 127 qubit fixed-coupling "Eagle" processor (ibm_sherbrooke) and of 0.61 (0.26) on the 133 qubit tunable-coupling "Heron" processor (ibm_montecarlo). This can easily be expressed as a layer-size-independent quantity, the error per layered gate (EPLG), which is here $1.7\times10^{-2}~(1.7\times10^{-2})$ for ibm_sherbrooke and $6.2\times10^{-3}~(1.2\times10^{-2})$ for ibm_montecarlo.


Summary

Overview of Layer Fidelity as a Benchmark for Quantum Processors

The paper "Benchmarking Quantum Processor Performance at Scale" by David C. McKay et al. addresses the need for robust benchmarks to assess the performance of quantum processors as these devices grow in qubit count and complexity. Traditional benchmarks such as quantum volume, while useful, are limited because they capture only the performance of the highest-quality subset of a device. As quantum systems expand, it is crucial to have benchmarks that offer a holistic view of performance across all qubits, not just the best-performing ones, while remaining sensitive to detailed effects such as crosstalk.

Proposed Benchmark: Layer Fidelity

The authors introduce a new benchmark named Layer Fidelity (LF), which assesses the fidelity of a connecting set of two-qubit gates over N qubits. This evaluation is performed using direct randomized benchmarking in disjoint layers. The methodology differs from isolated benchmarking approaches by measuring simultaneous gate errors, capturing crosstalk effects between qubits. Simultaneous benchmarking allows LF to reflect average performance appropriate for full-scale applications, rather than the idealized performance indicators targeted by traditional protocols.
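As a rough illustration of the post-processing such a protocol involves (a minimal sketch, not the authors' exact pipeline): each simultaneously-run direct RB experiment yields an exponential survival decay, whose decay parameter converts to a process fidelity; multiplying the results across the disjoint layers gives a layer fidelity. The function names and the fixed decay floor below are illustrative assumptions.

```python
import math

def fit_rb_decay(lengths, survival, floor):
    """Fit p(m) = A * alpha**m + floor by log-linear least squares.

    `floor` is the assumed asymptote of the decay; a full analysis
    would fit it as a free parameter alongside A and alpha.
    """
    xs = list(lengths)
    ys = [math.log(p - floor) for p in survival]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(slope)  # decay parameter alpha

def process_fidelity(alpha, n_qubits=2):
    """Standard depolarizing-channel conversion: alpha -> process fidelity."""
    d2 = 4 ** n_qubits
    return (1 + (d2 - 1) * alpha) / d2

def layer_fidelity(pair_alphas):
    """Product of pair process fidelities measured simultaneously."""
    lf = 1.0
    for a in pair_alphas:
        lf *= process_fidelity(a)
    return lf
```

This sketch deliberately omits details the paper handles, such as SPAM separation and single-qubit contributions; it only shows the shape of the decay-fit-and-multiply reduction.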

Numerical Results and Analysis

The paper reports LF measurements on two quantum processors, ibm_sherbrooke and ibm_montecarlo. The results show significant differences in layer fidelity between the two devices, with ibm_montecarlo demonstrating better performance, consistent with its tunable-coupling design, which helps suppress crosstalk. Key findings include a layer fidelity over 80 qubits of 0.61 on ibm_montecarlo versus 0.26 on ibm_sherbrooke, underscoring the importance of device architecture for scalability and overall performance.
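The layer-size-independent EPLG numbers quoted in the abstract follow from LF by asking what uniform per-gate error would reproduce the measured layer fidelity. A minimal sketch, under the assumption that the N-qubit layer set contains N - 1 two-qubit gate positions (a chain), reproduces the headline figures:

```python
def eplg(layer_fidelity, n_qubits):
    """Error per layered gate: the uniform per-gate error that would
    yield the measured layer fidelity over n_2q two-qubit gate slots.
    Assumes a chain layout with n_qubits - 1 slots."""
    n_2q = n_qubits - 1
    return 1.0 - layer_fidelity ** (1.0 / n_2q)

# Reproduces the abstract's numbers:
print(eplg(0.26, 80))  # ~1.7e-2 (ibm_sherbrooke, N=80)
print(eplg(0.61, 80))  # ~6.2e-3 (ibm_montecarlo, N=80)
```

Because EPLG normalizes out the layer size, it lets devices of different widths be compared on one scale, which is the point of quoting it alongside LF.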

Comparison with Other Benchmarks

The LF protocol shares some features with mirror randomized benchmarking but offers superior information granularity by revealing errors for individual operations. This is achieved without dependence on high-weight measurements, improving scalability. Another advantage is that LF runs fewer circuits compared to protocols like Pauli benchmarking, offering a more efficient yet comprehensive representation of processor performance.

Implications and Future Developments

The layer fidelity metric provides insights that are crucial for the development of scalable quantum computers. By relating LF to algorithmic run time via the parameter γ of Ref. [berg2022probabilistic], the benchmark offers practical value in estimating error mitigation requirements. This linkage is critical for deploying quantum algorithms on near-term hardware and for guiding hardware development toward better performance on larger systems.

Future work suggested by the authors includes extending LF to include mid-circuit measurements, refining data fitting to capture nuanced details of quantum processes, and exploring LF’s applicability to a variety of quantum circuit structures beyond those tested. Such developments would further solidify LF’s position as a versatile, reliable benchmark for quantum processors, enabling nuanced performance characterization pivotal to both hardware advancement and algorithm optimization.
