Robust design under uncertainty in quantum error mitigation (2307.05302v2)
Abstract: Error mitigation techniques are crucial to achieving near-term quantum advantage. Classical post-processing of quantum computation outcomes is a popular approach to error mitigation, which includes methods such as Zero Noise Extrapolation, Virtual Distillation, and learning-based error mitigation. However, these techniques are limited by the propagation of uncertainty arising from the finite number of measurement shots. In this work, we introduce general and unbiased methods for quantifying the uncertainty and error of error-mitigated observables, based on strategic sampling of error mitigation outcomes. We then extend our approach to optimize the performance and robustness of error mitigation under uncertainty. To illustrate our methods, we apply them to Clifford Data Regression and Zero Noise Extrapolation for the ground state of the XY model, simulated using IBM's Toronto noise model and a depolarizing noise model, respectively. In particular, we optimize the distribution of training circuits for Clifford Data Regression, while for Zero Noise Extrapolation we optimize the choice of noise levels and the allocation of shots. Our methods are readily applicable to any post-processing-based error mitigation approach, but in practice they must not be prohibitively expensive, even though they optimize error mitigation hyperparameters by sampling a statistical distribution of error mitigation outcomes. By leveraging surrogate-based optimization, we show that our methods can efficiently perform optimal design for a Zero Noise Extrapolation implementation. We further demonstrate the transferability of learned Zero Noise Extrapolation hyperparameters to other similar circuits.
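To make the core idea concrete, the following is a minimal sketch, not the paper's implementation, of linear Zero Noise Extrapolation combined with a parametric bootstrap over shot noise to estimate the uncertainty of the mitigated observable. The noise-scale factors, shot allocation, and noisy expectation values below are illustrative assumptions, and the observable is assumed to take values in [-1, 1].

```python
# Illustrative sketch: linear Zero Noise Extrapolation (ZNE) with a
# parametric bootstrap over shot noise to propagate measurement
# uncertainty into the mitigated estimate. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

scale_factors = np.array([1.0, 2.0, 3.0])    # noise-amplification levels
shots = np.array([4000, 3000, 3000])         # shot allocation per level
noisy_means = np.array([0.82, 0.67, 0.55])   # measured <O> at each level


def zne_estimate(lams, values):
    """Fit <O>(lambda) linearly and extrapolate to lambda = 0."""
    slope, intercept = np.polyfit(lams, values, deg=1)
    return intercept


def bootstrap_zne(lams, means, n_shots, n_samples=2000):
    """Resample each noisy expectation value from its shot-noise
    distribution (assuming a +/-1-valued observable) and repeat the
    extrapolation, yielding a distribution of mitigated estimates."""
    sigma = np.sqrt((1.0 - means**2) / n_shots)  # standard error per level
    estimates = np.empty(n_samples)
    for k in range(n_samples):
        resampled = rng.normal(means, sigma)
        estimates[k] = zne_estimate(lams, resampled)
    return estimates


samples = bootstrap_zne(scale_factors, noisy_means, shots)
print(f"ZNE estimate:            {zne_estimate(scale_factors, noisy_means):.3f}")
print(f"bootstrap mean +/- std:  {samples.mean():.3f} +/- {samples.std():.3f}")
```

In this toy setting, the spread of the bootstrap samples plays the role of the uncertainty that the paper's methods quantify, and one could compare such spreads across different choices of noise levels and shot allocations to select a more robust ZNE configuration.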