
Quantum Natural Gradient with Efficient Backtracking Line Search (2211.00615v1)

Published 1 Nov 2022 in quant-ph

Abstract: We consider the Quantum Natural Gradient Descent (QNGD) scheme, which was recently proposed to train variational quantum algorithms. QNGD is Steepest Gradient Descent (SGD) operating on the complex projective space equipped with the Fubini-Study metric. Here we present an adaptive implementation of QNGD based on Armijo's rule, an efficient backtracking line search with proven convergence. The proposed algorithm is tested using noisy simulators on three different models with various initializations. Our results show that Adaptive QNGD dynamically adapts the step size and consistently outperforms the original QNGD, which requires knowledge of the optimal step size to perform competitively. In addition, we show that the additional complexity of performing the line search in Adaptive QNGD is minimal, ensuring that the gains provided by the proposed adaptive strategy dominate any increase in complexity. Our benchmarking also demonstrates that a simple SGD algorithm (implemented in Euclidean space) equipped with the same adaptive scheme can yield performance similar to the QNGD scheme with optimal step size. Our results are yet another confirmation of the importance of differential geometry in variational quantum computations. As a matter of fact, we foresee advanced mathematics playing a prominent role in the NISQ era in guiding the design of faster and more efficient algorithms.
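The adaptive scheme described in the abstract pairs gradient descent with an Armijo backtracking line search: a trial step size is repeatedly shrunk until the objective decreases by at least a fraction of the amount predicted by the gradient. The sketch below illustrates this general idea in plain NumPy on the ordinary Euclidean gradient; all function names and default parameters (`alpha0`, `beta`, `c`) are illustrative assumptions, not the paper's implementation, and the paper's actual method applies the search on the Fubini-Study geometry of a variational quantum circuit.

```python
import numpy as np

def armijo_backtracking(f, grad, x, alpha0=1.0, beta=0.5, c=1e-4, max_halvings=50):
    """One Armijo backtracking line search along the steepest-descent direction.

    Starting from trial step alpha0, shrink by factor beta until
    f(x + alpha * d) <= f(x) + c * alpha * grad(x) @ d   (sufficient decrease).
    """
    g = grad(x)
    d = -g                      # steepest-descent direction
    fx = f(x)
    slope = c * (g @ d)         # c times the directional derivative (negative)
    alpha = alpha0
    for _ in range(max_halvings):
        if f(x + alpha * d) <= fx + alpha * slope:
            break               # sufficient-decrease condition satisfied
        alpha *= beta
    return alpha

def adaptive_sgd(f, grad, x0, steps=200):
    """Gradient descent where each step size is chosen by the Armijo rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        alpha = armijo_backtracking(f, grad, x)
        x = x - alpha * grad(x)
    return x
```

For example, on the quadratic `f(x) = ||x||^2` the search settles on a step that drives the iterate toward the minimizer without any hand-tuned learning rate, which is the practical advantage the abstract attributes to the adaptive scheme.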
