
Improving quantum linear system solvers via a gradient descent perspective (2109.04248v1)

Published 9 Sep 2021 in quant-ph and math.OC

Abstract: Solving systems of linear equations is one of the most important primitives in quantum computing that has the potential to provide a practical quantum advantage in many different areas, including optimization, simulation, and machine learning. In this work, we revisit quantum linear system solvers from the perspective of convex optimization, and in particular gradient descent-type algorithms. This leads to a considerable constant-factor improvement in the runtime (or, conversely, a several orders of magnitude smaller error with the same runtime/circuit depth). More precisely, we first show how the asymptotically optimal quantum linear system solver of Childs, Kothari, and Somma is related to the gradient descent algorithm on the convex function $\|A\vec{x} - \vec{b}\|_2^2$: their linear system solver is based on a truncation in the Chebyshev basis of the degree-$(t-1)$ polynomial (in $A$) that maps the initial solution $\vec{x}_1 := \vec{b}$ to the $t$-th iterate $\vec{x}_t$ in the basic gradient descent algorithm. Then, instead of starting from the basic gradient descent algorithm, we use the optimal Chebyshev iteration method (which can be viewed as an accelerated gradient descent algorithm) and show that this leads to considerable improvements in the quantum solver.
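As a rough classical illustration of the two iterative schemes the abstract contrasts (not the quantum solver itself), the sketch below runs plain gradient descent on $\|A\vec{x} - \vec{b}\|_2^2$ starting from $\vec{x}_1 := \vec{b}$, alongside the classical Chebyshev (semi-)iteration for $A\vec{x} = \vec{b}$ given spectral bounds. The matrix, spectral bounds, step size, and function names are illustrative assumptions; the Chebyshev variant shown is the standard textbook semi-iterative method, standing in for the accelerated scheme the paper builds its quantum solver on.

```python
# Classical sketch only: compares basic gradient descent on
# f(x) = ||A x - b||_2^2 with the Chebyshev iteration for A x = b.
# Assumes A is Hermitian positive definite with known spectral
# bounds [l_min, l_max]; all names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 50
l_min, l_max = 0.1, 1.0

# Random Hermitian positive-definite A with spectrum in [l_min, l_max].
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(rng.uniform(l_min, l_max, size=n)) @ Q.T
b = rng.standard_normal(n)
b /= np.linalg.norm(b)
x_star = np.linalg.solve(A, b)  # reference solution

def gradient_descent(A, b, steps, eta):
    """x_{t+1} = x_t - eta * grad f(x_t), with grad f(x) = 2 A^T (A x - b).
    Starting from x_1 = b, each iterate is a polynomial in A applied to b,
    which is the structure the CKS solver truncates in the Chebyshev basis."""
    x = b.copy()
    for _ in range(steps):
        x = x - 2.0 * eta * A.T @ (A @ x - b)
    return x

def chebyshev_iteration(A, b, steps, l_min, l_max):
    """Standard Chebyshev semi-iteration for A x = b when spectrum(A)
    lies in [l_min, l_max]; behaves like accelerated gradient descent."""
    d = (l_max + l_min) / 2.0
    c = (l_max - l_min) / 2.0
    x = np.zeros_like(b)
    r = b - A @ x
    p = np.zeros_like(b)
    alpha = 0.0
    for t in range(steps):
        if t == 0:
            p = r.copy()
            alpha = 1.0 / d
        else:
            beta = (c * alpha / 2.0) ** 2
            if t == 1:           # first acceleration step uses twice the beta
                beta *= 2.0
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = b - A @ x
    return x

steps = 60
eta = 1.0 / (2.0 * l_max**2)     # safe step: smoothness of f is 2 * l_max^2
x_gd = gradient_descent(A, b, steps, eta)
x_ch = chebyshev_iteration(A, b, steps, l_min, l_max)
print("gradient descent error :", np.linalg.norm(x_gd - x_star))
print("chebyshev iteration err:", np.linalg.norm(x_ch - x_star))
```

With the same iteration count, the Chebyshev iterate is typically orders of magnitude closer to $A^{-1}\vec{b}$ than the gradient descent iterate, which is the classical analogue of the constant-factor and error improvements the paper claims for the quantum solver.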
