
Exponentially Better Bounds for Quantum Optimization via Dynamical Simulation (2502.04285v1)

Published 6 Feb 2025 in quant-ph

Abstract: We provide several quantum algorithms for continuous optimization that do not require any gradient estimation. Instead, we encode the optimization problem into the dynamics of a physical system and coherently simulate the time evolution. This allows us, in certain cases, to obtain exponentially better query upper bounds relative to the best known upper bounds for gradient-based optimization schemes which utilize quantum computers only for the evaluation of gradients. Our first two algorithms can find local optima of a differentiable function $f: \mathbb{R}^N \rightarrow \mathbb{R}$ by simulating either classical or quantum dynamics with friction via a time-dependent Hamiltonian. We show that these methods require $O(N\kappa^2/(h_x^2\epsilon))$ queries to a phase oracle to find an $\epsilon$-approximate local optimum of a locally quadratic objective function, where $\kappa$ is the condition number of the Hessian matrix and $h_x$ is the discretization spacing. In contrast, we show that gradient-based methods require $O(N(1/\epsilon)^{\kappa \log(3)/4})$ queries. Our third algorithm can find the global optimum of $f$ by preparing a classical low-temperature thermal state via simulation of the classical Liouvillian operator associated with the Nosé Hamiltonian. We use results from the quantum thermodynamics literature to bound the thermalization time for the discrete system. Additionally, we analyze barren plateau effects that commonly plague quantum optimization algorithms and observe that our approach is vastly less sensitive to this problem than standard gradient-based optimization. Our results suggest that these dynamical optimization approaches may be far more scalable for future quantum machine learning, optimization, and variational experiments than was widely believed.
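The separation between the two query bounds in the abstract can be made concrete numerically. The sketch below evaluates both asymptotic expressions, $O(N\kappa^2/(h_x^2\epsilon))$ for the dynamical-simulation approach and $O(N(1/\epsilon)^{\kappa \log(3)/4})$ for the gradient-based approach, with constants and lower-order terms dropped; the parameter values are hypothetical, chosen only to illustrate that the gradient-based bound grows exponentially in the condition number $\kappa$ while the dynamical bound grows only quadratically.

```python
import math

def dynamical_bound(N, kappa, h_x, eps):
    """Query bound O(N * kappa^2 / (h_x^2 * eps)) for the dynamical approach
    (constants omitted)."""
    return N * kappa**2 / (h_x**2 * eps)

def gradient_bound(N, kappa, eps):
    """Query bound O(N * (1/eps)^(kappa * log(3)/4)) for the gradient-based
    approach (constants omitted)."""
    return N * (1.0 / eps) ** (kappa * math.log(3) / 4)

# Hypothetical parameter values, for illustration only.
N, kappa, h_x, eps = 100, 10, 0.1, 1e-3
print(f"dynamical: {dynamical_bound(N, kappa, h_x, eps):.3e}")
print(f"gradient:  {gradient_bound(N, kappa, eps):.3e}")
```

With these (illustrative) values the gradient-based bound already exceeds the dynamical one, and the gap widens rapidly as $\kappa$ grows, since $\kappa$ sits in the exponent of $1/\epsilon$ only in the gradient-based expression.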
