Optimizing a Polynomial Function on a Quantum Simulator (1804.05231v2)

Published 14 Apr 2018 in quant-ph

Abstract: The gradient descent method, one of the major methods in numerical optimization, is a key ingredient in many machine learning algorithms. As one of the most fundamental ways to solve optimization problems, it moves the function value along the direction of steepest descent. Because of the vast resource consumption when dealing with high-dimensional problems, a quantum version of this iterative optimization algorithm was recently proposed [arXiv:1612.01789]. Here, we develop this protocol and implement it on a quantum simulator with limited resources. Moreover, a prototypical experiment was performed on a 4-qubit Nuclear Magnetic Resonance quantum processor, demonstrating the iterative optimization of a polynomial function. In each iteration, we achieved an average fidelity of 94\% compared with theoretical calculation via full-state tomography, and the iterate gradually converged to the local minimum. We apply our method to the multidimensional scaling problem, further showing its potential to yield an exponential improvement over classical counterparts. With the rapid development of quantum information technology, our work could provide a subroutine for future practical quantum computers.
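The classical baseline the paper quantizes can be sketched in a few lines. Below is a minimal, purely illustrative gradient descent on a 1-D polynomial; the function f(x) = x^4 - 3x^3 + 2, the learning rate, and the step count are hypothetical choices for demonstration, not the polynomial or parameters used in the experiment.

```python
def grad_descent(df, x0, lr=0.01, steps=1000):
    """Iterate x <- x - lr * f'(x); returns the final iterate."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Example polynomial (hypothetical): f(x) = x**4 - 3*x**3 + 2,
# whose derivative is f'(x) = 4*x**3 - 9*x**2, with a local minimum at x = 9/4.
minimum = grad_descent(lambda x: 4 * x**3 - 9 * x**2, x0=1.0)
print(round(minimum, 3))  # -> 2.25
```

As in the paper's iterative scheme, each step only needs the gradient at the current point; the quantum protocol's advantage lies in evaluating this update for high-dimensional inputs encoded in a quantum state.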

Citations (3)
