Multilevel leapfrogging initialization for quantum approximate optimization algorithm (2306.06986v4)

Published 12 Jun 2023 in quant-ph

Abstract: Recently, Zhou et al. proposed an Interpolation-based (INTERP) strategy to generate initial parameters for the Parameterized Quantum Circuit (PQC) in the Quantum Approximate Optimization Algorithm (QAOA). INTERP produces a guess of the initial parameters at level $i+1$ by applying linear interpolation to the optimized parameters at level $i$, achieving better performance than random initialization (RI). Nevertheless, INTERP incurs substantial running costs for deep QAOA because it requires optimization at every level of the PQC. To address this problem, a Multilevel Leapfrogging Interpolation (MLI) strategy is proposed. MLI produces guesses of the initial parameters for levels $i+1$ through $i+l$ ($l>1$) directly at level $i$, omitting the optimization rounds from level $i+1$ to $i+l-1$. As a result, MLI executes optimization at only a few levels rather than at every level; this operation is referred to as Multilevel Leapfrogging optimization (M-Leap). The performance of MLI is investigated on the MaxCut problem. Compared with INTERP, MLI eliminates most optimization rounds. Remarkably, the simulation results demonstrate that MLI achieves the same quasi-optima as INTERP while consuming only half the running costs required by INTERP. In addition, since MLI uses RI only at level $1$, a greedy-MLI strategy is presented. The simulation results suggest that greedy-MLI has better stability (i.e., a higher average approximation ratio) than INTERP and MLI, in addition to obtaining the same quasi-optima as INTERP. Given its efficiency in finding quasi-optima, the idea of M-Leap might be extended to other training tasks, especially those requiring numerous optimizations, such as training adaptive quantum circuits.
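The interpolation step at the heart of both strategies can be illustrated with a short sketch. The `interp_params` function below implements the INTERP rule of Zhou et al., which pads the optimized level-$p$ parameters with zeros and blends neighboring values to produce a level-$(p+1)$ guess. The `mli_params` function is an assumption for illustration only: it models the multilevel leap as $l$ repeated interpolations without intermediate optimization, which may differ from the paper's exact MLI rule.

```python
import numpy as np

def interp_params(params):
    """INTERP rule (Zhou et al.): from optimized level-p parameters,
    build a level-(p+1) initial guess by linear interpolation,
    with boundary values gamma_0 = gamma_{p+1} = 0."""
    p = len(params)
    padded = np.concatenate(([0.0], params, [0.0]))  # length p + 2
    i = np.arange(1, p + 2)                          # new indices 1..p+1
    # guess_i = (i-1)/p * gamma_{i-1} + (p-i+1)/p * gamma_i
    return (i - 1) / p * padded[:-1] + (p - i + 1) / p * padded[1:]

def mli_params(params, l):
    """Hypothetical MLI sketch: leap from level p to level p + l by
    applying the interpolation rule l times, skipping the optimization
    rounds at the intermediate levels."""
    for _ in range(l):
        params = interp_params(params)
    return params
```

For example, interpolating the level-2 parameters `[1.0, 2.0]` yields the level-3 guess `[1.0, 1.5, 2.0]`, which then seeds the next round of optimization; MLI would instead leap several levels before optimizing again.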

