Reinforcement Learning Assisted Recursive QAOA (2207.06294v2)

Published 13 Jul 2022 in quant-ph, cs.AI, and cs.LG

Abstract: Variational quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) have gained popularity in recent years because they offer the hope of using NISQ devices to tackle hard combinatorial optimization problems. It is known, however, that at low depth certain locality constraints of QAOA limit its performance. To go beyond these limitations, a non-local variant of QAOA, namely recursive QAOA (RQAOA), was proposed to improve the quality of approximate solutions. RQAOA has been studied comparatively less than QAOA and is less well understood; for instance, it is unclear for which families of instances it fails to provide high-quality solutions. However, since the problems being tackled are $\mathsf{NP}$-hard (specifically, the Ising spin model), RQAOA is expected to fail on some instances, raising the question of how to design even better quantum algorithms for combinatorial optimization. In this spirit, we identify and analyze cases where RQAOA fails and, based on this analysis, propose a reinforcement learning enhanced RQAOA variant (RL-RQAOA) that improves upon RQAOA. We show that RL-RQAOA outperforms RQAOA: it is strictly better on the identified instances where RQAOA underperforms, and performs similarly on instances where RQAOA is near-optimal. Our work exemplifies the potentially beneficial synergy between reinforcement learning and quantum (inspired) optimization in the design of new, even better heuristics for hard problems.
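
For readers who want a concrete picture of the recursive scheme the abstract refers to, below is a minimal, hedged Python sketch of an RQAOA-style variable-elimination loop for the Ising spin model, with a softmax edge sampler standing in for the learned selection policy of RL-RQAOA. This is an illustration under stated assumptions, not the authors' implementation: the QAOA expectation values are replaced by a classical placeholder, and every function and variable name (`estimate_correlations`, `select_edge`, `eliminate_variable`, `recursive_qaoa`) is hypothetical.

```python
# Sketch (assumptions, not the paper's code) of the recursive variable-elimination
# loop shared by RQAOA-style methods. Objective: maximize the Ising cost
#   C(z) = sum_{i<j} J_ij z_i z_j = 0.5 * z^T J z,  z_i in {-1, +1},
# with a symmetric coupling matrix J and zero diagonal.

import itertools
import numpy as np

rng = np.random.default_rng(0)


def estimate_correlations(J):
    """Placeholder for QAOA-estimated two-point correlations <Z_i Z_j>.

    A real implementation would optimize a shallow QAOA circuit for the couplings J
    and measure these correlations; a noisy surrogate keeps the sketch runnable.
    """
    noise = 0.05 * rng.standard_normal(J.shape)
    M = np.sign(J) + noise
    return (M + M.T) / 2


def select_edge(M, J, greedy=True, temperature=0.2):
    """Greedy RQAOA rounds the edge with the largest |correlation|; the RL-style
    variant samples an edge from a softmax over the same scores (a stand-in for a
    trainable policy)."""
    edges = np.argwhere(np.triu(np.abs(J), 1) > 0)
    scores = np.array([abs(M[i, j]) for i, j in edges])
    if greedy:
        k = int(np.argmax(scores))
    else:
        logits = scores / temperature
        p = np.exp(logits - logits.max())
        k = rng.choice(len(edges), p=p / p.sum())
    i, j = edges[k]
    sign = 1 if M[i, j] >= 0 else -1
    return int(i), int(j), sign


def eliminate_variable(J, i, j, sign):
    """Impose z_i = sign * z_j, fold variable i's couplings into j, and return the
    reduced matrix plus the constant energy contribution of the removed edge."""
    Jn = J.copy()
    Jn[j, :] += sign * J[i, :]
    Jn[:, j] += sign * J[:, i]
    offset = 0.5 * Jn[j, j]  # the (i, j) coupling becomes a constant after rounding
    Jn[j, j] = 0.0
    keep = [k for k in range(J.shape[0]) if k != i]
    return Jn[np.ix_(keep, keep)], offset


def recursive_qaoa(J, n_stop=4, greedy=True):
    """Shrink the instance one variable at a time, then brute-force what is left.
    (Bookkeeping to map the reduced solution back to the original variables is
    omitted for brevity.)"""
    offset = 0.0
    while J.shape[0] > n_stop and np.any(np.triu(np.abs(J), 1) > 0):
        M = estimate_correlations(J)
        i, j, sign = select_edge(M, J, greedy=greedy)
        J, c = eliminate_variable(J, i, j, sign)
        offset += c
    best = max(0.5 * np.array(z) @ J @ np.array(z)
               for z in itertools.product([-1, 1], repeat=J.shape[0]))
    return best + offset


if __name__ == "__main__":
    n = 10
    A = np.triu(rng.choice([-1.0, 0.0, 1.0], size=(n, n)), 1)
    J = A + A.T
    print("greedy elimination :", recursive_qaoa(J.copy(), greedy=True))
    print("sampled elimination:", recursive_qaoa(J.copy(), greedy=False))
```

Running the sketch with `greedy=True` mimics the standard greedy RQAOA rule (always round the strongest correlation), while `greedy=False` makes the edge choice stochastic, which is the degree of freedom a reinforcement learning agent would learn to exploit; training such a sampler with a policy-gradient method is, roughly, where the RL in RL-RQAOA enters.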
