Iterative Layerwise Training for Quantum Approximate Optimization Algorithm (2309.13552v1)

Published 24 Sep 2023 in quant-ph

Abstract: The capability of the quantum approximate optimization algorithm (QAOA) to solve combinatorial optimization problems has been intensively studied in recent years due to its suitability for the quantum-classical hybrid regime. Despite difficulties innate to variational quantum algorithms (VQAs), such as barren plateaus and local minima, QAOA remains one of the algorithms well suited to current noisy intermediate-scale quantum (NISQ) devices. Recent works have shown that QAOA performance depends strongly on the initial parameters, which motivates initialization strategies that provide good starting points for the optimization. Optimization strategies, on the other hand, target the optimization loop of QAOA rather than parameter initialization. Rather than offering absolute advantages, these strategies typically impose trade-offs on performance. One such example is the layerwise optimization strategy, in which the QAOA parameters are optimized layer by layer instead of all at once. The layerwise strategy has a lower total cost than full optimization, in exchange for a lower approximation ratio. In this work, we propose an iterative layerwise optimization strategy and explore how far the optimization cost of solving problems with QAOA can be reduced. Using numerical simulations, we find that combining the iterative layerwise strategy with proper initialization strategies significantly reduces the optimization cost in exchange for only a minor reduction in the approximation ratio. We also show that in some cases the approximation ratio obtained with the iterative layerwise strategy is even higher than that of full optimization.
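To make the layerwise idea described in the abstract concrete, below is a minimal sketch of layerwise QAOA optimization on a small MaxCut instance, written as a pure NumPy/SciPy statevector simulation. The 4-cycle graph, the COBYLA optimizer, the fixed starting guesses, and the function names (`layerwise_qaoa`, `qaoa_expectation`) are all illustrative assumptions, not the paper's actual setup; the sketch grows the circuit one layer at a time and optimizes only the newly appended (gamma, beta) pair, which is the basic layerwise strategy the abstract contrasts with full optimization. The paper's iterative layerwise variant and its initialization strategies are not specified in the abstract and are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical small MaxCut instance (a 4-cycle); the paper's benchmark graphs are not given here.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
dim = 2 ** n

# Diagonal of the MaxCut cost function C(z) = sum_{(i,j)} (1 - z_i z_j) / 2, z_i in {+1, -1}.
bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
z = 1 - 2 * bits
cost_diag = sum((1 - z[:, i] * z[:, j]) / 2 for i, j in edges)

def apply_mixer(state, beta):
    """Apply exp(-i * beta * X) to every qubit of the statevector."""
    c, s = np.cos(beta), -1j * np.sin(beta)
    for q in range(n):
        psi = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
        state = (c * psi + s * psi[:, ::-1, :]).reshape(dim)
    return state

def qaoa_expectation(params):
    """Return -<C> for the depth-p QAOA state with params = (gamma_1..p, beta_1..p)."""
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)    # |+>^n initial state
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * cost_diag) * state      # cost (phase) layer
        state = apply_mixer(state, beta)                     # mixer layer
    return -np.real(np.vdot(state, cost_diag * state))       # negated: we maximize C

def layerwise_qaoa(p_max):
    """Layerwise strategy sketch: add one layer at a time and optimize only the
    new (gamma, beta) pair, keeping all previously optimized layers fixed."""
    fixed = []  # interleaved [gamma_1, beta_1, gamma_2, beta_2, ...]
    for _ in range(p_max):
        def objective(new_pair):
            gammas = [*fixed[0::2], new_pair[0]]
            betas = [*fixed[1::2], new_pair[1]]
            return qaoa_expectation(np.array(gammas + betas))
        res = minimize(objective, x0=np.array([0.1, 0.1]), method="COBYLA")
        fixed += list(res.x)
    params = np.array(fixed[0::2] + fixed[1::2])
    return params, -qaoa_expectation(params)

params, value = layerwise_qaoa(p_max=3)
print("layerwise <C> =", value, " | max cut =", cost_diag.max())
```

Because each step optimizes only two parameters, the number of circuit evaluations per step stays roughly constant as the depth grows, which is the source of the cost reduction (and the potential approximation-ratio trade-off) discussed in the abstract.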
