DyPP: Dynamic Parameter Prediction to Accelerate Convergence of Variational Quantum Algorithms (2307.12449v3)
Abstract: The exponential run time of quantum simulators on classical machines, together with the long queue times and high costs of real quantum devices, presents significant challenges to the efficient optimization of Variational Quantum Algorithms (VQAs) such as the Variational Quantum Eigensolver (VQE), the Quantum Approximate Optimization Algorithm (QAOA), and Quantum Neural Networks (QNNs). To address these limitations, we propose a new approach, DyPP (Dynamic Parameter Prediction), which accelerates the convergence of VQAs by exploiting regular trends in the parameter weights to predict parameter updates. We introduce two techniques for optimal prediction performance, namely Naive Prediction (NaP) and Adaptive Prediction (AdaP). Through extensive experimentation and the training of multiple QNN models on various datasets, we demonstrate that DyPP offers a speedup of approximately $2.25\times$ over standard training methods, while also achieving higher accuracy (by up to $2.3\%$) and lower loss (by up to $6.1\%$) with low storage and computational overheads. We also evaluate DyPP's effectiveness in VQE for molecular ground-state energy estimation and in QAOA for graph MaxCut. Our results show that, on average, DyPP yields speedups of up to $3.1\times$ for VQE and $2.91\times$ for QAOA compared to traditional optimization techniques, while using up to $3.3\times$ fewer shots (i.e., repeated circuit executions). Even under hardware noise, DyPP outperforms existing optimization techniques, delivering up to $3.33\times$ speedup with $2.5\times$ fewer shots, thereby enhancing the efficiency of VQAs.
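To make the core idea concrete, the sketch below illustrates trend-based parameter prediction in the spirit described by the abstract: record the recent trajectory of each trainable parameter, fit a simple trend to it, and extrapolate forward to "jump ahead" of the optimizer. This is a minimal, hypothetical illustration, not the paper's exact NaP or AdaP update rules; the function name `predict_params`, the polynomial fit, the window size, and the lookahead are all assumptions made for the example.

```python
# Hypothetical sketch of trend-based parameter prediction (illustrative only,
# not the paper's exact NaP/AdaP formulas).
import numpy as np

def predict_params(history, degree=1, lookahead=2):
    """Extrapolate each parameter from its recent trajectory.

    history:   array of shape (window, n_params), the parameter values from
               the last `window` optimizer steps (most recent last).
    degree:    order of the polynomial fitted to each parameter's trend.
    lookahead: how many steps ahead to extrapolate.
    """
    window, n_params = history.shape
    steps = np.arange(window)
    predicted = np.empty(n_params)
    for j in range(n_params):
        coeffs = np.polyfit(steps, history[:, j], degree)          # fit the trend
        predicted[j] = np.polyval(coeffs, window - 1 + lookahead)  # extrapolate
    return predicted

# Toy usage: interleave ordinary gradient steps with occasional predictions.
rng = np.random.default_rng(0)
params = rng.normal(size=4)
history = []
for step in range(20):
    grad = np.cos(params)            # stand-in for a VQA gradient estimate
    params = params - 0.1 * grad     # ordinary optimizer update
    history.append(params.copy())
    if step >= 4 and step % 5 == 0:  # every few steps, jump ahead via prediction
        params = predict_params(np.array(history[-5:]))
        history.append(params.copy())
```

In an actual VQA workflow, each predicted update of this kind would replace one or more shot-hungry gradient evaluations on the simulator or quantum hardware, which is where the reported savings in iterations and circuit executions would come from.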