Multidimensional Fourier series with quantum circuits (2302.03389v3)

Published 7 Feb 2023 in quant-ph

Abstract: Quantum machine learning is the field that aims to integrate machine learning with quantum computation. In recent years, it has emerged as an active research area with the potential to bring new insights to classical machine learning problems. One of the challenges in the field is to explore the expressibility of parametrized quantum circuits and their ability to be universal function approximators, as classical neural networks are. Recent works have shown that quantum supervised learning models can fit any one-dimensional Fourier series, proving their universality. However, models for multidimensional functions have not been explored at the same level of detail. In this work, we study the expressibility of various types of circuit ansatzes that generate multidimensional Fourier series. We find that, for some ansatzes, the degrees of freedom required for fitting such functions grow faster than the degrees of freedom available in the Hilbert space generated by the circuits. For example, single-qudit models have limited power to represent arbitrary multidimensional Fourier series. Despite this, we show that we can enlarge the Hilbert space of the circuit by using more qudits or higher local dimensions to meet the degrees-of-freedom requirements, thus ensuring the universality of the models.
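To make the mechanism concrete, the sketch below (not from the paper) implements the standard one-dimensional data re-uploading construction from the quantum machine learning literature: a single-qubit circuit alternating trainable RY rotations with the encoding gate S(x) = exp(-i x Z / 2). The gate choices, Pauli-Z readout, and all names here are illustrative assumptions, not the paper's ansatzes; the point is only that L encoding repetitions yield a Fourier series in x with integer frequencies |k| <= L, which the script checks numerically with an FFT.

```python
import numpy as np

# Minimal single-qubit data re-uploading model (illustrative, not the
# paper's ansatz). Encoding gate: S(x) = exp(-i x Z / 2); trainable
# gates: RY rotations; readout: <Z>. With L encoding repetitions the
# output is a real Fourier series in x with integer frequencies
# k = -L, ..., L (Schuld, Sweke & Meyer, Phys. Rev. A 103, 032430 (2021)).

Z = np.diag([1.0, -1.0]).astype(complex)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def s_gate(x):
    # exp(-i x Z / 2) is diagonal in the computational basis
    return np.diag([np.exp(-1j * x / 2), np.exp(1j * x / 2)])

def model(x, thetas):
    """f(x) = <0| U(x)^dag Z U(x) |0>, U(x) = RY(th_L) S(x) ... S(x) RY(th_0)."""
    psi = np.array([1.0, 0.0], dtype=complex)
    psi = ry(thetas[0]) @ psi
    for th in thetas[1:]:
        psi = ry(th) @ (s_gate(x) @ psi)
    return float(np.real(psi.conj() @ Z @ psi))

# Inspect the model's Fourier spectrum on a uniform grid over [0, 2*pi).
rng = np.random.default_rng(0)
L = 3                                    # number of encoding repetitions
thetas = rng.uniform(0, 2 * np.pi, L + 1)
N = 64
xs = np.linspace(0, 2 * np.pi, N, endpoint=False)
coeffs = np.fft.fft([model(x, thetas) for x in xs]) / N
freqs = np.fft.fftfreq(N, d=1 / N)       # integer frequencies
print(sorted(int(f) for f in freqs[np.abs(coeffs) > 1e-10]))
# Only frequencies with |k| <= L survive, e.g. [-3, -2, -1, 0, 1, 2, 3].
```

This also makes the abstract's counting argument easy to state: a D-dimensional Fourier series with degree L per variable has roughly (2L+1)^D independent real coefficients, while the expectation values of a single qudit of local dimension q supply only O(q^2) real degrees of freedom, so q (or the number of qudits) must grow with D for the model to remain universal.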
