
Quantum machine learning beyond kernel methods (2110.13162v3)

Published 25 Oct 2021 in quant-ph, cs.AI, cs.LG, and stat.ML

Abstract: Machine learning algorithms based on parametrized quantum circuits are prime candidates for near-term applications on noisy quantum computers. In this direction, various types of quantum machine learning models have been introduced and studied extensively. Yet, our understanding of how these models compare, both mutually and to classical models, remains limited. In this work, we identify a constructive framework that captures all standard models based on parametrized quantum circuits: that of linear quantum models. In particular, we show using tools from quantum information theory how data re-uploading circuits, an apparent outlier of this framework, can be efficiently mapped into the simpler picture of linear models in quantum Hilbert spaces. Furthermore, we analyze the experimentally relevant resource requirements of these models in terms of qubit number and amount of data needed to learn. Based on recent results from classical machine learning, we prove that linear quantum models must utilize exponentially more qubits than data re-uploading models in order to solve certain learning tasks, while kernel methods additionally require exponentially more data points. Our results provide a more comprehensive view of quantum machine learning models as well as insights into the compatibility of different models with NISQ constraints.
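To make the distinction at the heart of the abstract concrete, here is a minimal single-qubit sketch, not taken from the paper, contrasting a linear quantum model (the data is encoded once into a state, then a parametrized observable is measured) with a data re-uploading circuit (trainable rotations are interleaved with repeated encodings of the same input). The gate choices (RY/RZ rotations, Pauli-Z readout) are illustrative assumptions, and the simulation is plain NumPy state-vector arithmetic.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)           # |0> state
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli-Z observable

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(phi):
    """Single-qubit rotation about the Z axis."""
    return np.array([[np.exp(-1j * phi / 2), 0],
                     [0, np.exp(1j * phi / 2)]], dtype=complex)

def linear_model(x, theta):
    # Linear quantum model: encode x once into |phi(x)>, then measure
    # a variationally rotated observable: f(x) = <phi(x)| M(theta) |phi(x)>.
    state = ry(x) @ ket0                          # fixed feature map
    obs = ry(theta).conj().T @ Z @ ry(theta)      # parametrized measurement
    return float(np.real(state.conj() @ obs @ state))

def reuploading_model(x, thetas):
    # Data re-uploading model: alternate trainable layers with repeated
    # encodings of the same x. With L encoding layers, the output is a
    # Fourier series in x containing integer frequencies up to L.
    state = ry(thetas[0]) @ ket0
    for theta in thetas[1:]:
        state = rz(x) @ state                     # re-upload the data
        state = ry(theta) @ state                 # trainable layer
    return float(np.real(state.conj() @ Z @ state))

print(linear_model(0.3, theta=0.7))
print(reuploading_model(0.3, thetas=[0.1, 0.4, 0.9]))
```

The extra expressivity of re-uploading comes from the growing frequency spectrum; the paper's mapping result shows that the same function family can nonetheless be realized by a linear model in a larger Hilbert space, at the cost of more qubits.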

Authors (6)
  1. Sofiene Jerbi (19 papers)
  2. Lukas J. Fiderer (16 papers)
  3. Hendrik Poulsen Nautrup (22 papers)
  4. Jonas M. Kübler (10 papers)
  5. Hans J. Briegel (67 papers)
  6. Vedran Dunjko (97 papers)
Citations (130)
