
Reinforcement learning-based architecture search for quantum machine learning (2406.02717v3)

Published 4 Jun 2024 in quant-ph and cs.LG

Abstract: Quantum machine learning models use encoding circuits to map data into a quantum Hilbert space. While it is well known that the architecture of these circuits significantly influences core properties of the resulting model, they are often chosen heuristically. In this work, we present a novel approach using reinforcement learning techniques to generate problem-specific encoding circuits to improve the performance of quantum machine learning models. By specifically using a model-based reinforcement learning algorithm, we reduce the number of necessary circuit evaluations during the search, providing a sample-efficient framework. In contrast to previous search algorithms, our method uses a layered circuit structure that significantly reduces the search space. Additionally, our approach can account for multiple objectives such as solution quality, hardware restrictions and circuit depth. We benchmark our tailored circuits against various reference models, including models with problem-agnostic circuits and classical models. Our results highlight the effectiveness of problem-specific encoding circuits in enhancing QML model performance.
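To make the search loop concrete, below is a minimal illustrative sketch in Python. It is not the paper's actual model-based algorithm: the layer vocabulary (LAYER_ACTIONS), the evaluate_circuit placeholder, and the use of simple tabular Q-learning are all assumptions introduced here for illustration. What it does show is the core structure the abstract describes: an agent builds an encoding circuit layer by layer (the layered structure that shrinks the search space) and is rewarded by a multi-objective signal combining solution quality with a circuit-depth penalty.

```python
import random
from collections import defaultdict

# Hypothetical layer vocabulary: each action appends one layer to the circuit.
LAYER_ACTIONS = ["rx_encoding", "rz_encoding", "cz_entangling", "ry_trainable"]
MAX_LAYERS = 6

def evaluate_circuit(layers):
    """Placeholder for the expensive step: train a QML model that uses the
    encoding circuit described by `layers` and return a validation score in
    [0, 1]. Here it is faked deterministically so the sketch is runnable."""
    random.seed(hash(tuple(layers)) % (2**32))
    return random.random()

def reward(layers):
    """Multi-objective reward: solution quality minus a depth penalty,
    mirroring the idea of trading accuracy against circuit depth."""
    depth_penalty = 0.02 * len(layers)
    return evaluate_circuit(layers) - depth_penalty

# Tabular Q-learning over (circuit-so-far, next-layer) pairs -- a deliberately
# simple model-free stand-in for the paper's sample-efficient model-based agent.
Q = defaultdict(float)
EPSILON, ALPHA, GAMMA = 0.2, 0.5, 0.9

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(LAYER_ACTIONS)
    return max(LAYER_ACTIONS, key=lambda a: Q[(state, a)])

best_layers, best_reward = [], float("-inf")
for episode in range(200):
    layers = []
    for _ in range(MAX_LAYERS):
        state = tuple(layers)
        action = choose_action(state)
        layers.append(action)
        # Reward is only observed once the circuit is complete.
        r = reward(layers) if len(layers) == MAX_LAYERS else 0.0
        next_state = tuple(layers)
        next_best = max(Q[(next_state, a)] for a in LAYER_ACTIONS)
        Q[(state, action)] += ALPHA * (r + GAMMA * next_best - Q[(state, action)])
    final = reward(layers)
    if final > best_reward:
        best_layers, best_reward = layers, final

print("best circuit:", best_layers, "reward: %.3f" % best_reward)
```

In the paper's setting, each call to the real counterpart of evaluate_circuit means training and validating a quantum model, which is why the authors use a model-based agent to keep the number of such calls small; the tabular agent above would be far less sample-efficient but exposes the same state/action/reward decomposition.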

Authors (4)
  1. Frederic Rapp (3 papers)
  2. David A. Kreplin (4 papers)
  3. Marco Roth (15 papers)
  4. Marco F. Huber (47 papers)
Citations (1)
