
Performance analysis of a hybrid agent for quantum-accessible reinforcement learning (2107.14001v1)

Published 29 Jul 2021 in quant-ph

Abstract: In the last decade, quantum machine learning has provided fascinating and fundamental improvements to supervised, unsupervised, and reinforcement learning. In reinforcement learning, a so-called agent is challenged to solve a task given by some environment. The agent learns to solve the task by exploring the environment and exploiting the rewards it receives from the environment. For some classical task environments, such as deterministic strictly epochal environments, an analogous quantum environment can be constructed which allows rewards to be found quadratically faster by applying quantum algorithms. In this paper, we analytically study the behavior of a hybrid agent which combines this quadratic speedup in exploration with the policy update of a classical agent. As a result, the hybrid agent learns faster than the classical agent. We demonstrate that if the classical agent needs on average $\langle J \rangle$ rewards and $\langle T \rangle_c$ epochs to learn how to solve the task, the hybrid agent will take $\langle T \rangle_q \leq \alpha \sqrt{\langle T \rangle_c \langle J \rangle}$ epochs on average. Here, $\alpha$ denotes a constant which is independent of the problem size. Additionally, we prove that if the environment allows for at most $\alpha_o k_\text{max}$ sequential coherent interactions, e.g. due to noise effects, an improvement given by $\langle T \rangle_q \approx \alpha_o\langle T \rangle_c/(4 k_\text{max})$ is still possible.
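The two bounds in the abstract can be evaluated directly. The sketch below is illustrative only: the function names are hypothetical, the constants default to $\alpha = \alpha_o = 1$, and the noise-limited formula is read with $4 k_\text{max}$ grouped in the denominator, which is an interpretive assumption about the abstract's notation.

```python
import math

def hybrid_epoch_bound(T_c, J, alpha=1.0):
    """Upper bound on the hybrid agent's average epochs:
    <T>_q <= alpha * sqrt(<T>_c * <J>), where <T>_c is the classical
    agent's average epochs and <J> its average number of rewards."""
    return alpha * math.sqrt(T_c * J)

def noise_limited_epochs(T_c, k_max, alpha_o=1.0):
    """Approximate epochs when coherence limits the environment to
    alpha_o * k_max sequential coherent interactions:
    <T>_q ~ alpha_o * <T>_c / (4 * k_max)."""
    return alpha_o * T_c / (4 * k_max)

# Example: a classical agent needing 10,000 epochs and 100 rewards on average.
print(hybrid_epoch_bound(10_000, 100))   # 1000.0 -> quadratic speedup regime
print(noise_limited_epochs(10_000, 50))  # 50.0   -> coherence-limited regime
```

Note that the first bound only helps when $\langle J \rangle \ll \langle T \rangle_c$, i.e. when rewards are sparse relative to the classical learning time; otherwise the square root exceeds $\langle T \rangle_c$ itself.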

Summary

We haven't generated a summary for this paper yet.

Follow-up Questions

We haven't generated follow-up questions for this paper yet.