Low-Dimensional State and Action Representation Learning with MDP Homomorphism Metrics (2107.01677v1)

Published 4 Jul 2021 in cs.LG and cs.AI

Abstract: Deep Reinforcement Learning has shown its ability to solve complicated problems directly from high-dimensional observations. However, in end-to-end settings, Reinforcement Learning algorithms are not sample-efficient and require long training times and large quantities of data. In this work, we propose a framework for sample-efficient Reinforcement Learning that takes advantage of state and action representations to transform a high-dimensional problem into a low-dimensional one. Moreover, we seek to find the optimal policy mapping latent states to latent actions. Because the policy is now learned on abstract representations, we enforce, using auxiliary loss functions, the lifting of such a policy to the original problem domain. Results show that the novel framework can efficiently learn low-dimensional and interpretable state and action representations and the optimal latent policy.
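To make the abstract's core idea concrete, the following is a minimal, hypothetical sketch of the kind of auxiliary loss such a framework might use: a state encoder and an action encoder map high-dimensional inputs to low-dimensional latents, and a latent transition model is trained so that transitions in latent space stay consistent with encoded real transitions (in the spirit of an MDP homomorphism). All names, dimensions, and the linear encoders are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only): high-dimensional observation
# and action spaces mapped to small latent spaces.
OBS_DIM, ACT_DIM = 100, 10
LATENT_S, LATENT_A = 4, 2

# Linear encoders as stand-ins for learned state/action encoders.
W_s = 0.1 * rng.normal(size=(LATENT_S, OBS_DIM))
W_a = 0.1 * rng.normal(size=(LATENT_A, ACT_DIM))
# Latent transition model: predicts the next latent state from (z_s, z_a).
W_t = 0.1 * rng.normal(size=(LATENT_S, LATENT_S + LATENT_A))

def encode_state(obs):
    return W_s @ obs

def encode_action(act):
    return W_a @ act

def latent_transition(z_s, z_a):
    return W_t @ np.concatenate([z_s, z_a])

def transition_consistency_loss(obs, act, next_obs):
    """Auxiliary loss: the latent transition applied to the encoded
    (state, action) pair should match the encoding of the next state."""
    z_next_pred = latent_transition(encode_state(obs), encode_action(act))
    z_next = encode_state(next_obs)
    return float(np.sum((z_next_pred - z_next) ** 2))

obs = rng.normal(size=OBS_DIM)
act = rng.normal(size=ACT_DIM)
next_obs = rng.normal(size=OBS_DIM)
loss = transition_consistency_loss(obs, act, next_obs)
```

Minimizing such a loss over the encoder and transition-model parameters (here frozen random matrices) would push the latent dynamics to mirror the original dynamics, which is what allows a policy learned entirely in latent space to be lifted back to the original problem.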

Authors (4)
  1. Nicolò Botteghi (19 papers)
  2. Mannes Poel (7 papers)
  3. Beril Sirmacek (17 papers)
  4. Christoph Brune (56 papers)
Citations (2)
