From Video Game to Real Robot: The Transfer between Action Spaces (1905.00741v2)

Published 2 May 2019 in cs.LG, cs.AI, and cs.RO

Abstract: Deep reinforcement learning has proven successful for learning tasks in simulated environments, but applying the same techniques to robots in real-world domains is more challenging, as they require hours of training. To address this, transfer learning can be used to train the policy first in a simulated environment and then transfer it to the physical agent. As the simulation never matches reality perfectly, the physics, visuals, and action spaces by necessity differ between these environments to some degree. In this work, we study how general video games can be used directly, instead of fine-tuned simulations, for sim-to-real transfer. In particular, we study how the agent can learn the new action space autonomously when the game actions do not match the robot actions. Our results show that the different action space can be learned by re-training only part of the neural network, and we obtain a mean success rate above 90% in both simulation and robot experiments.
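The core idea of "re-training only part of the neural network" can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a toy NumPy example under assumed shapes, where a feature extractor trained in the source (game) environment is kept frozen and only a new action head, sized for the target (robot) action space, is trained. The supervised cross-entropy objective here is a stand-in for the paper's RL update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "encoder": stands in for layers learned in the game.
W_enc = rng.standard_normal((4, 8)) * 0.5  # 4-dim observation -> 8-dim features

def encode(obs):
    # Frozen feature extractor: never updated during transfer.
    return np.tanh(obs @ W_enc)

# New action head for the robot's (different) action space, trained from scratch.
n_actions = 3
W_head = np.zeros((8, n_actions))

def policy_logits(obs):
    return encode(obs) @ W_head

# Toy targets: a fixed random linear "teacher" defines the desired action.
W_teacher = rng.standard_normal((4, n_actions))
obs_batch = rng.standard_normal((256, 4))
targets = np.argmax(obs_batch @ W_teacher, axis=1)

lr = 0.5
for _ in range(300):
    feats = encode(obs_batch)                      # frozen forward pass
    logits = feats @ W_head
    logits -= logits.max(axis=1, keepdims=True)    # numerically stable softmax
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(len(targets)), targets] -= 1.0  # softmax cross-entropy grad
    W_head -= lr * feats.T @ grad / len(targets)   # only the head is updated

acc = np.mean(np.argmax(policy_logits(obs_batch), axis=1) == targets)
print(f"head-only training accuracy: {acc:.2f}")
```

Because only `W_head` receives gradients, the features learned in the source environment are reused as-is, which is what makes the transfer cheap compared to training the whole policy on the robot.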

Authors (4)
  1. Janne Karttunen (5 papers)
  2. Anssi Kanervisto (32 papers)
  3. Ville Kyrki (102 papers)
  4. Ville Hautamäki (30 papers)
Citations (8)
