NEARL: Non-Explicit Action Reinforcement Learning for Robotic Control (2011.01046v1)

Published 2 Nov 2020 in cs.RO, cs.AI, and cs.LG

Abstract: Traditionally, reinforcement learning methods predict the next action from the current state. However, in many situations, directly applying actions to control systems or robots is dangerous and may lead to unexpected behaviors, because actions are rather low-level. In this paper, we propose a novel hierarchical reinforcement learning framework without explicit actions. Our meta policy attempts to produce the next optimal state, and the actual action is generated by an inverse dynamics model. To stabilize the training process, we integrate adversarial learning and an information bottleneck into our framework. Under our framework, widely available state-only demonstrations can be exploited effectively for imitation learning. Moreover, prior knowledge and constraints can be applied to the meta policy. We test our algorithm on simulation tasks and in combination with imitation learning. The experimental results show the reliability and robustness of our algorithm.
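The control loop the abstract describes, in which a meta policy predicts a target next state and an inverse dynamics model recovers the low-level action that realizes the transition, can be sketched roughly as follows. The linear models and the `control_step` helper below are illustrative placeholders under assumed state/action dimensions, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

class MetaPolicy:
    """Stand-in for the learned meta policy: maps current state s to a desired next state s'."""
    def __init__(self, state_dim):
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim))

    def next_state(self, s):
        # Residual update: propose a small step away from the current state.
        return s + self.W @ s

class InverseDynamics:
    """Stand-in for the inverse dynamics model: maps (s, s') to the action realizing s -> s'."""
    def __init__(self, state_dim, action_dim):
        self.A = rng.normal(scale=0.1, size=(action_dim, 2 * state_dim))

    def action(self, s, s_next):
        return self.A @ np.concatenate([s, s_next])

def control_step(s, meta, inv_dyn):
    """One non-explicit-action step: the meta policy decides where to go,
    the inverse dynamics model decides how to get there."""
    s_target = meta.next_state(s)
    a = inv_dyn.action(s, s_target)
    return s_target, a

state_dim, action_dim = 4, 2
meta = MetaPolicy(state_dim)
inv_dyn = InverseDynamics(state_dim, action_dim)
s = rng.normal(size=state_dim)
s_target, a = control_step(s, meta, inv_dyn)
```

Because the policy acts only in state space, state-only demonstrations can supervise `MetaPolicy` directly, while the inverse dynamics model is trained separately on environment transitions.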

Authors (9)
  1. Nan Lin (17 papers)
  2. Yuxuan Li (77 papers)
  3. Yujun Zhu (23 papers)
  4. Ruolin Wang (11 papers)
  5. Xiayu Zhang (1 paper)
  6. Jianmin Ji (55 papers)
  7. Keke Tang (22 papers)
  8. Xiaoping Chen (23 papers)
  9. Xinming Zhang (21 papers)
