
Learning Powerful Policies by Using Consistent Dynamics Model (1906.04355v1)

Published 11 Jun 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Model-based Reinforcement Learning approaches have the promise of being sample efficient. Much of the progress in learning dynamics models in RL has been made by learning models via supervised learning. But traditional model-based approaches lead to 'compounding errors' when the model is unrolled step by step. Essentially, the state transitions that the learner predicts (by unrolling the model for multiple steps) and the state transitions that the learner experiences (by acting in the environment) may not be consistent. There is enough evidence that humans build a model of the environment, not only by observing the environment but also by interacting with the environment. Interaction with the environment allows humans to carry out experiments: taking actions that help uncover true causal relationships which can be used for building better dynamics models. Analogously, we would expect such interactions to be helpful for a learning agent while learning to model the environment dynamics. In this paper, we build upon this intuition by using an auxiliary cost function to ensure consistency between what the agent observes (by acting in the real world) and what it imagines (by acting in the 'learned' world). We consider several tasks - MuJoCo-based control tasks and Atari games - and show that the proposed approach helps to train powerful policies and better dynamics models.

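The abstract's central idea, an auxiliary cost that keeps multi-step rollouts of the learned model consistent with the transitions the agent actually observes, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `DynamicsModel` architecture, the MSE discrepancy, and the `0.1` loss weight are all placeholders introduced here for clarity.

```python
# Hedged sketch of the consistency idea: unroll the learned dynamics model
# along the actions the agent actually took, and penalize the gap between
# imagined states and the states that were actually observed.
# DynamicsModel, consistency_loss, the MSE discrepancy, and the 0.1 weight
# below are illustrative assumptions, not the paper's exact code.

import torch
import torch.nn as nn


class DynamicsModel(nn.Module):
    """Toy deterministic model s_{t+1} = f(s_t, a_t) (assumed architecture)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


def consistency_loss(model: DynamicsModel,
                     states: torch.Tensor,   # (T+1, B, state_dim) observed states
                     actions: torch.Tensor   # (T, B, action_dim) actions taken
                     ) -> torch.Tensor:
    """Unroll the model from the first observed state and compare each
    imagined state with the corresponding observed state (MSE assumed)."""
    imagined = states[0]
    loss = torch.zeros((), device=states.device)
    for t in range(actions.shape[0]):
        imagined = model(imagined, actions[t])              # imagine one step ahead
        loss = loss + ((imagined - states[t + 1]) ** 2).mean()
    return loss / actions.shape[0]


# Usage sketch: add the consistency term to whatever policy / model objective
# is being optimized; the 0.1 weighting here is an arbitrary placeholder.
# total_loss = policy_loss + model_loss + 0.1 * consistency_loss(model, states, actions)
```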
Authors (6)
  1. Shagun Sodhani (33 papers)
  2. Anirudh Goyal (93 papers)
  3. Tristan Deleu (31 papers)
  4. Yoshua Bengio (601 papers)
  5. Sergey Levine (531 papers)
  6. Jian Tang (327 papers)
Citations (5)
