Learn Dynamic-Aware State Embedding for Transfer Learning (2101.02230v1)

Published 6 Jan 2021 in cs.LG and cs.AI

Abstract: Transfer reinforcement learning aims to improve the sample efficiency of solving unseen new tasks by leveraging experience obtained from previous tasks. We consider the setting where all tasks (MDPs) share the same environment dynamics but differ in reward function. In this setting, the MDP dynamics are useful knowledge to transfer and can be inferred with a uniformly random policy. However, trajectories generated by a uniformly random policy are not useful for policy improvement, which severely impairs sample efficiency. Instead, we observe that the binary MDP dynamics can be inferred from trajectories of any policy, avoiding the need for a uniformly random policy. Because the binary MDP dynamics capture the state structure shared across all tasks, we believe they are well suited for transfer. Building on this observation, we introduce a method that infers the binary MDP dynamics online and simultaneously uses them to guide state-embedding learning, which is then transferred to new tasks. We keep state-embedding learning and policy learning separate; as a result, the learned state embedding is task- and policy-agnostic, which makes it ideal for transfer learning. In addition, to facilitate exploration of the state space, we propose a novel intrinsic reward based on the inferred binary MDP dynamics. Our method can be used out of the box in combination with model-free RL algorithms; we show two instances based on DQN and A2C. Empirical results from extensive experiments demonstrate the advantage of our proposed method on various transfer learning tasks.
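
To make the core idea concrete, here is a minimal, hypothetical sketch of the two pieces the abstract describes: inferring a binary transition table online from trajectories of any policy, and deriving an intrinsic exploration bonus from it. The class name, method names, and the one-time-bonus form are assumptions for illustration, not the paper's actual implementation (which couples the inferred dynamics to state-embedding learning).

```python
from collections import defaultdict

class BinaryDynamics:
    """Hypothetical sketch: record which (state, action) -> next_state
    transitions have been observed, independent of the behavior policy
    or the reward function. Assumes states/actions are hashable."""

    def __init__(self):
        # (state, action) -> set of next states observed so far
        self.seen = defaultdict(set)

    def update(self, s, a, s_next):
        # Record the transition; report whether it was new.
        is_new = s_next not in self.seen[(s, a)]
        self.seen[(s, a)].add(s_next)
        return is_new

    def intrinsic_reward(self, s, a, s_next, bonus=1.0):
        # Pay a one-time bonus for each newly observed transition,
        # steering the agent toward unexplored state structure.
        # (The exact bonus form is an assumption, not the paper's.)
        return bonus if self.update(s, a, s_next) else 0.0

# Usage with any model-free learner (e.g., DQN or A2C): augment the
# environment reward before storing or using the transition, e.g.
#   r_total = r_env + beta * dyn.intrinsic_reward(s, a, s_next)
```

Because the table only records whether a transition has occurred, not its probability or reward, it can be filled in from arbitrary behavior policies, which is what removes the need for uniformly random data collection.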

Citations (1)
