
Provable Benefits of Representational Transfer in Reinforcement Learning (2205.14571v2)

Published 29 May 2022 in cs.LG and cs.AI

Abstract: We study the problem of representational transfer in RL, where an agent first pretrains in a number of source tasks to discover a shared representation, which is subsequently used to learn a good policy in a target task. We propose a new notion of task relatedness between source and target tasks, and develop a novel approach for representational transfer under this assumption. Concretely, we show that given generative access to source tasks, we can discover a representation, using which subsequent linear RL techniques quickly converge to a near-optimal policy in the target task. The sample complexity is close to knowing the ground truth features in the target task, and comparable to prior representation learning results in the source tasks. We complement our positive results with lower bounds without generative access, and validate our findings with empirical evaluation on rich observation MDPs that require deep exploration. In our experiments, we observe a speed up in learning in the target by pre-training, and also validate the need for generative access in source tasks.
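
As a rough illustration of the two-phase pipeline the abstract describes (pretrain a shared representation on several source tasks, then run a linear method on top of it in the target task), here is a minimal NumPy sketch. Everything in it is an assumption made for illustration: the task names, the reward-regression setup, and the SVD-based recovery of the shared subspace stand in for the paper's representation-learning and linear RL steps, which operate on MDPs with exploration rather than one-shot regression.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, D = 10, 4
TRUE_PHI = rng.normal(size=(OBS_DIM, D))   # ground-truth shared feature map (hypothetical)

def make_task(n):
    """One task: rewards are linear in the shared features TRUE_PHI,
    with a task-specific linear head w."""
    w = rng.normal(size=D)
    obs = rng.normal(size=(n, OBS_DIM))
    rewards = obs @ TRUE_PHI @ w + 0.1 * rng.normal(size=n)
    return obs, rewards

def pretrain_representation(tasks, d=D):
    """Recover a d-dimensional feature map from the span of per-task
    least-squares solutions (a simple stand-in for the paper's
    representation-learning step, not its actual procedure)."""
    heads = []
    for obs, rewards in tasks:
        beta, *_ = np.linalg.lstsq(obs, rewards, rcond=None)
        heads.append(beta)
    _, _, vt = np.linalg.svd(np.stack(heads))
    return vt[:d].T                         # OBS_DIM x d learned feature map

# Phase 1: pretrain on several data-rich source tasks.
source_tasks = [make_task(n=2000) for _ in range(5)]
phi_hat = pretrain_representation(source_tasks)

# Phase 2: small-sample target task, solved linearly on top of phi_hat.
obs_t, rew_t = make_task(n=50)
w_hat, *_ = np.linalg.lstsq(obs_t @ phi_hat, rew_t, rcond=None)
pred_err = np.mean((obs_t @ phi_hat @ w_hat - rew_t) ** 2)
print(f"target-task fit error with transferred features: {pred_err:.4f}")
```

The point of the sketch is only the structure: the target task needs far fewer samples once the low-dimensional representation has been estimated from the source tasks, which is the benefit the paper quantifies formally.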

Authors (6)
  1. Alekh Agarwal (99 papers)
  2. Yuda Song (22 papers)
  3. Wen Sun (124 papers)
  4. Kaiwen Wang (24 papers)
  5. Mengdi Wang (199 papers)
  6. Xuezhou Zhang (36 papers)
Citations (30)
