
Offline Multitask Representation Learning for Reinforcement Learning (2403.11574v2)

Published 18 Mar 2024 in cs.LG

Abstract: We study offline multitask representation learning in reinforcement learning (RL), where a learner is provided with an offline dataset from different tasks that share a common representation and is asked to learn the shared representation. We theoretically investigate offline multitask low-rank RL, and propose a new algorithm called MORL for offline multitask representation learning. Furthermore, we examine downstream RL in reward-free, offline and online scenarios, where a new task is introduced to the agent that shares the same representation as the upstream offline tasks. Our theoretical results demonstrate the benefits of using the learned representation from the upstream offline task instead of directly learning the representation of the low-rank model.
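The abstract describes the low-rank multitask setting: several tasks share a common representation, an upstream learner estimates that representation from pooled offline data, and a downstream task reuses it. The toy sketch below illustrates that idea only, in a tabular setting with an SVD-based factorization; the names (phi_hat, mu_fit), sample sizes, and the fitting procedure are illustrative assumptions and are not the MORL algorithm from the paper.

```python
# Minimal tabular sketch of shared low-rank structure: each task t has
# P_t(s'|s,a) = <phi(s,a), mu_t(s')> with a common feature map phi.
# Upstream: pool offline transitions from all tasks and estimate phi.
# Downstream: keep the learned phi frozen and fit only the new task's mu.
import numpy as np

rng = np.random.default_rng(0)
S, A, d, n_tasks = 20, 5, 3, 4            # states, actions, rank, upstream tasks

# Ground-truth shared representation and per-task factors (unknown to the learner).
phi_true = rng.dirichlet(np.ones(d), size=S * A)                       # (S*A, d)
mu_true = [rng.dirichlet(np.ones(S), size=d) for _ in range(n_tasks)]  # each (d, S)

def empirical_transitions(phi, mu, n):
    """Empirical transition frequencies from n offline samples per (s, a) pair."""
    P = phi @ mu                                                        # (S*A, S)
    return np.stack([rng.multinomial(n, P[i]) for i in range(S * A)]) / n

# Upstream: stack the tasks' empirical transition matrices and take a rank-d SVD;
# the left factor serves as the learned shared representation phi_hat.
pooled = np.hstack([empirical_transitions(phi_true, mu_true[t], 200)
                    for t in range(n_tasks)])                          # (S*A, n_tasks*S)
U, sig, _ = np.linalg.svd(pooled, full_matrices=False)
phi_hat = U[:, :d] * sig[:d]

# Downstream: a new task shares phi_true; fit only its mu by least squares on a
# small fresh offline dataset, with phi_hat frozen.
mu_new = rng.dirichlet(np.ones(S), size=d)
P_new_hat = empirical_transitions(phi_true, mu_new, 50)
mu_fit, *_ = np.linalg.lstsq(phi_hat, P_new_hat, rcond=None)

err = np.abs(phi_hat @ mu_fit - phi_true @ mu_new).sum(axis=1).mean()
print(f"avg L1 error of downstream transition model: {err:.3f}")
```

Because the downstream task's transition matrix lies (approximately) in the column span of the shared representation, fitting only mu on the frozen phi_hat needs far fewer samples than learning the full low-rank model from scratch, which is the benefit the abstract points to.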

Authors (7)
  1. Haque Ishfaq (7 papers)
  2. Thanh Nguyen-Tang (17 papers)
  3. Songtao Feng (13 papers)
  4. Raman Arora (46 papers)
  5. Mengdi Wang (199 papers)
  6. Ming Yin (70 papers)
  7. Doina Precup (206 papers)
Citations (2)
