Reward-Free RL is No Harder Than Reward-Aware RL in Linear Markov Decision Processes (2201.11206v2)

Published 26 Jan 2022 in cs.LG and stat.ML

Abstract: Reward-free reinforcement learning (RL) considers the setting where the agent does not have access to a reward function during exploration, but must propose a near-optimal policy for an arbitrary reward function revealed only after exploring. In the tabular setting, it is well known that this is a more difficult problem than reward-aware (PAC) RL -- where the agent has access to the reward function during exploration -- with optimal sample complexities in the two settings differing by a factor of $|\mathcal{S}|$, the size of the state space. We show that this separation does not exist in the setting of linear MDPs. We first develop a computationally efficient algorithm for reward-free RL in a $d$-dimensional linear MDP with sample complexity scaling as $\widetilde{\mathcal{O}}(d^2 H^5/\epsilon^2)$. We then show a lower bound with matching dimension-dependence of $\Omega(d^2 H^2/\epsilon^2)$, which holds for the reward-aware RL setting. To our knowledge, our approach is the first computationally efficient algorithm to achieve optimal $d$ dependence in linear MDPs, even in the single-reward PAC setting. Our algorithm relies on a novel procedure which efficiently traverses a linear MDP, collecting samples in any given ``feature direction'', and enjoys a sample complexity scaling optimally in the (linear MDP equivalent of the) maximal state visitation probability. We show that this exploration procedure can also be applied to solve the problem of obtaining ``well-conditioned'' covariates in linear MDPs.

Authors (5)
  1. Andrew Wagenmaker (20 papers)
  2. Yifang Chen (31 papers)
  3. Max Simchowitz (59 papers)
  4. Simon S. Du (120 papers)
  5. Kevin Jamieson (72 papers)
Citations (48)
