
Robust Inverse Reinforcement Learning under Transition Dynamics Mismatch (2007.01174v4)

Published 2 Jul 2020 in cs.LG and stat.ML

Abstract: We study the inverse reinforcement learning (IRL) problem under a transition dynamics mismatch between the expert and the learner. Specifically, we consider the Maximum Causal Entropy (MCE) IRL learner model and provide a tight upper bound on the learner's performance degradation based on the $\ell_1$-distance between the transition dynamics of the expert and the learner. Leveraging insights from the Robust RL literature, we propose a robust MCE IRL algorithm, which is a principled approach to help with this mismatch. Finally, we empirically demonstrate the stable performance of our algorithm compared to the standard MCE IRL algorithm under transition dynamics mismatches in both finite and continuous MDP problems.
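The paper's learner model is Maximum Causal Entropy (MCE) IRL: fit a reward so that the soft-optimal policy under the learner's own dynamics matches the expert's feature expectations. A minimal sketch of the standard (non-robust) MCE IRL loop is below; all function and variable names (`soft_value_iteration`, `occupancy`, `mce_irl`, linear reward `phi @ w`) are illustrative assumptions, not the paper's implementation, and the robust variant the paper proposes is not included.

```python
import numpy as np

def soft_value_iteration(P, R, gamma=0.9, iters=200):
    """Soft (log-sum-exp) value iteration.
    P: (S, A, S) transition tensor, R: (S,) state reward.
    Returns the softmax policy pi(a|s), shape (S, A)."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R[:, None] + gamma * (P @ V)        # (S, A) soft Q-values
        V = np.log(np.exp(Q).sum(axis=1))        # soft backup over actions
    return np.exp(Q - V[:, None])                # causal-entropy policy

def occupancy(P, pi, mu0, gamma=0.9, iters=200):
    """Discounted state occupancy of policy pi from initial distribution mu0."""
    d, total = mu0.copy(), np.zeros(P.shape[0])
    for t in range(iters):
        total += (gamma ** t) * d
        d = np.einsum('s,sa,sap->p', d, pi, P)   # push distribution one step
    return total

def mce_irl(P_learner, phi, mu_expert, mu0, lr=0.1, steps=100, gamma=0.9):
    """MCE IRL with a linear reward R(s) = phi(s) . w.
    phi: (S, F) features, mu_expert: (F,) expert feature expectations.
    Gradient ascent on the causal-entropy dual: match feature expectations."""
    w = np.zeros(phi.shape[1])
    for _ in range(steps):
        pi = soft_value_iteration(P_learner, phi @ w, gamma)
        d = occupancy(P_learner, pi, mu0, gamma)
        w += lr * (mu_expert - phi.T @ d)        # expert minus learner features
    return w, pi
```

The mismatch studied in the paper arises when `P_learner` above differs from the dynamics under which `mu_expert` was generated; the paper bounds the resulting performance gap in terms of the $\ell_1$-distance between the two transition models and robustifies the inner policy-computation step.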

Authors (5)
  1. Luca Viano (16 papers)
  2. Yu-Ting Huang (13 papers)
  3. Parameswaran Kamalaruban (25 papers)
  4. Adrian Weller (150 papers)
  5. Volkan Cevher (216 papers)
Citations (23)
