On Reward Transferability in Adversarial Inverse Reinforcement Learning: Insights from Random Matrix Theory (2410.07643v2)

Published 10 Oct 2024 in stat.ML and cs.LG

Abstract: In the context of inverse reinforcement learning (IRL) with a single expert, adversarial inverse reinforcement learning (AIRL) serves as a foundational approach to providing comprehensive and transferable task descriptions. However, AIRL faces practical performance challenges, stemming primarily from the framework's overly idealized decomposability condition, the unclear proof of a potential equilibrium in reward recovery, and questionable robustness in high-dimensional environments. This paper revisits AIRL in high-dimensional scenarios where the state space tends to infinity. Specifically, we first establish a necessary and sufficient condition for reward transferability by examining the rank of the matrix obtained by subtracting the identity matrix from the transition matrix. Furthermore, leveraging random matrix theory, we analyze the spectral distribution of this matrix and show that our rank criterion holds with high probability even when the transition matrices are unobservable. This suggests that the limitations on transfer are not inherent to the AIRL framework itself but rather stem from the training variance of the reinforcement learning algorithms employed within it. Based on this insight, we propose a hybrid framework that integrates on-policy proximal policy optimization (PPO) in the source environment with off-policy soft actor-critic (SAC) in the target environment, yielding significant improvements in reward transfer effectiveness.
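
The rank condition is easy to probe empirically. The sketch below is illustrative only (not the paper's code, and the paper's exact statement of the criterion may differ): it draws random row-stochastic transition matrices P and checks the rank of P − I. Since each row of P sums to one, P − I always annihilates the all-ones vector, so n − 1 is the maximal possible rank; random matrix theory suggests this maximum is attained with high probability, consistent with the abstract's claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transition_matrix(n, rng):
    """Row-stochastic transition matrix: i.i.d. uniform entries, row-normalized."""
    P = rng.random((n, n))
    return P / P.sum(axis=1, keepdims=True)

for n in (10, 100, 500):
    P = random_transition_matrix(n, rng)
    M = P - np.eye(n)  # the matrix whose rank the transferability criterion examines
    r = np.linalg.matrix_rank(M)
    # Rows of P sum to 1, so (P - I) @ ones = 0 and rank(P - I) <= n - 1;
    # n - 1 is therefore the maximal (generic) rank.
    print(f"n={n:4d}  rank(P - I) = {r}  (maximal possible: {n - 1})")
```

Running this prints rank n − 1 for every sampled matrix, illustrating that the generic (full) rank case is overwhelmingly likely for random transition dynamics.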

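The proposed hybrid framework pairs an on-policy learner in the source environment with an off-policy learner in the target environment. Below is a minimal sketch of that structure, assuming stable-baselines3 and gymnasium; the AIRL discriminator is omitted entirely, and `reward_fn` and `LearnedRewardWrapper` are hypothetical placeholders for the recovered reward, not the paper's implementation.

```python
import gymnasium as gym
from stable_baselines3 import PPO, SAC

class LearnedRewardWrapper(gym.Wrapper):
    """Replace the environment reward with a recovered (AIRL-style) reward.

    Hypothetical illustration: the real recovered reward would come from
    the AIRL discriminator trained in the source environment.
    """
    def __init__(self, env, reward_fn):
        super().__init__(env)
        self.reward_fn = reward_fn

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        return obs, self.reward_fn(obs, action), terminated, truncated, info

# Source environment: on-policy PPO (the adversarial reward-recovery
# training that runs alongside it is not shown here).
ppo = PPO("MlpPolicy", gym.make("Pendulum-v1"), verbose=0)
ppo.learn(total_timesteps=10_000)

# Placeholder for the reward recovered by the AIRL discriminator (hypothetical).
reward_fn = lambda obs, action: 0.0

# Target environment: off-policy SAC trained on the transferred reward.
# In an actual transfer experiment the target dynamics would differ
# from the source (e.g., modified physics), not reuse the same task.
target_env = LearnedRewardWrapper(gym.make("Pendulum-v1"), reward_fn)
sac = SAC("MlpPolicy", target_env, verbose=0)
sac.learn(total_timesteps=10_000)
```

The design choice mirrors the abstract's diagnosis: a low-variance on-policy method (PPO) stabilizes reward recovery at the source, while a sample-efficient off-policy method (SAC) exploits the transferred reward at the target.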