
Optimization Issues in KL-Constrained Approximate Policy Iteration (2102.06234v1)

Published 11 Feb 2021 in cs.LG and stat.ML

Abstract: Many reinforcement learning algorithms can be seen as versions of approximate policy iteration (API). While standard API often performs poorly, it has been shown that learning can be stabilized by regularizing each policy update by the KL-divergence to the previous policy. Popular practical algorithms such as TRPO, MPO, and VMPO replace regularization by a constraint on KL-divergence of consecutive policies, arguing that this is easier to implement and tune. In this work, we study this implementation choice in more detail. We compare the use of KL divergence as a constraint vs. as a regularizer, and point out several optimization issues with the widely-used constrained approach. We show that the constrained algorithm is not guaranteed to converge even on simple problem instances where the constrained problem can be solved exactly, and in fact incurs linear expected regret. With approximate implementation using softmax policies, we show that regularization can improve the optimization landscape of the original objective. We demonstrate these issues empirically on several bandit and RL environments.
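The regularized update discussed in the abstract has a simple closed form for tabular softmax policies: maximizing the expected value minus a KL penalty to the previous policy yields a multiplicative-weights update. The sketch below (not from the paper; the 3-armed bandit instance and the temperature value are illustrative assumptions) shows this regularized update, which the paper contrasts with the constrained variant used by TRPO, MPO, and V-MPO.

```python
import numpy as np

def kl_regularized_update(pi_prev, q, tau):
    """Closed-form maximizer of <pi, q> - tau * KL(pi || pi_prev)
    over the simplex: pi_new proportional to pi_prev * exp(q / tau)."""
    logits = np.log(pi_prev) + q / tau
    logits -= logits.max()            # shift for numerical stability
    pi_new = np.exp(logits)
    return pi_new / pi_new.sum()

# Hypothetical 3-armed bandit with known mean rewards q,
# starting from a uniform policy.
q = np.array([1.0, 0.5, 0.0])
pi = np.ones(3) / 3.0
for _ in range(50):
    pi = kl_regularized_update(pi, q, tau=1.0)
```

With a fixed temperature, repeated regularized updates concentrate the softmax policy on the best arm; the constrained formulation instead solves for a step-dependent temperature so that the KL between consecutive policies meets a fixed budget, which is the implementation choice the paper analyzes.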

Authors (5)
  1. Botao Hao (28 papers)
  2. Yasin Abbasi-Yadkori (35 papers)
  3. Dale Schuurmans (112 papers)
  4. Nevena Lazić (3 papers)
  5. Csaba Szepesvári (76 papers)
Citations (10)
