
Distributional Reinforcement Learning via Moment Matching (2007.12354v3)

Published 24 Jul 2020 in cs.LG, cs.AI, and stat.ML

Abstract: We consider the problem of learning a set of probability distributions from the empirical Bellman dynamics in distributional reinforcement learning (RL), a class of state-of-the-art methods that estimate the distribution, as opposed to only the expectation, of the total return. We formulate a method that learns a finite set of statistics from each return distribution via neural networks, as in (Bellemare, Dabney, and Munos 2017; Dabney et al. 2018b). Existing distributional RL methods however constrain the learned statistics to \emph{predefined} functional forms of the return distribution which is both restrictive in representation and difficult in maintaining the predefined statistics. Instead, we learn \emph{unrestricted} statistics, i.e., deterministic (pseudo-)samples, of the return distribution by leveraging a technique from hypothesis testing known as maximum mean discrepancy (MMD), which leads to a simpler objective amenable to backpropagation. Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target. We establish sufficient conditions for the contraction of the distributional Bellman operator and provide finite-sample analysis for the deterministic samples in distribution approximation. Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines and sets a new record in the Atari games for non-distributed agents.
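
To make the objective described in the abstract concrete, the following is a minimal sketch of a squared maximum mean discrepancy (MMD) loss between learned deterministic pseudo-samples of the return distribution and their Bellman targets, estimated with a mixture of Gaussian kernels. This is an illustrative reading of the abstract in a PyTorch setting; the helper names, bandwidth values, and the single update step shown are hypothetical and not taken from the authors' released code.

import torch

def gaussian_kernel(x, y, bandwidths):
    # Pairwise Gaussian kernel between 1-D sample sets x (N,) and y (M,),
    # summed over a mixture of bandwidths.
    diff2 = (x.unsqueeze(1) - y.unsqueeze(0)) ** 2          # (N, M)
    return sum(torch.exp(-diff2 / (2.0 * h ** 2)) for h in bandwidths)

def mmd_squared(pred, target, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    # Biased empirical MMD^2 between predicted pseudo-samples `pred` (N,)
    # and Bellman-target samples `target` (M,).
    k_pp = gaussian_kernel(pred, pred, bandwidths).mean()
    k_pt = gaussian_kernel(pred, target, bandwidths).mean()
    k_tt = gaussian_kernel(target, target, bandwidths).mean()
    return k_pp - 2.0 * k_pt + k_tt

# One distributional Bellman update step (tabular-style, for illustration only).
N = 32                                           # pseudo-samples per state-action pair
pred = torch.randn(N, requires_grad=True)        # Z(s, a): learned statistics
with torch.no_grad():
    reward, gamma = 1.0, 0.99
    target = reward + gamma * torch.randn(N)     # r + gamma * Z(s', a'), no gradient
loss = mmd_squared(pred, target)
loss.backward()                                  # gradients flow only into `pred`

Minimizing this loss drives the pseudo-samples toward the target distribution; because the Gaussian kernel's expansion contains all polynomial terms, this can be read as implicitly matching all orders of moments, as stated in the abstract.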

Citations (19)
