Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime (2010.11858v1)

Published 22 Oct 2020 in cs.LG and stat.ML

Abstract: We study the problem of policy optimization for infinite-horizon discounted Markov Decision Processes with softmax policy and nonlinear function approximation trained with policy gradient algorithms. We concentrate on the training dynamics in the mean-field regime, modeling, e.g., the behavior of wide single hidden layer neural networks, when exploration is encouraged through entropy regularization. The dynamics of these models are established as a Wasserstein gradient flow of distributions in parameter space. We further prove global optimality of the fixed points of this dynamics under mild conditions on their initialization.
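To make the setting concrete, here is a minimal, illustrative sketch (not code from the paper): exact entropy-regularized softmax policy gradient on a toy one-state MDP (a bandit), where the policy is a softmax over the outputs of a single-hidden-layer network. All names, constants, and the simplification of training only the output layer (the paper trains the full network and studies the infinite-width limit) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): one state, 3 actions with known rewards.
rewards = np.array([1.0, 0.5, -0.2])
n_actions = rewards.size
tau = 0.1       # entropy-regularization temperature
width = 256     # hidden-layer width; the mean-field regime is width -> infinity

# Single hidden layer with tanh activation on a fixed state feature x.
x = np.ones(4)
W1 = rng.normal(size=(width, 4)) / np.sqrt(4)
W2 = rng.normal(size=(n_actions, width)) / width  # mean-field 1/N output scaling

def policy(W2):
    """Softmax policy over network outputs; returns probabilities and features."""
    h = np.tanh(W1 @ x)
    logits = W2 @ h
    p = np.exp(logits - logits.max())
    return p / p.sum(), h

# Objective: J = E_pi[r] + tau * H(pi). Its exact gradient w.r.t. the
# logits is p * (adv - E_p[adv]) with adv = r - tau * log p. We update
# only W2 (features frozen), a simplification of the full dynamics.
lr = 0.05
for _ in range(5000):
    p, h = policy(W2)
    adv = rewards - tau * np.log(p)
    adv = adv - p @ adv            # center: exact softmax-objective gradient
    W2 += lr * np.outer(p * adv, h)

p, _ = policy(W2)
# The entropy-regularized optimum is the Gibbs policy softmax(rewards / tau).
target = np.exp(rewards / tau) / np.exp(rewards / tau).sum()
print(np.round(p, 3))
```

The fixed-point condition (centered advantage equal to zero) forces `r - tau * log p` to be constant across actions, i.e. the learned policy approaches `softmax(rewards / tau)`, mirroring the global-optimality statement in a trivially small setting.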

Authors (2)
  1. Andrea Agazzi (18 papers)
  2. Jianfeng Lu (273 papers)
Citations (14)
