
Parameter Critic: a Model Free Variance Reduction Method Through Imperishable Samples

Published 28 Sep 2020 in eess.SY and cs.SY | (2009.13668v1)

Abstract: We consider the problem of finding a policy that maximizes the expected reward over the trajectory of an agent interacting with an unknown environment. This framework, commonly known as Reinforcement Learning, suffers from the need for a large number of samples at each step of the learning process. To address this, we introduce the parameter critic, a formulation that allows samples to remain valid even when the parameters of the policy change. In particular, we propose using a function approximator to learn directly the relationship between the policy parameters and the expected cumulative reward. Through convergence analysis, we demonstrate that the parameter critic outperforms gradient-free parameter-space exploration techniques, as it is robust to noise. Empirically, our method solves the cartpole problem, corroborating this claim: the agent successfully learns an optimal policy while learning the relationship between the parameters and the cumulative reward.
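The core idea in the abstract — fitting a critic that maps policy parameters directly to expected cumulative reward, so that past (parameters, return) samples never expire — can be illustrated with a minimal sketch. Everything below is a hypothetical toy construction, not the paper's implementation: a 1-D parameter, a synthetic noisy return function, and a quadratic least-squares critic standing in for the paper's function approximator.

```python
import numpy as np

# Hypothetical toy setting: the true (unknown to the agent) expected
# return of a 1-D policy parameter theta is J(theta) = -(theta - 2)^2.
# The agent only observes noisy rollout returns.
rng = np.random.default_rng(0)

def rollout_return(theta):
    """One noisy sample of the cumulative reward for parameters theta."""
    return -(theta - 2.0) ** 2 + rng.normal(scale=0.1)

# Parameter critic: a quadratic regressor Q_w(theta) ~ J(theta), refit
# on ALL past (theta, return) pairs at every step. Because the critic
# is a function of the parameters themselves, old samples stay valid
# ("imperishable") even after the policy parameters change.
thetas, returns = [], []
theta = -1.0  # initial policy parameter
for step in range(200):
    thetas.append(theta)
    returns.append(rollout_return(theta))
    # Least-squares fit of Q_w(theta) = w0 + w1*theta + w2*theta^2
    X = np.stack([np.ones(len(thetas)),
                  np.array(thetas),
                  np.array(thetas) ** 2], axis=1)
    w, *_ = np.linalg.lstsq(X, np.array(returns), rcond=None)
    # Ascend the critic's gradient: dQ_w/dtheta = w1 + 2*w2*theta
    theta = theta + 0.05 * (w[1] + 2.0 * w[2] * theta)

print(f"learned theta: {theta:.2f}")  # should approach the optimum theta* = 2
```

The contrast with ordinary gradient-free parameter-space search is that no sample is ever discarded here: every rollout, regardless of which parameters generated it, keeps contributing to the critic fit.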
