
Avoiding Confusion between Predictors and Inhibitors in Value Function Approximation (1312.5714v2)

Published 19 Dec 2013 in cs.AI

Abstract: In reinforcement learning, the goal is to seek rewards and avoid punishments. A simple scalar captures the value of a state or of taking an action, where expected future rewards increase and punishments decrease this quantity. Naturally an agent should learn to predict this quantity to take beneficial actions, and many value function approximators exist for this purpose. In the present work, however, we show how value function approximators can cause confusion between predictors of an outcome of one valence (e.g., a signal of reward) and the inhibitor of the opposite valence (e.g., a signal canceling expectation of punishment). We show this to be a problem for both linear and non-linear value function approximators, especially when the amount of data (or experience) is limited. We propose and evaluate a simple resolution: to instead predict reward and punishment values separately, and rectify and add them to get the value needed for decision making. We evaluate several function approximators in this slightly different value function approximation architecture and show that this approach is able to circumvent the confusion and thereby achieve lower value-prediction errors.
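To make the proposed resolution concrete, below is a minimal sketch (not the authors' code) of the split architecture the abstract describes: learn separate value estimates for rewards and for punishments, rectify each prediction, and combine them to obtain the value used for decision making. The linear features, TD(0)-style updates, learning rate, discount factor, and the sign convention (punishment tracked as a non-negative magnitude) are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of predicting reward and punishment values separately,
# rectifying each prediction, and combining them into a single value.
# Feature dimension, step size, and discount factor are assumed values.

n_features = 8
alpha = 0.1    # learning rate (assumed)
gamma = 0.95   # discount factor (assumed)

w_reward = np.zeros(n_features)   # weights predicting discounted future reward
w_punish = np.zeros(n_features)   # weights predicting discounted future punishment (as a magnitude)

def combined_value(phi):
    """Rectify each stream's prediction, then combine them for decision making."""
    v_plus = max(0.0, float(w_reward @ phi))
    v_minus = max(0.0, float(w_punish @ phi))
    return v_plus - v_minus

def td_update(phi, outcome, phi_next, done):
    """TD(0)-style update applied independently to the two streams.

    The positive part of `outcome` trains the reward stream; the magnitude
    of its negative part trains the punishment stream.
    """
    global w_reward, w_punish
    r_plus = max(outcome, 0.0)
    r_minus = max(-outcome, 0.0)

    target_plus = r_plus + (0.0 if done else gamma * max(0.0, float(w_reward @ phi_next)))
    target_minus = r_minus + (0.0 if done else gamma * max(0.0, float(w_punish @ phi_next)))

    w_reward = w_reward + alpha * (target_plus - float(w_reward @ phi)) * phi
    w_punish = w_punish + alpha * (target_minus - float(w_punish @ phi)) * phi

# Example usage with random features (illustrative only)
rng = np.random.default_rng(0)
phi_t, phi_t1 = rng.random(n_features), rng.random(n_features)
td_update(phi_t, outcome=-1.0, phi_next=phi_t1, done=False)
print(combined_value(phi_t))
```

Because the two streams never share weights, a feature that inhibits punishment cannot masquerade as a predictor of reward (or vice versa), which is the confusion the paper argues a single shared value estimate is prone to when data are limited.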
