Potential-Based Reward Shaping For Intrinsic Motivation (2402.07411v1)

Published 12 Feb 2024 in cs.LG

Abstract: Recently there has been a proliferation of intrinsic motivation (IM) reward-shaping methods to learn in complex and sparse-reward environments. These methods can often inadvertently change the set of optimal policies in an environment, leading to suboptimal behavior. Previous work on mitigating the risks of reward shaping, particularly through potential-based reward shaping (PBRS), has not been applicable to many IM methods, as they are often complex, trainable functions themselves, and therefore dependent on a wider set of variables than the traditional reward functions that PBRS was developed for. We present an extension to PBRS that we prove preserves the set of optimal policies under a more general set of functions than has been previously proven. We also present *Potential-Based Intrinsic Motivation* (PBIM), a method for converting IM rewards into a potential-based form that is usable without altering the set of optimal policies. Testing in the MiniGrid DoorKey and Cliff Walking environments, we demonstrate that PBIM successfully prevents the agent from converging to a suboptimal policy and can speed up training.
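For context, the classical PBRS construction (Ng, Harada & Russell, 1999) that this paper extends adds a shaping term F(s, s') = γΦ(s') − Φ(s) for a potential function Φ over states, which provably preserves the set of optimal policies. The sketch below illustrates only this standard form, not the paper's PBIM extension to trainable IM rewards; the potential function `phi` is a placeholder assumption.

```python
# Minimal sketch of classical potential-based reward shaping
# (Ng et al., 1999), the construction PBIM generalizes. The
# potential `phi` here is a placeholder, not the paper's method.

GAMMA = 0.99  # discount factor of the underlying MDP


def phi(state) -> float:
    """Placeholder potential over states, e.g. a heuristic
    distance-to-goal estimate. Any fixed function of state works."""
    return 0.0


def shaped_reward(reward: float, state, next_state) -> float:
    # F(s, s') = gamma * phi(s') - phi(s); adding F to the
    # environment reward leaves the optimal policy set unchanged.
    return reward + GAMMA * phi(next_state) - phi(state)
```

The key property is that F telescopes along any trajectory, so it changes returns by a policy-independent amount; the paper's contribution is extending this guarantee to IM rewards that depend on more variables than the state alone.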

Authors (6)
  1. Grant C. Forbes (6 papers)
  2. Nitish Gupta (27 papers)
  3. Leonardo Villalobos-Arias (4 papers)
  4. Colin M. Potts (2 papers)
  5. Arnav Jhala (10 papers)
  6. David L. Roberts (6 papers)
Citations (2)
