Improving Experience Replay through Modeling of Similar Transitions' Sets (2111.06907v1)

Published 12 Nov 2021 in cs.LG

Abstract: In this work, we propose and evaluate a new reinforcement learning method, COMPact Experience Replay (COMPER), which uses temporal-difference learning with target values predicted by recurrence over sets of similar transitions, together with a new approach to experience replay based on two transition memories. Our objective is to reduce the number of experiences required to train an agent, measured against the total reward accumulated in the long run. The method's relevance to reinforcement learning lies in the small number of observations it needs to achieve results similar to those of relevant methods in the literature, which generally demand millions of video frames to train an agent on Atari 2600 games. We report detailed results from five training trials of COMPER over just 100,000 frames and about 25,000 iterations, using a small experience memory, on eight challenging games from the Arcade Learning Environment (ALE). As a baseline, we also present results for a DQN agent trained under the same experimental protocol on the same set of games. To verify how well COMPER approximates a good policy from a smaller number of observations, we also compare its results with those obtained from millions of frames in the ALE benchmark.
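
The abstract only sketches how COMPER's replay is structured, so the following is a minimal, hypothetical Python sketch of the two-transition-memory idea, not the paper's implementation. The names make_key, TransitionMemory, and SimilarSetMemory are illustrative, the similarity grouping by quantized (state, action) is an assumption, and a moving average over each set's target history is swapped in for the recurrent target predictor the abstract mentions.

```python
# Hypothetical sketch of the two-memory replay idea described in the abstract.
# Assumptions (not from the paper): transitions are grouped into "similar"
# sets by quantizing (state, action), and a simple moving average over each
# set's target history stands in for the paper's recurrent target predictor.
import numpy as np

GAMMA, LR = 0.99, 0.1  # discount factor and learning rate (illustrative)

def make_key(state, action, precision=1):
    """Quantize (state, action) so nearby transitions fall in the same set."""
    return (tuple(np.round(np.asarray(state, dtype=float), precision)), action)

class TransitionMemory:
    """First memory: raw transitions, as in standard experience replay."""
    def __init__(self):
        self.buffer = []

    def store(self, transition):
        self.buffer.append(transition)

class SimilarSetMemory:
    """Second, compact memory: one entry per set of similar transitions,
    holding a running prediction of the TD target for that set."""
    def __init__(self, alpha=0.3):
        self.sets = {}      # key -> predicted target value
        self.alpha = alpha  # smoothing factor (stand-in for recurrence)

    def update(self, key, td_target):
        old = self.sets.get(key, td_target)
        self.sets[key] = (1 - self.alpha) * old + self.alpha * td_target

    def predicted_target(self, key, fallback):
        return self.sets.get(key, fallback)

# Toy demo: tabular Q-learning on a random 2-D state space, where the
# one-step bootstrap target is replaced by the set-level predicted target.
rng = np.random.default_rng(0)
tm, ssm = TransitionMemory(), SimilarSetMemory()
q = {}  # key -> Q estimate; a tabular stand-in for the Q-network

for step in range(1000):
    state = rng.uniform(-1.0, 1.0, size=2)
    action = int(rng.integers(2))
    reward = float(-np.abs(state).sum())
    next_state = state + rng.normal(0.0, 0.1, size=2)
    tm.store((state, action, reward, next_state))

    # Standard one-step bootstrap target from current Q estimates...
    boot = reward + GAMMA * max(q.get(make_key(next_state, a), 0.0)
                                for a in (0, 1))
    key = make_key(state, action)
    # ...folded into the similar-transitions set and read back as the target.
    ssm.update(key, boot)
    target = ssm.predicted_target(key, boot)
    q[key] = q.get(key, 0.0) + LR * (target - q.get(key, 0.0))

print(f"learned {len(q)} Q entries from {len(tm.buffer)} stored transitions")
```

The intended effect, per the abstract, is that the compact set-level memory reuses target information across repeated visits to similar transitions, which is what allows training from far fewer environment frames.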

Citations (1)
