
Transformer Based Reinforcement Learning For Games (1912.03918v1)

Published 9 Dec 2019 in cs.LG and cs.NE

Abstract: Recent times have witnessed sharp improvements in reinforcement learning tasks using deep reinforcement learning techniques such as Deep Q-Networks, Policy Gradients, and Actor-Critic methods, which rely on deep learning models trained by back-propagation of gradients. An active area of research in reinforcement learning is training agents to play complex video games, something so far accomplished only by human intelligence. Some state-of-the-art performances in video game playing using deep reinforcement learning are obtained by processing the sequence of frames from a video game, passing them through a convolutional network to obtain features, and then using recurrent neural networks to determine the action leading to optimal rewards. The recurrent neural network learns to extract the meaningful signal from the sequence of such features. In this work, we propose a method utilizing a transformer network, which has recently replaced RNNs in NLP, and perform experiments to compare it with existing methods.
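The abstract's core idea is to replace the recurrent network that aggregates per-frame CNN features with self-attention over the frame sequence. The sketch below is a minimal, framework-free illustration of that idea in NumPy, not the paper's actual implementation: all shapes, weight matrices, and the mean-pooled Q-value head are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(frame_feats, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of frame features.

    frame_feats: (T, d) array, one CNN feature vector per game frame.
    Returns a (T, d) array where each frame's feature is a weighted
    mix of all frames, playing the role the RNN's hidden state serves
    in prior work.
    """
    Q, K, V = frame_feats @ Wq, frame_feats @ Wk, frame_feats @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) pairwise attention logits
    return softmax(scores, axis=-1) @ V        # (T, d) attended features

rng = np.random.default_rng(0)
T, d, n_actions = 4, 8, 6                      # toy sizes: 4 stacked frames
feats = rng.normal(size=(T, d))                # stand-in for CNN outputs
Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]
W_out = rng.normal(size=(d, n_actions))        # hypothetical Q-value head

attended = self_attention(feats, Wq, Wk, Wv)
q_values = attended.mean(axis=0) @ W_out       # pool over frames, score actions
action = int(q_values.argmax())                # greedy action selection
```

In a full agent, the random `feats` would come from a convolutional encoder, multiple attention heads and layers would be stacked, and the weights would be trained with a deep RL objective such as the Q-learning loss.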

Authors (4)
  1. Uddeshya Upadhyay (17 papers)
  2. Nikunj Shah (2 papers)
  3. Sucheta Ravikanti (3 papers)
  4. Mayanka Medhe (1 paper)
Citations (8)