
Deep Attention Recurrent Q-Network (1512.01693v1)

Published 5 Dec 2015 in cs.LG

Abstract: A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at human and superhuman levels. Its creators on the Google DeepMind team called the approach Deep Q-Network (DQN). We present an extension of DQN with "soft" and "hard" attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DARQN) algorithm on multiple Atari 2600 games show a level of performance superior to that of DQN. Moreover, the built-in attention mechanisms allow direct online monitoring of the training process by highlighting the regions of the game screen the agent focuses on when making decisions.
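The "soft" attention described in the abstract can be sketched as a weighted sum over spatial feature vectors from the convolutional map, with weights conditioned on the recurrent state. The snippet below is a minimal illustrative sketch using additive (Bahdanau-style) scoring; all names, shapes, and the scoring form are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def soft_attention(features, hidden, W_f, W_h, w):
    """Illustrative soft-attention step (shapes/names are assumptions).

    features: (L, D) array of L region feature vectors from the CNN map.
    hidden:   (H,) recurrent (e.g. LSTM) state from the previous step.
    W_f, W_h, w: learned projection parameters of the attention network.
    Returns the context vector (D,) and the attention weights (L,).
    """
    # Score each screen region by combining its features with the hidden state.
    scores = np.tanh(features @ W_f + hidden @ W_h) @ w        # (L,)
    # Softmax over regions gives a distribution of attention weights.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context is the attention-weighted sum of region features; the weights
    # themselves can be visualized to highlight where the agent is "looking".
    context = weights @ features                               # (D,)
    return context, weights
```

The weights form a probability distribution over screen regions, which is what enables the online monitoring the abstract mentions: rendering them over the game frame shows which regions drive the current decision. "Hard" attention would instead sample a single region from this distribution.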

Authors (5)
  1. Ivan Sorokin (4 papers)
  2. Alexey Seleznev (1 paper)
  3. Mikhail Pavlov (15 papers)
  4. Aleksandr Fedorov (2 papers)
  5. Anastasiia Ignateva (1 paper)
Citations (144)