Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games (2010.11655v3)

Published 22 Oct 2020 in cs.LG and cs.AI

Abstract: We study reinforcement learning (RL) for text-based games, which are interactive simulations in the context of natural language. While different methods have been developed to represent the environment information and language actions, existing RL agents are not empowered with any reasoning capabilities to deal with textual games. In this work, we aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure. We propose a stacked hierarchical attention mechanism to construct an explicit representation of the reasoning process by exploiting the structure of the knowledge graph. We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
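To make the abstract's core idea concrete, the sketch below shows one plausible form of a two-level ("stacked hierarchical") attention over knowledge-graph triples in PyTorch: a first attention level pools the tokens inside each (subject, relation, object) triple, and a second level attends over the resulting triple summaries with a query conditioned on the game observation. This is a minimal illustrative sketch only; the class name HierarchicalKGAttention, the tensor shapes, and the observation-conditioned query are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalKGAttention(nn.Module):
    """Illustrative two-level attention over knowledge-graph triples.

    Level 1 attends over the three embeddings inside each
    (subject, relation, object) triple; level 2 attends over the
    resulting triple summaries, conditioned on the encoded observation.
    Hypothetical sketch, not the paper's actual architecture.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.triple_query = nn.Parameter(torch.randn(dim))  # level-1 query
        self.obs_proj = nn.Linear(dim, dim)  # level-2 query from observation

    def forward(
        self, triples: torch.Tensor, obs: torch.Tensor
    ) -> tuple[torch.Tensor, torch.Tensor]:
        # triples: (num_triples, 3, dim) -- embedded (s, r, o) tuples
        # obs:     (dim,)                -- encoded textual observation

        # Level 1: pool each triple into a single vector via attention.
        w1 = F.softmax(triples @ self.triple_query, dim=-1)     # (num_triples, 3)
        triple_vecs = (w1.unsqueeze(-1) * triples).sum(dim=1)   # (num_triples, dim)

        # Level 2: attend over triples with an observation-conditioned query.
        q = self.obs_proj(obs)                                  # (dim,)
        w2 = F.softmax(triple_vecs @ q, dim=-1)                 # (num_triples,)
        state = (w2.unsqueeze(-1) * triple_vecs).sum(dim=0)     # (dim,)
        return state, w2


# Toy usage: 4 triples, embedding size 8.
layer = HierarchicalKGAttention(dim=8)
state, weights = layer(torch.randn(4, 3, 8), torch.randn(8))
print(state.shape, weights.shape)  # torch.Size([8]) torch.Size([4])
```

Returning the second-level weights alongside the pooled state reflects the interpretability claim in the abstract: the weights expose which triples most influenced the resulting state, and hence the agent's action choice.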

Authors (6)
  1. Yunqiu Xu
  2. Meng Fang
  3. Ling Chen
  4. Yali Du
  5. Joey Tianyi Zhou
  6. Chengqi Zhang
Citations (43)
