Generalization in Text-based Games via Hierarchical Reinforcement Learning (2109.09968v1)
Abstract: Deep reinforcement learning provides a promising approach for text-based games in studying natural language communication between humans and artificial agents. However, generalization remains a major challenge, as agents depend critically on the complexity and variety of training tasks. In this paper, we address this problem by introducing a hierarchical framework built upon a knowledge graph-based RL agent. At the high level, a meta-policy decomposes the whole game into a set of subtasks specified by textual goals and selects one of them based on the knowledge graph (KG). At the low level, a sub-policy performs goal-conditioned reinforcement learning. We carry out experiments on games with various difficulty levels and show that the proposed method enjoys favorable generalizability.
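The two-level decision loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, the word-overlap goal scoring, and the action format are all hypothetical placeholders standing in for the learned meta-policy and goal-conditioned sub-policy.

```python
class MetaPolicy:
    """High level: pick one textual subtask goal, conditioned on the KG.
    Hypothetical sketch -- a real meta-policy would be a learned network."""

    def select_goal(self, knowledge_graph, candidate_goals):
        # Placeholder scoring: prefer goals whose words overlap KG entities.
        def score(goal):
            return sum(word in knowledge_graph for word in goal.split())
        return max(candidate_goals, key=score)


class SubPolicy:
    """Low level: goal-conditioned action selection.
    Placeholder -- a real sub-policy would encode (observation, goal)."""

    def act(self, observation, goal):
        return f"do({goal})"


def hierarchical_step(knowledge_graph, observation, candidate_goals):
    """One step of the hierarchy: meta-policy picks a subtask goal,
    sub-policy acts toward that goal."""
    meta, sub = MetaPolicy(), SubPolicy()
    goal = meta.select_goal(knowledge_graph, candidate_goals)
    return goal, sub.act(observation, goal)
```

For example, with a KG containing the entities `{"apple", "take", "fridge"}`, the sketch would select the goal "take apple" over "open door", since its words overlap the KG.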
- Yunqiu Xu (9 papers)
- Meng Fang (100 papers)
- Ling Chen (144 papers)
- Yali Du (63 papers)
- Chengqi Zhang (74 papers)