
Playing Text-Based Games with Common Sense (2012.02757v1)

Published 4 Dec 2020 in cs.AI and cs.CL

Abstract: Text-based games are simulations in which an agent interacts with the world purely through natural language. They typically consist of a number of puzzles interspersed with interactions with common, everyday objects and locations. Deep reinforcement learning agents can learn to solve these puzzles. However, the everyday interactions with the environment, while trivial for human players, present as additional puzzles to agents. We explore two techniques for incorporating commonsense knowledge into agents: inferring possibly hidden aspects of the world state with either a commonsense inference model (COMET) or an LLM (BERT), and biasing an agent's exploration according to common patterns recognized by an LLM. We test our techniques in the 9:05 game, which is an extreme version of a text-based game that requires numerous interactions with common, everyday objects in common, everyday scenarios. We conclude that agents that augment their beliefs about the world state with commonsense inferences are more robust to observational errors and omissions of common elements from text descriptions.
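
As a rough illustration of the first technique described in the abstract, the sketch below shows how an agent's belief state might be augmented with commonsense inferences before acting. This is a minimal, hypothetical example: the `infer_commonsense` stub and the specific inferred facts stand in for a real inference model such as COMET or BERT and are not taken from the paper.

```python
# Hypothetical sketch: augment an agent's belief state with commonsense
# inferences drawn from an external model (e.g. COMET or BERT).
# `infer_commonsense` is a stand-in stub, not a real model API.

from typing import Set


def infer_commonsense(observation: str) -> Set[str]:
    """Stub for a commonsense inference model.

    A real implementation would query the model with the observation text
    and return plausible, possibly unstated facts about the world.
    """
    inferred = set()
    if "desk" in observation:
        inferred.add("the desk has a drawer")
    if "phone" in observation and "ringing" in observation:
        inferred.add("the phone can be answered")
    return inferred


def update_belief_state(belief: Set[str], observation: str) -> Set[str]:
    """Merge the literal observation with commonsense inferences.

    The augmented belief state lets the agent act on common, everyday
    facts even when the text description omits them.
    """
    belief = set(belief)
    belief.add(observation)
    belief |= infer_commonsense(observation)
    return belief


# Example: the observation never states that the desk has a drawer,
# but the augmented belief state includes that inference.
belief = update_belief_state(
    set(), "You are in a bedroom. There is a desk and a ringing phone."
)
print(belief)
```

This kind of augmentation is what makes the agent more robust to omissions in the text description: the belief state carries inferred facts the observation never mentions.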

Authors (4)
  1. Sahith Dambekodi (2 papers)
  2. Spencer Frazier (11 papers)
  3. Prithviraj Ammanabrolu (39 papers)
  4. Mark O. Riedl (57 papers)
Citations (23)
