Mental Modeling of Reinforcement Learning Agents by Language Models (2406.18505v1)

Published 26 Jun 2024 in cs.LG, cs.AI, cs.CL, and cs.RO

Abstract: Can emergent LLMs faithfully model the intelligence of decision-making agents? Though modern LLMs already exhibit some reasoning ability and can, in theory, express any probability distribution over tokens, it remains underexplored how the world knowledge these pretrained models have memorized can be used to comprehend an agent's behaviour in the physical world. This study empirically examines, for the first time, how well LLMs can build a mental model of agents, termed agent mental modelling, by reasoning about an agent's behaviour and its effect on states from the agent's interaction history. This research may unveil the potential of leveraging LLMs to elucidate RL agent behaviour, addressing a key challenge in eXplainable reinforcement learning (XRL). To this end, we propose specific evaluation metrics and test them on selected RL task datasets of varying complexity, reporting findings on agent mental model establishment. Our results show that LLMs are not yet capable of fully mentally modelling agents through inference alone, without further innovations. This work thus provides new insights into the capabilities and limitations of modern LLMs.
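The abstract describes probing an LLM with an agent's interaction history and scoring how well it anticipates the agent's behaviour and its effect on states. As a rough illustration only, not the authors' actual prompts, datasets, or metrics, the sketch below shows one such probe: a held-out step's action is predicted from a serialized (state, action, reward) history and scored by exact-match accuracy. `Step`, `query_llm`, and the grid-world strings are placeholder assumptions.

```python
# Illustrative sketch (not the paper's pipeline): can an LLM predict an RL
# agent's next action from its interaction history?
from dataclasses import dataclass

@dataclass
class Step:
    state: str     # textual description of the environment state
    action: str    # action the agent took in that state
    reward: float  # reward the agent received

def build_prompt(history: list[Step], query_state: str, action_space: list[str]) -> str:
    """Serialize the interaction history and ask for the agent's next action."""
    lines = ["You observe an RL agent interacting with an environment."]
    for i, s in enumerate(history):
        lines.append(f"t={i}: state={s.state} | action={s.action} | reward={s.reward}")
    lines.append(f"Current state: {query_state}")
    lines.append(f"Which action from {action_space} will the agent take? Answer with the action only.")
    return "\n".join(lines)

def query_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a call to whatever LLM you are evaluating.
    return "move_right"

def action_prediction_accuracy(episodes) -> float:
    """Fraction of held-out steps where the LLM's predicted action matches the agent's."""
    correct, total = 0, 0
    for history, query_state, true_action, action_space in episodes:
        pred = query_llm(build_prompt(history, query_state, action_space)).strip()
        correct += int(pred == true_action)
        total += 1
    return correct / max(total, 1)

if __name__ == "__main__":
    history = [Step("agent at (0,0), goal at (2,0)", "move_right", 0.0),
               Step("agent at (1,0), goal at (2,0)", "move_right", 1.0)]
    episodes = [(history, "agent at (1,0), goal at (2,0)", "move_right",
                 ["move_left", "move_right", "move_up", "move_down"])]
    print(f"action-prediction accuracy: {action_prediction_accuracy(episodes):.2f}")
```

An analogous probe for "effect on states" would ask the model to predict the next state given a state-action pair; the paper's actual metrics and task datasets are defined in the full text.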

Authors (5)
  1. Wenhao Lu (17 papers)
  2. Xufeng Zhao (14 papers)
  3. Josua Spisak (6 papers)
  4. Jae Hee Lee (24 papers)
  5. Stefan Wermter (157 papers)
Citations (2)