
The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs (2306.14325v2)

Published 25 Jun 2023 in cs.AI and cs.LG

Abstract: Human beings are social creatures. We routinely reason about other agents, and a crucial component of this social reasoning is inferring people's goals as we learn about their actions. In many settings, we can perform intuitive but reliable goal inference from language descriptions of agents, actions, and the background environments. In this paper, we study this process of language driving and influencing social reasoning in a probabilistic goal inference domain. We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios. The "neuro" part is a large language model (LLM) that translates language descriptions to code representations, and the "symbolic" part is a Bayesian inverse planning engine. To test our model, we design and run a human experiment on a linguistic goal inference task. Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
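The abstract's "symbolic" component, Bayesian inverse planning, can be sketched as follows: maintain a prior over candidate goals, score each observed action by how likely a (boltzmann-)rational agent pursuing each goal would take it, and update the posterior by Bayes' rule. The grid world, distance-based action values, and rationality parameter below are illustrative assumptions, not the paper's actual engine (which translates language to code and uses a full planner).

```python
# Illustrative sketch of Bayesian inverse planning for goal inference.
# Assumptions (not from the paper): a grid world, goals as target cells,
# and a Boltzmann-rational agent whose action likelihood is a softmax
# over how much each move reduces Manhattan distance to the goal.
import math

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def action_likelihood(state, action, goal, beta=2.0):
    """P(action | state, goal) via softmax over negative distance-to-goal
    after the move (a crude stand-in for a real planner's action values)."""
    def value(a):
        nx, ny = state[0] + MOVES[a][0], state[1] + MOVES[a][1]
        return -(abs(goal[0] - nx) + abs(goal[1] - ny))
    z = sum(math.exp(beta * value(a)) for a in MOVES)
    return math.exp(beta * value(action)) / z

def infer_goals(trajectory, goals, prior=None):
    """Posterior over goals given observed (state, action) pairs."""
    post = dict(prior) if prior else {g: 1.0 / len(goals) for g in goals}
    for state, action in trajectory:
        for g in goals:
            post[g] *= action_likelihood(state, action, g)
        total = sum(post.values())          # renormalize after each step
        post = {g: p / total for g, p in post.items()}
    return post

# An agent at (0, 0) moves right twice; candidate goals are (3, 0) and (0, 3).
goals = [(3, 0), (0, 3)]
trajectory = [((0, 0), "right"), ((1, 0), "right")]
posterior = infer_goals(trajectory, goals)
```

Here the repeated rightward moves raise the posterior on the goal at (3, 0), mirroring the intuitive goal inference the paper studies; in the full model, the LLM's job is to produce the scenario representation (states, goals, actions) from a language description rather than having it hand-coded.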

Authors (8)
  1. Lance Ying (14 papers)
  2. Katherine M. Collins (32 papers)
  3. Megan Wei (5 papers)
  4. Cedegao E. Zhang (8 papers)
  5. Tan Zhi-Xuan (22 papers)
  6. Adrian Weller (150 papers)
  7. Joshua B. Tenenbaum (257 papers)
  8. Lionel Wong (16 papers)
Citations (9)
