
Overcoming Referential Ambiguity in Language-Guided Goal-Conditioned Reinforcement Learning (2209.12758v2)

Published 26 Sep 2022 in cs.LG and cs.CL

Abstract: Teaching an agent to perform new tasks using natural language can easily be hindered by ambiguities in interpretation. When a teacher instructs a learner about an object by referring to its features, the learner can misunderstand the teacher's intentions, for instance if the instruction ambiguously refers to features of the object, a phenomenon called referential ambiguity. We study how two concepts derived from cognitive science can help resolve such referential ambiguities: pedagogy (selecting the right instructions) and pragmatism (learning the preferences of the other agent using inductive reasoning). We apply these ideas to a teacher/learner setup with two artificial agents on a simulated robotic block-stacking task and show that both concepts improve the sample efficiency of training the learner.
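To make the pedagogy/pragmatism idea concrete, the sketch below shows a minimal Rational-Speech-Acts-style Bayesian inference over a referentially ambiguous instruction. This is not the paper's implementation (which trains RL agents on block-stacking); the object names, feature sets, and normalization are illustrative assumptions, meant only to show how a "pragmatic" learner can disambiguate by reasoning about which instruction a "pedagogical" teacher would have chosen.

```python
# Minimal sketch (not the paper's code): Bayesian pragmatic inference over a
# referentially ambiguous instruction. Objects and features are illustrative.

import itertools

# Each candidate object is described by the features a teacher could mention.
objects = {
    "block_A": {"red", "small"},
    "block_B": {"red", "large"},
    "block_C": {"blue", "small"},
}
features = set(itertools.chain.from_iterable(objects.values()))

def literal_listener(feature):
    """Uniform belief over all objects consistent with the mentioned feature."""
    consistent = [o for o, feats in objects.items() if feature in feats]
    return {o: 1.0 / len(consistent) for o in consistent}

def pedagogical_speaker(target):
    """Teacher prefers features that make the target easy to identify."""
    scores = {f: literal_listener(f).get(target, 0.0) for f in objects[target]}
    total = sum(scores.values())
    return {f: s / total for f, s in scores.items()}

def pragmatic_listener(feature):
    """Learner inverts the speaker model: P(object | feature) ∝ P(feature | object)."""
    scores = {o: pedagogical_speaker(o).get(feature, 0.0) for o in objects}
    total = sum(scores.values())
    return {o: s / total for o, s in scores.items() if s > 0}

if __name__ == "__main__":
    # "red" is ambiguous for a literal listener (block_A or block_B), but a
    # pragmatic listener reasons that a teacher meaning block_B would more
    # likely have said "large", so "red" more probably refers to block_A.
    print(literal_listener("red"))    # {'block_A': 0.5, 'block_B': 0.5}
    print(pragmatic_listener("red"))  # {'block_A': 0.6, 'block_B': 0.4}
```

In this toy setting the pragmatic listener shifts probability mass toward the object for which the ambiguous instruction was the teacher's best available choice, which is the same disambiguation pressure the paper exploits to improve the learner's sample efficiency.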

Authors (3)
  1. Hugo Caselles-Dupré (19 papers)
  2. Olivier Sigaud (56 papers)
  3. Mohamed Chetouani (36 papers)
Citations (2)
