
Language Grounding through Social Interactions and Curiosity-Driven Multi-Goal Learning (1911.03219v1)

Published 8 Nov 2019 in cs.LG, cs.CL, and stat.ML

Abstract: Autonomous reinforcement learning agents, like children, do not have access to predefined goals and reward functions. They must discover potential goals, learn their own reward functions and engage in their own learning trajectory. Children, however, benefit from exposure to language, helping to organize and mediate their thought. We propose LE2 (Language Enhanced Exploration), a learning algorithm leveraging intrinsic motivations and natural language (NL) interactions with a descriptive social partner (SP). Using NL descriptions from the SP, it can learn an NL-conditioned reward function to formulate goals for intrinsically motivated goal exploration and learn a goal-conditioned policy. By exploring, collecting descriptions from the SP and jointly learning the reward function and the policy, the agent grounds NL descriptions into real behavioral goals. From simple goals discovered early to more complex goals discovered by experimenting on simpler ones, our agent autonomously builds its own behavioral repertoire. This naturally occurring curriculum is supplemented by an active learning curriculum resulting from the agent's intrinsic motivations. Experiments are presented with a simulated robotic arm that interacts with several objects including tools.
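The abstract describes a loop in which the agent explores, receives natural-language descriptions from a social partner, and jointly learns an NL-conditioned reward function and a goal-conditioned policy. The sketch below is a minimal, hypothetical illustration of that loop, not the authors' implementation: the environment, the `SocialPartner`, `RewardModel`, and `GoalConditionedPolicy` classes, and all their interfaces are assumptions made for exposition.

```python
"""
Minimal LE2-style loop (hypothetical sketch; names and interfaces are
assumptions, not the paper's code). The agent explores, collects NL
descriptions from a social partner, and jointly trains a language-conditioned
reward model and a goal-conditioned policy.
"""
import random
from collections import defaultdict


class SocialPartner:
    """Stand-in descriptive partner: returns an NL description of an outcome."""
    def describe(self, trajectory):
        # Toy rule on the final state; the paper uses richer descriptions.
        return "object moved right" if trajectory[-1] > 0 else "object moved left"


class RewardModel:
    """Toy NL-conditioned reward: tracks how often a description co-occurs
    with a discretized outcome (a stand-in for a learned classifier)."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, description, trajectory):
        self.counts[description][self._bucket(trajectory)] += 1

    def reward(self, description, trajectory):
        hits = self.counts[description]
        total = sum(hits.values()) or 1
        return hits[self._bucket(trajectory)] / total

    @staticmethod
    def _bucket(trajectory):
        return int(trajectory[-1] > 0)


class GoalConditionedPolicy:
    """Toy policy: one action preference per NL goal, nudged toward actions
    that the learned reward model scores highly."""
    def __init__(self):
        self.preferences = defaultdict(float)

    def act(self, goal):
        return self.preferences[goal] + random.uniform(-1.0, 1.0)

    def update(self, goal, action, reward):
        self.preferences[goal] += 0.1 * reward * (action - self.preferences[goal])


def rollout(action, horizon=5):
    """Trivial 1-D environment stand-in: the action drifts an object."""
    state, trajectory = 0.0, []
    for _ in range(horizon):
        state += action + random.uniform(-0.1, 0.1)
        trajectory.append(state)
    return trajectory


if __name__ == "__main__":
    partner, reward_model, policy = SocialPartner(), RewardModel(), GoalConditionedPolicy()
    discovered_goals = set()
    for episode in range(200):
        # Pick a goal among discovered NL descriptions (explore randomly at first).
        goal = random.choice(sorted(discovered_goals)) if discovered_goals else None
        action = policy.act(goal) if goal else random.uniform(-1.0, 1.0)
        trajectory = rollout(action)
        # The social partner describes what happened; the description becomes a goal.
        description = partner.describe(trajectory)
        discovered_goals.add(description)
        reward_model.update(description, trajectory)
        if goal is not None:
            policy.update(goal, action, reward_model.reward(goal, trajectory))
    print("Discovered goals:", discovered_goals)
```

The key design point the sketch tries to mirror is that goals are not predefined: they emerge from the partner's descriptions, and the reward model grounds each description in outcomes the agent has actually produced.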

Authors (6)
  1. Nicolas Lair (3 papers)
  2. Cédric Colas (27 papers)
  3. Rémy Portelas (19 papers)
  4. Jean-Michel Dussoux (3 papers)
  5. Peter Ford Dominey (8 papers)
  6. Pierre-Yves Oudeyer (95 papers)
Citations (7)
