
OCALM: Object-Centric Assessment with Language Models (2406.16748v1)

Published 24 Jun 2024 in cs.LG and cs.CL

Abstract: Properly defining a reward signal to efficiently train a reinforcement learning (RL) agent is a challenging task. Designing balanced objective functions from which a desired behavior can emerge requires expert knowledge, especially for complex environments. Learning rewards from human feedback or using LLMs to directly provide rewards are promising alternatives, allowing non-experts to specify goals for the agent. However, black-box reward models make it difficult to debug the reward. In this work, we propose Object-Centric Assessment with LLMs (OCALM) to derive inherently interpretable reward functions for RL agents from natural language task descriptions. OCALM uses the extensive world-knowledge of LLMs while leveraging the object-centric nature common to many environments to derive reward functions focused on relational concepts, providing RL agents with the ability to derive policies from task descriptions.
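To make the idea of an inherently interpretable, object-centric reward concrete, here is a minimal sketch of what such a reward function could look like. The object names (`agent`, `goal`), the relational concept (`close_to` as a distance threshold), and the task itself are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass
from math import hypot

# Hypothetical object-centric state: each object has a name and 2D position.
@dataclass
class Obj:
    name: str
    x: float
    y: float

def reward(objects: dict[str, Obj]) -> float:
    """Interpretable reward built from a relational concept, e.g. for a
    task description like 'move the agent close to the goal'.
    Object names and the distance threshold are illustrative."""
    agent, goal = objects["agent"], objects["goal"]
    dist = hypot(agent.x - goal.x, agent.y - goal.y)
    # Dense shaping term plus a bonus when the relation close_to(agent, goal) holds.
    return -dist + (10.0 if dist < 1.0 else 0.0)

state = {"agent": Obj("agent", 0.0, 0.0), "goal": Obj("goal", 3.0, 4.0)}
print(reward(state))  # distance 5.0, so the shaping term gives -5.0
```

Because the reward is an explicit function of named objects and relations rather than a black-box model, it can be read and debugged directly, which is the property the abstract emphasizes.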

Authors (6)
  1. Timo Kaufmann (5 papers)
  2. Jannis Blüml (11 papers)
  3. Antonia Wüst (9 papers)
  4. Quentin Delfosse (20 papers)
  5. Kristian Kersting (205 papers)
  6. Eyke Hüllermeier (129 papers)
Citations (1)