Towards A Unified Agent with Foundation Models (2307.09668v1)

Published 18 Jul 2023 in cs.RO, cs.AI, and cs.LG

Abstract: LLMs and Vision Language Models have recently demonstrated unprecedented capabilities in terms of understanding human intentions, reasoning, scene understanding, and planning-like behaviour, in text form, among many others. In this work, we investigate how to embed and leverage such abilities in Reinforcement Learning (RL) agents. We design a framework that uses language as the core reasoning tool, exploring how this enables an agent to tackle a series of fundamental RL challenges, such as efficient exploration, reusing experience data, scheduling skills, and learning from observations, which traditionally require separate, vertically designed algorithms. We test our method on a sparse-reward simulated robotic manipulation environment, where a robot needs to stack a set of objects. We demonstrate substantial performance improvements over baselines in exploration efficiency and ability to reuse data from offline datasets, and illustrate how to reuse learned skills to solve novel tasks or imitate videos of human experts.
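The sketch below is not the authors' code; it is a minimal illustration of the general pattern the abstract describes, where a language model decomposes a sparse-reward stacking task into sub-goals and the resulting experience is stored for offline reuse. The functions vlm_describe_scene, llm_propose_subgoals, and execute_skill are hypothetical placeholders standing in for a real VLM, LLM, and learned low-level policy.

```python
import random

def vlm_describe_scene(observation):
    # Placeholder: a real VLM would turn pixel observations into text.
    return "red block on table, blue block on table, green block on table"

def llm_propose_subgoals(task, scene_text):
    # Placeholder: a real LLM would decompose the task given the scene text.
    return [
        "pick up red block",
        "place red block on blue block",
        "pick up green block",
        "place green block on red block",
    ]

def execute_skill(subgoal):
    # Placeholder for a learned low-level skill; returns success/failure and
    # a transition that can later be relabelled and reused offline.
    success = random.random() > 0.3
    return success, {"subgoal": subgoal, "success": success}

def language_guided_episode(task="stack the blocks"):
    replay = []  # experience buffer for offline reuse
    scene = vlm_describe_scene(observation=None)
    for subgoal in llm_propose_subgoals(task, scene):
        success, transition = execute_skill(subgoal)
        replay.append(transition)
        if not success:
            break  # in the full framework, the LLM could replan here
    return replay

if __name__ == "__main__":
    print(language_guided_episode())
```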

Authors (6)
  1. Norman Di Palo (15 papers)
  2. Arunkumar Byravan (27 papers)
  3. Leonard Hasenclever (33 papers)
  4. Markus Wulfmeier (46 papers)
  5. Nicolas Heess (139 papers)
  6. Martin Riedmiller (64 papers)
Citations (51)
