
Towards Autonomous Reinforcement Learning for Real-World Robotic Manipulation with Large Language Models (2503.04280v4)

Published 6 Mar 2025 in cs.RO, cs.AI, and cs.LG

Abstract: Recent advancements in LLMs and Vision Language Models (VLMs) have significantly impacted robotics, enabling high-level semantic motion planning applications. Reinforcement Learning (RL), a complementary paradigm, enables agents to autonomously optimize complex behaviors through interaction and reward signals. However, designing effective reward functions for RL remains challenging, especially in real-world tasks where sparse rewards are insufficient and dense rewards require elaborate design. In this work, we propose Autonomous Reinforcement learning for Complex Human-Informed Environments (ARCHIE), an unsupervised pipeline leveraging GPT-4, a pre-trained LLM, to generate reward functions directly from natural language task descriptions. The rewards are used to train RL agents in simulated environments, where we formalize the reward generation process to enhance feasibility. Additionally, GPT-4 automates the coding of task success criteria, creating a fully automated, one-shot procedure for translating human-readable text into deployable robot skills. Our approach is validated through extensive simulated experiments on single-arm and bi-manual manipulation tasks using an ABB YuMi collaborative robot, highlighting its practicality and effectiveness. Tasks are demonstrated on the real robot setup.
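The pipeline the abstract describes — an LLM turning a natural-language task description into a reward function and a success check for RL training — can be sketched as below. This is a minimal illustration, not the paper's actual code: the prompt format, function names (`query_llm`, `build_skill`), and the stubbed LLM reply are all assumptions; in ARCHIE the reply would come from a real GPT-4 call.

```python
# Illustrative sketch of an LLM-to-reward pipeline (hypothetical names and
# prompt; the stubbed LLM reply stands in for an actual GPT-4 response).

TASK = "Move the cube to the target position."


def query_llm(prompt: str) -> str:
    """Stub for the LLM call: returns fixed reward/success code so the
    sketch runs offline. A real pipeline would query GPT-4 here."""
    return (
        "def reward(cube_pos, target_pos):\n"
        "    # Dense reward: negative Euclidean distance to the target.\n"
        "    return -sum((c - t) ** 2 for c, t in zip(cube_pos, target_pos)) ** 0.5\n"
        "\n"
        "def success(cube_pos, target_pos, tol=0.02):\n"
        "    # Success criterion: cube within tol meters of the target.\n"
        "    d = sum((c - t) ** 2 for c, t in zip(cube_pos, target_pos)) ** 0.5\n"
        "    return d < tol\n"
    )


def build_skill(task_description: str):
    """Turn a natural-language task into (reward, success) callables."""
    code = query_llm(
        f"Write Python reward and success functions for: {task_description}"
    )
    namespace = {}
    exec(code, namespace)  # executes generated code; sandbox this in practice
    return namespace["reward"], namespace["success"]


reward, success = build_skill(TASK)
# The RL loop would call reward(...) at each step and success(...) to
# terminate episodes; here we just evaluate them once.
r = reward((0.1, 0.0, 0.0), (0.0, 0.0, 0.0))   # -0.1
done = success((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))  # True
```

The generated `reward` would drive a standard RL algorithm in simulation, while `success` provides the automated completion check that makes the procedure one-shot, as the abstract describes.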

Authors (6)
  1. Niccolò Turcato (8 papers)
  2. Matteo Iovino (13 papers)
  3. Aris Synodinos (1 paper)
  4. Alberto Dalla Libera (20 papers)
  5. Ruggero Carli (59 papers)
  6. Pietro Falco (12 papers)
