Empowering Large Language Model Agents through Action Learning (2402.15809v2)

Published 24 Feb 2024 in cs.AI and cs.CL

Abstract: LLM agents have recently garnered increasing interest, yet they are limited in their ability to learn from trial and error, a key element of intelligent behavior. In this work, we argue that the capacity to learn new actions from experience is fundamental to the advancement of learning in LLM agents. While humans naturally expand their action spaces and develop skills through experiential learning, LLM agents typically operate within fixed action spaces, limiting their potential for growth. To address these challenges, our study explores open-action learning for language agents. We introduce LearnAct, a framework with an iterative learning strategy to create and improve actions in the form of Python functions. In each iteration, the LLM revises and updates the currently available actions based on the errors identified in unsuccessful training tasks, thereby enhancing action effectiveness. Our experimental evaluations across the Robotic Planning and AlfWorld environments reveal that, after learning on a few training task instances, our approach to open-action learning markedly improves agent performance on the given task type (by 32 percent in AlfWorld compared to ReAct+Reflexion, for instance), highlighting the importance of experiential action learning in the development of more intelligent LLM agents.
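
As a rough illustration of the loop the abstract describes (not the authors' LearnAct implementation), the sketch below keeps the agent's actions as a block of Python source, rolls out each training task, and asks the LLM to revise the actions using the collected error traces. The names call_llm, run_task, and learn_actions are hypothetical placeholders introduced for this sketch, not the paper's actual interface.

```python
# Minimal sketch of iterative open-action learning, under assumed interfaces.
from typing import List, Tuple


def call_llm(prompt: str) -> str:
    """Placeholder for a model call; would return revised Python action definitions."""
    return "def pick_up(obj):\n    ...  # revised by the LLM"


def run_task(task: str, action_library: str) -> Tuple[bool, str]:
    """Placeholder rollout: would run the agent on one training task with these actions."""
    return False, f"task '{task}' failed: no applicable action"


def learn_actions(train_tasks: List[str], action_library: str,
                  max_iterations: int = 3) -> str:
    """Iteratively revise the action library (Python source) from failed-task errors."""
    for _ in range(max_iterations):
        failures = []
        for task in train_tasks:
            ok, trace = run_task(task, action_library)
            if not ok:
                failures.append(trace)  # keep error traces for the revision prompt
        if not failures:
            break  # every training task solved; stop refining
        prompt = ("Here are the agent's current actions:\n" + action_library
                  + "\n\nThese runs failed:\n" + "\n".join(failures)
                  + "\n\nRewrite or extend the actions to fix the errors.")
        action_library = call_llm(prompt)  # LLM returns a revised set of Python functions
    return action_library
```

In this reading of the abstract, the refined action library is what the agent then uses at test time on new instances of the same task type.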

Authors (8)
  1. Haiteng Zhao (13 papers)
  2. Chang Ma (20 papers)
  3. Guoyin Wang (108 papers)
  4. Jing Su (47 papers)
  5. Lingpeng Kong (134 papers)
  6. Jingjing Xu (80 papers)
  7. Zhi-Hong Deng (39 papers)
  8. Hongxia Yang (130 papers)
Citations (5)