The Lighthouse of Language: Enhancing LLM Agents via Critique-Guided Improvement (2503.16024v1)

Published 20 Mar 2025 in cs.CL and cs.AI

Abstract: LLMs have recently transformed from text-based assistants to autonomous agents capable of planning, reasoning, and iteratively improving their actions. While numerical reward signals and verifiers can effectively rank candidate actions, they often provide limited contextual guidance. In contrast, natural language feedback better aligns with the generative capabilities of LLMs, providing richer and more actionable suggestions. However, parsing and implementing this feedback effectively can be challenging for LLM-based agents. In this work, we introduce Critique-Guided Improvement (CGI), a novel two-player framework, comprising an actor model that explores an environment and a critic model that generates detailed natural language feedback. By training the critic to produce fine-grained assessments and actionable revisions, and the actor to utilize these critiques, our approach promotes more robust exploration of alternative strategies while avoiding local optima. Experiments in three interactive environments show that CGI outperforms existing baselines by a substantial margin. Notably, even a small critic model surpasses GPT-4 in feedback quality. The resulting actor achieves state-of-the-art performance, demonstrating the power of explicit iterative guidance to enhance decision-making in LLM-based agents.

Summary

  • The paper demonstrates that integrating a critic model to provide detailed natural language feedback significantly enhances LLM agents' decision-making.
  • It outlines a two-stage methodology where an actor model refines actions based on structured critiques during iterative supervised learning.
  • Experimental results show that CGI outperforms traditional methods, highlighting its practical benefits in dynamic, interactive environments.

Enhancing LLM Agents via Critique-Guided Improvement

The paper "The Lighthouse of Language: Enhancing LLM Agents via Critique-Guided Improvement" introduces a novel framework termed Critique-Guided Improvement (CGI), aimed at enhancing the performance of LLM-based agents through iterative refinement using natural language critiques. This framework involves two main entities: an actor model that explores the environment and a critic model that provides detailed feedback. The research explores the effectiveness of CGI in interactive environments and demonstrates its superiority over existing methodologies.

Critique-Guided Improvement Framework

The CGI framework operates under a two-player paradigm where the actor model proposes multiple actions and the critic model evaluates these actions by providing structured feedback. The feedback consists of critiques and actionable revision suggestions, enabling the actor to refine its decision-making process iteratively. This approach involves two main stages:

  1. Critique Generation: The critic model is trained to evaluate actions along predefined dimensions such as contribution, feasibility, and efficiency, generating critiques that guide the actor in improving its actions (Figure 1).

  2. Action Refinement: The actor uses these critiques to strengthen its revision capability through iterative supervised fine-tuning, which improves both its reasoning and its integration of external feedback. A minimal sketch of the resulting actor-critic loop follows below.

Figure 1: An overview of CGI, illustrating the interaction of the actor and critic models.
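
To make the two-stage loop concrete, here is a minimal Python sketch of a single CGI decision step. It is an illustration under stated assumptions, not the paper's implementation: `actor_llm` and `critic_llm` are hypothetical stand-ins for any chat-style model call, and the `Critique` structure simply mirrors the dimensions (contribution, feasibility, efficiency) named above.

```python
# Hypothetical sketch of one CGI decision step: the actor proposes candidate
# actions, the critic returns structured natural language feedback, and the
# actor revises. actor_llm / critic_llm are stand-ins for real model calls.

from dataclasses import dataclass


@dataclass
class Critique:
    # Per-dimension assessments named in the paper, plus an actionable
    # revision suggestion (the exact schema is an assumption).
    contribution: str
    feasibility: str
    efficiency: str
    suggested_revision: str


def actor_llm(prompt: str) -> str:
    raise NotImplementedError("plug in the actor model here")


def critic_llm(prompt: str) -> Critique:
    raise NotImplementedError("plug in the critic model here")


def cgi_step(observation: str, history: list[str], num_candidates: int = 3) -> str:
    """One actor-critic round: propose, critique, refine."""
    context = "\n".join(history + [observation])

    # 1) The actor proposes several candidate actions for the current state.
    candidates = [
        actor_llm(f"State:\n{context}\nPropose action #{i + 1}:")
        for i in range(num_candidates)
    ]

    # 2) The critic gives fine-grained verbal feedback on each candidate,
    #    rather than a bare scalar score.
    critiques = [
        critic_llm(f"State:\n{context}\nAction: {a}\nCritique this action:")
        for a in candidates
    ]

    # 3) The actor revises its decision conditioned on the critiques.
    feedback = "\n".join(
        f"Candidate: {a}\nRevision hint: {c.suggested_revision}"
        for a, c in zip(candidates, critiques)
    )
    return actor_llm(f"State:\n{context}\n{feedback}\nFinal revised action:")
```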

Experimental Results

CGI was tested in three interactive environments, where it surpassed traditional approaches that rely primarily on numerical feedback or untrained self-critiques. The results showed significant improvements in agent performance, with the trained critic model outperforming GPT-4 as a feedback provider.

Key Findings

  • Effectiveness of Verbal Feedback: The paper highlights that natural language feedback in the form of structured critiques is more effective than mere numerical signals. The critiques not only offer accurate assessments but also provide actionable insights, which are crucial for task execution in dynamic environments.
  • Challenges in Action Refinement: Fine-tuned models exhibit difficulty in fully leveraging the critiques, indicating a need for better alignment between feedback and model actions. CGI tackles this issue by refining actions iteratively, leading to improved task performance.
  • Continuous Performance Enhancement: CGI supports ongoing performance gains through its iterative refinement process, demonstrating superior adaptability to complex task requirements compared to baseline models (Figure 2; a hedged reading of the revision-ratio metric is sketched after this list).

    Figure 2: Revision Ratio of the actor model at different trajectory stages across three tasks.
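
The summary does not define the revision ratio plotted in Figure 2. One plausible reading, offered purely as an assumption, is the fraction of steps at which the actor's post-critique action differs from its initial proposal; a minimal sketch under that assumption:

```python
# Illustrative only: one plausible definition of "revision ratio", the
# fraction of steps where the post-critique action differs from the
# initial proposal. The paper may define the metric differently.

def revision_ratio(initial_actions: list[str], revised_actions: list[str]) -> float:
    """Fraction of steps at which the actor revised its action."""
    assert len(initial_actions) == len(revised_actions)
    if not initial_actions:
        return 0.0
    changed = sum(a != b for a, b in zip(initial_actions, revised_actions))
    return changed / len(initial_actions)


# Example: the actor revised 2 of 4 actions after critique -> 0.5
print(revision_ratio(["go north", "take key", "open door", "wait"],
                     ["go north", "take key", "unlock door", "look"]))
```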

Methodology and Implementation

The implementation of CGI involves training both the critic and actor models on datasets generated from expert models in the designated environments. The critic model is trained with supervised learning to produce critiques, while the actor model undergoes multiple refinement iterations to optimize its use of feedback (see Figure 3 and the sketch below).

Figure 3: Trajectory visualization highlighting the improved score trajectory of CGI compared with baseline models.
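
The following is a hypothetical sketch of the two-stage training pipeline described above, not the authors' code: `finetune` is a placeholder for any supervised fine-tuning routine, and all dataset field names (`state`, `action`, `expert_critique`, `critique`, `revised_action`) are assumptions made for illustration.

```python
# Hypothetical sketch of the two-stage CGI training pipeline. `finetune`
# stands in for any supervised fine-tuning routine; dataset field names
# are assumptions made for illustration, not the paper's schema.

from typing import Callable

SFTExample = dict[str, str]  # {"prompt": ..., "completion": ...}


def build_critic_data(trajectories: list[dict]) -> list[SFTExample]:
    # Stage 1: expert-labeled (state, action) pairs with structured
    # critiques become supervised targets for the critic.
    return [
        {
            "prompt": f"State: {t['state']}\nAction: {t['action']}\nCritique:",
            "completion": t["expert_critique"],
        }
        for t in trajectories
    ]


def build_actor_data(trajectories: list[dict]) -> list[SFTExample]:
    # Stage 2: the actor learns to map (state, draft action, critique)
    # to an improved action, so it internalizes how to use feedback.
    return [
        {
            "prompt": (
                f"State: {t['state']}\nDraft: {t['action']}\n"
                f"Critique: {t['critique']}\nRevised action:"
            ),
            "completion": t["revised_action"],
        }
        for t in trajectories
    ]


def train_cgi(
    trajectories: list[dict],
    finetune: Callable[[str, list[SFTExample]], None],
    rounds: int = 3,
) -> None:
    finetune("critic", build_critic_data(trajectories))
    for _ in range(rounds):  # iterative SFT of the actor
        finetune("actor", build_actor_data(trajectories))
        # In practice, fresh trajectories would be collected between rounds.
```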

Implications and Future Work

The findings from this research underline the potential of integrating detailed natural language criticism into LLM-based agent frameworks, providing a robust mechanism for action refinement and reasoning enhancement. Future work could focus on further reducing policy misalignment and on optimizing critique-integration strategies to enhance agent adaptability in diverse environments.

Conclusion

By leveraging structured critiques and iterative action refinement, the CGI framework provides actionable, context-specific guidance that significantly enhances LLM agents' decision-making processes. These findings not only highlight the efficacy of CGI in improving agent performance across various tasks but also point toward future development of LLM-based agents in complex environments.
