
Reinforcing Multi-Turn Reasoning in LLM Agents via Turn-Level Credit Assignment (2505.11821v1)

Published 17 May 2025 in cs.LG

Abstract: This paper investigates approaches to enhance the reasoning capabilities of LLM agents using Reinforcement Learning (RL). Specifically, we focus on multi-turn tool-use scenarios, which can be naturally modeled as Markov Decision Processes (MDPs). While existing approaches often train multi-turn LLM agents with trajectory-level advantage estimation in bandit settings, they struggle with turn-level credit assignment across multiple decision steps, limiting their performance on multi-turn reasoning tasks. To address this, we introduce a fine-grained turn-level advantage estimation strategy to enable more precise credit assignment in multi-turn agent interactions. The strategy is general and can be incorporated into various RL algorithms such as Group Relative Preference Optimization (GRPO). Our experimental evaluation on multi-turn reasoning and search-based tool-use tasks with GRPO implementations highlights the effectiveness of the MDP framework and the turn-level credit assignment in advancing the multi-turn reasoning capabilities of LLM agents in complex decision-making settings. Our method achieves 100% success in tool execution and 50% accuracy in exact answer matching, significantly outperforming baselines, which fail to invoke tools and achieve only 20-30% exact match accuracy.

Summary

  • The paper introduces a novel turn-level credit assignment strategy that provides precise feedback for each decision in multi-turn LLM interactions.
  • It models multi-turn tasks as Markov Decision Processes, effectively combining immediate and outcome rewards to guide LLM agents.
  • Experiments demonstrate that MT-GRPO outperforms baselines in tool execution, exact match accuracy, and training stability.

This paper (2505.11821) addresses the challenge of training LLMs to act as effective agents in multi-turn environments using Reinforcement Learning (RL). While RL has shown promise in improving LLM reasoning, applying it to tasks requiring sequential interaction with external tools (like search engines, calculators, etc.) faces a key hurdle: poor credit assignment. Existing methods often model multi-turn tasks as bandit problems, assigning credit based on the final outcome of an entire interaction trajectory. This makes it difficult for the agent to learn which specific steps or "turns" contributed positively or negatively to the result, hindering performance on complex, long-horizon reasoning tasks.

To overcome this, the authors propose two main contributions:

  1. Modeling Multi-Turn Interaction as an MDP: They frame multi-turn tool-use tasks as Markov Decision Processes (MDPs), which inherently capture the sequential nature of decisions and environmental feedback. This moves away from the bandit formulation used in many prior works.
  2. Turn-Level Credit Assignment: They introduce a fine-grained strategy for estimating advantages at the turn level, rather than just the trajectory level. This allows the agent to learn from feedback on individual steps, incorporating both turn-level (e.g., successful tool use) and outcome-level (e.g., correct final answer) rewards more effectively.

The proposed strategy is demonstrated by integrating it into the Group Relative Policy Optimization (GRPO) algorithm, resulting in Multi-Turn GRPO (MT-GRPO). The core idea of MT-GRPO's advantage estimation for a two-turn scenario is to combine turn-level advantages ($\hat{A}^T$) derived from immediate rewards with outcome-level advantages ($\hat{A}^O$) derived from the final rewards. Specifically, the advantage for the first turn (which includes reasoning and tool calling) is a combination of turn and outcome advantages, while the advantage for the second turn (which includes final reasoning and answer generation) is based solely on the outcome advantage.

For an interaction trajectory $i$ with turn reward $R^T_i$ and outcome reward $R^O_i$, the turn-level advantages in MT-GRPO are calculated as:

$$\hat{A}^{\text{MT-GRPO}}_{i, 1} = \hat{A}^{T}_{i} + \lambda \hat{A}^{O}_{i}$$

$$\hat{A}^{\text{MT-GRPO}}_{i, 2} = \hat{A}^{O}_{i}$$

where $\lambda$ is a scaling coefficient, and $\hat{A}^{T}_{i}$ and $\hat{A}^{O}_{i}$ are calculated using GRPO's group-relative approach:

$$\hat{A}^{T}_{i} = \frac{R^{T}_{i} - \text{mean}(\{R^{T}_{i}\}_{i=1}^{G})}{\text{std}(\{R^{T}_{i}\}_{i=1}^{G})}$$

$$\hat{A}^{O}_{i} = \frac{R^{O}_{i} - \text{mean}(\{R^{O}_{i}\}_{i=1}^{G})}{\text{std}(\{R^{O}_{i}\}_{i=1}^{G})}$$

This formulation ensures that the agent gets direct feedback for its initial decision (tool use) while also considering the overall success of the trajectory for both turns. The authors note that this turn-level advantage estimation strategy can be adapted to other RL algorithms beyond GRPO.
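To make the computation concrete, here is a minimal Python/NumPy sketch of the group-relative normalization and the turn-level combination above, for a group of $G$ sampled trajectories. This is illustrative rather than the authors' code, and the `eps` term for numerical stability is an added assumption.

```python
import numpy as np

def mt_grpo_advantages(turn_rewards, outcome_rewards, lam=1.0, eps=1e-8):
    """Turn-level advantage estimation for a group of G sampled trajectories.

    turn_rewards, outcome_rewards: arrays of shape (G,) holding R^T_i and R^O_i.
    Returns per-trajectory advantages for turn 1 and turn 2.
    Illustrative sketch of the equations above, not the authors' implementation.
    """
    turn_rewards = np.asarray(turn_rewards, dtype=float)
    outcome_rewards = np.asarray(outcome_rewards, dtype=float)

    # Group-relative normalization (GRPO-style), applied per reward type.
    adv_turn = (turn_rewards - turn_rewards.mean()) / (turn_rewards.std() + eps)
    adv_outcome = (outcome_rewards - outcome_rewards.mean()) / (outcome_rewards.std() + eps)

    adv_turn1 = adv_turn + lam * adv_outcome   # turn 1: reasoning + tool call
    adv_turn2 = adv_outcome                    # turn 2: final reasoning + answer
    return adv_turn1, adv_turn2
```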

To evaluate their approach, the authors implement a simplified two-turn agent using the Qwen2.5-7B model that interacts with a Wikipedia search tool to answer questions from the TriviaQA dataset. The interaction flow is defined as reasoning -> search -> result -> reasoning -> answer, enforced by strict XML tagging in the system prompt and environment parsing. The environment provides verifiable rewards:

  • Turn-Level: Tool Execution (checking correct tool call and no environment error), Search Result Answer Presence (checking if the ground truth appears in search results).
  • Outcome-Level: Final Answer Presence, Exact Match (comparing agent's answer to ground truth), XML Format, XML Tag Usage (checking output structure and tag correctness).
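As an illustration of how such verifiable rewards can be computed from the XML-tagged output, here is a hedged sketch; the tag names, binary 0/1 scoring, and parsing helper are assumptions rather than the paper's exact reward definitions.

```python
import re

def parse_tag(text, tag):
    """Return the content of the first <tag>...</tag> span, or None if absent."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    return match.group(1).strip() if match else None

def tool_execution_reward(tool_called, tool_error):
    # Turn-level: the search tool was invoked and returned without an environment error.
    return 1.0 if tool_called and not tool_error else 0.0

def search_answer_presence_reward(search_result, ground_truth):
    # Turn-level: the ground-truth answer string appears in the retrieved passage.
    return 1.0 if ground_truth.lower() in (search_result or "").lower() else 0.0

def exact_match_reward(completion, ground_truth):
    # Outcome-level: the <answer> span matches the ground truth exactly.
    answer = parse_tag(completion, "answer")
    return 1.0 if answer is not None and answer.lower() == ground_truth.lower() else 0.0

def xml_format_reward(completion):
    # Outcome-level: the required tags are present and parseable.
    return 1.0 if all(parse_tag(completion, t) is not None for t in ("reasoning", "answer")) else 0.0
```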

Experiments compare MT-GRPO against baseline GRPO variants: GRPO-OR (using only outcome rewards) and GRPO-MR (merging outcome and turn rewards at the trajectory level, $R_i = R^O_i + R^T_i$).
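For contrast with MT-GRPO's turn-level scheme, the following sketch (again illustrative, not the authors' code) shows how both baselines assign a single trajectory-level advantage to every token in a trajectory:

```python
import numpy as np

def trajectory_level_advantages(turn_rewards, outcome_rewards, variant="GRPO-MR", eps=1e-8):
    """Both baselines compute one advantage per trajectory, shared across all turns."""
    turn_rewards = np.asarray(turn_rewards, dtype=float)
    outcome_rewards = np.asarray(outcome_rewards, dtype=float)
    if variant == "GRPO-OR":
        rewards = outcome_rewards                 # outcome rewards only
    else:                                         # GRPO-MR: merge at the trajectory level
        rewards = outcome_rewards + turn_rewards  # R_i = R^O_i + R^T_i
    return (rewards - rewards.mean()) / (rewards.std() + eps)
```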

The results demonstrate the practical benefits of turn-level credit assignment:

  • Tool Execution: MT-GRPO achieves 100% success in correctly invoking the search tool during training and evaluation. GRPO-MR, which incorporates turn rewards, also performs well, but GRPO-OR, lacking turn-level feedback, often fails to use the tool correctly.
  • Answer Accuracy: MT-GRPO significantly outperforms baselines in exact match accuracy (50% vs. 33.46% for GRPO-MR and 0% for GRPO-OR on validation).
  • Training Stability: MT-GRPO shows more stable training curves and lower variance across multiple runs compared to the baselines, indicating more reliable learning of the desired multi-turn behavior.

The implementation relies on verifiable rewards and structured interaction via XML tags, defining a clear state and action space within the multi-turn sequence. The use of vLLM for efficient rollouts and Huggingface TRL for training demonstrates a practical setup for applying RL to LLMs. The code for the project is publicly available, providing a valuable resource for practitioners who want to implement similar multi-turn RL agents.
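Purely as an illustration of how such a setup can be wired together, recent TRL releases expose a `GRPOTrainer`; the sketch below is a minimal, outcome-reward-only configuration under assumed argument names and a toy dataset. Turn-level advantage assignment as in MT-GRPO would still require customizing the trainer's advantage computation, which is not shown here.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Tiny toy dataset; GRPOTrainer expects a "prompt" column, and extra columns
# (here "answer") are forwarded to the reward function as keyword arguments.
dataset = Dataset.from_dict({
    "prompt": ["Question: Which river flows through Paris? Reply inside <answer> tags."],
    "answer": ["Seine"],
})

# Toy outcome-level reward: 1.0 if the ground-truth string appears in the completion.
# A full setup would use the turn- and outcome-level verifiable rewards described above.
def answer_presence(completions, answer, **kwargs):
    return [1.0 if a.lower() in c.lower() else 0.0 for c, a in zip(completions, answer)]

config = GRPOConfig(
    output_dir="mt-grpo-sketch",
    num_generations=8,  # group size G used for group-relative advantage normalization
    # Recent TRL versions also expose vLLM-backed generation options for faster rollouts.
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B",       # base model used in the paper's experiments
    reward_funcs=answer_presence,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```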

While the current work focuses on a two-turn environment, the authors highlight that the core idea of turn-level credit assignment is general and crucial for scaling RL to more complex, longer multi-turn agent tasks. Future work will explore extending the approach to these more complex scenarios and potentially moving beyond predefined verifiable rewards.