
Unlocking the Power of Multi-Agent LLM for Reasoning: From Lazy Agents to Deliberation (2511.02303v1)

Published 4 Nov 2025 in cs.AI and cs.CL

Abstract: LLMs trained with reinforcement learning and verifiable rewards have achieved strong results on complex reasoning tasks. Recent work extends this paradigm to a multi-agent setting, where a meta-thinking agent proposes plans and monitors progress while a reasoning agent executes subtasks through sequential conversational turns. Despite promising performance, we identify a critical limitation: lazy agent behavior, in which one agent dominates while the other contributes little, undermining collaboration and collapsing the setup to an ineffective single agent. In this paper, we first provide a theoretical analysis showing why lazy behavior naturally arises in multi-agent reasoning. We then introduce a stable and efficient method for measuring causal influence, helping mitigate this issue. Finally, as collaboration intensifies, the reasoning agent risks getting lost in multi-turn interactions and trapped by previous noisy responses. To counter this, we propose a verifiable reward mechanism that encourages deliberation by allowing the reasoning agent to discard noisy outputs, consolidate instructions, and restart its reasoning process when necessary. Extensive experiments demonstrate that our framework alleviates lazy agent behavior and unlocks the full potential of multi-agent framework for complex reasoning tasks.

Summary

  • The paper introduces Dr. MAMR, which improves multi-agent LLM reasoning by countering lazy-agent behavior with a restructured reward.
  • It integrates a Shapley-inspired causal influence model and adaptive deliberation to assign granular rewards and ensure balanced agent participation.
  • Empirical tests on benchmarks like MATH500 and GSM8K show enhanced pass rates and stable multi-turn interactions compared to single-agent strategies.

Unlocking the Power of Multi-Agent LLM for Reasoning: From Lazy Agents to Deliberation

Introduction

The paper addresses the reasoning capabilities of multi-agent LLM systems, building on the ReMA framework, which pairs a meta-thinking agent with a reasoning agent. A core issue identified in this framework is the emergence of lazy agents: one agent dominates the reasoning process while the other contributes little, leading to inefficiencies. The paper proposes solutions to mitigate lazy-agent behavior and optimize multi-agent collaboration for complex reasoning tasks.

Multi-Agent LLM Reasoning Framework

ReMA employs a dual-agent approach for problem-solving. The meta-thinking agent sets goals and adapts to feedback, while the reasoning agent executes step-by-step computations. The agents work sequentially on tasks, but lazy behavior can cause one to rely excessively on the other, undermining collaborative potential.
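
To make the collaboration pattern concrete, here is a minimal sketch of the dual-agent loop. The two chat callables `meta_agent` and `reasoning_agent`, the role labels, and the "FINAL" stop marker are all illustrative assumptions standing in for the trained LLM policies, not the paper's actual interface.

```python
def multi_agent_reason(question, meta_agent, reasoning_agent, max_turns=8):
    """Sketch of a ReMA-style dual-agent loop.

    meta_agent and reasoning_agent are hypothetical callables that map a
    dialogue history (list of role/content dicts) to a text response.
    """
    history = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        # Meta-thinking agent: proposes the next subgoal or declares done.
        plan = meta_agent(history)
        history.append({"role": "meta", "content": plan})
        if "FINAL" in plan:  # assumed stop convention for this sketch
            break
        # Reasoning agent: executes the subgoal step by step.
        step = reasoning_agent(history)
        history.append({"role": "reasoning", "content": step})
    return history
```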

The framework is built on multi-turn Group Relative Policy Optimization (GRPO), enabling granular credit assignment across dialogue turns. Despite these mechanisms, lazy behavior persists due to an inherent bias in GRPO's objective, which favors shorter trajectories and thereby inadvertently promotes minimal interaction between agents (Figure 1).

Figure 1: Mean number of turns comparing reasoning processes with and without lazy-agent behavior.
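
To see why the optimizer itself discourages long interactions, note how GRPO scores a rollout: rewards are normalized within a group, and the surrogate loss is averaged over the trajectory's tokens, so each token's gradient is scaled by advantage / length. A minimal sketch follows, assuming the standard token-averaged GRPO surrogate; the numbers are purely illustrative. Since extra turns add tokens, the same effect disfavors multi-turn collaboration unless longer rollouts earn proportionally more reward.

```python
import numpy as np

def grpo_advantage(rewards, i):
    """Group-relative advantage for rollout i: reward centered and
    scaled by the group's statistics (the standard GRPO baseline)."""
    r = np.asarray(rewards, dtype=float)
    return (r[i] - r.mean()) / (r.std() + 1e-8)

def per_token_update_weight(advantage, trajectory_len):
    """GRPO averages the surrogate loss over a trajectory's tokens, so
    each token's gradient is scaled by advantage / length. For the same
    positive advantage, a shorter trajectory gets a stronger per-token
    push -- the length bias discussed above."""
    return advantage / trajectory_len

rewards = [1.0, 1.0, 0.0, 0.0]          # toy group of four rollouts
a = grpo_advantage(rewards, 0)           # rollouts 0 and 1 share this advantage
print(per_token_update_weight(a, 10))    # short rollout: ~0.1 per token
print(per_token_update_weight(a, 20))    # long rollout:  ~0.05 per token
```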

Identifying and Addressing Lazy Agents

The paper proposes Dr. MAMR, an advanced method to tackle lazy-agent issues. It includes:

  1. Theoretical Insights: The analysis reveals that GRPO's normalization bias incentivizes reduced turns. The paper demonstrates that unless longer rollouts are substantially more rewarding, the system favors shorter interactions, leading to superficial dialogue exchanges dominated by one agent.
  2. Shapley-inspired Causal Influence: ReMA's reliance on single-trajectory estimation obscures turn-specific contributions. To rectify this, the authors introduce a Shapley-inspired model that aggregates contributions across similar rollout steps, mitigating phrasing biases and yielding robust causal-influence estimates (Figure 2; a sketch of this estimator follows the list).

Figure 2: Causal influence across training steps.

  3. Adaptive Reasoning Agent Deliberation: Longer dialogues can lead agents astray through context fragmentation. To counteract this, the reasoning agent can restart its reasoning, discarding prior outputs when needed to reach a coherent solution. The restart is guided by a verifiable reward mechanism that scores the probability of the final output being correct after the restart (a sketch of this step also follows the list).
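
A minimal sketch of the Shapley-inspired estimator from item 2, under stated assumptions: `success_prob` is a hypothetical scorer (e.g., a verifier or value head) returning the estimated probability that the final answer is correct given a context, and `turn_groups` clusters semantically similar (prefix, turn) pairs drawn from different rollouts. Neither name comes from the paper.

```python
import numpy as np

def causal_influence(turn_groups, success_prob):
    """Estimate each turn's causal influence as its average marginal
    effect on the success probability, pooled over a cluster of similar
    turns so that phrasing noise in any one rollout washes out.

    turn_groups: list of clusters; each cluster is a list of
        (prefix, turn) pairs, where prefix is the dialogue so far
        (a list of turns) and turn is the candidate turn.
    success_prob: callable mapping a list of turns to an estimated
        probability that the final answer will be correct.
    """
    influences = []
    for cluster in turn_groups:
        marginals = [
            success_prob(prefix + [turn]) - success_prob(prefix)
            for prefix, turn in cluster
        ]
        influences.append(float(np.mean(marginals)))
    return influences
```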
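
A minimal sketch of the deliberation step from item 3. The "[RESTART]" action token, the `verifier` interface (returning the probability the answer is correct), and the consolidated-instruction format are all illustrative assumptions; the point is that a restart discards the noisy history, keeps only a consolidated instruction, and is rewarded solely through the verifiable correctness of the fresh answer.

```python
def deliberate(reasoning_agent, verifier, history, question):
    """One deliberation step in which the agent may restart its reasoning.

    If the draft begins with the (assumed) action token "[RESTART]", the
    noisy dialogue history is discarded; only a consolidated instruction
    written by the agent is kept, and reasoning starts over from a clean
    context. The verifiable reward is the verifier's estimated probability
    that the final answer is correct, so a restart pays off only when it
    actually helps.
    """
    draft = reasoning_agent(history)
    if draft.startswith("[RESTART]"):
        summary = draft.removeprefix("[RESTART]").strip()
        fresh_history = [
            {"role": "user", "content": question},
            {"role": "instruction", "content": summary},
        ]
        answer = reasoning_agent(fresh_history)
    else:
        answer = draft
    reward = verifier(question, answer)
    return answer, reward
```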

Empirical Evaluation and Results

Experiments on benchmarks such as MATH500 and GSM8K demonstrate Dr. MAMR's efficacy. The multi-agent system clearly outpaces single-agent baselines trained with GRPO, showing higher pass rates and more stable reasoning across tests.

The training curves of Dr. MAMR contrast sharply with those of ReMA, showing sustained reward accumulation and stable learning, whereas ReMA frequently collapses due to reward hacking, a failure mode common in reinforcement learning settings (Figure 3).

Figure 3: Training curve of ReMA with process reward assigned for each turn.

Forward-Looking Implications

Dr. MAMR enables balanced agent involvement by refining credit assignment and ensuring agents remain actively engaged throughout problem solving. The framework improves handling of complex tasks and fosters collaboration among specialized LLM agents. Future research can further refine the causal-influence estimation and promote synchronization among agents to avoid inefficiencies rooted in biased reward structures (Figure 4).

Figure 4: Pass@K performance.

Conclusion

The investigation into multi-agent LLM reasoning shows the improvements needed to overcome the lazy-agent problem. Through Dr. MAMR, the paper realigns GRPO's reward structure to support a cooperative agent ecosystem, unlocking the potential of multi-agent setups in intricate reasoning scenarios. By advancing reward mechanisms and fostering active agent roles, Dr. MAMR lays a foundation for future work on AI-driven collaborative reasoning systems.
