Large Language Model Multi-Agent Systems

Updated 12 October 2025
  • Large Language Model Multi-Agent Systems are distributed AI architectures that use LLM-backed agents to collaboratively solve complex tasks.
  • They integrate modules for memory, planning, and adaptive communication, enhancing coordination through structured natural language interactions.
  • LLM-MAS find applications in complex problem-solving, simulated environments, and robotics while addressing challenges in scalability, security, and dynamic adaptation.

LLM Multi-Agent Systems (LLM-MAS) are distributed artificial intelligence architectures in which multiple agents, each anchored by an LLM, collaborate, compete, or negotiate via natural language to solve complex tasks or simulate emergent behaviors. This paradigm integrates the collective reasoning, planning, and communication capacities of LLMs into traditionally modular multi-agent system (MAS) infrastructures, yielding new forms of autonomy, adaptability, and emergent intelligence.

1. Core Architectures and Agent Models

LLM-MAS architectures structure agents around a core LLM, typically supplemented by explicit modules for memory, planning, tool invocation, and feedback integration. One canonical agent formulation is:

$$V = (\mathcal{L}, \mathcal{O}, \mathcal{M}, \mathcal{A}, \mathcal{R})$$

where $\mathcal{L}$ is the LLM backbone, $\mathcal{O}$ encodes the agent's current objective, $\mathcal{M}$ is the memory module, $\mathcal{A}$ denotes the available actions (including internal deliberation, tool use, or message passing), and $\mathcal{R}$ covers rethinking or self-evaluation processes (Cheng et al., 7 Jan 2024).
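
As a concrete reading of this tuple, the following Python sketch treats an agent as a plain data structure; the class name, fields, and step logic are illustrative assumptions, not an implementation from the cited work.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LLMAgent:
    """One agent V = (L, O, M, A, R): LLM backbone, objective, memory, actions, rethinking."""
    llm: Callable[[str], str]                        # L: LLM backbone (prompt in, text out)
    objective: str                                   # O: the agent's current objective
    memory: List[str] = field(default_factory=list)  # M: episodic memory / message log
    actions: List[str] = field(                      # A: available actions
        default_factory=lambda: ["deliberate", "use_tool", "send_message"])
    rethink_rounds: int = 1                          # R: self-evaluation passes before acting

    def step(self, observation: str) -> str:
        """Draft a response, then critique and revise it (the R component)."""
        self.memory.append(observation)
        prompt = (f"Objective: {self.objective}\n"
                  f"Recent memory: {self.memory[-10:]}\n"
                  f"Observation: {observation}")
        draft = self.llm(prompt)
        for _ in range(self.rethink_rounds):
            draft = self.llm("Critique and improve this answer given the objective "
                             f"'{self.objective}':\n{draft}")
        return draft
```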

System-level architectures described in the literature span flat decentralized organizations, hierarchical and team-based topologies, and society-based and hybrid models, each with distinct communication and control flows (Yan et al., 20 Feb 2025). Increasingly, dynamic or adaptive topological organization is recognized as central to robust, high-performing LLM-MAS (Leong et al., 2 Oct 2025).

A notable design is the adaptation of autonomic computing’s MAPE-K loop—Monitor, Analyze, Plan, Execute, Knowledge—wherein analysis and planning are collapsed into a combined LLM pass, and knowledge is implicitly updated by prompt history and external memory (Nascimento et al., 2023):

$$
\begin{array}{lcl}
M: & \text{Monitoring} & s(t) \rightarrow p(t) \\
A + P: & \text{LLM-based Analysis and Planning} & \mathrm{GPT}(p(t)) \rightarrow a^*(t) \\
E: & \text{Execution} & a^*(t) \\
K: & \text{Knowledge Update} & \text{update context for future iterations}
\end{array}
$$
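
A minimal sketch of this collapsed loop, assuming a generic `call_llm` completion callable and placeholder sensing/actuation hooks (all names are illustrative, not from the cited paper):

```python
from typing import Callable, List

def mape_k_step(
    sense_state: Callable[[], str],         # hypothetical environment sensor
    call_llm: Callable[[str], str],         # hypothetical LLM wrapper (prompt -> completion)
    execute_action: Callable[[str], None],  # hypothetical actuator
    history: List[str],
) -> None:
    # Monitor: read the current state s(t) and render it as a prompt p(t).
    s_t = sense_state()
    p_t = f"Observed state: {s_t}\nRecent context:\n" + "\n".join(history[-5:])

    # Analyze + Plan, collapsed into a single LLM pass: GPT(p(t)) -> a*(t).
    a_star = call_llm(
        "You are the analysis-and-planning stage of an autonomic (MAPE-K) loop.\n"
        f"{p_t}\nPropose the single next action."
    )

    # Execute the selected action a*(t).
    execute_action(a_star)

    # Knowledge: implicitly updated by appending to the prompt history / external memory.
    history.append(f"state={s_t} -> action={a_star}")
```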

2. Communication, Collaboration, and Coherence

Robust multi-agent collaboration in LLM-MAS rests on the sophistication of inter-agent communication. Agents coordinate primarily via natural language, either free-form or with structured overlays (e.g., JSON serialization) (Cheng et al., 7 Jan 2024, Yan et al., 20 Feb 2025). Recent communication-centric frameworks categorize interactions at multiple levels:

  • System architectures: flat, hierarchical, team, society, hybrid (Yan et al., 20 Feb 2025)
  • Goals: cooperation (including direct and debate paradigms), competition, or mixed
  • Internal strategies: one-by-one (sequential, chain-of-thought-like), simultaneous-talk (parallel), or simultaneous-with-summarizer (centralized global state)

Micro-level paradigms include standard message-passing, speech acts (commands, queries), and blackboard (shared memory) structures. Communication objects are classified as: self (internal deliberation), others (agent messages), environment (feedback/sensors), and humans (Yan et al., 20 Feb 2025).
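
The internal strategies listed above can be sketched in a few lines; the helper names below are illustrative, and the summarizer variant stands in for a blackboard-style shared state rather than any specific framework's API.

```python
from typing import Callable, Dict, List

LLM = Callable[[str], str]  # any prompt -> completion callable

def one_by_one(agents: Dict[str, LLM], task: str) -> List[str]:
    """Sequential ('one-by-one') strategy: each agent sees the growing transcript."""
    transcript: List[str] = []
    for name, llm in agents.items():
        reply = llm(f"Task: {task}\nTranscript so far:\n" + "\n".join(transcript)
                    + f"\nYou are {name}. Respond.")
        transcript.append(f"{name}: {reply}")
    return transcript

def simultaneous_with_summarizer(agents: Dict[str, LLM], summarizer: LLM, task: str) -> str:
    """Parallel ('simultaneous-talk') replies merged into one centralized global state."""
    replies = [llm(f"Task: {task}\nYou are {name}. Respond independently.")
               for name, llm in agents.items()]
    # The summarizer acts as a blackboard, collapsing all messages into a shared state.
    return summarizer("Summarize these agent messages into a shared state:\n"
                      + "\n".join(replies))
```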

Communication protocols inspired by KQML, FIPA-ACL, or mediator models help regulate when and what is communicated, reducing unnecessary traffic and hallucination amplification. Adaptive or dynamic communication topology, as in AMAS (Leong et al., 2 Oct 2025), is emerging as a key requirement: agent networks and pathways are selected on a per-instance basis using lightweight LLM adaptation and learned policy modules, optimized via differentiable architecture search.
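
As an illustration of per-instance topology routing (not the AMAS implementation, which learns the selection via differentiable architecture search), a lightweight scorer could map query features to a distribution over candidate graphs:

```python
import math
import random
from typing import Dict, List

def select_topology(query_features: List[float],
                    topology_weights: Dict[str, List[float]],
                    temperature: float = 1.0) -> str:
    """Pick a communication topology for this query from learned per-topology weights.

    `topology_weights` maps a candidate graph name (e.g., 'flat', 'hierarchical',
    'debate') to a weight vector; all names and features here are illustrative.
    """
    logits = {name: sum(w * x for w, x in zip(weights, query_features))
              for name, weights in topology_weights.items()}
    z = sum(math.exp(v / temperature) for v in logits.values())
    probs = {name: math.exp(v / temperature) / z for name, v in logits.items()}
    # Sample during training; an argmax over probs would be used at inference time.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]
```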

3. Adaptation, Learning, and Optimization

LLM-MAS leverage multiple forms of adaptation. Feedback-driven loops, explicit memory, and dynamic agent spawning are frequently employed (Guo et al., 21 Jan 2024, Wang et al., 29 Sep 2025). Recent work emphasizes:

  • Dynamic configuration: MAS² (Wang et al., 29 Sep 2025) introduces a tri-agent meta-system (generator, implementor, rectifier) that self-generates agent structures recursively, configures them adaptively (role/backbone/tool assignment), and rectifies them in response to runtime performance, all coordinated via Collaborative Tree Optimization for policy learning.
  • Critic-free reinforcement learning: To circumvent instability from value-critic networks in Multi-Agent Proximal Policy Optimization (MAPPO), MHGPO (Chen et al., 3 Jun 2025) estimates relative reward advantages within heterogeneous groups, allowing for stable, efficient, and credit-sensitive optimization without maintaining explicit critics.
  • Lightweight adaptation for topology selection: AMAS (Leong et al., 2 Oct 2025) minimizes computational overhead by adapting a small fraction of LLM parameters per task, enabling online selection of optimal communication graphs.

In all cases, continual feedback from the environment, other agents, or human input shapes policy updates, enabling agents to improve over time and adapt to unforeseen scenarios.
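
To make the critic-free idea concrete, the sketch below computes group-relative advantages in the spirit of MHGPO; the normalization shown is a generic group-relative scheme, not the paper's exact heterogeneous-group estimator.

```python
import statistics
from typing import List

def group_relative_advantages(rewards: List[float]) -> List[float]:
    """Advantage of each sampled rollout relative to its group, with no learned value critic.

    Rewards from a group of candidate responses are normalized within the group, so
    above-average members receive positive advantage and are reinforced.
    """
    mean_r = statistics.mean(rewards)
    std_r = statistics.pstdev(rewards) or 1.0  # guard against identical rewards
    return [(r - mean_r) / std_r for r in rewards]

# Example: four sampled responses from one agent role's group
print(group_relative_advantages([0.2, 0.9, 0.5, 0.5]))
```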

4. Applications, Benchmarks, and Performance

LLM-MAS demonstrate versatility across domains (Guo et al., 21 Jan 2024, Cheng et al., 7 Jan 2024), including:

  • Complex problem-solving and software engineering: Role-specialized agents coordinate on end-to-end programming pipelines, each assuming a role such as product manager, developer, or reviewer.
  • World and economic simulation: Agents simulate social, economic, or policy interactions in environments like Minecraft or synthetic societies.
  • Collaborative robotics and embodied tasks: Simulated or real agents coordinate to achieve physical manipulation, warehouse management, or multi-robot tasks.
  • Digital twins and simulation parameterization: LLM agents control the iterative tuning of simulation parameters for optimal process outcomes (Xia et al., 28 May 2024).
  • Organizational decision support: In strategic business settings, role-play and guided discussion scenarios foster emergent, explainable recommendations (Cruz, 12 Mar 2024).

Benchmarks now reflect these complexities. Collab-Overcooked (Sun et al., 27 Feb 2025) enforces mandatory collaboration by restricting resource access and asymmetric knowledge, and introduces process-oriented metrics (TES, ITES, IC, RC) beyond end-to-end success. X-MAS-Bench (Ye et al., 22 May 2025) offers multi-function, multi-domain evaluations for heterogeneous LLM agent selection.

Performance insights are nuanced: collaboration strengths are often found in natural language goal interpretation and instruction following, with weaknesses in self-initiated collaboration and multi-step context tracking, especially as task complexity grows (Sun et al., 27 Feb 2025). Heterogeneous MAS, where agents use diverse LLMs, universally outperform homogeneous baselines, with reported boosts of 8.4% on MATH and up to 47% on AIME datasets without architectural redesign (Ye et al., 22 May 2025). Context-sensitive, dynamically routed agent topologies as in AMAS further enhance performance by matching agent pathways to task requirements (Leong et al., 2 Oct 2025).

5. Security, Vulnerability, and Robustness

The inter-agent communication backbone of LLM-MAS introduces novel security surfaces (He et al., 20 Feb 2025, He et al., 2 Jun 2025, Yan et al., 5 Aug 2025). Systemic vulnerabilities—arising from message tampering (e.g., AiTM, MAST), cascading failures from compromised agent prompts, or compositional effects—demand dedicated defensive measures.

Representative attacks include message-tampering schemes such as AiTM and MAST, prompt compromise of individual agents that cascades through downstream interactions, and compositional exploits that emerge only at the system level (He et al., 20 Feb 2025, Yan et al., 5 Aug 2025).

Vulnerability analysis frameworks (He et al., 2 Jun 2025) now formalize attack surfaces:

$$\arg\max_{S \in \Theta_S} \mathrm{Evaluator}(S_{ma}, Q, G)$$

where $G$ is a malicious goal, $S$ a target component (agent/prompt/tool), $\Theta_S$ the adversarial search space, and $\mathrm{Evaluator}$ computes attack efficacy, permitting systematic quantification and benchmark development.
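
A brute-force reading of this objective, with a hypothetical `evaluate` scoring function standing in for the paper's Evaluator, looks as follows:

```python
from typing import Callable, Iterable, Optional, Tuple

def find_worst_case_component(
    components: Iterable[str],                   # Theta_S: candidate targets (agents, prompts, tools)
    evaluate: Callable[[str, str, str], float],  # scores the system with this component compromised
    query: str,                                  # Q: the benign task query
    goal: str,                                   # G: the malicious goal
) -> Tuple[Optional[str], float]:
    """Exhaustively search the component space for the most effective attack target."""
    best_component, best_score = None, float("-inf")
    for component in components:
        score = evaluate(component, query, goal)
        if score > best_score:
            best_component, best_score = component, score
    return best_component, best_score
```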

Robustness interventions include trust management (message validation, provenance monitoring), human-centered active moderation (dynamic intervention, agreement metrics), and formal uncertainty quantification (Hu et al., 3 Feb 2025). Chaos engineering (Owotogbe, 6 May 2025) complements this by simulating agent failures, hallucinations, and communication breakdowns to stress-test resilience and guide recovery design.

6. Task Complexity, Context Adaptivity, and When LLM-MAS Succeed

Performance gains from LLM-MAS are not uniform across tasks. Theoretical and empirical analyses (Tang et al., 5 Oct 2025) argue that “task complexity” must be considered along two axes:

  • Depth ($d$): Sequential reasoning length; single-agent error risk grows exponentially with $d$, while multi-agent redundancy can mitigate compounding failures.
  • Width ($w$): Diversity of capabilities or micro-operations required at each reasoning step.

For single-agent and multi-agent success probabilities:

$$
\begin{align*}
S_{single}(d, w) &= (q^w)^d \\
S_{multi}(d, w, N, r) &= r \cdot \left[1 - (1 - q^w)^N\right]^d \\
\Delta(d, w, N, r) &= \left( \frac{r \left[1 - (1 - q^w)^N\right]}{q^w} \right)^d - 1
\end{align*}
$$

where $q$ is the per-capability success probability, $N$ the number of agents, and $r$ the aggregator correctness.

Findings confirm that LLM-MAS gains grow primarily with increasing depth and, to a lesser extent, width; the former can yield an unbounded advantage in deep, error-accumulating tasks, while the latter offers diminishing returns (Tang et al., 5 Oct 2025). The practical implication is that LLM-MAS deployment is most justified for deeply layered reasoning and highly composite problems, subject to careful cost–benefit (e.g., token and compute) considerations (Wang et al., 29 Sep 2025).
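
A quick numerical sketch of the formulas above (parameter values chosen purely for illustration) shows how the advantage responds to depth versus width:

```python
def s_single(q: float, d: int, w: int) -> float:
    """Single-agent success: all w capabilities must succeed at every one of d steps."""
    return (q ** w) ** d

def s_multi(q: float, d: int, w: int, n: int, r: float) -> float:
    """Multi-agent success: at least one of N agents succeeds per step, times aggregator correctness r."""
    return r * (1.0 - (1.0 - q ** w) ** n) ** d

def advantage(q: float, d: int, w: int, n: int, r: float) -> float:
    """Relative gain Delta as written above: (r[1-(1-q^w)^N]/q^w)^d - 1."""
    return (r * (1.0 - (1.0 - q ** w) ** n) / q ** w) ** d - 1.0

# Same per-capability reliability and agent count, with complexity placed in depth vs. width:
print(advantage(q=0.9, d=20, w=2, n=5, r=0.95))  # deep task: Delta ~ 23
print(advantage(q=0.9, d=2, w=20, n=5, r=0.95))  # wide, shallow task: Delta ~ 13
```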

7. Open Problems and Future Directions

Current limitations and research challenges include:

  • Dynamic scaling and self-evolution: Scalable frameworks (e.g., MAS²) enabling self-generation, configuration, and rectification are needed for non-brittle, real-world deployments (Wang et al., 29 Sep 2025).
  • Communication bottlenecks: As the agent pool grows, efficient synchronization and summarization strategies, benchmarking, and hybrid architectures (adaptive layers of centralization/decentralization) are essential (Yan et al., 20 Feb 2025).
  • Collective intelligence and multimodality: Mechanisms to effectively synthesize agent learning and memory into system-wide intelligence, and to extend beyond textual modalities (vision, audio), remain largely unsolved (Guo et al., 21 Jan 2024, Cheng et al., 7 Jan 2024).
  • Trust, security, and governance: Comprehensive vulnerability analysis, adversarial robustness, and real-time human-in-the-loop moderation are critical for high-stakes use (He et al., 2 Jun 2025, Hu et al., 3 Feb 2025).
  • Benchmarks and standardization: Multi-agent-specific evaluation metrics, datasets, and open-source frameworks (e.g., Collab-Overcooked, X-MAS-Bench, G-Safeguard) are being developed, but further work is needed to standardize and scale them to broader contexts (Sun et al., 27 Feb 2025, Ye et al., 22 May 2025, Wang et al., 16 Feb 2025).

Advances in adaptive topology design (Leong et al., 2 Oct 2025), efficient optimization (Chen et al., 3 Jun 2025), and cross-backbone agent integration (Wang et al., 29 Sep 2025, Ye et al., 22 May 2025) suggest a progressively more flexible, robust, and scalable future for LLM-MAS.


LLM-MAS represent a significant synthesis of natural language intelligence with autonomous distributed architectures. They are distinguished by emergent collective reasoning, context-adaptive organization, advanced self-adaptation capacities, and complex security, scalability, and evaluation challenges, underscoring both their promise and their engineering demands in next-generation AI systems.
