RAG Orchestrated Multi-Agent System

Updated 30 November 2025
  • RAG Orchestrated Multi-Agent System is a modular framework where specialized agents collaborate for efficient retrieval, evidence ranking, and answer synthesis.
  • The system employs an orchestration layer that dynamically assigns tasks to resource-specific and domain-specific agents, ensuring context-sensitive and accurate outputs.
  • Empirical evaluations demonstrate improved accuracy, reduced latency and cost, and enhanced performance in applications like legal QA, software testing, and online learning.

A Retrieval-Augmented Generation (RAG) Orchestrated Multi-Agent System is a modular architecture in which multiple specialized agents coordinate via an explicit orchestration protocol to perform information retrieval, selection, and response synthesis for tasks such as question answering, software testing, knowledge distillation, and interactive decision support. Unlike monolithic or single-agent RAG architectures, multi-agent RAG systems explicitly decompose resource access, evidence ranking, reasoning, and generation across autonomous units, leveraging RAG as a compositional primitive for robust, context-sensitive, and extensible AI workflows (Srivastav et al., 6 Feb 2025, Hariharan et al., 12 Oct 2025, Zhang et al., 20 Jun 2025, Nguyen et al., 26 May 2025).

1. System Architecture and Agent Typologies

At the core of a RAG-orchestrated multi-agent system resides an orchestration layer, often implemented as a Manager or Orchestrator Agent, which dispatches user queries to a set of specialized agents tailored to heterogeneous resources or sub-tasks. Canonical agent typologies include:

  • Resource-specific retrievers: Agents customized for modalities such as YouTube (Video Agent), GitHub (Code Agent), documentation (Documentation Agent), or general search engines (Internet Agent) (Srivastav et al., 6 Feb 2025).
  • Domain-specific planners and validators: Planner Agents for requirement decomposition, Compliance Agents for auditing outputs, and Change Impact Analyzers in software testing (Hariharan et al., 12 Oct 2025).
  • Meta-agents: Orchestrators, routers, or schedulers, deciding agent invocation, task assignment, and fusion policy (Nguyen et al., 26 May 2025, Seabra et al., 23 Dec 2024).

System architectures often reflect parallel or sequential agent invocation. For instance, the Multi-Agent RAG system for online learning broadcasts queries in parallel to all resource-specialized agents, collects top-k scored passages from each, and fuses results via an LLM (Srivastav et al., 6 Feb 2025). More intricate designs introduce sequential decision policies or tree-structured rollouts for iterative reasoning (Wang et al., 17 Sep 2025).
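As a concrete illustration of this broadcast pattern, the sketch below wires resource-specific retrievers to an orchestrator that queries them in parallel and pools their top-k passages. All class names and the token-overlap scoring are illustrative stand-ins, not APIs from the cited systems:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float
    source: str  # which agent produced it, for provenance tracking

class ResourceAgent:
    """A resource-specific retriever over its own corpus, standing in for a
    Video/Code/Documentation/Internet agent."""
    def __init__(self, name: str, corpus: list[str]):
        self.name, self.corpus = name, corpus

    def retrieve(self, query: str, k: int = 3) -> list[Passage]:
        # Toy relevance: token overlap; a real agent would embed and search.
        q = set(query.lower().split())
        scored = [Passage(d, len(q & set(d.lower().split())) / (len(q) or 1),
                          self.name)
                  for d in self.corpus]
        return sorted(scored, key=lambda p: p.score, reverse=True)[:k]

class Orchestrator:
    """Broadcasts a query to every agent in parallel and pools top-k passages."""
    def __init__(self, agents: list[ResourceAgent]):
        self.agents = agents

    def dispatch(self, query: str, k: int = 3) -> list[Passage]:
        with ThreadPoolExecutor(max_workers=len(self.agents)) as pool:
            futures = [pool.submit(a.retrieve, query, k) for a in self.agents]
        return [p for f in futures for p in f.result()]

agents = [ResourceAgent("docs", ["RAG fuses retrieval with generation."]),
          ResourceAgent("code", ["def retrieve(query): ..."])]
print(Orchestrator(agents).dispatch("what is RAG retrieval"))
```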

2. Retrieval, Fusion, and Coordination Workflows

Retrieval Workflow: An initial user query is embedded (commonly via GPT-4o, Sentence-Transformer, or similar encoder), and similarity search is performed in resource-specific corpora. Each agent preprocesses raw content into passages, computes embeddings, and scores passages against the query using cosine similarity:

$$\mathrm{score}(q, d_{a,i}) = \cos(\mathbf{e}_q, \mathbf{e}_{a,i})$$

where $\mathbf{e}_q$ and $\mathbf{e}_{a,i}$ are the embedding vectors for the query and passage, respectively (Srivastav et al., 6 Feb 2025, Hariharan et al., 12 Oct 2025).
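A minimal sketch of this scoring step, using a hash-based stand-in for the encoder (a deployed agent would call a Sentence-Transformer or an embedding API instead):

```python
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Per-process pseudo-embedding so the example runs without a model;
    a real agent would call an encoder such as a Sentence-Transformer."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def score(query: str, passage: str) -> float:
    """cos(e_q, e_{a,i}); embeddings are unit-normalized, so the dot
    product equals cosine similarity."""
    return float(embed(query) @ embed(passage))

print(score("what is retrieval-augmented generation",
            "RAG combines retrieval with generation"))
```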

Fusion/Generation Workflow: Once per-agent candidates are returned, a fusion function selects a global top-N context for the LLM to generate the final answer. Fusion can weight each agent’s contribution by tunable parameters $w_a$:

$$\mathrm{global\_score}(d) = w_a \times \mathrm{score}(q, d)$$

These weights may be set heuristically or updated via reinforcement signals from user feedback (Srivastav et al., 6 Feb 2025).
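A hedged sketch of this weighted selection; the dictionary shapes and the example weights table are assumptions for illustration:

```python
def fuse_candidates(per_agent: dict[str, list[tuple[str, float]]],
                    weights: dict[str, float], n: int = 5) -> list[str]:
    """Apply global_score(d) = w_a * score(q, d) to every candidate and
    keep the global top-N passages across all agents."""
    ranked = sorted(((weights.get(agent, 1.0) * s, text)
                     for agent, candidates in per_agent.items()
                     for text, s in candidates), reverse=True)
    return [text for _, text in ranked[:n]]

top = fuse_candidates(
    {"video": [("lecture clip on RAG", 0.81)],
     "docs":  [("RAG reference page", 0.78), ("API guide", 0.55)]},
    weights={"docs": 1.2, "video": 0.9}, n=2)
print(top)  # ['RAG reference page', 'lecture clip on RAG']
```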

Advanced Coordination: In systems addressing multi-hop reasoning or complex orchestration (e.g., software testing (Hariharan et al., 12 Oct 2025), multi-hop QA (Nguyen et al., 26 May 2025), legal QA (Wang et al., 31 Aug 2025)), agent invocation is often iterative and stateful. Decision makers or judge agents evaluate the sufficiency, jurisdiction, and consistency of intermediate outputs, triggering further retrieval and evidence aggregation as necessary (Wang et al., 31 Aug 2025).
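The control flow of such a judge-supervised loop can be sketched as below; `retrieve`, `generate`, and `judge` are caller-supplied callables standing in for the respective agents, and the feedback-refinement step is one plausible design, not the exact mechanism of the cited systems:

```python
def answer_with_judge(query: str, retrieve, generate, judge,
                      max_rounds: int = 3) -> str:
    """Iterative, stateful orchestration: gather evidence, draft an answer,
    and let a judge agent decide whether another retrieval round is needed."""
    evidence: list[str] = []
    draft = ""
    for _ in range(max_rounds):
        evidence += retrieve(query, evidence)    # stateful: sees prior evidence
        draft = generate(query, evidence)
        verdict = judge(query, draft, evidence)  # sufficiency/consistency check
        if verdict == "sufficient":
            break
        query = f"{query} [judge feedback: {verdict}]"  # refine next round
    return draft
```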

Communication Protocols and Data Formats: Inter-agent interaction typically employs structured message passing (JSON, protobufs), conveying queries, context, passage sets, and metadata. This facilitates parallel execution, provenance tracking, and modularity (Srivastav et al., 6 Feb 2025).
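A minimal sketch of such a structured message in JSON; the field names are hypothetical but show how queries, passage sets, and provenance metadata travel together:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgentMessage:
    """One inter-agent message; JSON serialization keeps the protocol
    language-agnostic and makes provenance explicit."""
    sender: str
    recipient: str
    query: str
    passages: list[dict] = field(default_factory=list)  # [{"text": ..., "score": ...}]
    metadata: dict = field(default_factory=dict)        # e.g., corpus id, timestamp

msg = AgentMessage("orchestrator", "code_agent", "how is fusion weighted",
                   metadata={"top_k": 3})
wire = json.dumps(asdict(msg))            # serialize for transport
print(AgentMessage(**json.loads(wire)))   # round-trip on the receiving side
```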

3. Formal Orchestration Logic and Policies

The orchestration policy $\pi$ governs:

  • Agent selection: which resource agents to invoke per query
  • Fusion weighting: relative influence of agents in answer synthesis
  • Iteration stopping: whether to continue retrieval/generation or return an answer

Policies may be static (invoke all agents), dynamic (intent-based routing, e.g., based on query type (Seabra et al., 23 Dec 2024)), or learned via reinforcement learning. Optimization objectives include maximizing answer accuracy or F1 score, minimizing latency or cost, and balancing coverage versus redundancy (Iannelli et al., 7 Dec 2024, Wang et al., 17 Sep 2025, Nguyen et al., 26 May 2025).
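For example, a static keyword heuristic can stand in for intent-based routing (an LLM intent classifier would replace it in practice); the rule table below is purely illustrative:

```python
def route(query: str) -> list[str]:
    """Dynamic agent selection: map the query's apparent intent to a subset
    of agents, falling back to a full broadcast when no rule fires."""
    rules = {"video": ("watch", "lecture", "demo"),
             "code":  ("function", "bug", "error", "implement"),
             "docs":  ("api", "install", "configure")}
    q = query.lower()
    hits = [agent for agent, keywords in rules.items()
            if any(kw in q for kw in keywords)]
    return hits or list(rules)  # static fallback: invoke all agents

print(route("how do I configure the api client"))  # ['docs']
```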

A canonical fusion step concatenates the top-$n$ passages from each agent’s result set $R_a$ and conditions the LLM on the pooled context:

$$\mathrm{FUSE}(\{R_a\}) = \mathrm{LLM}\big(\mathrm{concat}(\mathrm{top}_n(R_a))\big)$$
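A sketch of this FUSE step, treating the LLM as an opaque callable from prompt to text (the prompt template is an assumption):

```python
def fuse(per_agent_results: dict[str, list[str]], llm, query: str,
         n: int = 3) -> str:
    """FUSE({R_a}): concatenate the top-n passages of each agent's result
    set R_a and condition the LLM on the pooled context."""
    context = "\n\n".join(p for results in per_agent_results.values()
                          for p in results[:n])
    return llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

# Usage with any prompt -> text callable, e.g. an API client wrapper:
# answer = fuse(results_by_agent, llm=my_model, query=user_query)
```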

4. Domain-Specific Multi-Agent Extensions

RAG-orchestrated multi-agent systems have been adapted for distinct domains and workflows, often with domain-specialized agents and downstream result handling:

  • Online learning: Seamless integration of video, code, documentation, and web knowledge sources for student queries, with empirical validation via Technology Acceptance Model user studies (Srivastav et al., 6 Feb 2025).
  • Software engineering: Hybrid vector-graph knowledge store, contextualized retrieval, and agentic traceability for test case generation and compliance validation. Orchestrated multi-agent execution reduced project timelines and improved artifact accuracy and defect detection (Hariharan et al., 12 Oct 2025).
  • Supply chain management: Offline multi-agent pipelines for knowledge base distillation from support tickets (category discovery, ticket categorization, knowledge synthesis), resulting in compact, higher-quality retrieval corpora and automated query resolution (Zhang et al., 20 Jun 2025).
  • Legal QA: Iterative, judge-supervised retrieval and reasoning agents for authority, jurisdiction, temporal validity, and contradiction resolution, substantially improving accuracy and lowering uncertainty on legal benchmarks (Wang et al., 31 Aug 2025).

5. Evaluation Metrics and Empirical Outcomes

Evaluation protocols span quantitative accuracy, efficiency, and user-centric metrics:

| Metric | Reported value(s) | Context |
|---|---|---|
| Accuracy (QA/testing) | up to 94.8% (Hariharan et al., 12 Oct 2025) | software testing, legal QA, admissions |
| Hallucination/utility | 73% fewer hallucinations (Yu et al., 14 Mar 2025) | health QA, RAG-KG-IL |
| Time/cost reductions | 85% time, 35% cost (Hariharan et al., 12 Oct 2025) | enterprise software testing, supply chain |
| User TAM score | 83.1 (mean) (Srivastav et al., 6 Feb 2025) | online learning usability study |
| F1-score improvement | +2–3 points vs. baselines | multi-hop QA, software testing, RAG evaluations |

In legal QA, the L-MARS system achieved 0.96–0.98 accuracy and a reduced U-Score (uncertainty) relative to pure-LLM and standard RAG baselines, at the expense of increased latency (13.6–55.7 s) (Wang et al., 31 Aug 2025). In supply chain support, multi-agent categorization and synthesis improved the helpful-answer rate from 38.60% to 48.74%, with 77.4% fewer unhelpful responses (Zhang et al., 20 Jun 2025).

6. Design Principles, Modularity, and Extensibility

Design principles emerging from multi-agent RAG deployments center on modularity and extensibility: multi-agent RAG systems are straightforwardly extensible to new domains with heterogeneous or evolving data sources (e.g., legal, medical, admissions), and support iterative improvement by swapping agents, adapting orchestration policies, or integrating newly indexed resource types (Srivastav et al., 6 Feb 2025, Zhang et al., 20 Jun 2025, Wang et al., 31 Aug 2025).

7. Future Directions and Open Challenges

Open technical directions include learning orchestration policies end-to-end (e.g., from reinforcement signals and user feedback), managing the latency overhead of iterative, judge-supervised retrieval loops, and strengthening provenance tracking and cross-agent consistency checking.

In summary, RAG-orchestrated multi-agent systems establish a compositional, modular paradigm for leveraging distributed resource access, robust evidence fusion, and coordinated reasoning in retrieval-augmented AI applications. This architecture offers significant improvements in accuracy, efficiency, and adaptability compared to monolithic or fixed-pipeline alternatives, with empirical validation across diverse, practical, and mission-critical tasks (Srivastav et al., 6 Feb 2025, Hariharan et al., 12 Oct 2025, Zhang et al., 20 Jun 2025, Wang et al., 31 Aug 2025, Nguyen et al., 26 May 2025).
