
Agentic Neural Networks: Self-Evolving Multi-Agent Systems via Textual Backpropagation (2506.09046v2)

Published 10 Jun 2025 in cs.LG, cs.AI, and cs.MA

Abstract: Leveraging multiple large language models (LLMs) has proven effective for addressing complex, high-dimensional tasks, but current approaches often rely on static, manually engineered multi-agent configurations. To overcome these constraints, we present the Agentic Neural Network (ANN), a framework that conceptualizes multi-agent collaboration as a layered neural network architecture. In this design, each agent operates as a node, and each layer forms a cooperative "team" focused on a specific subtask. Agentic Neural Network follows a two-phase optimization strategy: (1) Forward Phase: drawing inspiration from neural network forward passes, tasks are dynamically decomposed into subtasks, and cooperative agent teams with suitable aggregation methods are constructed layer by layer. (2) Backward Phase: mirroring backpropagation, we refine both global and local collaboration through iterative feedback, allowing agents to self-evolve their roles, prompts, and coordination. This neuro-symbolic approach enables ANN to create new or specialized agent teams post-training, delivering notable gains in accuracy and adaptability. Across four benchmark datasets, ANN surpasses leading multi-agent baselines under the same configurations, showing consistent performance improvements. Our findings indicate that ANN provides a scalable, data-driven framework for multi-agent systems, combining the collaborative capabilities of LLMs with the efficiency and flexibility of neural network principles. We plan to open-source the entire framework.



Summary

  • The paper introduces Agentic Neural Networks, a framework that models multi-agent systems as layered networks with self-evolving capabilities.
  • It employs a two-phase optimization strategy with forward dynamic team selection and backward textual refinement to enhance agent collaboration.
  • Experimental results on datasets such as HumanEval (72.7% accuracy with GPT-3.5, 87.8% with GPT-4) validate its superiority over static multi-agent configurations.

Agentic Neural Networks: Self-Evolving Multi-Agent Systems via Textual Backpropagation

The paper "Agentic Neural Networks: Self-Evolving Multi-Agent Systems via Textual Backpropagation" (2506.09046) introduces the Agentic Neural Network (ANN), a novel framework that applies neural network principles to multi-agent systems (MAS). The ANN framework aims to address the limitations of static, manually engineered multi-agent configurations by conceptualizing multi-agent collaboration as a layered neural network architecture, where each agent acts as a node and each layer forms a cooperative team focused on a specific subtask.

Core Methodology

The ANN methodology draws inspiration from classic neural networks, replacing numerical weight optimization with dynamic agent-based team selection and iterative textual refinement. It employs a two-phase optimization strategy: a forward phase for dynamic team selection and a backward phase for optimization.

Forward Dynamic Team Selection

In the forward phase, the framework decomposes a complex task into subtasks, assigning each to a layer of specialized agents. This process involves:

  1. Defining the ANN structure: The architecture mimics neural networks, where each layer consists of agent nodes connected in sequence to facilitate information flow.
  2. Selecting Layer-wise Aggregation Functions: A mechanism dynamically determines the most appropriate aggregation function at each layer, combining outputs from multiple agents based on subtask requirements.

The aggregation function at each layer is selected as

$$f_\ell = \mathrm{DynamicRoutingSelect}(\mathcal{F}_\ell, \ell, I_\ell, I),$$

where $\mathcal{F}_\ell$ is the set of candidate aggregation functions, $I_\ell$ is the input to layer $\ell$, and $I$ is the task-specific information.
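As a concrete illustration, the selection rule above can be sketched as a routing function over candidate aggregators. The keyword-matching heuristic and all names below are illustrative stand-ins for the paper's LLM-driven selection, not its actual implementation:

```python
from typing import Callable, Sequence

Aggregator = Callable[[Sequence[str]], str]

def majority_vote(outputs: Sequence[str]) -> str:
    # Return the most frequent agent output (ties broken arbitrarily).
    return max(set(outputs), key=outputs.count)

def concat_for_synthesis(outputs: Sequence[str]) -> str:
    # Join all outputs so a downstream agent can synthesize them.
    return "\n".join(outputs)

def dynamic_routing_select(
    candidates: dict[str, Aggregator],  # F_ell: candidate aggregation functions
    layer_index: int,                   # ell: position of the layer
    layer_input: Sequence[str],         # I_ell: inputs arriving at the layer
    task_info: str,                     # I: task-specific information
) -> Aggregator:
    # Stand-in routing rule: vote on tasks with discrete answers,
    # concatenate for open-ended tasks such as creative writing.
    if "creative" in task_info.lower():
        return candidates["concat"]
    return candidates["vote"]

candidates = {"vote": majority_vote, "concat": concat_for_synthesis}
f_ell = dynamic_routing_select(candidates, 0, ["4", "5", "4"], "MATH reasoning")
print(f_ell(["4", "5", "4"]))  # prints "4": the majority answer for this layer
```

In the paper the routing decision is made dynamically per layer; here a fixed heuristic stands in for that decision so the shape of the interface is visible.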

Figure 1: Comparison of static and dynamic agentic teams, illustrating the adaptability of the ANN framework.
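The forward phase as a whole can be sketched as a loop over layers, with each layer running its agents on the incoming message and aggregating their outputs. The toy agents and the trivial aggregator below are hypothetical placeholders for LLM calls:

```python
from typing import Callable

Agent = Callable[[str], str]
Aggregator = Callable[[list[str]], str]

def run_layer(agents: list[Agent], layer_input: str, aggregate: Aggregator) -> str:
    # Every agent in the layer processes the same input; the layer's
    # aggregation function fuses their outputs into a single message.
    return aggregate([agent(layer_input) for agent in agents])

def forward_pass(layers: list[list[Agent]], task: str, aggregate: Aggregator) -> str:
    # Information flows layer by layer, mirroring a neural network forward pass.
    message = task
    for agents in layers:
        message = run_layer(agents, message, aggregate)
    return message

# Toy two-layer team: a planner layer feeding a coder layer.
planner: Agent = lambda s: s + " -> plan"
coder: Agent = lambda s: s + " -> code"
first: Aggregator = lambda outs: outs[0]  # trivial aggregator for single-agent layers
result = forward_pass([[planner], [coder]], "task", first)
print(result)  # prints "task -> plan -> code"
```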

Backward Optimization

If the predefined performance thresholds are not met after the forward pass, the backward optimization phase is triggered to refine agent interactions and aggregation functions at both global (system-wide) and local (layer-specific) levels.

  1. Global Optimization: Analyzes inter-layer coordination, refining interconnections and data flow to improve overall system performance. The global gradient is computed as

$$\mathcal{G}_{\text{global}} = \mathrm{ComputeGlobalGradient}(S, \tau),$$

where $S$ represents the global workflow and $\tau$ denotes the execution trajectory.

  2. Local Optimization: Fine-tunes agents and aggregation functions within each layer, adjusting them based on detailed performance feedback. The local gradient for layer $\ell$ at iteration $t$ is computed as

$$\mathcal{G}_{\text{local},\ell}^{t} = \beta\,\mathcal{G}_{\text{global}} + (1 - \beta)\,\mathrm{ComputeLocalGradient}(\ell, f_{\ell}, \tau),$$

where $\beta$ is a weighting factor that balances the influence of the global gradient against the layer-specific gradient.

To improve stability, ANN employs momentum-based optimization.
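A minimal sketch of the backward phase, treating "gradients" as textual critiques in the spirit of the paper's textual backpropagation. The placeholder critique functions, the weighted-blend scheme, and the momentum buffer are illustrative assumptions, not the paper's implementation:

```python
def compute_global_gradient(workflow: str, trajectory: str) -> str:
    # Placeholder for an LLM call critiquing inter-layer coordination (S, tau).
    return f"global: improve hand-off in {workflow} given {trajectory}"

def compute_local_gradient(layer: int, agg_name: str, trajectory: str) -> str:
    # Placeholder for an LLM call critiquing one layer's agents and aggregator.
    return f"local[{layer}/{agg_name}]: tighten prompts given {trajectory}"

def blend_gradients(g_global: str, g_local: str, beta: float) -> str:
    # Textual analogue of beta * G_global + (1 - beta) * G_local: the weights
    # tell the refining LLM how much emphasis to give each critique.
    return f"[{beta:.1f}] {g_global}\n[{1 - beta:.1f}] {g_local}"

class MomentumBuffer:
    """Keep recent gradients so prompt updates stay consistent across rounds."""

    def __init__(self, window: int = 3):
        self.history: list[str] = []
        self.window = window

    def update(self, gradient: str) -> str:
        # Append the new gradient and expose a rolling window of feedback.
        self.history = (self.history + [gradient])[-self.window:]
        return "\n---\n".join(self.history)

g = blend_gradients(
    compute_global_gradient("S", "tau"),
    compute_local_gradient(0, "majority_vote", "tau"),
    beta=0.7,
)
feedback = MomentumBuffer().update(g)  # text handed to the prompt-refining LLM
```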

Experimental Validation

The ANN framework was evaluated on four challenging datasets: MATH (mathematical reasoning), DABench (data analysis), Creative Writing, and HumanEval (code generation). The experimental results indicate that ANN simplifies MAS design by automating prompt tuning, role assignment, and agent collaboration, outperforming existing baselines in accuracy. For instance, on HumanEval, ANN achieved 72.7% accuracy with GPT-3.5 and 87.8% with GPT-4.


Figure 2: Ablation study results on HumanEval, Creative Writing, MATH, and DABench, demonstrating the impact of various components of the ANN framework.

The paper also presents ablation studies to demonstrate the contribution of each component of the ANN framework. The ablation compares four variants: the full ANN approach, a variant without momentum-based optimization, a variant without validation-based performance checks, and a variant without backward optimization. The results indicate that each component contributes significantly to performance, and combining them yields the most reliable and robust improvements.

Implications and Future Directions

The ANN framework introduces a paradigm shift in multi-agent systems, moving from static, manually designed architectures to more data-driven, automated approaches. The framework's self-evolving capabilities, dynamically reconfiguring its agent teams and coordination strategies, offer a promising direction for creating more robust and flexible multi-agent systems.

Future work may focus on automating the generation of initial layouts from accumulated agent experience using meta-prompt learning, integrating advanced pruning techniques to enhance efficiency, introducing a dynamic role adjustment mechanism, and integrating multi-agent fine-tuning with global and local tuning of the multi-agent workflow.


Figure 3: Prompt-evolution trajectory for the HumanEval dataset.


Figure 4: Prompt-evolution trajectory for the DABench dataset.

Conclusion

The Agentic Neural Network (ANN) presents a novel approach to multi-agent systems by integrating neural network principles with LLMs. The framework's dynamic agent team formation, two-phase optimization pipeline, and self-evolving capabilities demonstrate its potential for orchestrating complex multi-agent workflows. The ANN framework effectively combines symbolic coordination with connectionist optimization, paving the way for fully automated and self-evolving multi-agent systems.