
AnalogSAGE: Multi-Agent Analog Circuit Design

Updated 30 December 2025
  • AnalogSAGE is an open-source, self-evolving multi-agent framework that automates analog circuit design using a stratified memory hierarchy and closed-loop simulation feedback.
  • It orchestrates three specialized agents—topology selection, refinement, and parameter optimization—to translate design specs into high-performance analog circuits.
  • By integrating Bayesian optimization with ngspice simulations, AnalogSAGE significantly improves design pass rates and efficiently compresses the parameter search space.

AnalogSAGE is an open-source, self-evolving multi-agent design framework for analog circuit automation. It introduces a stratified memory hierarchy and simulation-grounded, closed-loop learning to achieve human-like reliability and efficiency in analog topology synthesis and device parameter optimization. By orchestrating three distinct agents, tasked with topology selection, topology refinement, and parameter optimization respectively, AnalogSAGE addresses the intrinsic complexity and knowledge intensity of analog design, surpassing existing LLM-driven frameworks in specification-driven operational amplifier (op-amp) design and simulation verification under industry-standard conditions (Wang et al., 27 Dec 2025).

1. System Architecture and Multi-Agent Design

AnalogSAGE employs a three-stage multi-agent pipeline, where each specialized agent operates in tandem with a global stratified memory system:

  • Stage I: Topology Selection Agent translates textual design specifications into candidate netlists. This is accomplished by querying the stratified memory for past insights, composing a natural language description of the target architecture, and invoking the Candidates Module to select topologies from a validated vector database.
  • Stage II: Topology Refinement Agent performs localized structural edits—such as compensation network adjustments or bias tuning—to iteratively address specification violations. It leverages failure analysis from the Introspective Optimization layer, together with suggestions from the Knowledge Module, to propose incremental topology mutations.
  • Stage III: Parameter Optimization Agent determines device sizes and passive component values to meet design requirements. It first infers parameter ranges via LLM-driven analysis, then executes Bayesian optimization using the Numerical Module, which combines surrogate modeling with ngspice simulation and updates parameter bounds based on empirical performance feedback.

The agents interact in a forward progression—producing, refining, and optimizing candidate circuits—but also receive compressed simulation results and reflection-based feedback in a backward pass, enabling continuous context enrichment and learning. At each step, prompts are dynamically assembled from long-term insights, real-time reflections, compressed design traces, and externally grounded design knowledge.
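The dynamic prompt assembly described above can be sketched as follows. This is a hypothetical illustration, not AnalogSAGE's actual template: the section names and the convention of dropping empty sections to save tokens are assumptions.

```python
def assemble_prompt(spec: str,
                    insights: list[str],
                    reflections: list[str],
                    summary: str,
                    knowledge: list[str]) -> str:
    """Compose one agent prompt from long-term insights, failure
    reflections, the compressed design trace, and external knowledge."""
    sections = [
        ("Specification", spec),
        ("Past insights", "\n".join(f"- {i}" for i in insights)),
        ("Reflections on failures", "\n".join(f"- {r}" for r in reflections)),
        ("Design trace summary", summary),
        ("Reference knowledge", "\n".join(f"- {k}" for k in knowledge)),
    ]
    # Omit empty sections so the prompt stays within the token budget.
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections if body)

prompt = assemble_prompt(
    spec="Av >= 60 dB, GBW >= 1 MHz, PM >= 60 deg",
    insights=["Two-stage Miller op-amp met similar specs"],
    reflections=[],          # no failures yet on the first iteration
    summary="",
    knowledge=["Miller compensation trades GBW for phase margin"],
)
```

On the first iteration the reflection and summary sections are empty and are simply omitted; as the backward pass accumulates feedback, those sections grow and enrich later prompts.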

2. Stratified Memory Hierarchy

AnalogSAGE’s memory subsystem comprises four coordinated layers, each regulating information retention, retrieval, and consolidation to maximize both design reliability and token efficiency:

| Memory Layer | Scope | Key Contents |
| --- | --- | --- |
| Evolution Memory | Global, cross-task | Successful design insights: spec, topology, metrics, strategies |
| Introspective Optimization | Mid-term, intra-task | Reflections on failed iterations: cause + corrective actions |
| Stage Context Fusion | Short-term, per-iteration | Compressed summary of reasoning and simulation history |
| Analog Design Experience | External modules | Candidates, Knowledge, and Numerical modules for facts/tools |

Evolution Memory stores bullet-list summaries of the entire reasoning path and key choices from successful designs. Retrieval for new tasks is based on top-k similarity between current task embeddings and stored specifications, enabling knowledge transfer.
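The top-k retrieval step can be made concrete with a minimal sketch: rank stored entries by cosine similarity between the new task's spec embedding and the stored spec embeddings. The toy two-dimensional embeddings and insight strings are illustrative assumptions; AnalogSAGE would use a real embedding model and vector database.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_top_k(query_emb, memory, k=2):
    """memory: list of (spec_embedding, insight) pairs; returns top-k insights."""
    ranked = sorted(memory, key=lambda e: cosine(query_emb, e[0]), reverse=True)
    return [insight for _, insight in ranked[:k]]

memory = [
    ([1.0, 0.0], "telescopic cascode for high gain"),
    ([0.9, 0.1], "two-stage Miller op-amp for moderate specs"),
    ([0.0, 1.0], "folded cascode for wide input range"),
]
top = retrieve_top_k([1.0, 0.05], memory, k=2)
```

The retrieved insights are then injected into the Stage I prompt, giving the topology selection agent a head start from previously successful designs.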

Introspective Optimization accumulates failure reflections during ongoing design attempts, producing a growing list [Ref₁, Ref₂, …] that is reintegrated at each iteration to prevent repeated errors. Lists are compressed when exceeding a threshold.

Stage Context Fusion summarizes the complete history of LLM queries, netlist diffs, and simulation outputs for each iteration. Compression modules distill these traces into fixed-size context summaries, supporting long-horizon reasoning without exceeding context budgets.

Analog Design Experience encompasses external knowledge sources and verification tools, including a topology database with vector search (Candidates Module), retrieval-augmented generation over an extensive design paper corpus (Knowledge Module), and a Bayesian optimization engine interfaced with ngspice for direct simulation.

Memory layers are regularly pruned, summarized, or replaced according to token limits and task relevance, supporting both knowledge consolidation and effective “forgetting” of low-utility or obsolete context.
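One simple way to realize token-limited "forgetting" is to keep only the most recent entries that fit a budget. This is a sketch under an assumed recency-based policy; AnalogSAGE's actual consolidation also summarizes and re-ranks by task relevance.

```python
def prune_memory(entries: list[str], budget: int,
                 count_tokens=lambda s: len(s.split())) -> list[str]:
    """Keep the newest entries whose total token count fits the budget.

    count_tokens is a crude whitespace proxy for a real tokenizer.
    """
    kept, used = [], 0
    for entry in reversed(entries):       # walk newest-first
        cost = count_tokens(entry)
        if used + cost > budget:
            break                         # older entries are "forgotten"
        kept.append(entry)
        used += cost
    return list(reversed(kept))           # restore chronological order

log = ["iter1: gain too low", "iter2: added cascode stage",
       "iter3: phase margin 45 deg", "iter4: Miller cap added, PM 62 deg"]
pruned = prune_memory(log, budget=10)
```

With a budget of 10 tokens only the most recent entry survives; with a generous budget the full log is kept unchanged.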

3. Simulation-Grounded Feedback and Optimization Loop

Simulation-grounded learning is central to AnalogSAGE. Parameter values and topology modifications are validated through ngspice simulations using the SKY130 PDK, with tight coupling to iterative Bayesian optimization:

  • Parameter Range Definition: Feasible ranges R_i = [min_i, max_i] are inferred at each iteration based on LLM analysis of the refined netlist and design context.
  • Bayesian Optimization: For k = 1 … K, parameter candidates θ_k (device width/length, current, resistance, capacitance, etc.) are proposed by acquisition maximization, simulated with ngspice, and evaluated for metrics such as:
    • A_v: 20 log₁₀(|V_out/V_in|)
    • GBW: frequency where |A_v| drops by 1/√2
    • φ_m: phase margin, 180° + ∠L(jω_c)
    • P = V_DD · I_supply
    • CMRR, PSRR, PSRN in dB

Designs are accepted only if all specifications (e.g., A_v ≥ A_v*, GBW ≥ GBW*, φ_m ≥ φ_m*, P ≤ P*) are satisfied. Each failure generates a targeted “reflection” (e.g., “phase margin too low; add Miller cap with nulling R”), appended to the Introspective Optimization layer and used to inform subsequent refinement and memory updates.
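The acceptance test and reflection generation can be sketched directly from the inequalities above. Metric names, targets, and the reflection message format are illustrative assumptions; the framework's real reflections are LLM-generated and more diagnostic.

```python
# Targets: a design passes only if every constraint holds.
SPEC = {"Av_dB": 60.0, "GBW_Hz": 1e6, "PM_deg": 60.0, "P_W": 1e-3}
# Constraint direction: True means metric must be >= target, False means <=.
GE = {"Av_dB": True, "GBW_Hz": True, "PM_deg": True, "P_W": False}

def check_design(metrics: dict) -> tuple[bool, list[str]]:
    """Return (passed, reflections); one reflection per violated spec."""
    reflections = []
    for name, target in SPEC.items():
        value = metrics[name]
        ok = value >= target if GE[name] else value <= target
        if not ok:
            reflections.append(f"{name}={value:g} violates target {target:g}")
    return len(reflections) == 0, reflections

passed, refs = check_design({"Av_dB": 63.0, "GBW_Hz": 1.2e6,
                             "PM_deg": 48.0, "P_W": 0.8e-3})
```

Here only the phase margin fails, so a single reflection is produced and would be appended to the Introspective Optimization layer for the next refinement pass.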

4. Algorithmic Details and Pseudocode

The design process is codified in a closed-loop algorithm, with each agent and memory layer interleaved:

Input: Spec S, max_iters T_max
insights ← Retrieve(EvolutionMemory, S)
reflections ← []
summary ← ""

for iter = 1 to T_max do
  # Stage I: Topology Selection
  prompt_I ← [S, insights, summary]
  desc_topo ← LLM(prompt_I)
  candidates ← CandidatesModule(desc_topo)
  netlist_0 ← candidates[1]

  # Stage II: Topology Refinement
  prompt_II ← [netlist_0, reflections, summary]
  netlist_refined ← LLM(prompt_II)

  # Stage III: Parameter Optimization
  param_ranges ← inferRanges(LLM, netlist_refined, summary)
  θ*, metrics ← NumericalModule(netlist_refined, param_ranges)

  if metrics satisfy S then
    Insight ← InsightModule(prompt_I ⊕ prompt_II, θ*, metrics)
    EvolutionMemory.add((S, netlist_refined, metrics, Insight))
    return (netlist_refined, θ*)
  else
    Ref ← ReflectionModule(prompt_II, metrics)
    reflections.append(Ref)
    summary ← Compress(summary ⊕ Ref ⊕ metrics)
    continue  # next iteration
  end if
end for

return Failure

Key update rules include iterative tightening of parameter ranges

R_i^(t+1) = shrinkRange(R_i^(t), θ*_t, metrics_t)

and localized topology mutations restricted to single-component edits. The loop proceeds until a valid design is found or the maximum number of iterations is reached.
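One plausible reading of the shrinkRange rule is to re-center each range on the best value found so far and contract its width by a fixed factor, clipped to the original bounds. The contraction factor 0.5 is an assumption; the source only states that ranges tighten iteratively.

```python
def shrink_range(r: tuple[float, float], theta_best: float,
                 factor: float = 0.5) -> tuple[float, float]:
    """Re-center the range on theta_best and shrink its width by `factor`,
    never extending beyond the current bounds."""
    lo, hi = r
    half = (hi - lo) * factor / 2.0
    new_lo = max(lo, theta_best - half)
    new_hi = min(hi, theta_best + half)
    return (new_lo, new_hi)

# Example: a transistor width range (in um) tightening around a best value:
narrowed = shrink_range((1.0, 20.0), 12.0)   # (7.25, 16.75)
```

Repeated applications concentrate the Bayesian optimizer's search near empirically good sizings, which is consistent with the compressed search-space results reported in Section 5.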

5. Benchmark Suite, Metrics, and Quantitative Performance

AnalogSAGE is benchmarked across ten op-amp design problems spanning a variety of difficulty levels:

  • Easy Tasks: P ≤ 10³ μW; A_v ≥ 45 dB; CMRR, PSRR, PSRN ≥ 20 dB; GBW ≥ 10⁵ Hz; PM ≥ 60°, etc.
  • Medium/Hard Tasks: Progressively tighter constraints on power, bandwidth, and noise, with Task 10 featuring P ≤ 5 μW, A_v ≥ 80 dB, and GBW ≥ 10⁶ Hz.

Metrics reported include:

  • Pass@k: Pass@k = 1 − C(n−c, k)/C(n, k), with n = 5 trials, k = 1, and c the number of passing trials
  • Average Iterations until the first valid design
  • Normalized Parameter Search Space: ratio of predicted search volume to the full parameter space (∈ [0, 1])
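The Pass@k estimator above writes out directly in code; this is the standard unbiased form, computed here with Python's built-in binomial coefficient.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Pass@k = 1 - C(n-c, k)/C(n, k) for n trials with c passes."""
    if n - c < k:
        return 1.0          # cannot draw k samples that all fail
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = 5 and k = 1, Pass@1 reduces to the fraction of passing trials:
p = pass_at_k(5, 4, 1)      # 0.8
```

Intuitively, C(n−c, k)/C(n, k) is the probability that a random size-k sample contains no passing trial, so its complement is the probability that at least one of k sampled attempts succeeds.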

Tabulated results demonstrate substantial improvements:

| Metric | AnalogSAGE | Prior LLM-Only | Relative Gain |
| --- | --- | --- | --- |
| Pass Rate | 100% | ~10% | 10× |
| Pass@1 | 96% | ~2% | 48× |
| Normalized Search Space | 0.26 | ~1.0 | ~4× reduction |

These outcomes are underpinned by the combined effect of stratified memory, closed-loop refinement, and ngspice-anchored parameter selection.

6. Analysis, Limitations, and Future Scope

Stratified memory significantly improves design reliability by enabling knowledge transfer between tasks (Evolution Memory), persistent intra-task learning (Introspective Optimization), and efficient, context-aware prompt construction (Stage Context Fusion). This structure, alongside simulation-grounded feedback, reduces repeated failure modes, accelerates convergence, and compresses the effective parameter space.

Limitations of AnalogSAGE include its current domain restriction (the candidate topology database and knowledge corpus are primarily op-amp–focused), simulation bottlenecks imposed by ngspice runtime, and the refinement stage's confinement to local topology edits rather than global architectural exploration. A plausible implication is that extending AnalogSAGE to broader analog/analog-mixed signal classes will require both new libraries and scalable memory strategies. Integrating surrogate simulation or hybrid differentiable models may address simulation velocity challenges.

Future directions highlighted include multi-objective optimization with Pareto fronts, layout-level co-design, and deeper integration of differentiable SPICE or domain-specific graph neural networks to further bridge the gap between human expertise and automated analog design (Wang et al., 27 Dec 2025).
