Human-Agent Collaborative AMS IC Design

Updated 1 January 2026
  • The paper introduces human–agent collaborative AMS IC design, integrating AI-driven LLMs with human expertise to automate and optimize IC design processes.
  • It employs modular architectures featuring human-in-the-loop interfaces, agentic cores, and simulation engines to ensure precision in sizing and performance analysis.
  • Case studies report large reductions in design iterations (100% success rates with mean convergence in 13.4–18.2 iterations) and enhanced transparency in performance tuning.

Human–agent collaborative AMS (Analog and Mixed-Signal) IC design integrates advanced AI-driven agents—often based on LLMs—with human expertise to automate, optimize, and interpret the complex workflows inherent in modern analog/mixed-signal integrated circuit design. These approaches combine the generative, reasoning, and data-driven capabilities of LLM-based frameworks with direct human oversight and intervention at specification, design space exploration, reasoning audit, and performance tuning steps.

1. System Architectures for Human–Agent Collaboration

Human–agent frameworks for AMS IC design rely on modular architectures that partition responsibilities across user-facing interfaces, agentic cores, simulators, and analysis backends. Fundamental building blocks comprise:

  • Human-in-the-loop UI: Web or Jupyter-based dashboards for inputting SPICE netlists and performance specifications, viewing iterative results, and providing manual overrides.
  • Agentic Core: LLM-based modules orchestrating task decomposition, prompt engineering (with Chain-of-Thought [CoT] reasoning), and function-calling interfaces for simulation and analysis.
  • Simulation Engines: Integration with simulators such as Ngspice or vendor-specific tools for DC, AC, and transient analysis, often with pre-configured and dynamically augmented netlists.
  • Analysis and Spec Checking: Automated calculation and validation of performance metrics (gain, bandwidth, phase margin, THD, noise, power), with constraint checkers for domain-specific requirements (e.g., rail-to-rail operation, device region).
  • Result Logging, Visualization, and Feedback: Persistent storage of history, iteration-by-iteration result tracking, and graphical or JSON-based feedback routes for rapid corrective interaction by the human designer.

The system data flow is typified by agents parsing specifications, forming context-enriched prompts, proposing sizing updates (e.g., ΔW/L and bias voltages), running simulation/analysis cycles, and updating both automated and human interfaces. Iteration continues until all specification flags are satisfied with no transistors in the subthreshold region or a maximum-iteration/oscillation threshold triggers escalation to the human layer (Liu et al., 29 Sep 2025).
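The iteration loop described above can be sketched as follows. Here `agent`, `simulate`, and `check_specs` are hypothetical callables standing in for the LLM core, the simulator wrapper, and the spec checker; the 25-iteration ceiling is an illustrative assumption (the ≤20% step cap comes from the prompt constraints discussed later):

```python
import json

MAX_ITERS = 25   # escalation threshold (assumed value, not from the cited papers)
MAX_STEP = 0.20  # cap on relative parameter change per iteration

def run_sizing_loop(netlist, specs, agent, simulate, check_specs):
    """Hypothetical driver for the agent sizing loop.

    Assumed interfaces:
      agent(prompt)             -> dict of proposed relative updates, e.g. {"M1.W": 0.15}
      simulate(netlist)         -> dict of measured metrics
      check_specs(metrics, specs) -> dict of per-spec pass/fail flags
    """
    history = []
    for it in range(MAX_ITERS):
        metrics = simulate(netlist)
        flags = check_specs(metrics, specs)
        history.append({"iter": it, "metrics": metrics, "flags": flags})
        if all(flags.values()):
            return netlist, history  # all specification flags satisfied
        # Context-enriched prompt: specs, latest metrics, and recent history
        prompt = json.dumps({"specs": specs, "metrics": metrics,
                             "flags": flags, "history": history[-3:]})
        updates = agent(prompt)
        for param, delta in updates.items():
            delta = max(-MAX_STEP, min(MAX_STEP, delta))  # clamp to +/-20%
            netlist[param] *= (1.0 + delta)
    raise RuntimeError("max iterations reached; escalate to human designer")
```

On convergence the function returns the sized netlist plus the full iteration history, which maps directly onto the result-logging and feedback blocks listed above.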

2. Agentic Reasoning and Prompt Engineering

LLM-based agents encapsulate task-specific reasoning via prompt templates and structured Chain-of-Thought designs. System-level prompts direct agents to:

  • Verify that all MOSFETs operate in the correct regime (e.g., $V_{GS} - V_{th} > 0$).
  • Identify the most significant specification violation.
  • Relate affected device parameters via explicit analytic equations (e.g., $g_m \propto (W/L)(V_{GS}-V_{th})$ for gain/BW).
  • Propose minimal, constrained parameter updates (typically ≤20%).

Prompts force agents to enumerate reasoning steps, provide symbolic and numeric calculations (e.g., $g_{m1} = 2I_{D1}/(V_{GS1}-V_{th})$ in LaTeX), and explicitly relate device-level changes to system-level objectives (Liu et al., 29 Sep 2025, Liu et al., 14 Apr 2025).
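As a rough illustration (not the exact templates used by the cited frameworks), such a structured prompt might be assembled like this; the section layout and operating-point format are assumptions:

```python
# Illustrative CoT system prompt encoding the four directives above.
SYSTEM_PROMPT = """You are an analog sizing agent. For each iteration:
1. Verify every MOSFET satisfies V_GS - V_th > 0.
2. Identify the single most significant specification violation.
3. Relate the violated spec to device parameters via analytic equations,
   e.g. g_m is proportional to (W/L)(V_GS - V_th) for gain/bandwidth.
4. Show symbolic and numeric work, e.g. g_m1 = 2 I_D1 / (V_GS1 - V_th).
5. Propose parameter updates of at most 20% per step, returned as JSON."""

def build_prompt(specs, metrics, op_points):
    """Assemble a context-enriched prompt (hypothetical format).

    op_points maps device name -> overdrive voltage V_GS - V_th in volts.
    """
    lines = ["## Target specs"]
    lines += [f"- {k}: {v}" for k, v in specs.items()]
    lines += ["## Simulated metrics"]
    lines += [f"- {k}: {v}" for k, v in metrics.items()]
    lines += ["## Device overdrive voltages (V_GS - V_th)"]
    lines += [f"- {d}: {vov:.3f} V" for d, vov in op_points.items()]
    return SYSTEM_PROMPT + "\n\n" + "\n".join(lines)
```

Feeding the simulated operating points back into the prompt is what lets the agent perform the saturation check of step 1 without an extra simulator call.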

In multi-agent frameworks (e.g., AnaFlow), specialized agents carry out role-partitioned dialog—such as topology explanation, symmetry/matching enforcement, bias-region constraint verification, and sizing critiques. Human-interpretable reasoning traces accompany each output, enabling full auditability and correctability (Ahmadzadeh et al., 5 Nov 2025).

3. Mathematical and Optimization Foundations

The optimization core in human–agent AMS design is formalized as constrained black-box or explicit function minimization:

$$\min_x f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \quad i = 1, \ldots, m$$

where $x$ is the parameter vector (device sizes, biases), $f(x)$ is a scalarized cost (e.g., a weighted sum of spec deviations), and the $g_i(x)$ encode hard requirements—for example, DC biasing, device matching, and performance specs (gain, bandwidth, phase margin, power, output range). Performance metrics and constraints are computed using classical long-channel MOSFET equations, small-signal gain ($A_v = g_m r_o$), frequency-response formulas, and near-threshold/saturation constraints enforced using technology-specific $V_{DD}$ and $V_{th}$ at modern nodes (e.g., 180 nm and 90 nm) (Liu et al., 29 Sep 2025, Ahmadzadeh et al., 5 Nov 2025).
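A minimal sketch of the scalarization, where the spec-encoding scheme (a `sense` of `"min"` or `"max"` per metric, violations normalized by the target) is an illustrative assumption rather than the cost used by any specific framework:

```python
def spec_penalty(measured, target, sense):
    """Constraint in g_i(x) <= 0 form: positive iff the spec is violated,
    normalized by the target magnitude."""
    if sense == "min":   # metric must exceed target: gain, bandwidth, PM
        return (target - measured) / abs(target)
    return (measured - target) / abs(target)  # must stay below: power, noise

def scalarized_cost(metrics, specs, weights):
    """f(x): weighted sum of clipped spec violations; zero when all pass.

    specs maps metric name -> (target, sense); weights default to 1.0.
    """
    total = 0.0
    for name, (target, sense) in specs.items():
        g = spec_penalty(metrics[name], target, sense)
        total += weights.get(name, 1.0) * max(0.0, g)
    return total
```

Because satisfied specs contribute exactly zero, the agent (or optimizer) can use `f(x) == 0` as the termination test for full spec compliance.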

Adaptive agentic frameworks (e.g., AnaFlow) interleave cheap DC bias-point checks with full AC/transient simulations, invoking Bayesian optimization only at stagnation. Expected Improvement (EI) acquisition functions are standard for sample-efficient surrogate-driven steps:

$$\alpha_{EI}(x) = (f_{best} - \mu(x))\,\Phi(z) + \sigma(x)\,\phi(z), \qquad z = \frac{f_{best} - \mu(x)}{\sigma(x)}$$

resulting in large reductions (10–100×) in required simulation calls versus pure RL/BO methods (Ahmadzadeh et al., 5 Nov 2025).
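The EI formula translates directly into code. The stdlib-only sketch below assumes the GP posterior mean and standard deviation are supplied by the caller (no GP library is shown), for a minimization objective:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI acquisition for minimization.

    mu, sigma: GP posterior mean/std at a candidate point x.
    f_best: best (lowest) observed cost so far.
    """
    if sigma <= 0.0:
        # Degenerate posterior: improvement is deterministic.
        return max(0.0, f_best - mu)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return (f_best - mu) * Phi + sigma * phi
```

EI is always non-negative, and it rewards both low predicted cost (the first term) and high posterior uncertainty (the second term), which is what makes it sample-efficient for expensive simulation calls.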

4. Human–Agent Collaborative Workflows

The collaborative paradigm structures the workflow as an iterative loop; each cycle typically includes:

  • Human entry of netlist and target specs.
  • Agent-led task decomposition, simulation scheduling, sizing update, and reasoned justification.
  • Presentation of the agent’s suggested updates, flagged spec violations, and Chain-of-Thought rationale to the human.
  • Human review, with options to approve, reject, or modify suggestions (often via direct JSON-structured edits).
  • Feedback on trade-offs; for instance, if power increases to meet bandwidth, the human may choose to adjust power limits or relax bandwidth targets.
  • Agents adapt subsequent proposals based on revised constraints or direct guidance.

Human-in-the-loop interfaces are critical for handling specification negotiation, correcting oscillatory agent proposals, and ensuring the preservation of domain-specific intent (e.g., matching, corner-case robustness) (Liu et al., 29 Sep 2025, Liu et al., 14 Apr 2025, Ahmadzadeh et al., 5 Nov 2025).
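The approve/reject/modify step of the loop above can be sketched as follows, assuming a hypothetical JSON decision schema (an `action` field plus optional `edits`); the cited frameworks may use different structures:

```python
def apply_human_review(proposal, decision):
    """Resolve one review step of the collaborative loop.

    proposal: agent's suggested relative updates, e.g. {"M1.W": 0.15, "VB1": -0.02}
    decision: JSON-structured designer response (hypothetical schema):
        {"action": "approve"}
        {"action": "reject"}
        {"action": "modify", "edits": {"M1.W": 0.10}}
    Returns the updates to actually apply.
    """
    action = decision["action"]
    if action == "approve":
        return dict(proposal)
    if action == "reject":
        return {}  # agent must re-propose under revised constraints
    if action == "modify":
        merged = dict(proposal)
        merged.update(decision["edits"])  # human edits take precedence
        return merged
    raise ValueError(f"unknown action: {action}")
```

Routing every update through this function is what gives the human layer veto power over oscillatory or intent-violating agent proposals without stalling the automated loop.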

5. Case Studies and Empirical Results

Concrete benchmarks demonstrate the convergence, robustness, and effectiveness of human–agent frameworks.

EEsizer: For a 20-transistor CMOS op-amp, OpenAI o3 achieved 100% success at both 180 nm and 90 nm, with convergence in 13.4 and 18.2 iterations (mean), respectively. Key performance metrics—gain, unity-gain bandwidth, and phase margin—were met within ±5% tolerance. Monte Carlo variation analysis (σ=5 nm for W/L, σ=10 mV for VthV_{th}) yielded post-fix pass rates of 90% for gain after targeted agent re-iterations (Liu et al., 29 Sep 2025).
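The quoted Monte Carlo setup can be sketched as below. `gain_fn` is a placeholder behavioral model standing in for the full SPICE simulation the actual analysis runs; only the variation magnitudes (σ = 5 nm on W/L, σ = 10 mV on $V_{th}$) come from the text:

```python
import random

def mc_pass_rate(w_nominal, vth_nominal, gain_fn, gain_spec,
                 n=1000, sigma_w=5e-9, sigma_vth=10e-3, seed=0):
    """Estimate the spec pass rate under Gaussian device variation.

    gain_fn(w, vth) -> gain in dB is a hypothetical stand-in for a
    simulator call; sigma_w = 5 nm and sigma_vth = 10 mV match the
    variation model quoted above.
    """
    rng = random.Random(seed)
    passes = 0
    for _ in range(n):
        w = w_nominal + rng.gauss(0.0, sigma_w)
        vth = vth_nominal + rng.gauss(0.0, sigma_vth)
        if gain_fn(w, vth) >= gain_spec:
            passes += 1
    return passes / n
```

In the reported flow, samples that fail this check are fed back to the agent as new violations, which is what "targeted agent re-iterations" refers to.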

AnaFlow: In sizing a two-stage Miller opamp and a 20-knob folded-cascode OTA, AnaFlow (Gemini 2.5 Pro) converged in under 10 and 64 total simulations, respectively, compared to >1000 for RL baselines, achieving full spec compliance. Design traces enable stepwise audit, with human designers empowered to intervene between automated reasoning and optimizer invocation (Ahmadzadeh et al., 5 Nov 2025).

Collaborative Advantages: Across all reported cases, the collaborative model demonstrates qualitative acceleration of the design cycle (manual days reduced to minutes/iterations), improved sample efficiency, and a new standard for transparent, auditable design decision reasoning.

| Framework | Circuit | Node | Success % | Avg. Iterations | Metrics (Gain, UGBW, PM) |
|---|---|---|---|---|---|
| EEsizer | Op-Amp (20T) | 180 nm | 100 | 13.4 | 68±2 dB, 25±4 MHz, 64±5° |
| EEsizer | Op-Amp (20T) | 90 nm | 100 | 18.2 | 66±3 dB, 18±5 MHz, 58±7° |
| AnaFlow | Two-stage opamp | (see text) | 100 | 9 | [see text] |

6. Limits, Failure Modes, and Future Directions

Current frameworks report several recognized limitations:

  • LLMs may oscillate between candidate solutions. Integrating surrogate models (e.g., Gaussian processes) in the loop is recommended to stabilize proposals.
  • Context window limitations in LLMs necessitate role-based prompts and retrieval strategies for large designs.
  • Some edge cases require human intervention for topology-level changes, bias network redesign, or strict area constraints.
  • Robustness under process/voltage/temperature (PVT) variation is partly addressed via Monte Carlo loops; more direct integration with layout-aware metrics or corner-case simulation is advised.

Emerging directions include multi-agent ensembles with specialization (e.g., noise- or speed-optimized agents), uncertainty quantification with automated “request for help” triggers, and tighter integration with post-layout parasitic feedback and area/power/yield optimization (Liu et al., 29 Sep 2025, Liu et al., 14 Apr 2025, Ahmadzadeh et al., 5 Nov 2025).

7. Broader Context: Agentic Human–AI Design in the AMS IC Flow

Human–agent collaborative AMS IC design represents a convergence of symbolic, data-driven, and interpretive AI methodologies with expert-driven flows. The combination of prompt-engineered LLM agents, human-in-the-loop corrective feedback, and explicit mathematical formalism enables both dramatic reductions in manual effort and increased trustworthiness over black-box automation approaches. This synthesis is rapidly transitioning from academic demonstration to real silicon verification and deployment in advanced technology nodes (Liu et al., 29 Sep 2025, Ahmadzadeh et al., 5 Nov 2025).
