Genie-ASI: LLM for Analog Subcircuits
- Genie Architecture is a training-free, LLM-based methodology that identifies analog subcircuits in SPICE netlists using in-context procedural abstraction.
- It employs a two-phase process: first generating natural language instructions from examples, then translating them into executable Python code with iterative error repair.
- The approach achieves robust performance across abstraction levels, with perfect detection at the device level and strong results for complex functional blocks.
Genie-ASI (Generative Instruction and Executable Code for Analog Subcircuit Identification) is a training-free, LLM-based methodology designed for rapid, modular, and human-interpretable analog subcircuit identification in electronic design automation (EDA) contexts. It specifically targets the challenge of structure identification within SPICE netlists—a task foundational for simulation, topology recognition, schematic checking, sizing, and layout in analog and mixed-signal design—without relying on hand-crafted rulebases, graph ML/GNN detectors, or large labeled datasets.
1. Methodological Framework and Workflow
GENIE-ASI redefines subcircuit identification as a two-phase process executed entirely by LLM inference, not traditional model training or rule classification:
- In-Context Instruction Generation:
- The LLM is prompted using several annotated demonstration SPICE netlists, each with target subcircuit labels.
- Few-shot, in-context learning is used to extract procedural rules, output as structured natural language instructions for subcircuit detection (e.g., clustering rules for MOSFET connectivity, topological constraints for subcircuit types).
- If multiple demonstrations are supplied, LLMs are guided to merge and generalize instructions, producing a robust intermediate representation per subcircuit type.
- Translation to Executable Code:
- Instructions are then translated, in a second LLM prompt, to concrete Python code modules (with built-in test assertions against labeled reference netlists).
- An automated feedback loop for code repair leverages assertion or runtime errors: failures are included in subsequent LLM prompts to iteratively patch logical or syntactic errors, forming a REPL-like refinement system. The loop halts upon test success or exhaustion of retries.
The produced code can be directly reused for batch inference on new, unseen SPICE netlists.
Workflow Pseudocode (from paper):
```python
for i in 1...N_examples:
    instruction_i = LLM(prompt="Given labeled example x_i, output subcircuit sc identification procedure")
merged_instruction = LLM(prompt="Merge instruction_1, ..., instruction_N into a unified procedure")
code = LLM(prompt="Translate merged_instruction into Python code for identifying sc")
while test_fail(code):
    code = LLM(prompt="Fix code given error message E")
```
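The workflow above can be fleshed out as a runnable Python sketch. Here `call_llm` and `run_tests` are placeholders for any chat-completion API and any test harness; the prompt strings are illustrative, not the paper's exact wording.

```python
# Sketch of the two-phase Genie-ASI flow: instruction generation/merging,
# code generation, and an assertion-driven repair loop.
from typing import Callable, List, Optional

def genie_asi(examples: List[str],
              call_llm: Callable[[str], str],
              run_tests: Callable[[str], Optional[str]],
              max_retries: int = 5) -> str:
    """Return Python detector code for one subcircuit type."""
    # Phase 1: one natural-language instruction per labeled example.
    instructions = [
        call_llm(f"Given labeled example:\n{ex}\n"
                 "Describe a procedure to identify the target subcircuit.")
        for ex in examples
    ]
    # Merge per-example instructions into one generalized procedure.
    merged = call_llm("Merge these procedures into one:\n"
                      + "\n---\n".join(instructions))
    # Phase 2: translate the merged instruction into executable code.
    code = call_llm(f"Translate into Python code:\n{merged}")
    # REPL-like repair loop: feed test/runtime errors back to the LLM.
    for _ in range(max_retries):
        error = run_tests(code)          # None means all assertions passed
        if error is None:
            break
        code = call_llm(f"Fix this code:\n{code}\nError:\n{error}")
    return code

# Minimal stub demo: a fake LLM whose first draft fails and whose
# "repair" passes, exercising one iteration of the feedback loop.
def _fake_llm(prompt: str) -> str:
    return "fixed_code" if prompt.startswith("Fix") else "draft_code"

print(genie_asi(["<labeled netlist>"], _fake_llm,
                lambda code: None if code == "fixed_code" else "AssertionError"))
# prints "fixed_code"
```

The loop halts on test success or after `max_retries` attempts, mirroring the paper's description of retry exhaustion.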
2. Application: Targets and Generalization
GENIE-ASI evaluates analog subcircuit structure extraction at three discrete abstraction levels:
- HL1 (Device-level): Basic device patterns such as diode-connected MOSFETs, load capacitors.
- HL2 (Structural-level): Canonical analog blocks (current mirrors, differential pairs, inverters).
- HL3 (Functional/block-level): High-level functional units (amplification, bias, feedback, multi-device stages).
The approach is detached from any hard-coded domain knowledge. It infers, from examples, the structural logic relationships underlying devices and their interconnectivity—applicable to previously unobserved netlist topologies. Provided only a handful of demonstrations and their target labels, the model achieves zero-shot transfer to netlists and subcircuit instantiations with unseen structural diversity.
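As a concrete illustration of the HL1 level, the snippet below shows the kind of detector the pipeline might emit for diode-connected MOSFETs (gate tied to drain). The device-tuple representation is an assumption for this sketch, not the paper's actual output format.

```python
# Illustrative HL1 detector: a diode-connected MOSFET is one whose gate
# and drain share a net. Devices are (name, drain, gate, source, bulk)
# tuples parsed from a SPICE netlist (representation assumed here).

def diode_connected(devices):
    """Return names of MOSFETs whose gate net equals their drain net."""
    return [name for name, drain, gate, source, bulk in devices
            if gate == drain]

netlist = [
    ("M1", "n1", "n1", "gnd", "gnd"),   # gate tied to drain: diode-connected
    ("M2", "out", "n1", "gnd", "gnd"),  # driven by M1's gate net
]
print(diode_connected(netlist))  # ['M1']
```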
3. Benchmark and Performance Metrics
A new benchmark suite is introduced:
- 300 flat SPICE netlists (operational amplifiers, 9–46 transistors per netlist, anonymized node/component names).
- Labeled at HL1-HL3 for exact instance grouping and type.
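Flat netlists like these can be turned into device records with a few lines of parsing. The sketch below follows standard SPICE MOSFET card syntax (`M<name> <drain> <gate> <source> <bulk> <model> ...`); the dict layout is an assumption of this example.

```python
# Parse flat SPICE MOSFET cards into device records. Standard syntax:
# M<name> <drain> <gate> <source> <bulk> <model> [params...]

def parse_mosfets(netlist_text):
    devices = []
    for line in netlist_text.splitlines():
        tokens = line.split()
        if tokens and tokens[0].upper().startswith("M") and len(tokens) >= 6:
            name, d, g, s, b, model = tokens[:6]
            devices.append({"name": name, "d": d, "g": g,
                            "s": s, "b": b, "model": model})
    return devices

spice = """\
M1 net3 net3 0 0 nmos
M2 net7 net3 0 0 nmos
C1 net7 0 1p
"""
print([dev["name"] for dev in parse_mosfets(spice)])  # ['M1', 'M2']
```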
Evaluation uses a strict cluster-level F1 score: only a perfect match of type and instance inclusion counts as correct.
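The exact scoring code is not given in the source; the sketch below implements one strict reading of the metric, where a predicted (type, device-set) cluster is a true positive only if it exactly matches a ground-truth cluster.

```python
# Strict cluster-level F1: a predicted cluster counts as a true positive
# only if both its type and its full device membership match ground truth.

def cluster_f1(predicted, truth):
    pred = {(t, frozenset(devs)) for t, devs in predicted}
    gold = {(t, frozenset(devs)) for t, devs in truth}
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

truth = [("current_mirror", {"M1", "M2"}), ("diff_pair", {"M3", "M4"})]
pred  = [("current_mirror", {"M1", "M2"}), ("diff_pair", {"M3", "M5"})]
print(cluster_f1(pred, truth))  # 0.5  (one exact match out of two clusters each way)
```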
Summary of measured results (best model per complexity):
| Hierarchy | Example Types | Best F1 |
|---|---|---|
| HL1 | Diode-connected, load/comp. caps | 1.00 |
| HL2 | Current mirror, diff pair, inverter | 0.81 |
| HL3 | Amplification, bias, feedback, load stage | 0.31 |
- On HL1, matches rule engines (F1 = 1.0).
- On HL2, approaches rule performance (F1 = 0.81).
- On HL3, achieves nontrivial detection on challenging, functionally abstracted blocks (F1 = 0.31).
- Best overall performance is achieved with GPT-4.1; Claude, Gemini, Grok are competitive; public LLaMA trails.
Error profile:
- At HL3, errors are dominated by incorrect instance grouping.
- Logic and test-assertion failures dominate (structural grouping logic is hard to get right), while pure code syntax errors are rare.
In all cases, including the explicit natural-language instructions in the prompt substantially improves LLM accuracy compared with prompting for code directly.
4. Innovations and Architectural Distinctiveness
Relative to prior analog subcircuit identification strategies, GENIE-ASI exhibits several unique properties:
- Training-Free Paradigm: No backpropagation, GNN-based contrastive learning, or hand-tuned subgraph templates are required. The LLM operates entirely by in-context procedural abstraction.
- Procedural Abstraction: Instead of matching labeled graphs or template morphologies, the LLM derives stepwise human-readable logic—exposing rules that are interpretable, correctable, and generalizable.
- Automated Code Repair: Assertion-driven feedback refines code iteratively, supporting robust automation of complex logical pipelines and minimizing failure propagation.
- Method and Model Agnostic: New/better LLMs can slot in as complete drop-in replacements; domain-specificity can be injected by prompt engineering, not system rewrites.
- Reusability and Rapid Extensibility: Generated Python modules become direct, modular detectors for new and arbitrary subcircuit types; expansions are made by supplementing few-shot examples in the prompt, not by annotating or retraining on thousands of netlists.
5. Architectural Formalism
The system can be formalized as a pipeline of conditional LLM invocations:
- Instruction generation (for each example $x_i$): $I_i = \mathrm{LLM}(P_{\mathrm{inst}}, x_i)$
- Instruction merging (across all $I_1, \dots, I_N$ into a final instruction $I$): $I = \mathrm{LLM}(P_{\mathrm{merge}}, I_1, \dots, I_N)$
- Code generation: $C = \mathrm{LLM}(P_{\mathrm{code}}, I)$
- Automated code repair: $C \leftarrow \mathrm{LLM}(P_{\mathrm{fix}}, C, E)$, repeated until tests pass or retries are exhausted
6. Implications for Analog Design Automation
GENIE-ASI demonstrates, for the first time, that LLMs—not just graph neural networks or domain-specific regular expressions—can serve as modular, reasoning-based EDA agents. The methodology dramatically lowers the upfront cost of adapting subcircuit detection flows to new block families or design nodes, reduces required expert involvement, and makes explainable automated documentation feasible via the natural language intermediate instruction format. The approach is robust to low- or zero-shot settings and opens a path toward broader foundation model integration in analog design automation pipelines.
7. Illustrative Example and Deployment
For illustration, consider the identification of NMOS current mirrors:
- Prompt examples: labeled netlists containing known current mirrors.
- LLM extracts grouping/connection/diode-configuration logic, formalizes rules:
1. Group NMOS devices.
2. Identify sets with identical gate and source connections, one device acting as a diode.
3. Validate the connection structure to ensure a proper current mirror topology.
- Executable Python code is synthesized, parsed against user-supplied netlists, and tested for correctness against benchmarked ground truth, undergoing repair if assertion failures occur.
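A detector synthesized from the three rules above might look like the following sketch. The dict-based device representation and the `find_nmos_mirrors` name are assumptions of this example, not the paper's generated code.

```python
# Sketch of a current-mirror detector following the rules above:
# group NMOS devices sharing gate and source nets, and keep groups
# containing exactly one diode-connected device (gate net == drain net).
from collections import defaultdict

def find_nmos_mirrors(devices):
    """Return lists of device names forming NMOS current mirrors."""
    groups = defaultdict(list)
    for dev in devices:
        if dev["model"] == "nmos":
            groups[(dev["g"], dev["s"])].append(dev)   # rule 1 + 2: group by (gate, source)
    mirrors = []
    for members in groups.values():
        diodes = [d for d in members if d["g"] == d["d"]]  # rule 2: one diode device
        if len(members) >= 2 and len(diodes) == 1:         # rule 3: validate topology
            mirrors.append(sorted(d["name"] for d in members))
    return mirrors

devices = [
    {"name": "M1", "d": "n1", "g": "n1", "s": "gnd", "model": "nmos"},
    {"name": "M2", "d": "out", "g": "n1", "s": "gnd", "model": "nmos"},
]
print(find_nmos_mirrors(devices))  # [['M1', 'M2']]
```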
For deployment, LLM-generated detectors can be batched on netlist directories, providing structured subcircuit instance mappings for further simulation or design workflows.
GENIE-ASI constitutes a shift toward LLMs as zero-shot, procedural automation engines in industrial analog CAD, with demonstrated parity to rule-based tools in basic tasks and competitive performance in complex, abstracted architectures, all achieved without curated labeled datasets or domain-specific retraining (Pham et al., 26 Aug 2025).