AiEDA: Open-Source AI-Aided Design Library

Updated 11 November 2025
  • AiEDA Library is an open-source framework for AI-aided design that unifies heterogeneous EDA tools using standardized design-to-vector methodologies.
  • It transforms raw design data into multi-scale vector representations, facilitating seamless integration with ML/DL frameworks like PyTorch and TensorFlow.
  • The library supports various EDA tasks—from placement and routing to timing analysis—validated through representative benchmark tasks built on the iDATA dataset.

AiEDA is an open-source software library for artificial intelligence-aided design (AI-EDA) in digital integrated circuit workflows. Distinct from conventional Electronic Design Automation (EDA) toolchains, AiEDA integrates standardized design-to-vector data representation, end-to-end programmatic interfaces, and tightly coupled bridges to machine learning frameworks, thereby enabling modern AI-aided design (AAD) methodologies and research. The library provides unified APIs that abstract tool engines (placement, routing, timing, etc.), data management, multi-level vectorization of design artifacts, and model application, and has facilitated the construction of iDATA, a large structured dataset spanning all major design stages. AiEDA and iDATA collectively support representative benchmarks in prediction, generation, optimization, and analysis at net, path, graph, patch, and design levels (Qiu et al., 8 Nov 2025).

1. System Architecture and Layered Organization

AiEDA is structured into four functional layers: flow engines, data generation APIs, data management (including vectorization), and downstream application engines. This layered approach addresses fragmentation and heterogeneity commonly encountered in AI-EDA workflows due to disparate tool interfaces, non-standard file formats, and ad-hoc data extraction.

  • Flow Engines: AiEDA abstracts both open-source (e.g., OpenROAD, iEDA) and commercial (e.g., Cadence Innovus, PrimeTime) engines, as well as specialized accelerators (DREAMPlace, CUGR), into a uniform Python-accessible interface. Open-source engines with C++ APIs are wrapped via pybind11, while script-driven tools are controlled through generated TCL scripts.
  • Data Generation APIs: The RunFlow interface can invoke any engine to execute a design stage or full flow, while RunFeature enables extraction of scalar and spatial metrics (from logfiles, .json, .csv) at each stage.
  • Data Management: Workspaces encapsulate configuration (paths to .lef, .def, .lib, .sdc files, engine selection, parameters), output routing, and script management. The core vectorization routines transform raw design files (DEF, GDS, JSON, etc.) into multi-scale vector representations under a unified API.
  • Downstream Application Engines: These process vectorized data for AI workflows at multiple abstraction levels, support feature engineering via filtering and parsing, and provide AI-ready data structures for direct training and evaluation within PyTorch or TensorFlow.

This suggests that AiEDA is architected to resolve data interchange and integration bottlenecks across conventional AI-for-EDA pipelines by enforcing end-to-end standardization.
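
To make the workspace concept above concrete, the following is a minimal, hypothetical sketch of the kind of configuration a workspace encapsulates. The field names and paths are illustrative assumptions, not AiEDA's actual schema.

# Hypothetical workspace configuration sketch; field names are illustrative
# assumptions, not AiEDA's actual schema.
workspace_config = {
    "design": "gcd28",
    "engine": "iEDA",                                  # engine selection (e.g. iEDA, OpenROAD, Innovus)
    "inputs": {
        "lef": ["tech/tech.lef", "tech/cells.lef"],    # technology and cell LEFs
        "def": "design/gcd28_floorplan.def",           # physical design input
        "lib": ["lib/slow.lib", "lib/fast.lib"],       # timing libraries
        "sdc": "constraints/gcd28.sdc",                # timing constraints
    },
    "outputs": {
        "result_dir": "work/gcd28/output",             # stage outputs and reports
        "vector_dir": "work/gcd28/vectors",            # vectorized data (nets, graphs, paths, patches)
    },
    "parameters": {
        "placement_density": 0.7,                      # example engine parameter
        "routing_layers": [1, 6],
    },
}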

2. Design-to-Vector Methodology

AiEDA establishes “design-to-vector” as a general framework mapping heterogeneous EDA data into structured, AI-consumable forms:

$$f_{\rm vec}: \{\text{Raw EDA files}\} \;\longrightarrow\; V = \{v_{\rm design},\, v_{\rm net},\, v_{\rm graph},\, v_{\rm path},\, v_{\rm patch}\}$$

where each $v_{\cdot}$ is a structured container holding features, primitives, and metrics derived from core EDA artifacts.

  • Netlist-to-Vector: The gate-level netlist is interpreted as a hypergraph $G=(V,E)$ with $V$ the set of cell pins and $E$ the set of nets. These can be encoded as sparse incidence lists or as adjacency/degree arrays suitable for graph neural networks.
  • Layout-to-Vector: Physical layers are mapped to multi-channel rasters, using binary pixelization or physical property encoding:

$$\mathbf{L}[x, y, \ell] = \begin{cases} 1, & \text{if material is present at layer } \ell, \\ 0, & \text{otherwise.} \end{cases}$$

  • Map-to-Vector: Continuous properties such as congestion, IR-drop, and timing slack are discretized into grid tensors $\mathbf{M}\in\mathbb{R}^{H\times W\times C}$.
  • Net-to-Vector: Each routed net is decomposed into wires $\vec{w}_i = (x_s, y_s, x_e, y_e, \ell)$ and vias $\mathrm{via}_j = (x_c, y_c, \ell_{\rm bot}, \ell_{\rm top})$.
  • Path-to-Vector: Timing paths are captured as sequences of (R, C, slew, delay) tuples, with normalization applied as needed.
  • Shape-to-Vector: Polygons are stored as lists of vertex coordinates; for volumetric or 3D layouts, conversion to layered grids or point clouds is supported.

These representations allow for seamless application of ML/DL techniques including graph neural networks, convolutional nets, and transformers. The vectorization hierarchy is critical for mapping EDA objects (physical infrastructure, timing, parasitics) to inputs suitable for learning tasks.
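
As a concrete, hypothetical illustration of two of these encodings, the sketch below builds a sparse hypergraph incidence list from a toy netlist and rasterizes one wire segment onto a binary layer grid. It uses plain NumPy and is not AiEDA's internal implementation.

import numpy as np

# --- Netlist-to-vector: toy netlist as a hypergraph incidence list ---
# Each net (hyperedge) connects a set of cell pins (vertices).
pins = ["u1/A", "u1/Y", "u2/A", "u2/Y", "u3/A"]            # toy pin universe
nets = {"n1": ["u1/Y", "u2/A"], "n2": ["u2/Y", "u3/A"]}    # toy nets

pin_index = {p: i for i, p in enumerate(pins)}
# Sparse incidence list: (net_id, pin_id) pairs, suitable for GNN libraries.
incidence = np.array(
    [(ni, pin_index[p]) for ni, (_, members) in enumerate(nets.items()) for p in members],
    dtype=np.int64,
)

# --- Layout-to-vector: binary pixelization of one routing layer ---
H, W, num_layers = 32, 32, 2
L = np.zeros((H, W, num_layers), dtype=np.uint8)   # L[x, y, layer] in {0, 1}
# Mark a wire segment occupying rows 10..12, columns 5..20 on layer 0.
L[10:13, 5:21, 0] = 1

print(incidence.shape)   # (4, 2): four (net, pin) incidences
print(L.sum())           # 48 occupied pixels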

3. Programmatic Interfaces and Integration with AI Frameworks

AiEDA exposes a comprehensive Python API that encompasses flow control, data extraction, vectorization, and AI model training:

  • Installation:

pip install aieda

  • Design Flow and Feature Extraction:

from aieda.workspace import workspace_create
from aieda.flows import RunFlow
from aieda.data import RunFeature

# Create a workspace for the gcd28 design, bound to the iEDA engine.
ws = workspace_create("work/gcd28", tool="iEDA")
# Run the routing (RT) stage, then extract routing-stage features.
flow = RunFlow.runRT(ws, tool="iEDA")
feat = RunFeature.rt(ws, tool="iEDA")

  • Vectorization:

from aieda.data import RunVectors

# Vectorize the routed design: read the DEF, then emit net-, graph-,
# path-, and patch-level vector files into the workspace's vector directory.
vec = RunVectors(ws)
vec.read_def(ws.input_def)
vec.generateNet(ws.vec_dir)
vec.generateGraph(ws.vec_dir)
vec.generatePath(ws.vec_dir)
vec.generatePatch(ws.vec_dir)

  • Downstream Data Loading:

from aieda.data import DataVectors

# Load the generated vectors back as AI-ready data structures.
data = DataVectors(ws)
nets = data.load_nets()
patchs = data.load_patchs()
timing_g = data.load_timing_graph()
timing_p = data.load_timing_paths()

  • Model Training (PyTorch/TensorFlow Ready):

from aieda.ai import select_model

# Select a built-in model and train it on net-level wire data.
# Trainer and config appear as given in the original example; they are
# assumed to come from AiEDA's training utilities.
model = select_model("TabNet")
train_loader = data.get_data("net_wire")
trainer = Trainer(model, config)
trainer.train()

Data is produced at multiple levels, facilitating batch-processing, cross-stage alignment, and multi-task learning. A plausible implication is that this API design ensures reproducibility and scalability in AI-EDA research and industrial flows.
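
The following is a minimal sketch of how vectorized records loaded through the API above might be wrapped for a standard PyTorch training loop. The record fields and the feature selection are assumptions for illustration, not AiEDA's actual data schema.

import torch
from torch.utils.data import Dataset, DataLoader

class NetWirelengthDataset(Dataset):
    """Wraps net-level vector records for supervised wirelength prediction.

    The record fields accessed here (pin_count, fanout, hpwl, wirelength) are
    illustrative assumptions about the attributes a net vector might carry.
    """

    def __init__(self, net_records):
        self.records = net_records

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        r = self.records[idx]
        x = torch.tensor([r["pin_count"], r["fanout"], r["hpwl"]], dtype=torch.float32)
        y = torch.tensor([r["wirelength"]], dtype=torch.float32)
        return x, y

# Toy records standing in for the output of data.load_nets().
toy_nets = [
    {"pin_count": 3, "fanout": 2, "hpwl": 12.0, "wirelength": 14.5},
    {"pin_count": 5, "fanout": 4, "hpwl": 30.0, "wirelength": 41.2},
]
loader = DataLoader(NetWirelengthDataset(toy_nets), batch_size=2, shuffle=True)

model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for x, y in loader:                       # one pass over the toy data
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()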

4. iDATA Dataset and Data Pipeline

Leveraging AiEDA’s pipeline, the iDATA dataset was generated from 50 real 28 nm designs, spanning digital signal processors, CPUs, SoCs, and internal IP cores. The workflow is:

  • Flow: RTL → Synthesis → Innovus PD → iEDA extraction
  • Vectorization: runs with 32 threads on Xeon Platinum 8268 servers with 1.5 TB RAM.

Table: iDATA Dataset Breakdown

Data Level    # Items          Size
Design        50               .json stats, maps
Net           21.45 M files    235.9 GB
Graph         50 files         ~10 GB
Path          1.63 M files     149.9 GB
Patch         1.61 M files     207.2 GB

Performance validation includes net timing and power fidelity ratios ≥ 0.975 (for s713), and patch cell density correlation of 0.993. Compared to prior datasets, iDATA provides complete “Foundation Data” (as opposed to task-specific “Feature Data”), supporting a wider array of downstream tasks (Qiu et al., 8 Nov 2025).
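
As a sketch of how the patch-level correlation check can be reproduced on vectorized outputs, the snippet below computes a Pearson correlation between two per-patch cell density arrays; the values are toy placeholders, not the actual iDATA validation data.

import numpy as np

# Toy stand-ins for per-patch cell density from two sources
# (e.g., AiEDA-vectorized vs. reference extraction).
density_ref = np.array([0.42, 0.55, 0.61, 0.30, 0.78])
density_vec = np.array([0.41, 0.56, 0.60, 0.31, 0.77])

# Pearson correlation, the metric reported for patch cell density (0.993 in the paper).
corr = np.corrcoef(density_ref, density_vec)[0, 1]
print(f"patch cell density correlation: {corr:.3f}")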

5. Supported AI-Aided Design Tasks

Seven representative AI-aided design tasks are benchmarked using AiEDA and iDATA, spanning prediction, optimization, generation, and analysis:

  1. Net-Level Wirelength Prediction: Two-stage TabNet (via count $R^2 = 0.94$; wirelength-ratio MRE reduced by 6%).
  2. Path-Level Delay Prediction: Hierarchical Conv1D → Transformer model, yielding MAE 0.023 and MRE 0.077.
  3. Graph-Level Delay Prediction: GNN encoder (best: GIN) + Transformer, reaching MSE 0.0251, MAE 0.0556 ($R^2 = 0.9646$).
  4. Patch-Level Congestion Prediction: U-Net regression on 4×4 patches, NRMSE 0.18 (naive baseline: 0.23).
  5. Routing Mask Generation: U-Net model with F1 score 81%, IoU 67% on hold-out designs.
  6. Parameter Optimization: Multi-objective TPE for placement, achieving up to 72% HPWL reduction and 100% WNS/TNS improvement in small designs.
  7. Metrics Analysis & Tool Comparison: Comparative analysis of iEDA versus Innovus placement on 30 designs, highlighting non-monotonic relationships between wirelength, RSMT, and timing.

These benchmarks demonstrate the utility of standardized multi-scale vector interfaces in supporting the entire ML-augmented EDA research pipeline.
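
To illustrate task 6, the sketch below runs a multi-objective TPE search over two placement parameters using Optuna. The objective is a synthetic stand-in for real HPWL/WNS evaluations, and the choice of Optuna is an assumption rather than the tooling reported by the authors.

import optuna

def evaluate_placement(density, wirelength_weight):
    """Synthetic stand-in for running placement and measuring HPWL and WNS.

    A real objective would invoke the flow with these parameters and read the
    resulting metrics; here we return a toy analytic surrogate.
    """
    hpwl = (density - 0.6) ** 2 + 0.1 * wirelength_weight    # lower is better
    wns = -abs(wirelength_weight - 0.5) - 0.05 * density     # higher (less negative) is better
    return hpwl, wns

def objective(trial):
    density = trial.suggest_float("target_density", 0.5, 0.9)
    ww = trial.suggest_float("wirelength_weight", 0.0, 1.0)
    hpwl, wns = evaluate_placement(density, ww)
    return hpwl, -wns   # both objectives are minimized below

study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=optuna.samplers.TPESampler(seed=0),
)
study.optimize(objective, n_trials=50)
print(study.best_trials[0].params)   # one point on the Pareto front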

6. Impact, Availability, and Prospects

AiEDA addresses fragmentation in AI-EDA research and practice by:

  • Providing a unified, end-to-end flow from RTL to GDS under a single Python API.
  • Establishing rigorous, multi-level, standardized vector representations.
  • Delivering seamless bridges to mainstream AI libraries (PyTorch/TensorFlow).
  • Enabling the construction and dissemination of large, structured datasets (iDATA) with verified physical and timing fidelity.
  • Supporting a broad array of AAD tasks beyond prediction, including generative modeling and optimization.

The codebase is openly available at https://github.com/OSCC-Project/AiEDA, with iDATA pending public release (Qiu et al., 8 Nov 2025). This suggests AiEDA is positioned as both an enabler of benchmarking for the AI-EDA research community and a framework for prototyping industrial-strength AI-EDA flows using universally accessible, open-source components.
