Agint Compiler: Agentic Graph Compiler
- Agint Compiler is an agentic graph compiler that transforms natural language instructions into dynamically typed, effect-aware code DAGs using a multi-stage, type-floored pipeline.
- It employs explicit type floors, semantic graph transformations, and a hybrid LLM-function JIT runtime to achieve reproducible and optimizable code generation.
- The system supports incremental compilation with iterative refinement, speculative execution, and toolchain interoperability to scale complex software engineering workflows.
Agint Compiler is an agentic graph compiler, interpreter, and runtime designed to transform natural-language instructions into dynamically typed, effect-aware code DAGs for software engineering agents. Its approach leverages explicit type floors, semantic graph transformations, and a hybrid LLM-function Just-In-Time (JIT) runtime. The system supports iterative refinement, reproducible and optimizable execution, speculative evaluation, and toolchain interoperability, enabling both technical and non-technical users to prototype, refine, and deploy collaborative coding workflows efficiently (Chivukula et al., 24 Nov 2025).
1. Data Structures: Typed Code DAGs and Type Floors
Agint models workflows as directed acyclic graphs (DAGs) $G = (V, E)$, where $V$ is a set of nodes and $E \subseteq V \times V$ denotes directed edges representing data and control dependencies. Each node $v \in V$ is annotated with:
- Identifier: $\mathrm{id}(v)$, a unique node name
- Type floor: $f(v) \in \{\mathrm{TEXT}, \mathrm{TYPED}, \mathrm{SPEC}, \mathrm{STUB}, \mathrm{SHIM}, \mathrm{PURE}\}$
- Payload: $p(v)$, holding NL, types, specs, code stubs, shims, or final code
- Effect monoid: $\varepsilon(v)$, an element of an effect monoid (e.g., combinations of pure, read, write, and network effects)
- Resolution state: $\sigma(v) \in \{\mathrm{UNRESOLVED}, \mathrm{RESOLVED}\}$
An edge $(u, v) \in E$ denotes that $u$'s output flows into $v$ as a data or control dependency. The serializable grammar for DAGs is formalized in YAML/JSON, supporting tool interoperability.
The compiler enforces a partial order across type floors, $\mathrm{TEXT} <: \mathrm{TYPED} <: \mathrm{SPEC} <: \mathrm{STUB} <: \mathrm{SHIM} <: \mathrm{PURE}$, enabling staged refinement. Nodes promote via resolvers; the floor relation is reflexive and transitive:
$\begin{array}{c} \infer[\mathrm{FloorRefl}]{f <\!: f}{} \qquad \infer[\mathrm{FloorTrans}]{f_1 <\!: f_3}{f_1 <\!: f_2\quad f_2 <\!: f_3} \end{array}$
Promotion occurs via specialized resolvers in the graph context (e.g., the refine and resolve stages of the pipeline below).
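The node annotations and floor ordering can be sketched as a small data model. This is a minimal illustration under stated assumptions — the class and field names are hypothetical, not Agint's actual API:

```python
from dataclasses import dataclass, field
from enum import IntEnum


class Floor(IntEnum):
    """Type floors in promotion order: TEXT <: TYPED <: SPEC <: STUB <: SHIM <: PURE."""
    TEXT = 0
    TYPED = 1
    SPEC = 2
    STUB = 3
    SHIM = 4
    PURE = 5


def is_subfloor(f1: Floor, f2: Floor) -> bool:
    """f1 <: f2 -- reflexivity and transitivity follow from integer ordering."""
    return f1 <= f2


@dataclass
class Node:
    node_id: str
    floor: Floor = Floor.TEXT
    payload: str = ""                   # NL text, type signature, spec, stub, or final code
    effects: frozenset = frozenset()    # effect monoid element, e.g. {"read", "net"}
    resolved: bool = False


@dataclass
class DAG:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: set = field(default_factory=set)     # (src_id, dst_id) dependency pairs

    def promote(self, node_id: str, new_floor: Floor) -> None:
        """A resolver may only move a node upward in the floor order."""
        node = self.nodes[node_id]
        assert is_subfloor(node.floor, new_floor), "promotion must respect <:"
        node.floor = new_floor
```

Encoding floors as an `IntEnum` makes the partial order (here a total order) a plain integer comparison, so illegal demotions can be rejected with a single assertion.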
2. Hierarchical and Incremental Compilation Pipeline
Agint's compilation pipeline is multi-stage and incremental, facilitating scalable edits and resolution:
- NL Parsing ("compose"): Converts user-provided natural language to a plain TEXT-floor DAG.
- Schema Generation ("schemagin"): For data-centric components, creates schemas at the SPEC floor, supporting formats like SQL, DBML, JSON, YAML.
- Graph Construction: Materializes nodes, assigns floors, and builds dependencies.
- Type Checking ("refine"): Enriches nodes with explicit primitive type signatures, moving TEXT to TYPED.
- Effect Inference: Annotates each node with effect monoids via LLM analysis or historical runs.
- Spec Resolution: Upgrades TYPED nodes to SPEC and STUB signatures.
- Shim Injection: Wraps unresolved logic with virtual functions, moving STUB to SHIM.
- Code Generation & Optimization: Resolves to PURE code, optionally applying JIT and prompt optimization.
Semantic graph rewriting is integral. When a SPEC node defines pre/post conditions referencing subtasks, new SPEC nodes are generated, and edges are re-routed, supporting dynamic decomposition:
```
for node in DAG.nodes where node.floor == SPEC:
    if spec contains "X requires subtasks A, B":
        create new nodes A, B at floor SPEC
        rewire: A → node, B → node
        mark node as UNRESOLVED
```
Dynamic refinement accommodates contextual updates in TEXT nodes and propagates changes, supporting interactive development.
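The decomposition rule can be made concrete as a graph rewrite over a plain dictionary representation. This is an illustrative sketch only — the regex stands in for Agint's LLM-driven spec analysis, and the data layout is an assumption:

```python
import re


def rewrite_spec_nodes(nodes, edges):
    """Split SPEC nodes whose spec names subtasks into new SPEC dependency nodes.

    nodes: dict of node_id -> {"floor": str, "payload": str, "resolved": bool}
    edges: set of (src_id, dst_id) pairs
    """
    for nid, node in list(nodes.items()):
        if node["floor"] != "SPEC":
            continue
        # Stand-in pattern for a spec stating "X requires subtasks A, B".
        m = re.search(r"requires subtasks (\w+),\s*(\w+)", node["payload"])
        if not m:
            continue
        for name in m.groups():
            # Materialize each subtask as a fresh SPEC-floor node.
            nodes[name] = {"floor": "SPEC", "payload": f"subtask {name}",
                           "resolved": False}
            edges.add((name, nid))    # rewire: subtask output feeds the parent
        node["resolved"] = False      # parent must be re-resolved after the split
```

Because the rewrite only adds edges from new nodes into the existing node, acyclicity of the DAG is preserved by construction.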
3. Hybrid LLM–Function JIT Runtime and Execution
Agint's execution is managed by 'dagent', a hybrid runtime scheduling nodes as their dependencies resolve. Runtime decisions depend on the node’s floor:
- For nodes $v$ with $f(v) <: \mathrm{SHIM}$ (i.e., not yet PURE): issue LLM calls (via Flyte) to synthesize implementations.
- For nodes with $f(v) = \mathrm{PURE}$: invoke native code stubs directly.
Effect tracking employs a monadic formulation: each node executes in an effect monad indexed by its annotation $\varepsilon(v)$, with effects composed along edges via the monoid operation $\varepsilon(u) \cdot \varepsilon(v)$. This supports rollback of side-effects if failures occur during execution.
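A toy version of this effect tracking can be sketched as follows. The effect tags and the `(action, undo)` pairing are illustrative assumptions; Agint's actual monad is not specified at this level of detail:

```python
# Effects are frozensets of capability tags; the monoid operation is union,
# with the empty set (pure) as the identity element.
PURE = frozenset()


def combine(e1, e2):
    """Monoid operation: accumulate the effects of composed nodes."""
    return e1 | e2


def run_with_rollback(steps):
    """Execute (effect, action, undo) steps; undo completed side-effects on failure."""
    done = []
    acc = PURE
    try:
        for effect, action, undo in steps:
            action()
            acc = combine(acc, effect)
            done.append(undo)
    except Exception:
        for undo in reversed(done):   # roll back in reverse completion order
            undo()
        raise
    return acc
```

Recording an undo handler per completed step is what makes partial failure recoverable: effects already committed are reversed before the error propagates.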
Speculative execution is supported by tracing predicted arguments across DAG nodes and launching early LLM calls to hide latencies: by overlapping calls, end-to-end latency approaches the critical-path bound $\max_{p} \sum_{v \in p} \ell(v)$ rather than the sequential sum $\sum_{v} \ell(v)$, where $\ell(v)$ is the LLM call latency for node $v$ and $p$ ranges over root-to-leaf paths.
Reproducibility is addressed by deterministic prompt templates, seed flags, and model IDs, with flight-recorder logs enabling fully replayable inference.
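The floor-dependent dispatch can be sketched as a Kahn-style topological scheduler. This is a minimal sequential model — the real runtime dispatches concurrently and speculatively, and the two callbacks are hypothetical stand-ins for native execution and LLM synthesis:

```python
from collections import deque


def schedule(nodes, edges, synthesize_llm, run_native):
    """Execute nodes in dependency order, dispatching by type floor.

    nodes: dict of node_id -> {"floor": str, ...}
    edges: set of (src_id, dst_id) pairs
    """
    indeg = {nid: 0 for nid in nodes}
    succ = {nid: [] for nid in nodes}
    for src, dst in edges:
        indeg[dst] += 1
        succ[src].append(dst)
    ready = deque(nid for nid, d in indeg.items() if d == 0)
    order = []
    while ready:
        nid = ready.popleft()
        if nodes[nid]["floor"] == "PURE":
            run_native(nid)        # PURE floor: invoke the native code stub
        else:
            synthesize_llm(nid)    # sub-PURE floor: dispatch an LLM call
        order.append(nid)
        for nxt in succ[nid]:      # release successors as dependencies resolve
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order
```

A concurrent variant would launch every ready node in parallel and, speculatively, pre-dispatch LLM calls for nodes whose predicted arguments are already traced.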
4. The Agint Unix-Style Toolchain
Agint exposes four composable, Unix-style tools:
| Tool | Function | CLI Example |
|---|---|---|
| dagify | DAG compiler, refinement, build | dagify compose/refine/resolve/compile … |
| dagent | Hybrid runtime, validation, JIT | dagent validate/execute/interpret/synthesize … |
| schemagin | Schema generator/visualizer | schemagin compose/refine/visualize … |
| datagin | Data ingestion, transform | datagin ingest/synthesize/transform … |
A typical workflow involves:
- Generating schemas (schemagin),
- Ingesting and transforming data (datagin),
- Composing DAGs from NL (dagify),
- Resolving and compiling to executable forms (dagify),
- Executing workflows with the hybrid runtime (dagent).
Both CLI and GUI interfaces are available to accommodate developer and non-technical user workflows.
5. Performance, Scalability, and Empirical Results
The system achieves substantial improvements in structured output latency and resource utilization. Specifically, Hydantic+Flyte achieves a 3× speedup in structured output latency versus monolithic LLMs, along with a 4× reduction in context window usage per node, and near-linear parallel speedup for independent subgraphs.
Representative performance summary:
| System | Latency | Throughput | Context/Node |
|---|---|---|---|
| Vanilla LLM | 12 s | 0.08 nodes/s | 1024 tok |
| Hydantic+Flyte | 4.0 s | 0.25 nodes/s | 256 tok |
| Agint (full pipeline) | 3.5 s | 0.30 nodes/s | 200 tok |
Results are based on internal case studies with 50-node workflows, supporting the system’s locality-preserving and parallelization claims (Chivukula et al., 24 Nov 2025).
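The headline ratios follow directly from the table:

```python
# Figures taken from the performance table above.
latency = {"vanilla": 12.0, "hydantic_flyte": 4.0, "agint": 3.5}     # seconds
context = {"vanilla": 1024, "hydantic_flyte": 256, "agint": 200}     # tokens/node

# Hydantic+Flyte vs. a vanilla monolithic LLM:
speedup = latency["vanilla"] / latency["hydantic_flyte"]        # 3x latency speedup
ctx_reduction = context["vanilla"] / context["hydantic_flyte"]  # 4x less context/node
```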
6. End-to-End Example and Workflow Illustration
A complete workflow begins from a natural-language prompt—e.g., "Fetch data from REST API → clean JSON → load into PostgreSQL"—and Agint synthesizes a code DAG composed of nodes at TEXT and TYPED floors, progressing semantically until PURE code is emitted. Edges preserve data dependencies. The system then produces executable Python code routing data from API fetch to database ingest, with each function mapped to a node and resolved through the stages, enabling direct execution via the JIT-enabled runtime.
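The emitted Python for this example might look like the following sketch, with one function per DAG node. The function names, the `events` table, and the connection object are illustrative assumptions, not Agint's actual output:

```python
import json
import urllib.request


def fetch_data(url):
    """Node 1 (TEXT -> PURE): fetch raw JSON text from a REST API."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")


def clean_json(raw):
    """Node 2: parse the payload and drop records with missing fields."""
    records = json.loads(raw)
    return [r for r in records if all(v is not None for v in r.values())]


def load_postgres(records, conn):
    """Node 3: insert cleaned records into PostgreSQL (connection supplied by caller)."""
    with conn.cursor() as cur:
        for r in records:
            cur.execute("INSERT INTO events (payload) VALUES (%s)", (json.dumps(r),))
    conn.commit()

# Edges fetch_data -> clean_json -> load_postgres mirror the DAG's data dependencies:
# the runtime would route fetch_data's output into clean_json, then into load_postgres.
```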
7. Significance, Positioning, and Comparative Context
Agint's agentic graph compilation defines a paradigm shift in automating the transformation of NL prompts into context-rich, effect-annotated, reproducible code structures. By employing a DAG-centric incremental compilation model with explicit type floors, Agint promotes reliable, concurrent codebase composition and scales agentic workflows via graph locality.
In comparison to alternatives, Agint centralizes hierarchical graph refinement, explicit effect management, and continuous code/data transformation, addressing limitations in context management, scalability, and reproducibility found in monolithic agentic coding approaches (Chivukula et al., 24 Nov 2025). The framework’s emphasis on continuous co-creation via CLI and GUI also positions it for composable, collaborative use across software engineering teams.
A plausible implication is that Agint's explicit type-floors and effect-aware DAG semantics may generalize to broader agentic workspaces, providing a foundation for scalable, reproducible development within intelligent code generation systems.