ARC-NCA: Graph Games & Neural Automata

Updated 19 November 2025
  • ARC-NCA is a dual-domain concept encompassing a non-disconnecting variant of Arc-Kayles in graph theory and a neural cellular automata method for solving abstraction tasks.
  • In graph games, ARC-NCA is analyzed via Grundy theory and parity-based algorithms; the game is PSPACE-complete in general but polynomial-time solvable on specific graph classes.
  • The neural cellular automata approach employs local update rules and a cost-efficient training regimen to achieve few-shot generalization on ARC tasks with competitive performance.

ARC-NCA denotes two technically distinct concepts: (1) the non-disconnecting variant of Arc-Kayles, a well-studied subtraction game on graphs, and (2) a family of Neural Cellular Automata architectures designed to solve few-shot abstraction and reasoning tasks in the Abstraction and Reasoning Corpus (ARC) and ARC-AGI. Both domains use the abbreviation “ARC-NCA,” but their contexts, underlying mathematics, and application areas differ. Below, each technical thread is examined through its definitions, complexity results, algorithmic properties, neural computational models, and empirical impact.

1. ARC-NCA in Graph Games: Definitions and Grundy Theory

ARC-NCA, as introduced in the combinatorial game literature, refers to the non-disconnecting variant of Arc-Kayles, itself defined as follows: given a finite undirected graph $G = (V, E)$, players alternately select an edge $e = \{u, v\} \in E$ and delete both endpoints together with all incident edges. In the non-disconnecting variant (ARC-NCA), moves are restricted so that the subgraph remaining after removal of $u$ and $v$ stays connected. The formal Grundy value for an ARC-NCA position $G$ is $g_n(G) = \mathrm{mex}\{\, g_n(H) : H \in \mathrm{Opt}_n(G) \,\}$, where $\mathrm{Opt}_n(G)$ denotes the set of connected children resulting from valid moves. ARC-NCA is also described as the connected subtraction game $\#1\{2\}$: at each turn, two adjacent vertices are removed, provided that the graph remains connected (Burke et al., 16 Apr 2024).
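
The recursive definition translates directly into a brute-force solver. The following Python sketch (exponential-time, purely illustrative) computes $g_n(G)$ by memoized recursion over connected children; treating the empty graph as connected is a convention of this sketch, not a claim from the paper.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    m = 0
    while m in values:
        m += 1
    return m

def is_connected(vertices, edges):
    """Depth-first connectivity check on the subgraph induced by `vertices`."""
    if not vertices:
        return True  # sketch convention: the empty graph counts as connected
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

@lru_cache(maxsize=None)
def grundy(vertices, edges):
    """g_n(G): a move deletes both endpoints of an edge (and all incident
    edges) and is legal only if the remaining graph stays connected."""
    child_values = set()
    for u, v in edges:
        rest_v = vertices - {u, v}
        rest_e = frozenset(e for e in edges if u not in e and v not in e)
        if is_connected(rest_v, rest_e):
            child_values.add(grundy(rest_v, rest_e))
    return mex(child_values)

# Example: the path P4 (1-2-3-4). Removing the middle edge would disconnect
# the graph, so only the two end edges are legal moves; g_n(P4) = 0, i.e. a
# P-position (second-player win).
V = frozenset({1, 2, 3, 4})
E = frozenset({(1, 2), (2, 3), (3, 4)})
print(grundy(V, E))  # 0
```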

2. Computational Complexity and Algorithmic Results

ARC-NCA admits a dichotomy in computational complexity:

  • PSPACE-completeness: ARC-NCA is PSPACE-complete in general, including when restricted to structured classes like split graphs and bipartite graphs of arbitrarily high even girth. Reductions from Node-Kayles and Avoid-True demonstrate hardness by encoding the existential and strategic nature of edge removals in connected graphs.
  • Polynomial-time solvability: Certain graph families admit efficient resolution. Unicyclic graphs, clique trees (block graphs), and threshold graphs with twin-free cliques are in P. For unicyclic graphs, $O(n)$ algorithms enumerate all possible "cycle-moves" and resolve their effects using recursive parity checks. In clique trees, backbone-move parity on articulation points suffices. For twin-free threshold graphs of size $n \geq 5$, the problem reduces to the one-pile subtraction game $\#1\{1,2\}$, whose outcome is periodic with period 3 and computable in $O(n + m)$ time (see the sketch after the table below).
| Graph Class | ARC-NCA Complexity | Main Resolution Technique |
|---|---|---|
| General / split / bipartite | PSPACE-complete | Reductions from Node-Kayles / Avoid-True |
| Unicyclic | Polynomial-time | Parity of legal "cycle-moves" |
| Clique tree / block graph | Polynomial-time | Articulation-backbone edge counting |
| Twin-free threshold | Polynomial-time | Mapping to $\#1\{1,2\}$ subtraction |
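
To make the last table row concrete, the sketch below computes Grundy values for the one-pile subtraction game $\#1\{1,2\}$ and verifies the period-3 pattern; the mapping from a twin-free threshold graph to a pile size follows the construction in Burke et al. and is not reproduced here.

```python
def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    m = 0
    while m in values:
        m += 1
    return m

def grundy_subtraction_1_2(n):
    """Grundy value of a pile of n tokens where a move removes 1 or 2
    tokens: the sequence 0, 1, 2, 0, 1, 2, ... is periodic with period 3."""
    g = [0] * (n + 1)
    for k in range(1, n + 1):
        options = {g[k - 1]} | ({g[k - 2]} if k >= 2 else set())
        g[k] = mex(options)
    return g[n]

# Piles with n % 3 == 0 have Grundy value 0 and are P-positions: the
# previous player wins with optimal play.
assert [grundy_subtraction_1_2(n) for n in range(7)] == [0, 1, 2, 0, 1, 2, 0]
```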

3. Symmetry Strategy and Connections to Graph Isomorphism

In classical Arc-Kayles, the existence of an edge-disjoint involutive automorphism ensures that the second player can always mirror the opponent’s move, guaranteeing a win (P-position). Such symmetries are challenging to detect: deciding the existence of an edge-disjoint involutive automorphism is GI-complete, since reductions can encode full isomorphism checks in bipartite graphs. In ARC-NCA, the connected move constraint precludes generic mirroring, but the relationship to GI persists for symmetry detection and deeper combinatorial properties (Burke et al., 16 Apr 2024).
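
For illustration, the following brute-force Python sketch searches for such a symmetry. The disjointness condition used here (each edge and its image share no endpoint) is one plausible formalization of "edge-disjoint"; the precise definition is as in Burke et al.

```python
from itertools import permutations

def has_mirror_symmetry(vertices, edges):
    """Brute-force search for an involutive automorphism sigma whose edge
    images are vertex-disjoint from the original edges. Exponential-time by
    design: deciding existence is GI-hard, so no efficient general algorithm
    is expected."""
    vs = sorted(vertices)
    es = {frozenset(e) for e in edges}
    for perm in permutations(vs):
        sigma = dict(zip(vs, perm))
        if any(sigma[sigma[v]] != v for v in vs):
            continue  # not an involution
        if any(frozenset({sigma[u], sigma[v]}) not in es for u, v in es):
            continue  # not an automorphism: some edge image is not an edge
        if all({sigma[u], sigma[v]}.isdisjoint({u, v}) for u, v in es):
            return True  # every edge maps to a vertex-disjoint edge
    return False

# The 4-cycle admits the antipodal involution (0<->2, 1<->3), so the second
# player can mirror every move: C4 is a P-position in classical Arc-Kayles.
print(has_mirror_symmetry({0, 1, 2, 3},
                          {(0, 1), (1, 2), (2, 3), (3, 0)}))  # True
```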

4. ARC-NCA in Neural Cellular Automata for Abstraction and Reasoning

ARC-NCA also refers to a developmental approach for solving ARC and ARC-AGI tasks using Neural Cellular Automata (NCA) and their extension EngramNCA. Tasks consist of few ($\le 3$) input–output grid examples requiring robust abstraction and adaptive rule induction. ARC-NCA recasts each ARC task as the emergence of a small, self-organizing “developmental program” obtained by running the NCA for a fixed number of micro-steps on the input grid. The lattice maintains a state tensor $S^{(t)} \in \mathbb{R}^{H \times W \times C}$, each cell holding a $C$-dimensional vector whose channels encode colors (RGBA) and hidden features. Updates are local:

$S^{(t+1)} = S^{(t)} + f\left(W * S^{(t)}\right)$

where $f$ is a small multilayer perceptron acting on $K$ sensed channels extracted by a $3 \times 3$ convolution kernel $W$ (Guichard et al., 13 May 2025, Xu et al., 18 Jun 2025).
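
A minimal PyTorch sketch of this update rule follows; the channel counts, hidden width, and implementation of $f$ as $1 \times 1$ convolutions are illustrative assumptions rather than the papers' exact configuration.

```python
import torch
import torch.nn as nn

class NCAUpdate(nn.Module):
    """One local NCA micro-step, S <- S + f(W * S). Sizes here (C = 20 state
    channels, K = 3C perception channels, 128 hidden units) are assumptions
    for illustration."""

    def __init__(self, channels: int = 20, hidden: int = 128):
        super().__init__()
        # W: a learned 3x3 perception kernel producing K sensed channels.
        self.perceive = nn.Conv2d(channels, 3 * channels,
                                  kernel_size=3, padding=1, bias=False)
        # f: a small per-cell MLP, implemented as 1x1 convolutions.
        self.mlp = nn.Sequential(
            nn.Conv2d(3 * channels, hidden, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return s + self.mlp(self.perceive(s))

# Run T = 10 micro-steps on a 10x10 lattice with a batch of one grid.
s = torch.zeros(1, 20, 10, 10)
nca = NCAUpdate()
for _ in range(10):
    s = nca(s)
```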

EngramNCA introduces a private memory tensor $M^{(t)}$ whose entries are not exposed to neighboring cells unless published, enabling primitive shape generation, morphological pattern encoding, and propagation across the visible grid.
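
A schematic of the public/private channel split, with a hypothetical channel layout (not the paper's exact mechanism):

```python
import torch

# Public channels feed the 3x3 perception kernel; the private memory M is
# read only by the cell's own update MLP and can be "published" by writing
# its contents into public channels. The 16/8 split is an assumption.
PUBLIC, PRIVATE = 16, 8
state = torch.zeros(1, PUBLIC + PRIVATE, 10, 10)
public, memory = state[:, :PUBLIC], state[:, PUBLIC:]
# Perception convolutions would see only `public`; `memory` stays cell-local.
```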

5. Training Regimen and Few-Shot Generalization

ARC-NCA models are trained via gradient descent in a few-shot, per-task paradigm. Each automaton is initialized and trained solely on the sparse input–output grids from a single ARC problem. The loss is computed as the pixelwise mean squared error:

$\mathcal{L} = \frac{1}{HWC} \sum_{i,j,k} \left( S^{(T)}_{i,j,k} - S^{\text{target}}_{i,j,k} \right)^2$

and optimized via backpropagation through time with AdamW and a decaying learning rate. A task counts as solved when $\log \mathcal{L} \leq -7$. No external data or pretraining is used; inference is extremely cost-efficient (e.g., $3 \times 10^{-4}$ USD/task on an RTX 4070 Ti), three orders of magnitude cheaper than GPT-4.5-based approaches ($\approx 0.29$ USD/task) (Guichard et al., 13 May 2025).
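
A per-task training loop consistent with this description might look as follows; the learning rate, decay schedule, and epoch budget are illustrative assumptions.

```python
import math
import torch

def train_task(nca, x, y, steps=10, epochs=2000, lr=2e-3):
    """Per-task training sketch: backpropagation through T micro-steps with
    AdamW and an exponentially decaying learning rate. `nca` is any update
    module like the one sketched above; `x` and `y` are one input/target
    pair lifted to the C-channel state space."""
    opt = torch.optim.AdamW(nca.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.999)
    for _ in range(epochs):
        s = x.clone()
        for _ in range(steps):            # unroll T steps (BPTT)
            s = nca(s)
        loss = ((s - y) ** 2).mean()      # pixelwise MSE over H, W, C
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()
        if math.log(loss.item() + 1e-12) <= -7:  # solved-task threshold
            break
    return loss.item()
```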

Key hyperparameters in the NCA regime include hidden channels $H = 20$, time steps $T = 10$, a mask probability $m$ sampled from $[0, 0.75]$, and a batch configuration of 128 trials per example per epoch. Regularization through asynchrony (masking) and dense supervision at each step were critical for generalization (Xu et al., 18 Jun 2025).
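
One plausible implementation of the masking regularizer (the exact sampling scheme is an assumption of this sketch):

```python
import torch

def async_mask(batch: int, h: int, w: int, m_max: float = 0.75) -> torch.Tensor:
    """Per-sample asynchronous update mask: draw a drop probability m
    uniformly from [0, m_max], then keep each cell's update independently
    with probability 1 - m. Apply inside the step as s <- s + ds * mask."""
    m = torch.rand(batch, 1, 1, 1) * m_max    # per-sample mask probability
    return (torch.rand(batch, 1, h, w) >= m).float()
```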

6. Empirical Performance and Model Variants

On the non-grid-resizing portion of the ARC public set (262 tasks), solve rates ranged from 6.5% (EngramNCA v1) to 12.9% (EngramNCA v3), surpassing or matching ChatGPT 4.5’s approximate 10.3% leaderboard score. Taking the union of multiple candidate solutions yielded up to 14.8% perfect matches, and relaxing thresholds to allow partial matches increased success rates to 24% across all variants. In NCA approaches without Engram mechanisms, perfect ARC task coverage reached 13.4% on a subset of 172 tasks; 95 tasks achieved a final cross-entropy loss $< 0.01$, and 48 tasks attained $\ge 90\%$ output-pixel accuracy (Guichard et al., 13 May 2025, Xu et al., 18 Jun 2025).

Across the variants, asynchronous update schedules, layer-wise normalization, dense supervision, and the use of learned local feature filters were most impactful for scaling and robustness. Models failed systematically on tasks demanding global scene coordination, dynamic grid resizing, or novel colors.

| Variant | Solve Rate | Mean $\log \mathcal{L}$ | Cost (USD/task) |
|---|---|---|---|
| NCA | 10.7% | $-4.3$ | $3 \times 10^{-4}$ |
| EngramNCA v3 | 12.9% | $-4.35$ | $4 \times 10^{-4}$ |
| EngramNCA v4 | 10.3% | $-4.2$ | $5 \times 10^{-4}$ |
| GPT-4.5 | 10.3% | n/a | $0.29$ |

7. Extensions, Limitations, and Broader Significance

ARC-NCA demonstrates strengths in developmental induction, emergent primitive discovery, and extreme cost efficiency relative to transformer-based program synthesis. Limitations include grid-size rigidity, sporadic fine-granularity errors, and the absence of a universal pre-trained automaton covering all tasks. Suggested extensions include pretraining NCAs on synthetic transformations, hybrid LLM–NCA architectures, latent-space or multi-scale NCAs, and the addition of attention or gating mechanisms. A plausible implication is that morphogenetic computation and decentralized local rules may offer scalable, data-lean approaches to abstraction and reasoning in artificial intelligence, distinguished from brute-force program search and deep learning extrapolation (Guichard et al., 13 May 2025, Xu et al., 18 Jun 2025).

In combinatorial game theory, ARC-NCA marks the complexity-theoretic boundary between structured and non-structured graphs for subtraction games and codifies the connection between symmetry detection and isomorphism hardness. Future directions include extending polynomial-time solvable cases, characterizing finer winning conditions, and formalizing developmental computation principles both in AI and games.
