Completely Neural Computers (CNCs)
- CNCs are computational machines where every component—processor, memory, control, and I/O—is implemented via neural substrates, enabling universal programmability.
- They leverage memory-augmented architectures and modular neural components (e.g., Neurocoder, MNC, NHC) to dynamically select, compose, and execute algorithms.
- Empirical studies show that CNC designs enhance continual learning, precise algorithmic generalization, and the integration of neuro-inspired and conventional approaches.
A Completely Neural Computer (CNC) is a computational machine in which all components—including the processor, memory, control logic, and I/O—are instantiated or governed by neural substrates or artificial neural networks. Unlike conventional computers, in which discrete modules are engineered and explicit programs are encoded, CNCs aim for universal computation, programmable storage, control flow, and modularity, all within a homogeneous neural or neuro-inspired substrate. Achieving a CNC requires the integration of memory-augmented neural architectures, compositional program representation, explicit reprogrammability, and, in some physical instantiations, even direct realization in biological neuronal circuits. CNCs are an emerging machine form, with multiple engineering paradigms and substantial open challenges in stability, programmability, symbolic reasoning, and efficiency (Le et al., 2020, Leon, 4 Mar 2026, Tanneberg et al., 2021, Zhuge et al., 7 Apr 2026, Basso et al., 2024).
1. Defining Properties and Theoretical Foundation
CNCs must satisfy stringent functional criteria:
- Turing completeness: The architecture must support unbounded computation. Formally, for every Turing machine M, there exists an initial runtime state s_M such that the neural transition function exactly simulates M (Zhuge et al., 7 Apr 2026).
- Universal programmability: New routines can be installed by input sequences that transform the CNC's state s into a new state s′ realizing the desired behavior, efficiently programming it with new capabilities (Zhuge et al., 7 Apr 2026, Le et al., 2020).
- Behavioral consistency and update governance: Installed capabilities persist unless explicitly modified via a well-defined update interface, enforcing a clear “run/update” contract for persistent functional safety (Zhuge et al., 7 Apr 2026).
- Compositional modularity: Computation and memory are not monolithic but are assembled through the dynamic selection, composition, and reuse of modular neural programs or components (Le et al., 2020, Leon, 4 Mar 2026).
- Full neural substrate: All program logic, memory management, control flow, and I/O are realized via neural representations—either artificial (ANNs) or biological (neurons-on-a-chip) (Basso et al., 2024).
CNCs are distinct from conventional “neural Turing machines” or “differentiable computers” in that the entire machine—including control, program selection, and interface—emerges from learned or physically instantiated neural computation, without symbolic hand-coded components (Tanneberg et al., 2021, Le et al., 2020, Zhuge et al., 7 Apr 2026).
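The run/update contract described above can be made concrete with a minimal sketch. The class and method names below (`NeuralRuntime`, `run`, `install`) are illustrative assumptions, not an interface from the cited papers; the point is that execution is side-effect free while behavioral change flows only through an explicit update path:

```python
import numpy as np

class NeuralRuntime:
    """Toy stand-in for a CNC runtime with governed updates (hypothetical API)."""

    def __init__(self, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.state = rng.standard_normal(dim)   # runtime state s
        self.programs = {}                      # installed capabilities

    def run(self, name, x):
        """Execute an installed program; never mutates installed programs."""
        W = self.programs[name]
        return np.tanh(W @ x)                   # pure, read-only execution

    def install(self, name, W):
        """Explicit update interface: the only path that alters behavior."""
        self.programs[name] = np.asarray(W, dtype=float)

rt = NeuralRuntime()
rt.install("prog", np.eye(8))
y1 = rt.run("prog", np.ones(8))
y2 = rt.run("prog", np.ones(8))
assert np.allclose(y1, y2)   # behavioral consistency across repeated runs
```

The separation mirrors the "run/update" contract: repeated `run` calls are deterministic given the installed programs, and only `install` changes future behavior.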
2. Architectural Instantiations
CNCs admit diverse realizations, exemplified by:
- Neurocoder: Implements a CNC atop existing neural networks by modularizing weight matrices into “singular-program” memories (left/right singular vectors and singular values), then using a neural controller to dynamically select and compose relevant programs via multi-head attention and low-rank synthesis. This enables task-conditioned dynamic reconfiguration of the main network, effective continual learning, and procedural neural programming (Le et al., 2020).
- Modular Neural Computer (MNC): Realizes CNCs by expressing deterministic algorithms as modular MLP subcircuits controlled by an MLP-based controller, explicit scalar memory, and one-hot gating for functional modules. All control flow is internalized through gating signals and memory-manipulating modules; algorithms such as in-place sorting, array minimum, or A* search are compiled into fixed neural graphs (Leon, 4 Mar 2026).
- Neural Harvard Computer (NHC): Enforces the separation of “algorithmic” (controller, bus, memory) and “data” (input, ALU) streams, using dual memories with hard neuralized read/write heads and a neural controller. The architecture is trained by evolutionary strategies to generalize algorithms, transfer between domains, and maintain program abstraction throughout computation (Tanneberg et al., 2021).
- Biological CNCs (neurons-on-a-chip): Constructs digital logic gates, memory latches, and sequential circuits from engineered neuronal cultures, using specific wiring topologies, tuned synaptic weights, and buffering to synchronize logic and memory. Core gates (NAND, AND-NOT, NOT), SR and D flip-flops are realized with spike-based signaling and energy-constrained spiking neuron models, demonstrating full memory and compositional logic within a living neural substrate (Basso et al., 2024).
- Neural Computers (NC) as Video-Driven CNCs: Proposes CNCs as unified neural models whose latent runtime state internalizes computation, memory, and I/O, demonstrated via models that generate interactive video outputs (CLI/GUI) from instructions and actions. The long-term agenda targets fully persistent, programmable, and stable neural runtime machines (Zhuge et al., 7 Apr 2026).
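The MNC's one-hot gated control flow can be sketched in a few lines. The modules and controller below are toy placeholders (the actual MNC uses learned and compiled MLP subcircuits, per Leon, 4 Mar 2026); the sketch shows how a hard one-hot gate routes a scalar memory through exactly one functional module per execution phase:

```python
import numpy as np

def module_inc(mem):  return mem + 1.0      # increment module
def module_neg(mem):  return -mem           # negate module
def module_noop(mem): return mem            # pass-through module

MODULES = [module_inc, module_neg, module_noop]

def controller(step):
    """Stand-in controller: emits a hard one-hot gate over modules.
    (In the MNC this is an MLP conditioned on memory contents.)"""
    gate = np.zeros(len(MODULES))
    gate[step % len(MODULES)] = 1.0
    return gate

def execute(mem, n_steps):
    for t in range(n_steps):
        gate = controller(t)
        # Gated sum: with a one-hot gate, exactly one module fires per phase.
        mem = sum(g * m(mem) for g, m in zip(gate, MODULES))
    return mem

print(execute(0.0, 2))   # phase 0 increments, phase 1 negates: -1.0
```

Because the gate is exactly one-hot, the gated sum reduces to a single module call, which is what makes deterministic, provable execution possible in the MNC setting.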
3. Program Representation, Storage, and Execution
CNCs employ neural mechanisms for program encoding, selection, and dynamic execution:
- Stored Neural Programs as Data: In Neurocoder, programs are decomposed via SVD into singular-vector slots, stored in external memory, and indexed either by key-based attention (“content addressing”) or usage-based selectors. At each timestep, the controller emits queries and interpolation gates, producing a bespoke weight matrix for execution. Residual integration allows recovery of higher-rank details beyond the slot structure (Le et al., 2020).
- Modular Gated Execution: In MNC and NHC, computation unfolds as execution phases, with one-hot gates activating distinct MLP modules. All control flow is itself embedded in neural vectors and module outputs; explicit if/else statements are replaced by selective gating and addressable memory manipulations (Leon, 4 Mar 2026, Tanneberg et al., 2021).
- Explicit Memory and Hard Read/Write Interfaces: CNCs may utilize explicit, sometimes content-addressable, external memory slots. In NHC, dual memories (control and data) are manipulated through hard neural interfaces (learned or analytically determined), with linkages tracking allocation, temporal, and ancestry relationships to support recursion and abstraction (Tanneberg et al., 2021).
- Procedural and Recursive Composition: Program controllers (e.g., LSTMs in Neurocoder) run recurrent attention loops that function as differentiable program counters, emitting a sequence of program “calls” (slot selects, module activations) that can, in principle, implement arbitrary computation and recursive routines (Le et al., 2020, Tanneberg et al., 2021).
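The Neurocoder-style synthesis step can be illustrated with a minimal sketch: content-based attention over stored singular-vector slots composes a low-rank weight matrix W = Σₖ aₖ sₖ uₖvₖᵀ for the working network. Slot contents and the query below are random placeholders, not learned values from Le et al. (2020):

```python
import numpy as np

rng = np.random.default_rng(0)
n_slots, d_out, d_in, d_key = 4, 5, 3, 6

U = rng.standard_normal((n_slots, d_out))   # left singular-vector slots
V = rng.standard_normal((n_slots, d_in))    # right singular-vector slots
s = np.abs(rng.standard_normal(n_slots))    # singular values per slot
keys = rng.standard_normal((n_slots, d_key))  # content-addressing keys

def compose_program(query):
    """Content-based attention selects slots; their outer products sum to W."""
    scores = keys @ query
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                      # softmax over program slots
    # Rank-limited synthesis of a task-conditioned weight matrix.
    return sum(a * sv * np.outer(u, v) for a, sv, u, v in zip(attn, s, U, V))

W = compose_program(rng.standard_normal(d_key))
x = rng.standard_normal(d_in)
print(W.shape, (W @ x).shape)   # (5, 3) (5,)
```

A new query at each timestep yields a different W, which is the sense in which the controller "reprograms" the working network on the fly.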
The following table summarizes core program storage and execution strategies in leading CNC architectures:
| CNC Paradigm | Program Representation | Storage | Execution & Control Logic |
|---|---|---|---|
| Neurocoder (Le et al., 2020) | Modular SVD-based “singular programs” | Slot-based ext. memory | LSTM controller, multi-head attention |
| MNC (Leon, 4 Mar 2026) | Hand-coded MLP submodules | Scalar key-value ext. memory | One-hot module gating, controller MLP |
| NHC (Tanneberg et al., 2021) | Learned controller, fixed ALU | Dual hard-addressed memory | Evolutionary-learned neural control |
| Bio-CNC (Basso et al., 2024) | Neuronal gates & latches | Persistent neural activity | Spiking dynamics, synaptic inhibition |
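A toy model conveys how inhibitory wiring can yield a NAND gate in a spiking substrate, in the spirit of the Bio-CNC row above. The neuron model, weights, and time constants here are illustrative assumptions, not the engineered cultures or parameters of Basso et al. (2024): a tonically driven leaky integrate-and-fire neuron fires by default, one inhibitory input cannot silence it, but both together can.

```python
def lif_nand(a, b, steps=50, tau=0.9, w_in=-0.12, bias=0.3, thresh=1.0):
    """Leaky integrate-and-fire neuron with tonic drive and two inhibitory
    inputs. Fires (logical 1) unless both inputs are active: NAND."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = tau * v + bias + w_in * (a + b)   # leak + tonic drive + inhibition
        if v >= thresh:
            spikes += 1
            v = 0.0                           # reset after spike
    return spikes > 0                         # any output spike => logical 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(lif_nand(a, b)))      # full NAND truth table
```

The steady-state membrane potential is drive/(1 − tau); the weights are chosen so it clears threshold with zero or one active input but not with two.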
4. Empirical Performance and Capabilities
CNC designs have demonstrated:
- Continual learning and modularity: Neurocoder-equipped networks mitigate catastrophic forgetting, outperforming Elastic Weight Consolidation (EWC), Synaptic Intelligence (SI), and Neural Stored-program Memory (NSM) by 5–15% accuracy in Split MNIST/CIFAR tasks, while improving task performance in both supervised and reinforcement learning regimes (e.g., Atari A3C scores: 1.5k–3k vs. zero for non-modular baselines) (Le et al., 2020).
- Exactness and deterministic execution: MNC compiles classical algorithms into neural substrates with exact, deterministic output for variable-length inputs, supporting formal proofs of behavior (e.g., in-place sorting, array minimum, A* search) (Leon, 4 Mar 2026).
- Algorithmic generalization: NHC demonstrates perfect generalization on 8/11 algorithmic tasks, with 73.3–87.7% perfect runs on sorting problems extending to over a million steps, and error-free transfer across domains (arithmetic, Sokoban, Boolean algebra) (Tanneberg et al., 2021).
- Biological scalability and energy efficiency: Spiking neuron CNCs achieve full truth-table coverage for logic and memory circuits with metabolic cost remaining within physiological bounds (energy proxy ε = 0.5–0.8 a.u.), although clock rates (~10 Hz) are 10⁸× slower than CMOS (Basso et al., 2024).
- Current limitations: Neural Computers as video models have learned basic I/O alignment and short-horizon control, but routine reuse, long-horizon execution stability, explicit update governance, and robust symbolic reasoning remain open challenges (Zhuge et al., 7 Apr 2026).
5. Implementation Challenges and Open Problems
Several obstacles and frontiers confront CNC research:
- Stability and symbolic reasoning: Pure neural architectures struggle to maintain routine reuse and stability over long horizons; symbolic modules or discrete reasoning primitives may need to be integrated for robust computation (Zhuge et al., 7 Apr 2026).
- Programmability and compilation: Automating “neural compilation”—the translation of new algorithms into modular neural components—remains unsolved; manual decomposition is required in current MNCs (Leon, 4 Mar 2026).
- Resource efficiency: Soft attention over a memory of N slots costs O(N) per access; scaling to massive memories requires sparse or hashed addressing (Leon, 4 Mar 2026). Biological implementations face physical wiring and synchrony constraints (Basso et al., 2024).
- Separation of run and update: Enforcing persistent program installation versus ephemeral working memory (e.g., via explicit gating and update protocols) is needed for deployable, auditable CNCs (Zhuge et al., 7 Apr 2026).
- Hardware–software co-design: Machine-native CNCs that operate over unified tensor representations may enable continuous, differentiable interfaces across all functional modules and I/O, blurring distinctions between code and runtime state (Zhuge et al., 7 Apr 2026).
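The resource-efficiency point above can be made concrete with two illustrative memory readers (neither is an API from the cited work): a dense softmax read that scores every slot, O(N), and a top-k read that mixes only k slots after selection.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, k = 1024, 16, 4
memory = rng.standard_normal((N, d))   # N content slots
keys = rng.standard_normal((N, d))     # addressing keys, one per slot

def dense_read(query):
    scores = keys @ query                    # O(N) similarity computation
    w = np.exp(scores - scores.max())
    w /= w.sum()                             # softmax over all N slots
    return w @ memory                        # weighted sum touches every slot

def sparse_read(query):
    scores = keys @ query
    top = np.argpartition(scores, -k)[-k:]   # keep only the k best slots
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    return w @ memory[top]                   # weighted sum over k slots only

q = rng.standard_normal(d)
print(dense_read(q).shape, sparse_read(q).shape)   # both (16,)
```

When attention is sharply peaked, the top-k read closely approximates the dense read while replacing the O(N) mixing step with O(k); hashed addressing pushes the scoring step itself below O(N).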
6. Future Directions and Outlook
CNCs represent an emerging computational paradigm with the potential to unify the strengths of neural learning, modularity, and universal programmability. Roadmaps highlight:
- Scaling to formal universality: By expanding memory substrates (larger context windows, explicit dynamic memory), CNCs are expected to achieve Turing completeness and support compositional neural programs that can be installed, invoked, and reused (Zhuge et al., 7 Apr 2026).
- Explicit programming interfaces: Advances are needed in update/run separation, versioning, and audit trails to support persistent, safe program installation and modification (Zhuge et al., 7 Apr 2026).
- Integration of neuro-symbolic and discrete modules: Incorporation of hard, exact symbolic logic within neural architectures could resolve barriers in reasoning, consistency, and execution fidelity (Zhuge et al., 7 Apr 2026, Tanneberg et al., 2021).
- Biological realization and sustainability: Neuron-based CNCs offer carbon-neutral computing possibilities and hybrid architectures with silicon, though high speeds and scalable, robust interfacing are yet to be resolved (Basso et al., 2024).
- Learning from program traces: Treating user I/O traces, workflows, and interaction logs as executable specifications may accelerate in-context CNC programming and drive meta-learning of new computational primitives (Zhuge et al., 7 Apr 2026).
A plausible implication is that CNCs, if realized at scale with robust programmability and stability, could serve as a foundational machine form, subsuming roles now split between explicit programming, AI agents, and conventional symbolic computers (Zhuge et al., 7 Apr 2026).