Neural Computers: Unifying Computation, Memory, and Interface in a Single Learned Runtime
This presentation introduces Neural Computers, a radical new machine abstraction in which a trainable neural system unifies computation, memory, and I/O in a single learned runtime state. Unlike conventional computers with separate programs and memory, or AI agents that merely interface with existing systems, Neural Computers embody the entire running computer as a persistent neural latent state. We explore two working prototypes—CLIGen for terminal interfaces and GUIWorld for desktop environments—that demonstrate high-fidelity interface rendering and action-conditioned control, while examining the substantial challenges that remain on the path toward truly general-purpose Completely Neural Computers.

Script
What if the computer you're using right now—its memory, its processor, its entire interface—could be replaced by a single neural network that learns to be a computer? That's the radical proposition behind Neural Computers, where all of computation collapses into one trainable, persistent state.
Neural Computers aren't just another differentiable memory architecture. The model is the computer. At each step, a persistent latent state accumulates the full executable context—your terminal buffer, your desktop state, everything—and generates the next interface frame conditioned on your actions.
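The step loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the layer names, dimensions, and linear stand-ins are hypothetical and are not taken from the paper's actual architecture; in a trained Neural Computer each map would be a learned network.

```python
import numpy as np

# Hypothetical sketch of one Neural Computer tick (all names and shapes are
# assumptions). The persistent latent state plays the combined role of RAM,
# CPU registers, and framebuffer: it accumulates executable context and is
# decoded into the next interface frame after every user action.

STATE_DIM = 512          # assumed size of the persistent latent state
ACTION_DIM = 32          # assumed size of the encoded user action
FRAME_SHAPE = (64, 64)   # assumed interface-frame resolution

rng = np.random.default_rng(0)
W_state = rng.standard_normal((STATE_DIM, STATE_DIM)) * 0.01
W_action = rng.standard_normal((STATE_DIM, ACTION_DIM)) * 0.01
W_decode = rng.standard_normal((FRAME_SHAPE[0] * FRAME_SHAPE[1], STATE_DIM)) * 0.01

def step(state, action):
    """One tick: fold the action into the persistent state, then render."""
    # Stand-in linear maps so the data flow is visible; a real system would
    # use learned networks here.
    new_state = np.tanh(W_state @ state + W_action @ action)
    frame = (W_decode @ new_state).reshape(FRAME_SHAPE)  # next interface frame
    return new_state, frame

state = np.zeros(STATE_DIM)
for _ in range(3):                       # three simulated user actions
    action = rng.standard_normal(ACTION_DIM)
    state, frame = step(state, action)   # state persists across ticks
```

The key property the sketch captures is that nothing outside `state` survives between ticks: there is no separate program, file, or framebuffer, only the latent state and the decoder.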
The researchers built two working prototypes to test this vision in practice.
CLIGen handles terminal interfaces, generating readable text at practical font sizes; character accuracy reaches 54 percent, evidence that the model is learning to render rather than memorize. GUIWorld tackles full desktop environments, achieving near-perfect cursor control when actions are injected deep within the diffusion transformer blocks.
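The idea of injecting actions deep within the blocks, rather than only at the input, can be illustrated as follows. This is an assumption-level sketch: the block structure, the additive-bias conditioning scheme, and all names are hypothetical, not the paper's GUIWorld implementation.

```python
import numpy as np

# Hypothetical sketch of per-block action injection (names and the
# conditioning scheme are assumptions, not the GUIWorld architecture).

rng = np.random.default_rng(1)
D, TOKENS, BLOCKS = 64, 16, 4

W_mix = [rng.standard_normal((D, D)) * 0.05 for _ in range(BLOCKS)]
W_act = [rng.standard_normal((D, D)) * 0.05 for _ in range(BLOCKS)]

def block(x, action_emb, i):
    """One transformer-style block with the action injected as a bias.

    Re-injecting the action inside every block, instead of only at the
    input, is the design the narration credits for near-perfect cursor
    control: each layer can re-read the cursor/keyboard signal directly."""
    mixed = np.tanh(x @ W_mix[i])      # stand-in for self-attention + MLP
    act_bias = action_emb @ W_act[i]   # action conditioning, per block
    return x + mixed + act_bias        # residual path plus injected action

x = rng.standard_normal((TOKENS, D))       # noisy frame tokens
action_emb = rng.standard_normal((1, D))   # encoded user action
for i in range(BLOCKS):
    x = block(x, action_emb, i)            # action reaches every depth
```

By contrast, input-only conditioning would add `act_bias` once before the first block, forcing the signal to survive every subsequent layer unaided.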
This figure reveals a critical dependency: at practical font sizes around 13 pixels, the Neural Computer maintains crisp, readable terminal output. But shrink the font below 10 pixels, and fidelity degrades sharply. The system is learning to render interface state as images, not manipulating symbols—which brings us to the elephant in the room.
Here's the hard truth: current Neural Computers cannot reliably do arithmetic. Baseline accuracy on symbolic tasks sits at 4 percent. Feed them better prompts that hint at the answer, and performance jumps to 83 percent—but that's rendering answers, not computing them. The roadmap to true Completely Neural Computers demands breakthroughs in long-horizon consistency, persistent function installation, and genuine symbolic generalization.
Neural Computers propose a future where software is no longer code you write, but interactions you demonstrate—where the runtime itself is learned. The prototypes work, the vision is clear, but the gap between rendering interfaces and replacing conventional computers remains vast. Visit EmergentMind.com to explore this paper in depth and create your own research video.