- The paper demonstrates that AI twins can approximate individual neuron and synapse functions, forming the basis for universally representing brain signals.
- It employs universal approximation theorems to model biological signal transmissions with arbitrarily small error, underscoring the feasibility of AI-based brain replicas.
- The study suggests that understanding the brain's energy-efficient, unidirectional signaling can inspire next-gen low-energy AI algorithms and advanced brain-computer interfaces.
This paper, "Artificial Intelligence without Restriction Surpassing Human Intelligence with Probability One: Theoretical Insight into Secrets of the Brain with AI Twins of the Brain" (2412.06820), addresses whether artificial intelligence can surpass human intelligence, and proposes a theoretical framework that uses AI to understand the human brain at the level of its most basic components.
The authors argue that while the human brain is incredibly complex, breakthroughs in understanding its "secrets" and predicting AI's potential can come from a divide-and-conquer approach focused on its fundamental building blocks: biological neurons and synapses. They identify four key properties of these components:
- Neurons and synapses are the brain's two fundamental communication components.
- They exhibit unidirectional signal transmission.
- They are alternately connected in sequence.
- Neurons follow the "all-or-none law" (firing completely or not at all).
The paper posits that despite the complexity of detailed biophysical models (like the Hodgkin-Huxley model) and neuron dynamics, the essential signal transferring function of individual neurons and the neurotransmission relationship within synapses can be considered (piecewise) continuous functions.
Leveraging established universal approximation theorems for artificial neural networks (like Single-Hidden Layer Feedforward Networks, SLFNs), the paper theoretically demonstrates that:
- Any single neuron's signal transferring relationship can be universally approximated by an artificial neural network with any expected small error (Theorem 1).
- Any single synapse's neurotransmission relationship can also be universally approximated by an artificial neural network with any expected small error (Theorem 2).
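The flavor of Theorems 1 and 2 can be illustrated numerically. The sketch below fits a single-hidden-layer feedforward network (SLFN) to a hypothetical sigmoidal neuron transfer curve using random tanh features and a least-squares solve for the output weights; the target function, weight scales, and hidden-layer size are all illustrative assumptions, not models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one neuron's signal-transferring curve: a
# sigmoidal firing-rate response to input (an assumption for illustration).
def neuron_transfer(x):
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))

# SLFN with random hidden weights; output weights solved by least squares.
n_hidden = 200
W = rng.normal(0, 4, n_hidden)   # hidden input weights
b = rng.normal(0, 4, n_hidden)   # hidden biases

def hidden(x):
    return np.tanh(np.outer(x, W) + b)   # (n_samples, n_hidden) features

x_train = np.linspace(0, 1, 400)
beta, *_ = np.linalg.lstsq(hidden(x_train), neuron_transfer(x_train), rcond=None)

# Measure the worst-case error on a finer grid than the training points.
x_test = np.linspace(0, 1, 1000)
err = np.max(np.abs(hidden(x_test) @ beta - neuron_transfer(x_test)))
print(f"max approximation error: {err:.2e}")
```

Growing `n_hidden` drives the error down further, which is the practical content of the universal approximation guarantee.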
The authors introduce the concept of "all-or-none smoothness piecewise continuous functions" which extends the neuron's all-or-none law to synapses and composite systems. A key theoretical finding (Theorem 3) is that combining such functions preserves this property. Since the brain and its regions/subsystems are sequentially constructed by unidirectional neurons and synapses (which are all-or-none smoothness piecewise continuous), the entire brain and its subsystems also possess this property (Theorem 4).
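A small numerical check conveys the closure idea behind Theorem 3. The toy components below (thresholds, shapes, and definitions are illustrative assumptions, not the paper's formal construction) are each zero below a threshold ("none") and smooth above it ("all"); composing them yields a function with the same structure.

```python
import numpy as np

# Toy "all-or-none" components: zero below a threshold, a smooth curve above.
def neuron(x, theta=0.3):
    return np.where(x < theta, 0.0, np.tanh(x - theta))

def synapse(y, theta=0.1):
    return np.where(y < theta, 0.0, (y - theta) ** 2)

x = np.linspace(0, 2, 2001)
out = synapse(neuron(x))

# The composite is again all-or-none: exactly zero on an initial interval,
# then a smooth piece -- the closure property the composition theorem asserts.
onset = x[out > 0][0]
print(f"composite is silent below x ≈ {onset:.2f} and smooth above it")
```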
This leads to the central theoretical contribution: the "Brain-AI-Representation Theorem" (Theorem 5). This theorem states that the human brain and any of its subsystems, constructed by sequentially linked neurons and synapses, can be represented and universally approximated by "AI twins" (artificial neural networks or other AI components) with any expected small error. This is achieved by replacing each biological neuron and synapse with a corresponding AI component that approximates its function.
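The component-by-component replacement argument can be sketched as follows: fit one small approximator ("AI twin") per stage of a neuron–synapse–neuron chain, then compare the chained twins against the original chain end to end. The stage functions and fitting scheme (random tanh features plus least squares) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical biological chain: neuron -> synapse -> neuron.
stages = [
    lambda x: np.tanh(3 * x),          # neuron transfer
    lambda x: x / (0.5 + np.abs(x)),   # synapse neurotransmission
    lambda x: np.tanh(2 * x + 0.1),    # next neuron
]

def fit_twin(f, n_hidden=100):
    """Fit one 'AI twin' per component: random tanh features + least squares."""
    W = rng.normal(0, 3, n_hidden)
    b = rng.normal(0, 3, n_hidden)
    xs = np.linspace(-1, 1, 300)
    beta, *_ = np.linalg.lstsq(np.tanh(np.outer(xs, W) + b), f(xs), rcond=None)
    return lambda x: np.tanh(np.outer(np.atleast_1d(x), W) + b) @ beta

twins = [fit_twin(f) for f in stages]

# Run the same inputs through the biological chain and the chain of twins.
x = np.linspace(-1, 1, 500)
bio, ai = x, x
for f, t in zip(stages, twins):
    bio, ai = f(bio), t(ai)
end_to_end_err = np.max(np.abs(bio - ai))
print(f"end-to-end error of the chained AI twins: {end_to_end_err:.2e}")
```

Because each stage's error is small and each stage is Lipschitz on the relevant range, the composed error stays small, which is the intuition behind the Brain-AI-Representation Theorem.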
Based on this theoretical capacity for AI twins to approximate the brain, the paper argues that unrestricted AI could surpass human intelligence with probability one, as its title asserts. This potential is further amplified by factors driving AI's exponential growth:
- More advanced algorithms and architectures.
- Access to significantly larger and different types of data sources (beyond human senses).
- Vastly greater potential computing power through interconnected systems, specialized chips, and emerging technologies like quantum computing.
- Integration with smart materials (neuromorphic, photonics, etc.).
- Deployment of a massive number of AI agents.
- Autonomous, continuous knowledge exchange and inheritance (unlike slower human generational knowledge transfer).
The paper offers insights into potential brain mechanisms by contrasting them with AI practices:
- Error Backpropagation (BP): The authors suggest that the brain likely doesn't use the standard BP algorithm for parameter tuning because it requires bidirectional information flow over connections, whereas biological neuron/synapse transmission is primarily unidirectional. The high energy cost of BP compared to the brain's efficiency is also cited as a factor. Instead, brain adaptation and "parameter tuning" might occur over evolutionary timescales through natural selection across generations.
- Neuron Spiking: The paper interprets spiking as a form of frequency modulation, which is known to be more efficient and noise-resistant than amplitude modulation over long distances, explaining its suitability for signal transmission across vast neural networks.
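The frequency-versus-amplitude point admits a compact numerical sketch. Over a channel with unknown attenuation (a stand-in for a long axon; the channel model and all numbers are illustrative assumptions, not from the paper), an amplitude-coded level is corrupted directly, while all-or-none spikes still clear a threshold, so the rate survives.

```python
import numpy as np

rng = np.random.default_rng(3)

value = 0.6                       # quantity to transmit, in [0, 1]
T = 1000                          # time steps
gain = 0.7                        # unknown attenuation along the "axon"
noise = rng.normal(0, 0.1, T)     # additive channel noise

# Amplitude modulation: the level itself carries the value, so the unknown
# attenuation corrupts the decoded estimate directly.
amp_decoded = (gain * value + noise).mean()

# Frequency modulation: all-or-none spikes fire with probability `value` per
# step; attenuated spikes still clear a threshold, so the *rate* survives.
spikes = (rng.random(T) < value).astype(float)
received = gain * spikes + noise
freq_decoded = np.mean(received > 0.5 * received.max())

print(f"amplitude decoding error: {abs(amp_decoded - value):.3f}")
print(f"frequency decoding error: {abs(freq_decoded - value):.3f}")
```

The thresholding step is what regenerates the signal: amplitude information is lost to attenuation, but spike counts are not, which is the noise-resistance argument the paper makes for spiking.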
Practical Implementation and Applications:
The theoretical findings open several doors for practical application and research:
- AI Twins for Neuroscience Research: Instead of relying solely on complex mathematical models of neuron dynamics, AI twins can be used to model individual neurons and synapses at the cellular level. By building bottom-up models from these AI components ("BrainAIC"), researchers could potentially analyze the dynamics and functioning of brain regions and systems more efficiently. This requires detailed structural information about brain connectivity, potentially enabled by advancements in nanoscale scanning (like the petavoxel reconstruction mentioned).
- Collaborative Research: The approach suggests a framework for worldwide, interdisciplinary teams to model different types of neurons, synapses, and functional subsystems of the brain concurrently using AI techniques.
- Energy-Efficient AI: Understanding the brain's energy efficiency, potentially linked to its evolutionary optimization and the absence of energy-intensive processes like BP, could inspire the development of new, low-energy AI algorithms and hardware.
- Controllable and Explainable AI: By using AI to model and understand the brain's perception and cognition functions, it may be possible to develop AI techniques that are more controllable, explainable, and possess reasoning capabilities inspired by biological processes.
- Brain Illness Solutions: The most speculative, but potentially impactful, application lies in using AI twins to model and potentially replace malfunctioning biological neurons and synapses. The paper suggests that nanometer-sized AI chips mimicking the function of biological components could theoretically substitute for abnormal components and restore physiological function in damaged brain regions. This depends heavily on ethical considerations, on the ability to accurately scan brain structure at the cellular level, and on precisely identifying gain or loss of function at the component level.
The paper concludes by positioning "Intelligence" (including biological and artificial) as a new scientific subject, where AI's exponential growth could lead to discoveries about nature's principles, similar to how mathematics and physics operate. It strongly emphasizes the critical need for appropriate governance and restrictions on AI development to mitigate potential existential risks, particularly highlighting computational power and model scale as potentially controllable factors.