
Artificial Intelligence without Restriction Surpassing Human Intelligence with Probability One: Theoretical Insight into Secrets of the Brain with AI Twins of the Brain (2412.06820v1)

Published 4 Dec 2024 in cs.AI

Abstract: AI has apparently become one of the most important techniques discovered by humans in history, while the human brain is widely recognized as one of the most complex systems in the universe. One fundamental critical question which would affect human sustainability remains open: Will AI evolve to surpass human intelligence in the future? This paper shows that, in theory, new AI twins with fresh cellular-level AI techniques for neuroscience could approximate the brain and its functioning systems (e.g. perception and cognition functions) with any expected small error, and AI without restrictions could surpass human intelligence with probability one in the end. This paper indirectly proves the validity of the conjecture made by Frank Rosenblatt 70 years ago about the potential capabilities of AI, especially in the realm of artificial neural networks. Intelligence is just one of the fortuitous but sophisticated creations of nature which has not been fully discovered. Like mathematics and physics, with no restrictions artificial intelligence would lead to a new subject with its self-contained systems and principles. We anticipate that this paper opens new doors for 1) AI twins and other AI techniques to be used in cellular-level efficient neuroscience dynamic analysis, functioning analysis of the brain, and brain-illness solutions; 2) a new worldwide collaborative scheme for interdisciplinary teams concurrently working on and modelling different types of neurons and synapses and different levels of functioning subsystems of the brain with AI techniques; 3) development of low-energy AI techniques with the aid of fundamental neuroscience properties; and 4) new controllable, explainable and safe AI techniques with reasoning capabilities for discovering principles in nature.

Summary

  • The paper demonstrates that AI twins can approximate individual neuron and synapse functions, forming the basis for universally representing brain signals.
  • It employs universal approximation theorems to model biological signal transmissions with arbitrarily small error, underscoring the feasibility of AI-based brain replicas.
  • The study suggests that understanding the brain's energy-efficient, unidirectional signaling can inspire next-gen low-energy AI algorithms and advanced brain-computer interfaces.

This paper, "Artificial Intelligence without Restriction Surpassing Human Intelligence with Probability One: Theoretical Insight into Secrets of the Brain with AI Twins of the Brain" (2412.06820), addresses the fundamental question of whether artificial intelligence can surpass human intelligence and proposes a novel theoretical framework using AI to understand the human brain at a fundamental level.

The authors argue that while the human brain is incredibly complex, breakthroughs in understanding its "secrets" and predicting AI's potential can come from a divide-and-conquer approach focused on its fundamental building blocks: biological neurons and synapses. They identify four key properties of these components:

  1. They are the two fundamental communication components.
  2. They exhibit unidirectional signal transmission.
  3. They are alternately connected in sequence.
  4. Neurons follow the "all-or-none law" (firing completely or not at all).

The paper posits that despite the complexity of detailed biophysical models (like the Hodgkin-Huxley model) and neuron dynamics, the essential signal transferring function of individual neurons and the neurotransmission relationship within synapses can be considered (piecewise) continuous functions.

Leveraging established universal approximation theorems for artificial neural networks (like Single-Hidden Layer Feedforward Networks, SLFNs), the paper theoretically demonstrates that:

  • Any single neuron's signal transferring relationship can be universally approximated by an artificial neural network with any expected small error (Theorem 1).
  • Any single synapse's neurotransmission relationship can also be universally approximated by an artificial neural network with any expected small error (Theorem 2).
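The flavor of these approximation results can be illustrated with a minimal sketch: a Single-Hidden Layer Feedforward Network with randomly assigned hidden parameters, whose output weights are solved by least squares, fitting a piecewise-continuous "transfer function." The target function, network size, and parameter scales below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neuron transfer function: piecewise continuous with an
# "all-or-none" style threshold at x = 0.3 (an illustrative stand-in,
# not a model from the paper).
def neuron_response(x):
    return np.where(x < 0.3, 0.0, np.tanh(4.0 * (x - 0.3)))

# Sample the input range densely for training.
X = np.linspace(0.0, 1.0, 400).reshape(-1, 1)
y = neuron_response(X).ravel()

# SLFN with randomly assigned hidden weights/biases; only the output
# weights are fitted, via ordinary least squares.
n_hidden = 200
W = rng.normal(scale=8.0, size=(1, n_hidden))
b = rng.uniform(-8.0, 8.0, size=n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights

approx = H @ beta
max_err = np.max(np.abs(approx - y))
print(f"max approximation error: {max_err:.4f}")
```

Increasing `n_hidden` drives the worst-case error down further, mirroring the "any expected small error" guarantee the theorems provide.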

The authors introduce the concept of "all-or-none smoothness piecewise continuous functions" which extends the neuron's all-or-none law to synapses and composite systems. A key theoretical finding (Theorem 3) is that combining such functions preserves this property. Since the brain and its regions/subsystems are sequentially constructed by unidirectional neurons and synapses (which are all-or-none smoothness piecewise continuous), the entire brain and its subsystems also possess this property (Theorem 4).

This leads to the central theoretical contribution: the "Brain-AI-Representation Theorem" (Theorem 5). This theorem states that the human brain and any of its subsystems, constructed by sequentially linked neurons and synapses, can be represented and universally approximated by "AI twins" (artificial neural networks or other AI components) with any expected small error. This is achieved by replacing each biological neuron and synapse with a corresponding AI component that approximates its function.
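One way to see why component-wise replacement yields a small overall error (a hedged sketch of the intuition, not the paper's proof) is a standard error-propagation bound: assuming each AI twin $\hat f_i$ approximates its biological component $f_i$ within sup-norm error $\varepsilon_i$ and is $L_i$-Lipschitz, the error of the sequential composition is bounded by

```latex
\left\| \hat f_n \circ \cdots \circ \hat f_1 - f_n \circ \cdots \circ f_1 \right\|_\infty
  \;\le\; \sum_{i=1}^{n} \Bigl( \prod_{j=i+1}^{n} L_j \Bigr)\, \varepsilon_i ,
```

so driving each per-component error $\varepsilon_i$ small enough makes the whole-system error arbitrarily small.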

Based on this theoretical capacity for AI twins to approximate the brain, the paper argues that unrestricted AI could surpass human intelligence with probability one. This potential is further amplified by factors driving AI's exponential growth:

  • More advanced algorithms and architectures.
  • Access to significantly larger and different types of data sources (beyond human senses).
  • Vastly greater potential computing power through interconnected systems, specialized chips, and emerging technologies like quantum computing.
  • Integration with smart materials (neuromorphic, photonics, etc.).
  • Deployment of a massive number of AI agents.
  • Autonomous, continuous knowledge exchange and inheritance (unlike slower human generational knowledge transfer).

The paper offers insights into potential brain mechanisms by contrasting them with AI practices:

  • Error Backpropagation (BP): The authors suggest that the brain likely doesn't use the standard BP algorithm for parameter tuning because it requires bidirectional information flow over connections, whereas biological neuron/synapse transmission is primarily unidirectional. The high energy cost of BP compared to the brain's efficiency is also cited as a factor. Instead, brain adaptation and "parameter tuning" might occur over evolutionary timescales through natural selection across generations.
  • Neuron Spiking: The paper interprets spiking as a form of frequency modulation, which is known to be more efficient and noise-resistant than amplitude modulation over long distances, explaining its suitability for signal transmission across vast neural networks.
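The frequency-modulation reading of spiking can be made concrete with a toy rate-coding sketch: a value is carried by spike count per unit time, so noise on spike amplitudes does not corrupt the message. The encoding scheme, rates, and noise model below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_rate(value, duration=1.0, max_rate=100.0):
    """Emit evenly spaced spike times at a frequency proportional to
    `value` in [0, 1] (toy rate/frequency coding)."""
    n_spikes = int(round(value * max_rate * duration))
    return np.linspace(0.0, duration, n_spikes, endpoint=False)

def decode_rate(spike_times, duration=1.0, max_rate=100.0):
    """Recover the value from the spike count alone, ignoring amplitudes."""
    return len(spike_times) / (max_rate * duration)

value = 0.42
spikes = encode_rate(value)

# Amplitude noise (e.g. attenuation along a long axon) perturbs spike
# heights but leaves the spike count, and hence the decoded value, intact.
noisy_amplitudes = 1.0 + rng.normal(scale=0.5, size=len(spikes))

decoded = decode_rate(spikes)
print(decoded)  # → 0.42
```

An amplitude-coded scheme would instead read the message off `noisy_amplitudes` directly, so the same noise would degrade it, which is the efficiency/robustness contrast the paper draws.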

Practical Implementation and Applications:

The theoretical findings open several doors for practical application and research:

  1. AI Twins for Neuroscience Research: Instead of relying solely on complex mathematical models of neuron dynamics, AI twins can be used to model individual neurons and synapses at the cellular level. By building bottom-up models from these AI components ("BrainAIC"), researchers could potentially analyze the dynamics and functioning of brain regions and systems more efficiently. This requires detailed structural information about brain connectivity, potentially enabled by advancements in nanoscale scanning (like the petavoxel reconstruction mentioned).
  2. Collaborative Research: The approach suggests a framework for worldwide, interdisciplinary teams to model different types of neurons, synapses, and functional subsystems of the brain concurrently using AI techniques.
  3. Energy-Efficient AI: Understanding the brain's energy efficiency, potentially linked to its evolutionary optimization and the absence of energy-intensive processes like BP, could inspire the development of new, low-energy AI algorithms and hardware.
  4. Controllable and Explainable AI: By using AI to model and understand the brain's perception and cognition functions, it may be possible to develop AI techniques that are more controllable, explainable, and possess reasoning capabilities inspired by biological processes.
  5. Brain Illness Solutions: The most speculative, but potentially impactful, application lies in using AI twins to model and potentially replace malfunctioning biological neurons and synapses. The paper suggests that nanometer-sized AI chips, mimicking the function of biological components, could theoretically be used to restore normal function in, or mimic the physiological function of, damaged brain regions. This depends heavily on ethical considerations, the ability to accurately scan brain structure at the cellular level, and precise identification of gain/loss of function at the component level.

The paper concludes by positioning "Intelligence" (including biological and artificial) as a new scientific subject, where AI's exponential growth could lead to discoveries about nature's principles, similar to how mathematics and physics operate. It strongly emphasizes the critical need for appropriate governance and restrictions on AI development to mitigate potential existential risks, particularly highlighting computational power and model scale as potentially controllable factors.
