
What can the brain teach us about building artificial intelligence? (1909.01561v1)

Published 4 Sep 2019 in cs.AI and q-bio.NC

Abstract: This paper is the preprint of an invited commentary on Lake et al.'s Behavioral and Brain Sciences article titled "Building machines that learn and think like people". Lake et al.'s paper offers a timely critique of the recent accomplishments in artificial intelligence from the vantage point of human intelligence, and provides insightful suggestions about research directions for building more human-like intelligence. Since we agree with most of the points raised in that paper, we will offer a few points that are complementary.

Insights from Neuroscience for Building Artificial Intelligence

The paper "What can the brain teach us about building artificial intelligence?" authored by Dileep George provides a complementary perspective to Lake et al.'s notable work on building human-like intelligence in machines. George's commentary explores the importance of incorporating insights from neuroscience alongside cognitive science to inform the development of AGI.

Enhancing Learning through Inductive Biases

A central theme of the paper is the significance of inductive biases in AI. While universal optimization algorithms represent an ideal form of learning, practical implementations necessitate inductive biases to streamline learning and inference processes. Human brains demonstrate remarkable efficiency and versatility, largely attributed to evolutionary inductive biases. The paper posits that these biases are implicitly integrated into the biological substrates (proteins, cells) used by evolution. Emulating these well-tuned inductive biases in AI could offer robust and efficient learning mechanisms suitable for AGI.

George critiques the notion of evolving intelligence without assumptions, arguing that such an approach is not only challenging but potentially infeasible. He instead emphasizes the pragmatic approach of learning from the human brain's inductive biases, while acknowledging that some of these assumptions might be relaxed as optimization algorithms improve.

Neuroscience Contributions Beyond Cognitive Science

While Lake et al. emphasize cognitive science, George underscores the added value of examining neuroscience data. He points to specific neural structures and their functions, such as spatial lateral connections in the visual cortex, which aid in enforcing contour continuity, and the factorization of contours and surfaces which allows the recognition and imagination of objects with non-prototypical appearances. These biological functions present potential models or inspirations for generative models in AI. George advocates for a deeper exploration of cortical representations and inference dynamics as they offer valuable templates for developing AGI.

Rethinking 'Human-Level Performance' Benchmarks

The paper challenges the utility of 'human-level performance' as a benchmark for AGI. It criticizes superficial success claims in AI publications, such as the 'human-level' performance of deep Q-learning agents in Atari games, which often fail to account for generalization and robustness against minor perturbations of the environment. George recommends that 'human-level' benchmarks instead focus on criteria such as learning from few examples, generalization to diverse distributions, and adaptability to new tasks and queries.

The Role of Message-Passing Algorithms

George extends the discussion on structured probabilistic models by highlighting the relevance of message-passing (MP) algorithms for inference, an area only briefly touched upon by Lake et al. Unlike MCMC, which is noted for its asymptotic guarantees, MP algorithms offer faster inference, consistent with the speed observed in cortical processing. Despite lacking theoretical guarantees, MP algorithms have shown practical efficacy and align with evidence from cortical activity patterns. Integrating MP algorithms in AI systems could achieve a balance between MCMC’s robustness and the rapid inference capabilities of neural networks.
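To make the contrast concrete, the core idea of message passing can be illustrated with the sum-product algorithm on a tiny chain-structured model. This is a generic textbook sketch, not the specific architecture George proposes: the potentials `phi` and `psi` are made-up numbers, and the three-variable binary chain is chosen only so the exact marginals can be checked by hand. The point is that two linear sweeps of local messages recover the same marginals that naive enumeration over all joint states would, which is why MP inference can be so much faster than sampling.

```python
import itertools

# Illustrative potentials for a binary chain x0 - x1 - x2 (values are arbitrary).
phi = [[0.6, 0.4], [0.5, 0.5], [0.3, 0.7]]   # unary potentials phi_i(x_i)
psi = [[1.0, 0.5], [0.5, 1.0]]               # pairwise potential, favors agreement
n = 3

# Forward sweep: m_f[i][b] sums over the left neighbor's states.
m_f = [[1.0, 1.0] for _ in range(n)]
for i in range(1, n):
    m_f[i] = [sum(phi[i - 1][a] * m_f[i - 1][a] * psi[a][b] for a in (0, 1))
              for b in (0, 1)]

# Backward sweep: symmetric, messages flow right to left.
m_b = [[1.0, 1.0] for _ in range(n)]
for i in range(n - 2, -1, -1):
    m_b[i] = [sum(phi[i + 1][b] * m_b[i + 1][b] * psi[a][b] for b in (0, 1))
              for a in (0, 1)]

# Marginals from local products of incoming messages: cost is linear in n,
# instead of the 2**n terms a brute-force sum over joint states would need.
marginals = []
for i in range(n):
    unnorm = [phi[i][x] * m_f[i][x] * m_b[i][x] for x in (0, 1)]
    z = sum(unnorm)
    marginals.append([v / z for v in unnorm])
```

On trees such as this chain the result is exact; on loopy graphs, message passing loses its guarantees but, as George notes, often remains effective in practice.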

Implications and Future Directions

The insights offered by George’s commentary carry practical and theoretical implications for the field of AI. Practically, understanding and emulating the brain's inductive biases and inference mechanisms might accelerate the development of robust and versatile AGI systems. Theoretically, delineating the limits of human intelligence versus fundamental algorithmic constraints provides a clearer roadmap for future AI research.

Future developments in AI could benefit profoundly from a multidisciplinary approach that incorporates detailed biological insights. As optimization algorithms advance, selectively relaxing the biological inductive biases while retaining efficiency and robustness might bridge the gap between current AI systems and AGI.

In summary, George’s paper advocates for a nuanced approach to AI development, leveraging both cognitive science and neuroscience insights to build systems that mirror the adaptability and efficiency of human intelligence. This perspective highlights the critical role of inductive biases, the need for more meaningful benchmarks, and the potential of MP algorithms, contributing to the ongoing dialogue on the roadmap to AGI.

Author: Dileep George