
Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs (1909.06161v2)

Published 13 Sep 2019 in cs.CV, cs.LG, cs.NE, eess.IV, and q-bio.NC

Abstract: Deep convolutional artificial neural networks (ANNs) are the leading class of candidate models of the mechanisms of visual processing in the primate ventral stream. While initially inspired by brain anatomy, over the past years, these ANNs have evolved from a simple eight-layer architecture in AlexNet to extremely deep and branching architectures, demonstrating increasingly better object categorization performance, yet bringing into question how brain-like they still are. In particular, typical deep models from the machine learning community are often hard to map onto the brain's anatomy due to their vast number of layers and missing biologically-important connections, such as recurrence. Here we demonstrate that better anatomical alignment to the brain and high performance on machine learning as well as neuroscience measures do not have to be in contradiction. We developed CORnet-S, a shallow ANN with four anatomically mapped areas and recurrent connectivity, guided by Brain-Score, a new large-scale composite of neural and behavioral benchmarks for quantifying the functional fidelity of models of the primate ventral visual stream. Despite being significantly shallower than most models, CORnet-S is the top model on Brain-Score and outperforms similarly compact models on ImageNet. Moreover, our extensive analyses of CORnet-S circuitry variants reveal that recurrence is the main predictive factor of both Brain-Score and ImageNet top-1 performance. Finally, we report that the temporal evolution of the CORnet-S "IT" neural population resembles the actual monkey IT population dynamics. Taken together, these results establish CORnet-S, a compact, recurrent ANN, as the current best model of the primate ventral visual stream.

Authors (14)
  1. Jonas Kubilius (5 papers)
  2. Martin Schrimpf (18 papers)
  3. Kohitij Kar (7 papers)
  4. Ha Hong (3 papers)
  5. Najib J. Majaj (6 papers)
  6. Rishi Rajalingham (2 papers)
  7. Elias B. Issa (3 papers)
  8. Pouya Bashivan (15 papers)
  9. Jonathan Prescott-Roy (1 paper)
  10. Kailyn Schmidt (1 paper)
  11. Aran Nayebi (22 papers)
  12. Daniel Bear (3 papers)
  13. Daniel L. K. Yamins (26 papers)
  14. James J. DiCarlo (19 papers)
Citations (239)

Summary

  • The paper demonstrates that CORnet-S, modeled after four primate ventral stream areas with recurrent connectivity, achieves top Brain-Score performance and 73.1% top-1 accuracy on ImageNet.
  • The paper highlights that incorporating recurrent connections enables the model to capture temporal dynamics similar to the primate IT cortex, emphasizing the role of feedback in visual processing.
  • The paper shows that a compact 15-layer architecture can effectively balance anatomical fidelity with computational efficiency, paving the way for bio-inspired deep learning models.

Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs

The paper "Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs" addresses the burgeoning intersection of neuroscience and artificial intelligence by introducing CORnet-S, a shallow and recurrent artificial neural network (ANN) model that mirrors certain anatomical aspects of the primate visual cortex. The paper proposes that aligning deep learning architectures more closely with neuroanatomical structures can lead to models that not only perform well on object recognition tasks but also exhibit functional similarities to biological brains.

Key Findings and Methodology

CORnet-S is structured around four computational areas analogous to the ventral visual stream in primate brains—V1, V2, V4, and IT—in contrast to the typically deeper architectures prevalent in machine learning. The model emphasizes recurrent connectivity, reflecting the inferred functional importance of recurrence in visual processing. Utilizing Brain-Score, a comprehensive benchmark composed of neural and behavioral data from primates, the paper evaluates the predictive fidelity of the model in comparison to actual brain activity.
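The core architectural idea can be illustrated with a toy sketch: each area applies the same weights repeatedly over a few timesteps, so effective depth comes from recurrence rather than from stacking extra layers. The update rule, shapes, and step count below are illustrative assumptions, not the paper's exact block (the actual CORnet-S areas are convolutional with batch normalization).

```python
import numpy as np

def recurrent_area(x, w_in, w_rec, n_steps=2):
    """Toy sketch of one CORnet-S-style area: feedforward drive plus
    feedback from the area's own previous state, reusing the same
    weights at every timestep."""
    h = np.zeros(w_rec.shape[0])
    states = []
    for _ in range(n_steps):
        # Recurrent update followed by a ReLU nonlinearity.
        h = np.maximum(0.0, w_in @ x + w_rec @ h)
        states.append(h.copy())
    return states

rng = np.random.default_rng(0)
x = rng.normal(size=8)           # input feature vector (illustrative size)
w_in = rng.normal(size=(4, 8))   # feedforward weights into the area
w_rec = rng.normal(size=(4, 4))  # within-area recurrent weights

states = recurrent_area(x, w_in, w_rec, n_steps=3)
```

Chaining four such areas (standing in for V1, V2, V4, and IT) yields a model whose unrolled computation graph is deep even though its parameter count stays small.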

  1. Performance and Predictivity: CORnet-S outperforms other models on Brain-Score while maintaining competitive performance on ImageNet, with a top-1 accuracy of 73.1%, and achieves best-in-class neural predictivity of ventral stream activity.
  2. Recurrent Connectivity: The investigation highlights recurrence as a critical feature for aligning model dynamics with neural data. The recurrence structure allows CORnet-S to capture temporal dynamics of primate inferior temporal (IT) cortex activity, a capability beyond feedforward-only models.
  3. Architectural Compactness: With a depth of only 15 layers, CORnet-S achieves a pragmatic balance between anatomical simplicity and computational performance, demonstrating that shallow models can effectively capture brain-like processing without sacrificing accuracy.
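Brain-Score aggregates several neural and behavioral benchmarks into a single figure of merit. As a minimal sketch of that idea, a composite can be formed by averaging per-benchmark scores; the benchmark names and values below are purely illustrative, not the official Brain-Score identifiers or weighting.

```python
def composite_score(scores):
    """Unweighted mean of per-benchmark scores (illustrative; the real
    Brain-Score composite has its own benchmark set and aggregation)."""
    return sum(scores.values()) / len(scores)

benchmarks = {
    "V4_neural_predictivity": 0.60,   # hypothetical values
    "IT_neural_predictivity": 0.55,
    "behavioral_consistency": 0.50,
}

overall = composite_score(benchmarks)
```

Averaging across both neural and behavioral benchmarks is what prevents a model from scoring well by fitting only one brain area or only behavior.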

Implications and Future Directions

The implications of this work are multifaceted. Practically, CORnet-S exemplifies how biologically informed design can yield efficient architectures that generalize well across datasets, as evidenced by its competitive transfer learning performance on CIFAR-100. Theoretically, it reinforces the potential for neuroscience to inspire robust deep learning models by incorporating structural and functional insights from biological systems.

As the field of AI continues to draw inspiration from the mechanisms of the brain, future research may aim to incorporate additional neuroanatomical details, such as modeling lateral geniculate nucleus processing or implementing biologically plausible learning mechanisms. Enhancing the anatomical fidelity of these models holds promise for deeper insights into visual processing and has the potential to spur advancements in brain-machine interfaces and neuroprosthetics.

In essence, CORnet-S demonstrates that bridging the gap between artificial and neural networks is not only feasible but beneficial. As computational power and neuroscientific understanding advance, models akin to CORnet-S will likely play a pivotal role in shaping the future landscape of bio-inspired AI.
