A Brain-inspired Computational Model for Human-like Concept Learning (2401.06471v1)

Published 12 Jan 2024 in cs.AI

Abstract: Concept learning is a fundamental aspect of human cognition and plays a critical role in mental processes such as categorization, reasoning, memory, and decision-making. Researchers across various disciplines have shown consistent interest in the process of concept acquisition in individuals. To elucidate the mechanisms involved in human concept learning, this study examines the findings from computational neuroscience and cognitive psychology. These findings indicate that the brain's representation of concepts relies on two essential components: multisensory representation and text-derived representation. These two types of representations are coordinated by a semantic control system, ultimately leading to the acquisition of concepts. Drawing inspiration from this mechanism, the study develops a human-like computational model for concept learning based on spiking neural networks. By effectively addressing the challenges posed by diverse sources and imbalanced dimensionality of the two forms of concept representations, the study successfully attains human-like concept representations. Tests involving similar concepts demonstrate that our model, which mimics the way humans learn concepts, yields representations that closely align with human cognition.

Introduction

Concept learning is a cornerstone of human cognition, fundamental to activities such as categorization, reasoning, and decision-making. Within the brain's knowledge structure, two pillars support concept learning: multisensory representation and text-derived representation. Their interplay is coordinated by a semantic control system in the brain, yielding a nuanced and adaptive learning process.

Mimicking Human Cognition

The authors build a brain-inspired computational model that uses spiking neural networks to replicate this interplay. The model addresses two practical obstacles: the two forms of concept representation come from different sources, and their dimensionalities are substantially imbalanced. Handling both brings the model's representations into closer alignment with the way humans process and understand concepts.
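
Since spiking networks consume spike trains rather than dense vectors, inputs from both sources first need a temporal encoding. Below is a minimal sketch of one common scheme, Poisson-style rate coding; the paper's exact encoding may differ, and the function name, time-step count, and dimensions are illustrative assumptions.

```python
import numpy as np

def poisson_spike_train(features, n_steps=100, rng=None):
    """Encode a real-valued feature vector as a binary spike train.

    Each feature is min-max normalized to [0, 1] and used as the
    per-step firing probability of one input neuron (Poisson-style
    rate coding), one standard way to feed non-spiking data into an SNN.
    This is an illustrative sketch, not the paper's encoding.
    """
    rng = np.random.default_rng(rng)
    f = np.asarray(features, dtype=float)
    span = f.max() - f.min()
    rates = (f - f.min()) / span if span > 0 else np.zeros_like(f)
    # spikes[t, i] == 1 with probability rates[i] at each time step
    return (rng.random((n_steps, f.size)) < rates).astype(np.int8)

# A 300-d text embedding and a 50-d sensory vector become spike trains
# of the same temporal length despite their different widths.
text_spikes = poisson_spike_train(np.random.randn(300))
sensory_spikes = poisson_spike_train(np.random.randn(50))
print(text_spikes.shape, sensory_spikes.shape)  # (100, 300) (100, 50)
```

A shared temporal axis is what lets representations of very different widths interact downstream in the same spiking substrate.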

Computational Model Structure

Integrating insights from computational neuroscience and cognitive psychology, the model comprises three modules: a multisensory information processing module, a text-derived information processing module, and a semantic cooperation module. Each mimics a specific cognitive function, corresponding respectively to the brain's multimodal experiential system, linguistic system, and semantic control system, to construct human-like concept representations.
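
As a rough illustration of this three-part layout (not the paper's actual architecture), the sketch below uses fixed random projections in place of trained SNN modules and a simple weighted sum in place of the semantic control mechanism; all names, dimensions, and the fusion rule are assumptions.

```python
import numpy as np

class ConceptLearner:
    """Toy stand-in for the three-module layout: two modality-specific
    projections play the roles of the multisensory and text-derived
    processing modules, and a weighted combination plays the role of
    the semantic cooperation module."""

    def __init__(self, sensory_dim, text_dim, shared_dim=64, seed=0):
        rng = np.random.default_rng(seed)
        # Random projections equalize the imbalanced dimensionality of
        # the two sources; the paper trains spiking modules instead.
        self.W_sense = rng.standard_normal((shared_dim, sensory_dim)) / np.sqrt(sensory_dim)
        self.W_text = rng.standard_normal((shared_dim, text_dim)) / np.sqrt(text_dim)

    def represent(self, sensory_vec, text_vec, alpha=0.5):
        s = self.W_sense @ sensory_vec   # multisensory module output
        t = self.W_text @ text_vec       # text-derived module output
        z = alpha * s + (1 - alpha) * t  # semantic cooperation: fuse
        return z / np.linalg.norm(z)

learner = ConceptLearner(sensory_dim=50, text_dim=300)
concept = learner.represent(np.random.randn(50), np.random.randn(300))
print(concept.shape)  # (64,)
```

The design point the sketch preserves is that fusion happens in a shared space after each modality has been processed separately, mirroring the division of labor the paper attributes to the brain.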

Model Evaluation and Future Prospects

Evaluations using similar-concept tests show that the model generates representations aligned with human cognition. The interplay between sensory and text-based representations echoes the complexity of human thought, demonstrating the model's nuanced grasp of concepts. While this research introduces a prototype that closely follows the human thought process, future work on cognitive mechanisms and dataset mapping, among other directions, could further advance brain-inspired artificial intelligence.
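
A common way to run such a test, sketched below with toy data, is to compare model cosine similarities against human similarity ratings via rank correlation; the word pairs, ratings, and vectors here are invented, though benchmarks such as SimLex-999 provide real ones.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two concept vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins: model vectors for a few words plus human similarity
# ratings on a 0-10 scale (real benchmarks have exactly this shape).
rng = np.random.default_rng(0)
model_vecs = {w: rng.standard_normal(64) for w in ["cat", "dog", "car", "truck"]}
pairs = [("cat", "dog", 7.6), ("car", "truck", 8.1), ("cat", "car", 1.2)]

model_scores = [cosine(model_vecs[a], model_vecs[b]) for a, b, _ in pairs]
human_scores = [h for _, _, h in pairs]

# High rank correlation means the model orders concept pairs by
# similarity the same way human raters do.
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman rho = {rho:.2f}")
```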

Authors (2)
  1. Yuwei Wang
  2. Yi Zeng