Grounded learning for compositional vector semantics (2401.06808v1)
Abstract: Categorical compositional distributional semantics is an approach to modelling language that combines the success of vector-based models of meaning with the compositional power of formal semantics. However, this approach was developed without an eye to cognitive plausibility. Vector representations of concepts and concept binding are also of interest in cognitive science, and have been proposed as a way of representing concepts within a biologically plausible spiking neural network. This work proposes a way for compositional distributional semantics to be implemented within a spiking neural network architecture, with the potential to address problems in concept binding, and gives a small implementation. We also describe a means of training word representations using labelled images.
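As a rough illustration of the two ideas the abstract combines, the sketch below composes a toy transitive sentence in the categorical compositional distributional (DisCoCat) style and binds role/filler pairs with circular convolution, a common vector-symbolic binding operation. All dimensions, vectors, and the choice of binding operation are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed setup, not the paper's implementation): DisCoCat-style
# sentence composition plus vector-symbolic concept binding, using NumPy.
import numpy as np

rng = np.random.default_rng(0)
n, s = 4, 3                      # assumed noun-space and sentence-space dimensions

# Word representations: nouns as vectors in N, a transitive verb as a tensor in N (x) S (x) N.
dogs  = rng.normal(size=n)
cats  = rng.normal(size=n)
chase = rng.normal(size=(n, s, n))

# "dogs chase cats": contract the verb tensor with subject and object vectors,
# giving a vector in the sentence space S.
sentence = np.einsum('i,isj,j->s', dogs, chase, cats)
print(sentence.shape)            # (3,)

# Concept binding in a vector-symbolic style (e.g. circular convolution, as used
# in holographic reduced representations / semantic pointers): role bound to filler.
def bind(a, b):
    """Circular convolution of two equal-length vectors."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

subj_role, obj_role = rng.normal(size=n), rng.normal(size=n)
bound = bind(subj_role, dogs) + bind(obj_role, cats)   # one vector encoding both bindings
```

The tensor contraction realises the "compositional power of formal semantics" over vector representations, while the binding step is the kind of operation proposed for representing structured concepts in spiking neural architectures.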