Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming (2405.09508v2)
Abstract: This study evaluates the performance of Recurrent Neural Network (RNN) and Transformer models in replicating cross-language structural priming, a key indicator of abstract grammatical representations in human language processing. Focusing on Chinese-English priming, which involves two typologically distinct languages, we examine how these models handle the robust phenomenon of structural priming, where exposure to a particular sentence structure increases the likelihood of selecting a similar structure subsequently. Our findings indicate that transformers outperform RNNs in generating primed sentence structures, with accuracy rates that exceed those of RNNs by 25.84% to 33.33%. This challenges the conventional belief that human sentence processing primarily involves recurrent and immediate processing, and it suggests a role for cue-based retrieval mechanisms. This work contributes to our understanding of how computational models may reflect human cognitive processes across diverse language families.
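The core measurement behind such evaluations can be made concrete with a small sketch: a structural priming effect is typically quantified by comparing the probability a language model assigns to a target structure after a structure-matching prime versus after a non-matching prime. The snippet below is a minimal illustration of that idea, not the authors' implementation; the model name (`gpt2`) and the example prime/target sentences are placeholder assumptions (a multilingual model and Chinese primes would be needed for the cross-language case studied here).

```python
# Minimal sketch (not the authors' code) of probing structural priming in a
# causal language model: compare the log-probability of an English target
# structure (e.g., a double-object dative) after a structure-matching prime
# versus a non-matching prime. Model name and sentences are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "gpt2"  # placeholder; a multilingual model is needed for Chinese primes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def target_logprob(prime: str, target: str) -> float:
    """Sum of token log-probabilities of `target` conditioned on `prime`."""
    prime_ids = tokenizer(prime, return_tensors="pt").input_ids
    full_ids = tokenizer(prime + " " + target, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probabilities at each position for predicting the next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_tokens = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(next_tokens.size(0)), next_tokens]
    # Keep only the positions belonging to the target continuation
    # (assumes the prime tokenizes to a prefix of the concatenation,
    # which holds approximately for BPE tokenizers like GPT-2's).
    target_start = prime_ids.size(1) - 1
    return token_lp[target_start:].sum().item()


# Hypothetical prime/target pair (English stand-ins for Chinese primes):
do_prime = "The teacher gave the student a book."      # double-object prime
po_prime = "The teacher gave a book to the student."   # prepositional-object prime
target = "The woman showed the boy a picture."          # double-object target

priming_effect = target_logprob(do_prime, target) - target_logprob(po_prime, target)
print(f"DO-over-PO priming effect (log-prob difference): {priming_effect:.3f}")
```

A positive difference would indicate that the model assigns more probability to the double-object target after a double-object prime, which is the model-based analogue of the behavioral priming effect described above.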