Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models (2205.02023v3)

Published 4 May 2022 in cs.CL

Abstract: The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in the absence of any explicit supervision. However, it remains unclear how these models learn to generalise across languages. In this work, we conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar. In particular, we investigate whether morphosyntactic information is encoded in the same subset of neurons in different languages. We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe. Our findings show that the cross-lingual overlap between neurons is significant, but its extent may vary across categories and depends on language proximity and pre-training data size.
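The central quantity in the abstract, the cross-lingual overlap between neurons relevant to a morphosyntactic category, can be illustrated with a minimal sketch. This is not the paper's actual probe: it assumes hypothetical per-language neuron importance scores (e.g., produced by some probe for a single category) and simply compares the top-k neurons selected for each language pair.

```python
import numpy as np

def top_k_neurons(importance_scores, k=50):
    """Return the indices of the k highest-scoring neurons."""
    return set(np.argsort(importance_scores)[-k:])

def neuron_overlap(scores_lang_a, scores_lang_b, k=50):
    """Fraction of the top-k neurons shared between two languages."""
    top_a = top_k_neurons(scores_lang_a, k)
    top_b = top_k_neurons(scores_lang_b, k)
    return len(top_a & top_b) / k

# Hypothetical importance scores for 768 hidden units per language,
# standing in for the output of a neuron-level probe trained to
# predict one morphosyntactic category (e.g., number or tense).
rng = np.random.default_rng(0)
scores = {lang: rng.random(768) for lang in ["en", "de", "tr"]}

for a in scores:
    for b in scores:
        if a < b:
            print(a, b, f"overlap@50 = {neuron_overlap(scores[a], scores[b]):.2f}")
```

Under this toy setup the random scores yield chance-level overlap; the paper's finding is that real probes over mBERT-style representations show overlap well above chance, modulated by category, language proximity, and pre-training data size.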

Authors (5)
  1. Karolina Stańczak (17 papers)
  2. Edoardo Ponti (11 papers)
  3. Lucas Torroba Hennigen (14 papers)
  4. Ryan Cotterell (226 papers)
  5. Isabelle Augenstein (131 papers)
Citations (9)