RuSentEval: Linguistic Source, Encoder Force! (2103.00573v2)

Published 28 Feb 2021 in cs.CL

Abstract: The success of pre-trained transformer LLMs has generated a great deal of interest in how these models work and what they learn about language. However, prior research in the field is mainly devoted to English, and little is known about other languages. To this end, we introduce RuSentEval, an enhanced set of 14 probing tasks for Russian, including ones that have not been explored yet. We apply a combination of complementary probing methods to explore the distribution of various linguistic properties in five multilingual transformers for two typologically contrasting languages -- Russian and English. Our results provide intriguing findings that contradict the common understanding of how linguistic knowledge is represented, and demonstrate that some properties are learned in a similar manner despite the language differences.

Authors (4)
  1. Vladislav Mikhailov (31 papers)
  2. Ekaterina Taktasheva (8 papers)
  3. Elina Sigdel (2 papers)
  4. Ekaterina Artemova (53 papers)
Citations (5)