
Probing LLMs for Joint Encoding of Linguistic Categories (2310.18696v1)

Published 28 Oct 2023 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs exhibit impressive performance on a range of NLP tasks, due to the general-purpose linguistic knowledge acquired during pretraining. Existing model interpretability research (Tenney et al., 2019) suggests that a linguistic hierarchy emerges in the LLM layers, with lower layers better suited to solving syntactic tasks and higher layers employed for semantic processing. Yet, little is known about how encodings of different linguistic phenomena interact within the models and to what extent processing of linguistically related categories relies on the same, shared model representations. In this paper, we propose a framework for testing the joint encoding of linguistic categories in LLMs. Focusing on syntax, we find evidence of joint encoding both at the same (related part-of-speech (POS) classes) and different (POS classes and related syntactic dependency relations) levels of the linguistic hierarchy. Our cross-lingual experiments show that the same patterns hold across languages in multilingual LLMs.
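The abstract's central tool is the linguistic probe: a lightweight classifier that tests whether a category (such as a POS class) is linearly decodable from a model's hidden representations. The toy sketch below illustrates the idea on synthetic vectors; it is not the paper's exact framework, and all dimensions, class names, and the "shared direction" construction are assumptions for illustration. In a real experiment, the vectors would be hidden states extracted from an LLM layer.

```python
import numpy as np

# Hypothetical toy illustration of a linear probe for a POS distinction.
# Real experiments would probe hidden states from an LLM layer; here we
# fabricate vectors so the sketch is self-contained.

rng = np.random.default_rng(0)
d, n = 32, 200  # assumed hidden size and examples per class

# Two related POS classes (e.g. common vs. proper nouns) built from a
# shared latent "nominal" direction plus class-specific directions --
# the kind of joint encoding the paper probes for.
shared = rng.normal(size=d)
spec_a, spec_b = rng.normal(size=d), rng.normal(size=d)

X = np.vstack([
    shared + spec_a + 0.5 * rng.normal(size=(n, d)),  # class A tokens
    shared + spec_b + 0.5 * rng.normal(size=(n, d)),  # class B tokens
])
y = np.repeat([0, 1], n)

# Closed-form ridge-regression probe: a linear read-out of the label
# from the representation. High accuracy means the distinction is
# linearly decodable from these vectors.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ (2 * y - 1))
acc = ((X @ w > 0).astype(int) == y).mean()
print(f"probe accuracy: {acc:.2f}")
```

The paper's joint-encoding tests go further than this single probe, examining whether probes for related categories rely on overlapping representation subspaces, both within one level of the hierarchy (related POS classes) and across levels (POS classes and dependency relations).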

Authors (7)
  1. Giulio Starace (4 papers)
  2. Konstantinos Papakostas (4 papers)
  3. Rochelle Choenni (17 papers)
  4. Apostolos Panagiotopoulos (5 papers)
  5. Matteo Rosati (26 papers)
  6. Alina Leidinger (8 papers)
  7. Ekaterina Shutova (52 papers)
Citations (3)