On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations (2101.11492v2)

Published 27 Jan 2021 in cs.CL

Abstract: The adaptation of pretrained language models to solve supervised tasks has become a baseline in NLP, and many recent works have focused on studying how linguistic information is encoded in the pretrained sentence representations. Among other information, it has been shown that entire syntax trees are implicitly embedded in the geometry of such models. As these models are often fine-tuned, it becomes increasingly important to understand how the encoded knowledge evolves during fine-tuning. In this paper, we analyze the evolution of the embedded syntax trees along the fine-tuning process of BERT for six different tasks, covering all levels of the linguistic structure. Experimental results show that the encoded syntactic information is forgotten (PoS tagging), reinforced (dependency and constituency parsing) or preserved (semantics-related tasks) in different ways along the fine-tuning process, depending on the task.
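
The claim that syntax trees are "implicitly embedded in the geometry" of BERT's representations is usually tested with a structural probe in the style of Hewitt and Manning (2019): a learned linear projection is trained so that squared distances between projected token vectors approximate distances in the gold parse tree. The sketch below is a minimal illustration of that idea, not the paper's exact setup; it assumes the `torch` and `transformers` packages, and the model name, probe rank, and gold distance matrix are placeholders.

```python
# Minimal structural-probe sketch (after Hewitt & Manning, 2019).
# Gold tree distances are supplied by the caller; here they are a placeholder.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
bert = AutoModel.from_pretrained("bert-base-cased")

def token_embeddings(sentence: str) -> torch.Tensor:
    """Return BERT's last-layer embeddings for each input token (including specials)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = bert(**inputs)
    return out.last_hidden_state.squeeze(0)  # (seq_len, hidden_dim)

class StructuralProbe(torch.nn.Module):
    """Linear map B such that ||B(h_i - h_j)||^2 approximates tree distance d(i, j)."""
    def __init__(self, hidden_dim: int, probe_rank: int = 128):
        super().__init__()
        self.proj = torch.nn.Linear(hidden_dim, probe_rank, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        z = self.proj(h)                        # (seq_len, rank)
        diff = z.unsqueeze(1) - z.unsqueeze(0)  # (seq_len, seq_len, rank)
        return (diff ** 2).sum(-1)              # predicted squared distances

# One training step: L1 loss between predicted and gold tree distances.
h = token_embeddings("The cat sat on the mat .")
probe = StructuralProbe(hidden_dim=h.size(-1))
gold = torch.zeros(h.size(0), h.size(0))        # hypothetical gold tree-distance matrix
loss = torch.nn.functional.l1_loss(probe(h), gold)
loss.backward()
```

Probing fine-tuned checkpoints in this way, at successive points of training, is what lets one ask whether the encoded syntactic structure is forgotten, reinforced, or preserved for a given downstream task.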

Authors (4)
  1. Laura Pérez-Mayos (3 papers)
  2. Roberto Carlini (2 papers)
  3. Miguel Ballesteros (70 papers)
  4. Leo Wanner (10 papers)
Citations (6)