
Quantifying the Contextualization of Word Representations with Semantic Class Probing (2004.12198v2)

Published 25 Apr 2020 in cs.CL

Abstract: Pretrained language models have achieved a new state of the art on many NLP tasks, but there are still many open questions about how and why they work so well. We investigate the contextualization of words in BERT. We quantify the amount of contextualization, i.e., how well words are interpreted in context, by studying the extent to which semantic classes of a word can be inferred from its contextualized embeddings. Quantifying contextualization helps in understanding and utilizing pretrained language models. We show that top layer representations achieve high accuracy inferring semantic classes; that the strongest contextualization effects occur in the lower layers; that local context is mostly sufficient for semantic class inference; and that top layer representations are more task-specific after finetuning while lower layer representations are more transferable. Finetuning uncovers task-related features, but pretrained knowledge is still largely preserved.
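The probing setup the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' code: real probing would extract per-layer BERT hidden states, whereas here synthetic vectors stand in for contextualized embeddings, and a simple logistic-regression probe (trained with plain gradient descent) predicts a word's semantic class from its vector.

```python
import math
import random

random.seed(0)
DIM = 8  # toy embedding dimensionality (BERT-base uses 768)

def synth_embedding(cls):
    # Stand-in for a contextualized embedding from one BERT layer:
    # a class-dependent mean plus Gaussian noise.
    center = 1.0 if cls == 1 else -1.0
    return [center + random.gauss(0, 0.5) for _ in range(DIM)]

# Build a small labeled set of (embedding, semantic class) pairs.
data = [(synth_embedding(c), c) for c in [0, 1] * 100]
random.shuffle(data)
train, test = data[:150], data[150:]

# Linear (logistic-regression) probe trained by gradient descent.
w = [0.0] * DIM
b = 0.0
lr = 0.1
for _ in range(50):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))   # predicted P(class = 1)
        g = p - y                    # gradient of the log loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Probe accuracy on held-out embeddings: the quantity the paper uses
# to measure how much semantic-class information a layer encodes.
acc = sum(predict(x) == y for x, y in test) / len(test)
print(f"probe accuracy: {acc:.2f}")
```

In the paper's actual setting, one such probe is trained per BERT layer, and comparing accuracies across layers quantifies where contextualization happens.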

Authors (4)
  1. Mengjie Zhao (35 papers)
  2. Philipp Dufter (21 papers)
  3. Yadollah Yaghoobzadeh (34 papers)
  4. Hinrich Schütze (250 papers)
Citations (27)
