BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance (1910.07181v3)

Published 16 Oct 2019 in cs.CL

Abstract: Pretraining deep language models has led to large performance gains in NLP. Despite this success, Schick and Schütze (2020) recently showed that these models struggle to understand rare words. For static word embeddings, this problem has been addressed by separately learning representations for rare words. In this work, we transfer this idea to pretrained language models: We introduce BERTRAM, a powerful architecture based on BERT that is capable of inferring high-quality embeddings for rare words that are suitable as input representations for deep language models. This is achieved by enabling the surface form and contexts of a word to interact with each other in a deep architecture. Integrating BERTRAM into BERT leads to large performance increases due to improved representations of rare and medium frequency words on both a rare word probing task and three downstream tasks.
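
The abstract describes using BERTRAM-inferred vectors as input representations for BERT. The sketch below is not the authors' code; it only illustrates, under that reading, how an externally inferred vector for a rare word could be wired into BERT's input embedding matrix using the Hugging Face `transformers` API. The function `infer_bertram_embedding` is a hypothetical placeholder standing in for BERTRAM's encoder that combines a word's surface form with its observed contexts.

```python
# Minimal sketch (assumptions noted in comments), not the authors' implementation:
# inject an inferred embedding for a rare word into BERT's input embedding matrix.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

rare_word = "kumquat"  # illustrative rare word, not taken from the paper


def infer_bertram_embedding(word, contexts):
    # Hypothetical placeholder: BERTRAM would let the word's surface form and
    # its contexts interact in a deep architecture to produce this vector.
    # Here we just return a random vector of the right dimensionality.
    return torch.randn(model.config.hidden_size)


# Register the rare word as a single token and grow the embedding matrix.
tokenizer.add_tokens([rare_word])
model.resize_token_embeddings(len(tokenizer))

# Overwrite the new token's row with the inferred embedding.
new_id = tokenizer.convert_tokens_to_ids(rare_word)
with torch.no_grad():
    model.get_input_embeddings().weight[new_id] = infer_bertram_embedding(
        rare_word, contexts=["I ate a kumquat for breakfast."]
    )

# The rare word is now encoded as one token backed by the inferred vector.
inputs = tokenizer("A kumquat is a small citrus fruit.", return_tensors="pt")
outputs = model(**inputs)
```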

Authors (2)
  1. Timo Schick (31 papers)
  2. Hinrich Schütze (250 papers)
Citations (43)