Learning to Remove: Towards Isotropic Pre-trained BERT Embedding (2104.05274v2)

Published 12 Apr 2021 in cs.CL

Abstract: Pre-trained language models such as BERT have become a common choice for NLP tasks. Research in word representation shows that isotropic embeddings can significantly improve performance on downstream tasks. However, we measure and analyze the geometry of the pre-trained BERT embedding and find that it is far from isotropic. We find that the word vectors are not centered around the origin, and the average cosine similarity between two random words is much higher than zero, which indicates that the word vectors are distributed in a narrow cone and deteriorates the representation capacity of the word embedding. We propose a simple yet effective method to fix this problem: remove several dominant directions of the BERT embedding with a set of learnable weights. We train the weights on word similarity tasks and show that the processed embedding is more isotropic. Our method is evaluated on three standardized tasks: word similarity, word analogy, and semantic textual similarity. In all tasks, the word embedding processed by our method consistently outperforms the original embedding (with an average improvement of 13% on word analogy and 16% on semantic textual similarity) and two baseline methods. Our method is also shown to be more robust to hyperparameter changes.
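The abstract only outlines the method, so the snippet below is a minimal sketch of the underlying idea: center the embedding matrix, find its dominant directions via SVD, and subtract a weighted projection onto them. The function name and the fixed `weights` placeholder are assumptions for illustration; in the paper the per-direction weights are learned on word similarity tasks, whereas this fixed-weight variant is closer to standard post-processing baselines.

```python
import numpy as np

def remove_dominant_directions(embeddings, num_directions=3, weights=None):
    """Sketch: subtract weighted projections onto dominant directions.

    `weights` stands in for the learnable per-direction weights described
    in the abstract; here they default to fixed ones (illustrative only).
    """
    # Center the word vectors around the origin (the paper observes that
    # pre-trained BERT vectors are not centered).
    mean = embeddings.mean(axis=0, keepdims=True)
    centered = embeddings - mean

    # Dominant directions = top right-singular vectors of the centered matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    directions = vt[:num_directions]          # shape (k, dim)

    if weights is None:
        weights = np.ones(num_directions)     # placeholder, not learned here

    # Remove the (weighted) component of each vector along each direction.
    coeffs = centered @ directions.T          # shape (n, k)
    return centered - (coeffs * weights) @ directions

# Example with random stand-in vectors in place of real BERT embeddings:
vectors = np.random.randn(1000, 768)
isotropic_vectors = remove_dominant_directions(vectors, num_directions=3)
```

Making the removal weights learnable, rather than discarding whole directions outright, is the distinguishing design choice the abstract highlights.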

Authors (5)
  1. Yuxin Liang (7 papers)
  2. Rui Cao (65 papers)
  3. Jie Zheng (68 papers)
  4. Jie Ren (329 papers)
  5. Ling Gao (18 papers)
Citations (26)
