
Ditto: A Simple and Efficient Approach to Improve Sentence Embeddings (2305.10786v2)

Published 18 May 2023 in cs.CL

Abstract: Prior studies diagnose the anisotropy problem in sentence representations from pre-trained LLMs, e.g., BERT, without fine-tuning. Our analysis reveals that sentence embeddings from BERT suffer from a bias towards uninformative words, limiting performance on semantic textual similarity (STS) tasks. To address this bias, we propose a simple and efficient unsupervised approach, Diagonal Attention Pooling (Ditto), which weights words with model-based importance estimations and computes the weighted average of word representations from pre-trained models as sentence embeddings. Ditto can be easily applied to any pre-trained LLM as a postprocessing operation. Compared to prior sentence embedding approaches, Ditto neither adds parameters nor requires any learning. Empirical evaluations demonstrate that Ditto alleviates the anisotropy problem and improves various pre-trained models on STS tasks.
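
As the name Diagonal Attention Pooling suggests, the importance weights come from the diagonal of a self-attention map: each token's attention to itself. The sketch below illustrates the idea with HuggingFace Transformers; the specific attention layer and head, and the averaging of the first and last hidden layers for token representations, are illustrative assumptions here, not necessarily the paper's exact configuration.

```python
# Minimal sketch of Diagonal Attention Pooling (Ditto).
# Assumptions: which layer/head supplies the diagonal attention, and
# first/last hidden-layer averaging for token vectors, are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained(
    "bert-base-uncased", output_attentions=True, output_hidden_states=True
)
model.eval()

def ditto_embedding(sentence: str, layer: int = 0, head: int = 0) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # Attention map for the chosen layer/head: (seq_len, seq_len).
    attn = out.attentions[layer][0, head]
    # Diagonal entries = each token's attention to itself,
    # used as that token's importance weight.
    weights = attn.diagonal()
    weights = weights / weights.sum()
    # Token representations: average of first and last hidden layers
    # (a common unsupervised baseline; an assumption in this sketch).
    hidden = (out.hidden_states[1] + out.hidden_states[-1])[0] / 2  # (seq, dim)
    # Weighted average of token vectors -> sentence embedding.
    return (weights.unsqueeze(-1) * hidden).sum(dim=0)

emb = ditto_embedding("Ditto improves sentence embeddings.")
print(emb.shape)  # torch.Size([768])
```

Because the weights are read off an existing attention map, this adds no parameters and requires no training, matching the abstract's claim that Ditto is a pure postprocessing operation.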

Authors (9)
  1. Qian Chen (264 papers)
  2. Wen Wang (144 papers)
  3. Qinglin Zhang (30 papers)
  4. Siqi Zheng (61 papers)
  5. Chong Deng (22 papers)
  6. Hai Yu (40 papers)
  7. Jiaqing Liu (20 papers)
  8. Yukun Ma (33 papers)
  9. Chong Zhang (137 papers)
Citations (1)