Refining Sentence Embedding Model through Ranking Sentences Generation with Large Language Models (2502.13656v2)

Published 19 Feb 2025 in cs.CL

Abstract: Sentence embedding is essential for many NLP tasks, with contrastive learning methods achieving strong performance using annotated datasets like NLI. Yet, the reliance on manual labels limits scalability. Recent studies leverage LLMs to generate sentence pairs, reducing annotation dependency. However, they overlook ranking information crucial for fine-grained semantic distinctions. To tackle this challenge, we propose a method for controlling the generation direction of LLMs in the latent space. Unlike unconstrained generation, the controlled approach ensures meaningful semantic divergence. We then refine an existing sentence embedding model by integrating ranking information and semantic information. Experiments on multiple benchmarks demonstrate that our method achieves new SOTA performance with a modest cost in ranking sentence synthesis.
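The abstract describes supervising the embedding model with ranking information over LLM-generated sentences, not just positive/negative pairs. The paper's exact objective is not given here; as a rough illustration of what ranking supervision can look like, the sketch below (all names hypothetical) applies a pairwise margin penalty whenever a sentence ranked as less similar to the anchor scores higher in cosine similarity than one ranked as more similar.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def ranking_loss(anchor, ranked, margin=0.05):
    """Illustrative pairwise margin loss over a ranked list.

    `ranked` holds sentence embeddings ordered from most to least
    semantically similar to `anchor` (e.g., as produced by controlled
    LLM generation). Each adjacent pair is penalized unless the
    higher-ranked sentence's cosine similarity exceeds the
    lower-ranked one's by at least `margin`.
    """
    sims = [cosine(anchor, s) for s in ranked]
    loss = 0.0
    for i in range(len(sims) - 1):
        loss += max(0.0, margin - (sims[i] - sims[i + 1]))
    return loss

# A correctly ordered list incurs no loss; a violated ordering does.
anchor = [1.0, 0.0]
well_ordered = [[1.0, 0.1], [0.5, 1.0], [-1.0, 0.2]]
mis_ordered = [[0.5, 1.0], [1.0, 0.1]]
print(ranking_loss(anchor, well_ordered))  # 0.0
print(ranking_loss(anchor, mis_ordered) > 0)  # True
```

In practice such a term would be combined with a standard contrastive objective (e.g., InfoNCE) and backpropagated through the encoder; this snippet only shows how graded ranking labels translate into a trainable signal beyond binary pair labels.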

Authors (7)
  1. Liyang He (5 papers)
  2. Chenglong Liu (11 papers)
  3. Rui Li (384 papers)
  4. Zhenya Huang (52 papers)
  5. Shulan Ruan (10 papers)
  6. Jun Zhou (370 papers)
  7. Enhong Chen (242 papers)