CMLM-CSE: Based on Conditional MLM Contrastive Learning for Sentence Embeddings (2306.09594v1)

Published 16 Jun 2023 in cs.CL and cs.AI

Abstract: Traditional contrastive learning for sentence embeddings uses an encoder to extract sentence features directly and then feeds them into a contrastive loss function for learning. However, this approach focuses too heavily on the sentence as a whole and ignores the influence that individual words have on sentence semantics. To address this, we propose CMLM-CSE, an unsupervised contrastive learning framework based on conditional MLM. On top of traditional contrastive learning, an auxiliary network is added that incorporates the sentence embedding to perform an MLM task, forcing the sentence embedding to capture more information about the masked words. When BERT-base was used as the pre-trained language model, we exceeded SimCSE by 0.55 percentage points on average on textual similarity tasks, and when RoBERTa-base was used as the pre-trained language model, we exceeded SimCSE by 0.3 percentage points on average on textual similarity tasks.
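The objective described in the abstract combines a SimCSE-style contrastive loss with an auxiliary masked-word prediction loss conditioned on the sentence embedding. The sketch below illustrates that combination numerically; the weighting factor `lam`, the function names, and the exact conditioning mechanism of the auxiliary network are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.05):
    """SimCSE-style contrastive loss over two views (z1, z2) of the same
    batch of sentences; matching rows are positives, other rows negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                      # (batch, batch)
    sim = sim - sim.max(axis=1, keepdims=True)         # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                # positives on diagonal

def mlm_aux_loss(word_logits, masked_targets):
    """Cross-entropy of an auxiliary head predicting masked tokens.
    In CMLM-CSE these logits would be produced by a network that is
    conditioned on the sentence embedding (conditioning not shown here)."""
    logits = word_logits - word_logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(masked_targets)), masked_targets])

def cmlm_cse_loss(z1, z2, word_logits, masked_targets, lam=0.1):
    # Total objective: contrastive loss plus a weighted MLM auxiliary loss
    # (lam is an assumed hyperparameter for illustration only).
    return info_nce_loss(z1, z2) + lam * mlm_aux_loss(word_logits, masked_targets)

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
word_logits = rng.normal(size=(6, 100))       # 6 masked positions, vocab of 100
targets = rng.integers(0, 100, size=6)
print(cmlm_cse_loss(z1, z2, word_logits, targets))
```

The auxiliary term pulls gradient signal from masked-word prediction back into the sentence embedding, which is the mechanism the abstract credits for the gain over plain SimCSE.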

Authors (2)
  1. Wei Zhang (1489 papers)
  2. Xu Chen (413 papers)
Citations (1)