
SentPWNet: A Unified Sentence Pair Weighting Network for Task-specific Sentence Embedding (2005.11347v1)

Published 22 May 2020 in cs.CL and cs.LG

Abstract: Pair-based metric learning has been widely adopted for learning sentence embeddings in many NLP tasks, such as semantic text similarity, due to its computational efficiency. Most existing works employ a sequence encoder model and use a limited set of sentence pairs with a pair-based loss to learn discriminative sentence representations. However, it is known that the sentence representation can be biased when the sampled sentence pairs deviate from the true distribution of all sentence pairs. In this paper, our theoretical analysis shows that existing works severely suffer from the lack of a good pair sampling and instance weighting strategy. Instead of one-time pair selection and learning on equally weighted pairs, we propose a unified locality weighting and learning framework to learn task-specific sentence embeddings. Our model, SentPWNet, exploits the neighboring spatial distribution of each sentence as a locality weight to indicate how informative a sentence pair is. This weight is updated along with the pair-loss optimization in each round, ensuring that the model keeps learning the most informative sentence pairs. Extensive experiments on four publicly available datasets and a self-collected place search benchmark with 1.4 million places clearly demonstrate that our model consistently outperforms existing sentence embedding methods with comparable efficiency.
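
The paper does not spell out its exact weighting formula in the abstract, but the idea of re-weighting sentence pairs by the local neighborhood structure of their embeddings, and refreshing those weights each training round, can be illustrated with a minimal PyTorch sketch. The functions locality_weights and weighted_pair_loss below are hypothetical names, and the density-based weighting is an assumed stand-in for SentPWNet's actual scheme.

```python
import torch
import torch.nn.functional as F

def locality_weights(embeddings, pair_idx, temperature=1.0):
    """Assumed locality weighting: score each pair by how crowded the
    neighborhood around its two sentences is, treating pairs in denser
    regions as more informative. Not the paper's exact formulation."""
    # Pairwise cosine similarity between all sentence embeddings in the batch.
    sim = F.cosine_similarity(embeddings.unsqueeze(1),
                              embeddings.unsqueeze(0), dim=-1)
    # Local density of each sentence: mean similarity to the other sentences.
    density = sim.mean(dim=1)
    # A pair's raw score combines the densities of its two endpoints.
    score = density[pair_idx[:, 0]] + density[pair_idx[:, 1]]
    return torch.softmax(score / temperature, dim=0)

def weighted_pair_loss(embeddings, pair_idx, labels, margin=0.5):
    """Contrastive pair loss where each pair is scaled by its locality weight,
    recomputed from the current embeddings (i.e. updated every round)."""
    w = locality_weights(embeddings, pair_idx).detach()
    a = embeddings[pair_idx[:, 0]]
    b = embeddings[pair_idx[:, 1]]
    d = 1.0 - F.cosine_similarity(a, b)                 # cosine distance per pair
    pos = labels * d.pow(2)                             # pull similar pairs together
    neg = (1 - labels) * F.relu(margin - d).pow(2)      # push dissimilar pairs apart
    return (w * (pos + neg)).sum()

# Toy usage: 4 sentence embeddings, 2 labeled pairs.
emb = torch.randn(4, 128, requires_grad=True)
pairs = torch.tensor([[0, 1], [2, 3]])
labels = torch.tensor([1.0, 0.0])
loss = weighted_pair_loss(emb, pairs, labels)
loss.backward()
```

Because the weights are recomputed from the current embeddings at every step, pairs that become uninformative as training progresses are gradually down-weighted, which is the behavior the abstract attributes to SentPWNet.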

Authors (3)
  1. Li Zhang (693 papers)
  2. Han Wang (420 papers)
  3. Lingxiao Li (38 papers)
Citations (1)
