Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks (2010.08240v2)

Published 16 Oct 2020 in cs.CL

Abstract: There are two approaches for pairwise sentence scoring: Cross-encoders, which perform full-attention over the input pair, and Bi-encoders, which map each input independently to a dense vector space. While cross-encoders often achieve higher performance, they are too slow for many practical use cases. Bi-encoders, on the other hand, require substantial training data and fine-tuning over the target task to achieve competitive performance. We present a simple yet efficient data augmentation strategy called Augmented SBERT, where we use the cross-encoder to label a larger set of input pairs to augment the training data for the bi-encoder. We show that, in this process, selecting the sentence pairs is non-trivial and crucial for the success of the method. We evaluate our approach on multiple tasks (in-domain) as well as on a domain adaptation task. Augmented SBERT achieves an improvement of up to 6 points for in-domain and of up to 37 points for domain adaptation tasks compared to the original bi-encoder performance.

Evaluation of Augmented SBERT for Pairwise Sentence Scoring

The paper "Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks" presents a methodological innovation aimed at enhancing the performance of bi-encoders in sentence scoring scenarios. This approach leverages data augmentation by utilizing cross-encoders to label sentence pairs, subsequently enriching the training dataset for bi-encoders. The primary objective is to bridge the performance gap observed between cross-encoders and bi-encoders in various NLP tasks.

Core Methodology

The authors introduce Augmented SBERT (AugSBERT), in which a cross-encoder generates labels for a larger set of sentence pairs; these weakly labeled "silver" pairs augment the gold training data available to the bi-encoder. Selecting the input pairs is non-trivial and critical to the method's success, so the authors explore several sampling strategies, including random sampling, kernel density estimation, BM25 retrieval, and semantic search. A minimal sketch of the overall recipe follows.
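
The sketch below illustrates the three-step recipe with the sentence-transformers library: label sampled pairs with a cross-encoder, then train the bi-encoder on gold plus silver data. The model names, toy gold pair, and `unlabeled_pairs` candidates are illustrative assumptions, not the paper's exact setup.

```python
# Minimal AugSBERT sketch with sentence-transformers (model names and
# toy data are illustrative assumptions, not the paper's exact setup).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.cross_encoder import CrossEncoder

# Small gold set: sentence pairs with human similarity labels in [0, 1].
gold = [InputExample(texts=["How do I reset my password?",
                            "Password reset instructions?"], label=0.9)]

# Step 1: a cross-encoder fine-tuned (or pre-trained) for the task.
cross_encoder = CrossEncoder("cross-encoder/stsb-roberta-base")

# Step 2: weakly label sampled, unlabeled pairs -> "silver" data.
unlabeled_pairs = [("How do I reset my password?", "Where is the login page?")]
silver_scores = cross_encoder.predict(unlabeled_pairs)
silver = [InputExample(texts=list(pair), label=float(score))
          for pair, score in zip(unlabeled_pairs, silver_scores)]

# Step 3: train the bi-encoder on the combined gold + silver data.
bi_encoder = SentenceTransformer("bert-base-uncased")
loader = DataLoader(gold + silver, shuffle=True, batch_size=16)
loss = losses.CosineSimilarityLoss(bi_encoder)
bi_encoder.fit(train_objectives=[(loader, loss)], epochs=1)
```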

Experimental Evaluation

The efficacy of AugSBERT is validated through both in-domain and domain adaptation tasks. Key results include improvements of up to 6 percentage points in in-domain tasks and as much as 37 percentage points in domain adaptation scenarios when compared to the baseline SBERT bi-encoder performance.

In-Domain Tasks

For in-domain tasks, the evaluation spans four diverse applications: argument similarity, semantic textual similarity, duplicate question detection, and news paraphrase identification. Across these tasks, the model consistently exhibited 1 to 6 percentage point gains over the base SBERT, with BM25 emerging as the most computationally efficient and effective sampling strategy.
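
As a hedged sketch of what BM25 sampling can look like in practice, the snippet below retrieves each sentence's lexically closest neighbours as candidate pairs for cross-encoder labeling. It uses the rank_bm25 package; the toy corpus and the top-k cutoff are illustrative assumptions.

```python
# BM25 pair sampling sketch with the rank_bm25 package; the corpus and
# the top-k cutoff are illustrative assumptions.
from rank_bm25 import BM25Okapi

corpus = ["how do i reset my password",
          "where can i change my password",
          "what does shipping cost",
          "how long does delivery take"]
tokenized = [s.split() for s in corpus]
bm25 = BM25Okapi(tokenized)

top_k = 2  # neighbours kept per query sentence
candidate_pairs = []
for query in corpus:
    for hit in bm25.get_top_n(query.split(), corpus, n=top_k):
        if hit != query:  # drop trivial self-matches
            candidate_pairs.append((query, hit))

# candidate_pairs would feed the cross-encoder labeling step.
print(candidate_pairs)
```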

Domain Adaptation

In domain adaptation, AugSBERT demonstrated substantial performance increases, particularly when transferring from a general domain (e.g., Quora) to a specialized domain (e.g., Sprint). This underscores AugSBERT's ability to enhance the adaptability of bi-encoders to new domains without requiring extensive labeled data from those domains.
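
A compact sketch of this cross-domain recipe follows: a cross-encoder trained on the labeled source domain soft-labels unlabeled target-domain pairs, and the bi-encoder trains on that silver data alone. The pretrained Quora cross-encoder name and the target-domain pairs are assumptions for illustration; the training step mirrors the in-domain sketch above.

```python
# Domain-adaptation sketch: a source-domain (Quora) cross-encoder
# soft-labels unlabeled target-domain pairs; the bi-encoder then trains
# on the resulting silver data. Model name and pairs are illustrative.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.cross_encoder import CrossEncoder

source_ce = CrossEncoder("cross-encoder/quora-distilroberta-base")

target_pairs = [("reset my voicemail pin", "change voicemail password"),
                ("activate my new phone", "track my order status")]
silver = [InputExample(texts=list(p), label=float(s))
          for p, s in zip(target_pairs, source_ce.predict(target_pairs))]

bi_encoder = SentenceTransformer("bert-base-uncased")
loader = DataLoader(silver, shuffle=True, batch_size=16)
bi_encoder.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(bi_encoder))],
               epochs=1)
```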

Implications and Future Directions

Improving bi-encoder performance through augmented training data can significantly benefit real-world applications that need efficient sentence scoring without the prohibitive computational cost of cross-encoders. These findings may pave the way for more scalable bi-encoder models suited to larger and more dynamic datasets.

Future directions may include the exploration of more sophisticated sampling techniques or the application of this framework to other challenging NLP tasks such as sentiment analysis and dialogue systems. Additionally, the intersection of this approach with other transfer learning paradigms could yield further insights into effective model adaptation strategies.

In summary, this paper makes a valuable contribution to the ongoing efforts to enhance bi-encoder models using data augmentation techniques. The strategic use of cross-encoders to label data for bi-encoders offers a practical avenue for improving model accuracy in both in-domain and cross-domain tasks.

Authors (4)
  1. Nandan Thakur (24 papers)
  2. Nils Reimers (25 papers)
  3. Johannes Daxenberger (13 papers)
  4. Iryna Gurevych (264 papers)
Citations (211)