YES SIR! Optimizing Semantic Space of Negatives with Self-Involvement Ranker (2109.06436v1)

Published 14 Sep 2021 in cs.IR and cs.CL

Abstract: Pre-trained models such as BERT have proven to be effective tools for Information Retrieval (IR) problems. Owing to their strong performance, they have been widely used to tackle real-world IR problems such as document ranking. Recently, researchers have found that selecting "hard" rather than "random" negative samples is beneficial for fine-tuning pre-trained models on ranking tasks. However, it remains unclear how to leverage hard negative samples in a principled way. To address this issue, we propose a fine-tuning strategy for document ranking, namely the Self-Involvement Ranker (SIR), which dynamically selects hard negative samples to construct a high-quality semantic space for training a high-quality ranking model. Specifically, SIR consists of sequential compressors implemented with pre-trained models; the front compressor selects hard negative samples for the rear compressor. Moreover, SIR leverages a supervisory signal to adaptively adjust the semantic space of negative samples. Finally, the supervisory signal in the rear compressor is computed from a conditional probability, which controls the sample dynamics and further enhances model performance. SIR is a lightweight and general framework for pre-trained models that simplifies the ranking process in industry practice. We test our proposed solution on MS MARCO under the document ranking setting, and the results show that SIR can significantly improve the ranking performance of various pre-trained models. Moreover, our method became the new state-of-the-art model (submitted anonymously) on the MS MARCO document ranking leaderboard in May 2021.
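
The abstract describes SIR as a cascade of "compressors" (pre-trained rankers) in which the front compressor selects hard negatives for the rear compressor. Below is a minimal, illustrative sketch of that cascade idea only; the function names, the keep_ratio parameter, and the toy scorers are assumptions for illustration, not the paper's implementation, and the conditional-probability supervisory signal is omitted.

```python
# Illustrative sketch (not the authors' code) of a compressor cascade:
# each stage scores the candidate negatives and passes only the hardest
# ones (highest-scoring, i.e. most confusable with the positive) on to
# the next stage. All names and the toy scorers are assumptions.

from typing import Callable, List, Sequence


def select_hard_negatives(
    query: str,
    negatives: Sequence[str],
    compressors: List[Callable[[str, str], float]],
    keep_ratio: float = 0.5,
) -> List[str]:
    """Run negatives through a chain of scoring models ("compressors").

    Each stage keeps the top keep_ratio fraction of negatives by score, so
    later (typically stronger) models are trained on a progressively harder
    semantic space of negatives.
    """
    pool = list(negatives)
    for score in compressors:
        if len(pool) <= 1:
            break
        ranked = sorted(pool, key=lambda doc: score(query, doc), reverse=True)
        keep = max(1, int(len(ranked) * keep_ratio))
        pool = ranked[:keep]  # hard negatives handed to the next compressor
    return pool


if __name__ == "__main__":
    # Toy "relevance" scorers standing in for pre-trained rankers of
    # increasing capacity; in SIR these would be BERT-style models.
    def weak_scorer(q: str, d: str) -> float:
        return float(len(set(q.split()) & set(d.split())))

    def strong_scorer(q: str, d: str) -> float:
        return weak_scorer(q, d) + 0.1 * len(d.split())

    negs = [
        "a passage about cooking pasta",
        "document ranking with pre-trained language models",
        "hard negative sampling for dense retrieval",
        "weather forecast for the weekend",
    ]
    hard = select_hard_negatives(
        "negative sampling for document ranking", negs, [weak_scorer, strong_scorer]
    )
    print(hard)
```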

Authors (10)
  1. Ruizhi Pu (7 papers)
  2. Xinyu Zhang (296 papers)
  3. Ruofei Lai (13 papers)
  4. Zikai Guo (3 papers)
  5. Yinxia Zhang (3 papers)
  6. Hao Jiang (230 papers)
  7. Yongkang Wu (12 papers)
  8. Yantao Jia (14 papers)
  9. Zhicheng Dou (113 papers)
  10. Zhao Cao (36 papers)
Citations (1)