Towards Robust Ranker for Text Retrieval (2206.08063v1)
Abstract: A ranker plays an indispensable role in the de facto 'retrieval & rerank' pipeline, but its training still lags behind: it learns from only moderately hard negatives and/or serves as an auxiliary module for a retriever. In this work, we first identify two major barriers to a robust ranker, i.e., inherent label noise caused by a well-trained retriever and non-ideal negatives sampled for a highly capable ranker. We therefore propose using multiple retrievers as negative generators to improve the ranker's robustness, where i) exposure to extensive out-of-distribution label noise makes the ranker resilient to each individual noise distribution, and ii) diverse hard negatives drawn from a joint distribution lie relatively close to the ranker's own negative distribution, leading to more challenging and thus more effective training. To evaluate our robust ranker (dubbed R$^2$anker), we conduct experiments in various settings on a popular passage retrieval benchmark, including BM25 reranking, full ranking, and retriever distillation. The empirical results verify the new state-of-the-art effectiveness of our model.
- Yucheng Zhou
- Tao Shen
- Xiubo Geng
- Chongyang Tao
- Can Xu
- Guodong Long
- Binxing Jiao
- Daxin Jiang
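
To make the abstract's core recipe concrete, below is a minimal, illustrative Python sketch of pooling hard negatives from several retrievers' ranked lists rather than from a single retriever. Everything here (the retriever names, the run format, and the `sample_joint_negatives` helper) is a hypothetical construction for illustration, not the authors' implementation.

```python
import random

# Illustrative sketch (not from the paper): draw hard negatives for one
# query from the union of several retrievers' top results, approximating
# sampling from a "joint distribution" of negatives.

def sample_joint_negatives(runs, positives, k_per_retriever=2, depth=50, seed=0):
    """Sample negatives for one query from multiple retrievers.

    runs: dict mapping a retriever name to its ranked list of passage ids
    positives: set of passage ids labeled relevant for this query
    """
    rng = random.Random(seed)
    negatives = []
    for name, ranked in runs.items():
        # Candidates: top-ranked passages that are not labeled positive.
        # Some may be unlabeled positives (label noise); mixing several
        # retrievers exposes the ranker to several noise distributions,
        # which is the robustness argument in the abstract.
        candidates = [pid for pid in ranked[:depth] if pid not in positives]
        negatives.extend(rng.sample(candidates, min(k_per_retriever, len(candidates))))
    return negatives

# Toy usage: three hypothetical retrievers' top results for one query.
runs = {
    "bm25": ["p3", "p7", "p1", "p9", "p4"],
    "dense": ["p1", "p4", "p8", "p3", "p6"],
    "late_interaction": ["p4", "p2", "p1", "p5", "p7"],
}
print(sample_joint_negatives(runs, positives={"p1"}, k_per_retriever=2, depth=5))
```

The pooled negatives would then be paired with the query and its positives to train a cross-encoder ranker with the usual contrastive or cross-entropy objective; that training loop is omitted here.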