B-PROP: Bootstrapped Pre-training with Representative Words Prediction for Ad-hoc Retrieval (2104.09791v4)
Abstract: Pre-training and fine-tuning have achieved remarkable success in many downstream NLP tasks. Recently, pre-training methods tailored for information retrieval (IR) have also been explored, and the latest success is the PROP method, which has reached new SOTA on a variety of ad-hoc retrieval benchmarks. The basic idea of PROP is to construct the representative words prediction (ROP) task for pre-training, inspired by the query likelihood model. Despite its exciting performance, the effectiveness of PROP might be bounded by the classical unigram language model adopted in the ROP task construction process. To tackle this problem, we propose a bootstrapped pre-training method (namely B-PROP) based on BERT for ad-hoc retrieval. The key idea is to use the powerful contextual language model BERT to replace the classical unigram language model for the ROP task construction, and to re-train BERT itself towards the tailored objective for IR. Specifically, we introduce a novel contrastive method, inspired by the divergence-from-randomness idea, to leverage BERT's self-attention mechanism to sample representative words from the document. By further fine-tuning on downstream ad-hoc retrieval tasks, our method achieves significant improvements over baselines without pre-training or with other pre-training methods, and further pushes forward the SOTA on a variety of ad-hoc retrieval tasks.
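The contrastive sampling idea in the abstract can be illustrated with a minimal sketch. The assumptions here are mine, not the paper's exact formulation: term importance is taken as the divergence between a softmax-normalized attention distribution (standing in for BERT's [CLS] self-attention over document terms) and a "random" distribution given by within-document term frequency, in the spirit of divergence-from-randomness; representative words are then sampled in proportion to that score. The function names and the uniform-frequency baseline are illustrative choices.

```python
import numpy as np

def contrastive_term_scores(attn_scores, freqs):
    """Score each document term by contrasting its attention mass
    against its chance (frequency-based) mass.

    attn_scores: raw attention weights per unique term (assumed
                 aggregated from BERT's [CLS] self-attention).
    freqs:       within-document frequency of each term, used as
                 the "random" reference distribution.
    """
    p_attn = np.exp(attn_scores) / np.exp(attn_scores).sum()
    p_rand = freqs / freqs.sum()
    # Divergence-from-randomness-style contrast: terms whose attention
    # mass exceeds their chance mass get the highest scores.
    return p_attn * np.log(p_attn / p_rand)

def sample_representative_words(terms, scores, k, rng):
    """Sample k terms without replacement, proportional to their
    (clipped, renormalized) contrastive scores."""
    probs = np.clip(scores, 1e-12, None)
    probs = probs / probs.sum()
    idx = rng.choice(len(terms), size=k, replace=False, p=probs)
    return [terms[i] for i in idx]
```

In this toy setup, a content word that draws strong attention but is not frequent (e.g. "retrieval") outscores a frequent function word (e.g. "the"), which is the behavior the ROP construction relies on when building pseudo-queries for pre-training.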
- Xinyu Ma (49 papers)
- Jiafeng Guo (161 papers)
- Ruqing Zhang (60 papers)
- Yixing Fan (55 papers)
- Yingyan Li (8 papers)
- Xueqi Cheng (274 papers)