
Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering (2203.06942v2)

Published 14 Mar 2022 in cs.CL and cs.IR

Abstract: To alleviate the data scarcity problem in training question answering systems, recent works propose additional intermediate pre-training for dense passage retrieval (DPR). However, a large discrepancy remains between the upstream signals these methods provide and the downstream question-passage relevance, which limits the improvement they deliver. To bridge this gap, we propose HyperLink-induced Pre-training (HLP), a method to pre-train the dense retriever with text relevance induced by the hyperlink-based topology within Web documents. We demonstrate that the hyperlink-based structures of dual-link and co-mention provide effective relevance signals for large-scale pre-training that better facilitate downstream passage retrieval. We investigate the effectiveness of our approach across a wide range of open-domain QA datasets under zero-shot, few-shot, multi-hop, and out-of-domain scenarios. The experiments show that HLP outperforms BM25 by up to 7 points, and other pre-training methods by more than 10 points, in top-20 retrieval accuracy under the zero-shot scenario. Furthermore, HLP significantly outperforms other pre-training methods under the remaining scenarios.
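As a rough illustration of the two hyperlink topologies named in the abstract, the sketch below mines dual-link pairs (two passages whose documents hyperlink to each other) and co-mention pairs (two passages that both link to a common third document) from a toy corpus. The `Passage` structure, field names, and pairing criteria are assumptions for exposition, not the authors' released data format or pipeline; the resulting pairs would then serve as pseudo query-passage positives in a standard DPR-style contrastive pre-training objective.

```python
# Minimal sketch (assumed data model, not the HLP authors' code): mining
# hyperlink-induced pseudo query-passage pairs for dense-retriever pre-training.

from dataclasses import dataclass, field
from itertools import combinations
from typing import List, Set, Tuple


@dataclass
class Passage:
    doc_id: str                                        # source Web document (e.g. a Wikipedia page)
    text: str
    outlinks: Set[str] = field(default_factory=set)    # doc_ids this passage hyperlinks to


def dual_link_pairs(passages: List[Passage]) -> List[Tuple[Passage, Passage]]:
    """Pairs (p, q) from different documents that hyperlink to each other's documents."""
    pairs = []
    for p, q in combinations(passages, 2):
        if p.doc_id != q.doc_id and q.doc_id in p.outlinks and p.doc_id in q.outlinks:
            pairs.append((p, q))
    return pairs


def co_mention_pairs(passages: List[Passage]) -> List[Tuple[Passage, Passage]]:
    """Pairs (p, q) from different documents that both link to a common third document."""
    pairs = []
    for p, q in combinations(passages, 2):
        shared = (p.outlinks & q.outlinks) - {p.doc_id, q.doc_id}
        if p.doc_id != q.doc_id and shared:
            pairs.append((p, q))
    return pairs


if __name__ == "__main__":
    corpus = [
        Passage("A", "Passage about topic A that mentions B.", {"B"}),
        Passage("B", "Passage about topic B that mentions A and C.", {"A", "C"}),
        Passage("D", "Passage about topic D that also mentions C.", {"C"}),
    ]
    print("dual-link:", [(p.doc_id, q.doc_id) for p, q in dual_link_pairs(corpus)])   # [('A', 'B')]
    print("co-mention:", [(p.doc_id, q.doc_id) for p, q in co_mention_pairs(corpus)])  # [('B', 'D')]
```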

Authors (13)
  1. Jiawei Zhou (78 papers)
  2. Xiaoguang Li (73 papers)
  3. Lifeng Shang (90 papers)
  4. Lan Luo (22 papers)
  5. Ke Zhan (3 papers)
  6. Enrui Hu (3 papers)
  7. Xinyu Zhang (297 papers)
  8. Hao Jiang (230 papers)
  9. Zhao Cao (36 papers)
  10. Fan Yu (63 papers)
  11. Xin Jiang (243 papers)
  12. Qun Liu (231 papers)
  13. Lei Chen (487 papers)
Citations (19)
