
Retrieval Oriented Masking Pre-training Language Model for Dense Passage Retrieval (2210.15133v1)

Published 27 Oct 2022 in cs.CL and cs.IR

Abstract: Pre-trained language models (PTMs) have been shown to yield powerful text representations for the dense passage retrieval task. Masked Language Modeling (MLM) is a major sub-task of the pre-training process. However, we found that the conventional random masking strategy tends to select a large number of tokens that have limited effect on the passage retrieval task (e.g., stop-words and punctuation). Noticing that term importance weights can provide valuable information for passage retrieval, we propose an alternative Retrieval-Oriented Masking (dubbed ROM) strategy in which more important tokens have a higher probability of being masked out, capturing this straightforward yet essential signal to facilitate language model pre-training. Notably, the proposed token masking method does not change the architecture or learning objective of the original PTM. Our experiments verify that ROM allows term importance information to aid language model pre-training, achieving better performance on multiple passage retrieval benchmarks.
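
The core mechanism can be sketched as importance-weighted sampling of mask positions. The sketch below is an illustration only, assuming term-importance weights such as IDF or BM25-style scores (the abstract does not specify the weighting scheme), and all names (`rom_mask`, `importance`, the example tokens) are hypothetical rather than taken from the paper.

```python
import random
from typing import List

def rom_mask(tokens: List[str],
             importance: List[float],
             mask_rate: float = 0.15,
             mask_token: str = "[MASK]") -> List[str]:
    """Retrieval-oriented masking sketch: tokens with higher
    term-importance weights are more likely to be masked.
    `importance` could be IDF or BM25-style weights (an assumption;
    the abstract only says 'term importance weight')."""
    n_mask = max(1, int(round(len(tokens) * mask_rate)))
    # Small epsilon keeps zero-weight tokens (e.g. punctuation) selectable
    # and avoids an all-zero weight vector.
    weights = [w + 1e-6 for w in importance]
    candidates = list(range(len(tokens)))
    positions = set()
    # Sample positions without replacement, biased toward important terms
    # instead of the uniform random choice used in conventional MLM.
    while len(positions) < n_mask and candidates:
        idx = random.choices(range(len(candidates)), weights=weights, k=1)[0]
        positions.add(candidates.pop(idx))
        weights.pop(idx)
    return [mask_token if i in positions else t for i, t in enumerate(tokens)]

# Example: stop-words get low weight, content terms high weight.
toks = ["the", "passage", "retrieval", "model", "is", "effective", "."]
wts  = [0.1,   2.3,       2.8,         1.9,     0.1,  1.5,         0.05]
print(rom_mask(toks, wts, mask_rate=0.3))
```

Because only the choice of masked positions changes, this drops into a standard MLM pre-training loop without altering the model architecture or the learning objective, consistent with the claim in the abstract.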

Authors (4)
  1. Dingkun Long (23 papers)
  2. Yanzhao Zhang (18 papers)
  3. Guangwei Xu (18 papers)
  4. Pengjun Xie (85 papers)
Citations (4)