PromptDSI: Prompt-based Rehearsal-free Instance-wise Incremental Learning for Document Retrieval (2406.12593v2)

Published 18 Jun 2024 in cs.IR, cs.AI, cs.CL, and cs.LG

Abstract: Differentiable Search Index (DSI) utilizes Pre-trained Language Models (PLMs) for efficient document retrieval without relying on external indexes. However, DSI needs full re-training to handle updates in dynamic corpora, causing significant computational inefficiencies. We introduce PromptDSI, a prompt-based rehearsal-free approach for instance-wise incremental learning in document retrieval. PromptDSI attaches prompts to the frozen encoder of DSI's PLM, leveraging its powerful representation to efficiently index new corpora while maintaining a balance between stability and plasticity. We eliminate the initial forward pass of prompt-based continual learning methods that doubles training and inference time. Moreover, we propose a topic-aware prompt pool that employs neural topic embeddings as fixed keys. This strategy ensures diverse and effective prompt usage, addressing the challenge of parameter underutilization caused by the collapse of the query-key matching mechanism. Our empirical evaluations demonstrate that BERT-based PromptDSI matches IncDSI in managing forgetting while improving new corpora performance by more than 4% Hits@10 on NQ320k and up to 3% MRR@10 on MS MARCO 300k.
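The topic-aware prompt pool described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function name, dimensions, and top-k selection are illustrative assumptions. The key idea shown is that prompt selection uses *fixed* keys (e.g. neural topic embeddings), so the query-key matching cannot collapse onto a few learned keys, and the selected prompt tokens are then prepended to the input of the frozen encoder:

```python
import numpy as np

def select_prompts(query_emb, keys, prompts, top_k=2):
    """Select the top_k prompt groups whose fixed keys best match the query.

    query_emb: (d,)       query representation from the frozen encoder
    keys:      (M, d)     fixed topic embeddings (frozen, never trained)
    prompts:   (M, L, d)  learnable prompt tokens, one group of L per key
    Returns a (top_k * L, d) block of prompt tokens to prepend to the input.
    """
    # Cosine similarity between the query and each fixed key.
    q = query_emb / (np.linalg.norm(query_emb) + 1e-8)
    k = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-8)
    sims = k @ q
    # Pick the best-matching keys and concatenate their prompt groups.
    idx = np.argsort(-sims)[:top_k]
    return np.concatenate(prompts[idx], axis=0)

# Illustrative usage: the returned tokens would be prepended to the
# token embeddings fed into the frozen PLM encoder.
rng = np.random.default_rng(0)
d, M, L = 8, 4, 3
keys = rng.normal(size=(M, d))        # stand-in for neural topic embeddings
prompts = rng.normal(size=(M, L, d))  # the only trainable parameters here
query = rng.normal(size=d)
selected = select_prompts(query, keys, prompts)
assert selected.shape == (2 * L, d)
```

Because the keys stay fixed, distinct queries keep routing to distinct prompt groups, which is the paper's stated remedy for parameter underutilization under key collapse.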

Authors (8)
  1. Tuan-Luc Huynh
  2. Thuy-Trang Vu
  3. Weiqing Wang
  4. Yinwei Wei
  5. Trung Le
  6. Yuan-Fang Li
  7. Thanh-Toan Do
  8. Dragan Gasevic