ED2LM: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference (2204.11458v1)

Published 25 Apr 2022 in cs.CL and cs.IR

Abstract: State-of-the-art neural models typically encode document-query pairs using cross-attention for re-ranking. To this end, models generally utilize an encoder-only (like BERT) paradigm or an encoder-decoder (like T5) approach. These paradigms, however, are not without flaws: running the model on all query-document pairs at inference time incurs a significant computational cost. This paper proposes a new training and inference paradigm for re-ranking. We propose to finetune a pretrained encoder-decoder model in the form of document-to-query generation. Subsequently, we show that this encoder-decoder architecture can be decomposed into a decoder-only language model during inference. This results in significant inference-time speedups, since the decoder-only architecture only needs to learn to interpret static encoder embeddings during inference. Our experiments show that this new paradigm achieves results that are comparable to the more expensive cross-attention ranking approaches while being up to 6.8X faster. We believe this work paves the way for more efficient neural rankers that leverage large pretrained models.
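
The following is a minimal sketch (not the authors' released code) of how this inference-time decomposition can be approximated with a Hugging Face T5 checkpoint: document encoder outputs are computed once and cached offline, and only the decoder runs per query, scoring each document by the log-likelihood of generating the query. The model name, helper functions, and the exact scoring setup are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch: re-ranking by document-to-query likelihood with cached
# encoder outputs, so only the decoder runs at query time.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
# Assumed: a T5 model finetuned for document-to-query generation.
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()


@torch.no_grad()
def encode_documents(docs):
    """Run the encoder once per document; in practice this is done offline and cached."""
    enc = tokenizer(docs, return_tensors="pt", padding=True, truncation=True)
    encoder_outputs = model.get_encoder()(
        input_ids=enc.input_ids, attention_mask=enc.attention_mask
    )
    return encoder_outputs, enc.attention_mask


@torch.no_grad()
def score_query(query, encoder_outputs, doc_attention_mask):
    """Score one query against cached document encodings using only the decoder."""
    labels = tokenizer(query, return_tensors="pt").input_ids
    n_docs = encoder_outputs.last_hidden_state.size(0)
    labels = labels.repeat(n_docs, 1)
    out = model(
        encoder_outputs=encoder_outputs,
        attention_mask=doc_attention_mask,
        labels=labels,
    )
    # Per-document log-likelihood of generating the query tokens.
    log_probs = torch.log_softmax(out.logits, dim=-1)
    token_ll = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_ll.sum(dim=-1)  # higher = more relevant


docs = ["Document about neural re-ranking ...", "Document about gardening ..."]
enc_out, mask = encode_documents(docs)  # offline / cached step
scores = score_query("how do neural rankers work", enc_out, mask)
ranking = scores.argsort(descending=True)
```

The point of the sketch is the split: the expensive cross-attention encoding of each document happens once, while per-query cost is limited to decoding over static encoder embeddings, which is where the reported speedup comes from.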

Authors (11)
  1. Kai Hui (27 papers)
  2. Honglei Zhuang (31 papers)
  3. Tao Chen (397 papers)
  4. Zhen Qin (105 papers)
  5. Jing Lu (158 papers)
  6. Dara Bahri (30 papers)
  7. Ji Ma (72 papers)
  8. Jai Prakash Gupta (1 paper)
  9. Cicero Nogueira dos Santos (31 papers)
  10. Yi Tay (94 papers)
  11. Donald Metzler (3 papers)
Citations (16)