ED2LM: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference (2204.11458v1)
Abstract: State-of-the-art neural models typically encode document-query pairs using cross-attention for re-ranking. To this end, models generally utilize an encoder-only (like BERT) or an encoder-decoder (like T5) approach. These paradigms, however, are not without flaws: running the model on all query-document pairs at inference time incurs a significant computational cost. This paper proposes a new training and inference paradigm for re-ranking. We propose to finetune a pretrained encoder-decoder model on the task of document-to-query generation. Subsequently, we show that this encoder-decoder architecture can be decomposed into a decoder-only language model during inference. This results in significant inference-time speedups, since the decoder-only architecture only needs to interpret static, precomputed encoder embeddings at inference time. Our experiments show that this new paradigm achieves results comparable to the more expensive cross-attention ranking approaches while being up to 6.8X faster. We believe this work paves the way for more efficient neural rankers that leverage large pretrained models.
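The sketch below illustrates the inference pattern the abstract describes, not the authors' actual ED2LM decomposition: document encodings are precomputed offline, and at query time only a decoder pass over those static encodings is needed to score a candidate by the likelihood of generating the query from the document. It assumes a Hugging Face T5 checkpoint (`t5-small`) as a stand-in for the finetuned model; the function names `precompute_doc_encodings` and `score_query_given_doc` are illustrative.

```python
# Minimal sketch of document-to-query generation scoring with cached encoder outputs.
# Assumes transformers + sentencepiece are installed; not the paper's implementation.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # stand-in for the finetuned ED2LM model
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def precompute_doc_encodings(docs):
    """Offline step: run the encoder once per document and cache its outputs."""
    cache = []
    with torch.no_grad():
        for doc in docs:
            inputs = tokenizer(doc, return_tensors="pt", truncation=True)
            enc = model.get_encoder()(**inputs)
            cache.append((enc, inputs["attention_mask"]))
    return cache

def score_query_given_doc(query, enc, doc_mask):
    """Online step: decoder pass that scores P(query | cached document encoding)."""
    labels = tokenizer(query, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(encoder_outputs=enc, attention_mask=doc_mask, labels=labels)
    # Negative loss = mean per-token log-likelihood of the query; higher means more relevant.
    return -out.loss.item()

docs = ["dogs are loyal and friendly companions", "quantum computers manipulate qubits"]
cache = precompute_doc_encodings(docs)
query = "what makes dogs good pets"
scores = [score_query_given_doc(query, enc, mask) for enc, mask in cache]
print(sorted(zip(scores, docs), reverse=True))
```

In this setup the expensive encoder runs once per document ahead of time; only the lighter decoder runs per query-document pair, which is the source of the inference-time savings the paper reports.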
- Kai Hui (27 papers)
- Honglei Zhuang (31 papers)
- Tao Chen (397 papers)
- Zhen Qin (105 papers)
- Jing Lu (158 papers)
- Dara Bahri (30 papers)
- Ji Ma (72 papers)
- Jai Prakash Gupta (1 paper)
- Cicero Nogueira dos Santos (31 papers)
- Yi Tay (94 papers)
- Don Metzler (3 papers)