
EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval (2301.12005v2)

Published 27 Jan 2023 in cs.LG

Abstract: Large neural models (such as Transformers) achieve state-of-the-art performance for information retrieval (IR). In this paper, we aim to improve distillation methods that pave the way for the resource-efficient deployment of such models in practice. Inspired by our theoretical analysis of the teacher-student generalization gap for IR models, we propose a novel distillation approach that leverages the relative geometry among queries and documents learned by the large teacher model. Unlike existing teacher score-based distillation methods, our proposed approach employs embedding matching tasks to provide a stronger signal to align the representations of the teacher and student models. In addition, it utilizes query generation to explore the data manifold to reduce the discrepancies between the student and the teacher where training data is sparse. Furthermore, our analysis also motivates novel asymmetric architectures for student models which realize better embedding alignment without increasing online inference cost. On standard benchmarks like MSMARCO, we show that our approach successfully distills from both dual-encoder (DE) and cross-encoder (CE) teacher models to 1/10th-size asymmetric students that can retain 95-97% of the teacher performance.
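To make the core idea concrete, the following is a minimal PyTorch-style sketch of a distillation objective that combines standard score-based distillation with an embedding-matching term, in the spirit of the abstract. It is an illustration, not the paper's implementation: the function name `embed_distill_loss`, the weight `lambda_embed`, and the assumption that teacher and student embeddings share the same dimensionality (e.g., via a projection head) are all hypothetical choices made here for brevity.

```python
import torch
import torch.nn.functional as F

def embed_distill_loss(student_q, student_d, teacher_q, teacher_d,
                       labels, temperature=1.0, lambda_embed=1.0):
    """Illustrative distillation loss for a dual-encoder student.

    student_q, student_d: [batch, dim] student query / document embeddings.
    teacher_q, teacher_d: [batch, dim] teacher embeddings (assumed to be
        projected to the student's dimension beforehand).
    labels: [batch] index of the positive document for each query.
    """
    # In-batch similarity scores (dot product), as in standard DE training.
    s_scores = student_q @ student_d.t() / temperature
    t_scores = teacher_q @ teacher_d.t() / temperature

    # Supervised contrastive loss on the labelled positives.
    ce = F.cross_entropy(s_scores, labels)

    # Score-based distillation: match the student's score distribution
    # to the teacher's over the in-batch documents.
    kd = F.kl_div(F.log_softmax(s_scores, dim=-1),
                  F.softmax(t_scores, dim=-1),
                  reduction="batchmean")

    # Embedding-matching term: pull student embeddings toward the teacher's,
    # preserving the teacher's relative geometry of queries and documents.
    em = F.mse_loss(student_q, teacher_q) + F.mse_loss(student_d, teacher_d)

    return ce + kd + lambda_embed * em
```

In this sketch, the embedding-matching term is what distinguishes the approach from purely score-based distillation: rather than only matching query-document scores, the student is encouraged to reproduce the teacher's embedding space directly.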

Authors (9)
  1. Seungyeon Kim (22 papers)
  2. Ankit Singh Rawat (64 papers)
  3. Manzil Zaheer (89 papers)
  4. Sadeep Jayasumana (19 papers)
  5. Veeranjaneyulu Sadhanala (8 papers)
  6. Wittawat Jitkrittum (42 papers)
  7. Aditya Krishna Menon (56 papers)
  8. Rob Fergus (67 papers)
  9. Sanjiv Kumar (123 papers)
Citations (6)