An Analysis on Matching Mechanisms and Token Pruning for Late-interaction Models (2403.13291v1)

Published 20 Mar 2024 in cs.IR

Abstract: With the development of pre-trained LLMs, dense retrieval models have become promising alternatives to traditional retrieval models that rely on exact matching and sparse bag-of-words representations. Unlike most dense retrieval models, which use a bi-encoder to encode each query or document into a single dense vector, the recently proposed late-interaction multi-vector models (i.e., ColBERT and COIL) achieve state-of-the-art retrieval effectiveness by representing documents and queries with all of their token embeddings and modeling their relevance with a sum-of-max operation. However, these fine-grained representations may incur unacceptable storage overhead for practical search systems. In this study, we systematically analyze the matching mechanism of these late-interaction models and show that the sum-of-max operation relies heavily on co-occurrence signals and on certain important words in the document. Based on these findings, we propose several simple document pruning methods to reduce the storage overhead and compare the effectiveness of different pruning methods across late-interaction models. We also leverage query pruning methods to further reduce retrieval latency. Extensive experiments on both in-domain and out-of-domain datasets show that some of these pruning methods can significantly improve the efficiency of late-interaction models without substantially hurting their retrieval effectiveness.
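The sum-of-max scoring and the token pruning it motivates are simple enough to sketch in a few lines. Below is a minimal NumPy sketch, not the paper's implementation: the function names (sum_of_max, prune_doc_tokens) are illustrative, and the per-token importance scores are a placeholder for whichever signal a concrete pruning method would supply (e.g., IDF or attention weights).

```python
# Minimal sketch (assumptions, not the paper's code) of ColBERT-style
# sum-of-max ("MaxSim") scoring and a simple document-token pruning step.
import numpy as np

def sum_of_max(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """Score = sum over query tokens of the max similarity to any doc token.

    query_embs: (n_q, d) L2-normalized query token embeddings
    doc_embs:   (n_d, d) L2-normalized document token embeddings
    """
    sim = query_embs @ doc_embs.T        # (n_q, n_d) token-level similarities
    return float(sim.max(axis=1).sum())  # max over doc tokens, sum over query

def prune_doc_tokens(doc_embs: np.ndarray, scores: np.ndarray, keep: int) -> np.ndarray:
    """Keep only the `keep` highest-scoring document token embeddings.

    `scores` stands in for any per-token importance signal; the ranking
    criterion is the pruning method, the mechanics are the same for all.
    """
    top = np.argsort(scores)[-keep:]
    return doc_embs[top]

# Toy usage: 4 query tokens, 8 doc tokens, 16-dim embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 16)); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(8, 16)); d /= np.linalg.norm(d, axis=1, keepdims=True)
importance = rng.random(8)  # placeholder per-token importance scores
d_pruned = prune_doc_tokens(d, importance, keep=4)
print(sum_of_max(q, d), sum_of_max(q, d_pruned))
```

Pruning doc tokens can only lower the per-query-token maxima, so the score after pruning is bounded above by the full score; the paper's question is how much effectiveness is lost when half or more of the token embeddings are dropped from the index.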

Authors (8)
  1. Qi Liu (485 papers)
  2. Gang Guo (31 papers)
  3. Jiaxin Mao (47 papers)
  4. Zhicheng Dou (113 papers)
  5. Ji-Rong Wen (299 papers)
  6. Hao Jiang (230 papers)
  7. Xinyu Zhang (296 papers)
  8. Zhao Cao (36 papers)
Citations (2)
