Dense Sparse Retrieval: Using Sparse Language Models for Inference Efficient Dense Retrieval (2304.00114v1)

Published 31 Mar 2023 in cs.IR, cs.AI, and cs.CL

Abstract: Vector-based retrieval systems have become a staple for academic and industrial search applications because they provide a simple and scalable way of extending search to leverage contextual representations for documents and queries. Because these vector-based systems rely on contextual language models, their usage commonly requires GPUs, which can be expensive and difficult to manage. Given recent advances in introducing sparsity into language models for improved inference efficiency, in this paper we study how sparse language models can be used for dense retrieval to improve inference efficiency. Using the popular retrieval library Tevatron and the MSMARCO, NQ, and TriviaQA datasets, we find that sparse language models can be used as direct replacements with little to no drop in accuracy and up to 4.3x improved inference speeds.
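
To make the "direct replacement" idea concrete, below is a minimal sketch of a bi-encoder dense retriever in which the encoder checkpoint is a swappable parameter. The checkpoint name, pooling choice, and example texts are illustrative assumptions, not the paper's exact configuration (the paper itself uses the Tevatron library and its own sparse checkpoints).

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint: any BERT-style encoder works here. Swapping in a
# pruned/sparse variant of the same architecture is the substitution the
# paper studies; the retrieval code below does not change.
ENCODER_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(ENCODER_NAME)
encoder = AutoModel.from_pretrained(ENCODER_NAME)
encoder.eval()

@torch.no_grad()
def encode(texts):
    """Encode a list of strings into dense vectors via the [CLS] token."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state
    return hidden[:, 0]  # [CLS] pooling, one common convention

# Queries and documents share the same encoder in this sketch (a Siamese
# bi-encoder); relevance is the inner product of the two dense vectors.
queries = ["who wrote the origin of species"]
docs = [
    "On the Origin of Species was written by Charles Darwin.",
    "Dense retrieval encodes text into vectors for nearest-neighbor search.",
]

scores = encode(queries) @ encode(docs).T  # shape: (num_queries, num_docs)
print(scores)                              # higher score = more relevant
print("best doc index:", scores.argmax(dim=-1).item())
```

The point of the sketch is that the speedups reported in the abstract come from the encoder itself being sparsified, so the surrounding retrieval pipeline is untouched; that is what makes sparse models usable as drop-in replacements.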
