SPLADE-doc: Sparse Retrieval & Expansion Model

  • The paper introduces SPLADE-doc, a transformer-based model that generates high-dimensional sparse document representations via term expansion and log-saturation.
  • SPLADE-doc applies semantic expansion by mapping contextual tokens to an extensive vocabulary and aggregates token scores using max-pooling, enhancing retrieval accuracy.
  • It utilizes hybrid thresholding, DF-FLOPS regularization, and pruning strategies to achieve remarkable cost-efficiency and maintain production-level latency.

SPLADE-doc is a transformer-based learned sparse retrieval model that encodes documents (and optionally queries) as high-dimensional sparse vectors of “term expansion” weights, optimized to enable efficient matching and ranking via inverted index structures. Employing explicit sparsity regularization, log-saturation, and token-wise projection onto the full vocabulary, SPLADE-doc advances the effectiveness and cost-efficiency of first-stage retrieval. It supports robust expansion, interpretability, and can be adapted for production-level latency and scale.

1. Model Architecture: Expansion, Encoding, and Sparsification

SPLADE-doc is constructed atop a transformer backbone (typically BERT-base or DistilBERT) that processes each document $d$ into sequence embeddings $h_i \in \mathbb{R}^H$, where $h_i = \mathrm{BERT}(t)_i$, $i = 1, \ldots, n$. These hidden states are projected via a linear layer, GeLU activation, and LayerNorm to obtain per-token expansion vectors:

$v_i = \mathrm{LayerNorm}(\mathrm{GeLU}(W_{\rm tr}\, h_i)), \quad v_i \in \mathbb{R}^H$

Each $v_i$ is then mapped onto the complete vocabulary $V$ ($|V| \approx 30$K) using the input embedding $E_j$ and learned bias $b_j$:

$w_{ij} = v_i^\top E_j + b_j$

Activations are made nonnegative using ReLU, and a log-saturation is applied per position, followed by max-pooling over positions:

$w_j^d = \max_{i} \log(1 + \mathrm{ReLU}(w_{ij}))$

This max-pooling produces a highly sparse expansion vector per document, $\mathbf{d} = [w_1, \ldots, w_{|V|}]$, where most entries are zero. All nonzero $(j, w_j^d)$ pairs are indexed for retrieval (Formal et al., 2021, Qiao et al., 2023, Lassance et al., 11 Mar 2024).
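
The following is a minimal sketch of this encoding path, assuming a Hugging Face masked-language-model checkpoint (here `distilbert-base-uncased`, used purely for illustration rather than the released SPLADE-doc weights); the MLM head stands in for the $W_{\rm tr}$/LayerNorm/vocabulary projection described above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative backbone; the released SPLADE-doc checkpoints may differ.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def encode_document(text: str) -> torch.Tensor:
    """Return a |V|-dimensional expansion vector for one document."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        # MLM head output: (1, n_tokens, |V|) logits w_ij over the vocabulary.
        logits = model(**inputs).logits
    # Non-negative, log-saturated activations, masked to real (non-padding) tokens.
    sat = torch.log1p(torch.relu(logits)) * inputs["attention_mask"].unsqueeze(-1)
    # Max-pool over token positions -> one weight w_j^d per vocabulary term.
    doc_vec, _ = sat.max(dim=1)
    return doc_vec.squeeze(0)   # shape (|V|,); sparse after regularized training

vec = encode_document("SPLADE encodes documents as sparse term-expansion vectors.")
print(int((vec > 0).sum()), "active vocabulary terms")
```

Only the nonzero $(j, w_j^d)$ pairs of such a vector would be stored in the inverted index.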

2. Semantic Expansion and Vocabulary-Driven Representations

SPLADE-doc’s “expansion” mechanism allows each input token to “activate” arbitrary vocabulary terms—not just those present in the original text. The cross-vocab projection enables semantic mapping:

  • Contextual tokens can assign impact scores to related, abstract, or synonymous terms.
  • The aggregation (max or sum) incorporates both literal term matching and semantic expansion.
  • Resulting document vectors encode re-weighted observed words and predictive expansions (Formal et al., 2021, Mackenzie et al., 2023).

Empirical studies have confirmed that the vocabulary need not consist of semantically meaningful words; SPLADE can utilize arbitrary or “latent” tokens for effective retrieval if enough sparse dimensions are available (Mackenzie et al., 2023).
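
As a small illustration of this behavior (reusing the hypothetical `encode_document` and `tokenizer` from the sketch in Section 1), the non-zero dimensions of a document vector can be mapped back to vocabulary strings to inspect which terms were activated:

```python
def top_expansion_terms(doc_vec, k: int = 10):
    """Map the k highest-weighted vocabulary dimensions back to readable tokens."""
    weights, ids = doc_vec.topk(k)
    terms = tokenizer.convert_ids_to_tokens(ids.tolist())
    return [(t, round(w.item(), 3)) for t, w in zip(terms, weights) if w > 0]

print(top_expansion_terms(encode_document("splade sparse retrieval"), k=10))
# With a trained SPLADE checkpoint, this typically surfaces both the observed
# words and semantically related expansion terms.
```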

3. Training Objectives and Regularization

SPLADE-doc employs a combination of ranking loss (in-batch negatives), sparsity regularization, and (for some variants) knowledge distillation. The foundational training losses are:

  • Contrastive ranking loss (with in-batch negatives and occasionally BM25 hard negatives):

$\mathcal{L}_{\rm rank} = -\frac{1}{B}\sum_{i=1}^B \log \left[ \frac{e^{s(q_i, d_i^+)}}{e^{s(q_i, d_i^+)} + e^{s(q_i, d_i^-)} + \sum_{j\neq i} e^{s(q_i, d_j^+)}} \right]$

where $s(q,d) = \sum_{j\in q} w_j^d$, with $q$ as the bag-of-words query.

  • Sparsity regularization
    • L1 penalty on expansion vectors (applied to queries and/or docs): $\mathcal{L}_{\ell_1} = \sum_j w_j^d$
    • FLOPS regularization for posting list control:

    $\mathcal{L}_{\rm FLOPS} = \sum_{j=1}^{|V|} \left( \frac{1}{N} \sum_{i=1}^N w_j^{(d_i)} \right)^2$

    FLOPS discourages uniformly high term activation, producing balanced term/document postings.

  • Distillation losses (optional, as in SPLADE-v3 or DistilSPLADE): the sparse model is additionally trained to match relevance scores from a cross-encoder teacher (e.g., via MarginMSE or KL-divergence objectives).

The total loss combines ranking and regularization terms, with $\lambda$ controlling sparsity strength ($\lambda_q$ for queries, $\lambda_d$ for documents), enabling a direct effectiveness–efficiency trade-off.
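
A schematic of how these terms can be combined in PyTorch, assuming batched query and document vectors from (hypothetical) query/document encoders; the $\lambda$ values below are placeholders rather than published settings.

```python
import torch
import torch.nn.functional as F

def flops_reg(reps: torch.Tensor) -> torch.Tensor:
    """FLOPS regularizer: squared mean activation per vocabulary dimension, summed."""
    return (reps.mean(dim=0) ** 2).sum()

def splade_loss(q_reps, d_pos_reps, d_neg_reps, lambda_q=1e-3, lambda_d=1e-4):
    """In-batch contrastive ranking loss plus FLOPS sparsity regularization.

    q_reps, d_pos_reps, d_neg_reps: (B, |V|) sparse expansion vectors for the
    queries, their positive documents, and one hard negative per query.
    """
    scores_pos = q_reps @ d_pos_reps.T                # (B, B): s(q_i, d_j^+)
    scores_neg = (q_reps * d_neg_reps).sum(dim=-1)    # (B,):   s(q_i, d_i^-)
    logits = torch.cat([scores_pos, scores_neg.unsqueeze(1)], dim=1)  # (B, B+1)
    targets = torch.arange(q_reps.size(0), device=q_reps.device)      # own positive
    rank_loss = F.cross_entropy(logits, targets)

    reg = lambda_q * flops_reg(q_reps) + lambda_d * flops_reg(d_pos_reps)
    return rank_loss + reg
```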

4. Sparsification Schemes and Efficiency Optimization

Production SPLADE-doc deployments require aggressive sparsification strategies:

  • Hybrid thresholding (Qiao et al., 2023): Learn a soft threshold $t_Q$ for queries and a hard threshold $t_D$ for documents to prune low-impact weights, using smooth approximations (sigmoid gating) for trainability. Threshold regularizers further push up $t_Q, t_D$ for greater sparsity and reduced latency.
  • DF-FLOPS regularization (Porco et al., 21 May 2025): Penalize activation of high document-frequency (DF) terms rather than global density. A dynamic weighting scheme discourages common trivial terms, decreasing posting list lengths and end-to-end retrieval latency, while retaining salient high-DF terms if justified by context. DF weights are updated periodically to adapt to model evolution.
  • Pruning strategies (Won et al., 27 Nov 2025):
    • Document-centric $k_d$ pruning: keep the top-$k_d$ document term weights.
    • Top-$k$ query selection: limit query expansions to the $k_q$ highest-impact terms.
    • Boolean threshold filtering: require a minimum fraction $\tau$ of overlapping expanded terms in retrieval, dramatically lowering latency at some cost in quality.

Empirical results show that document-centric pruning often benefits both speed and recall, whereas aggressive query term pruning must be managed carefully to avoid semantic loss; combining strategies (e.g., $k_q = 7$, $k_d = 10$, $\tau = 20\%$–$40\%$) yields near-BM25 latency with substantial semantic gains.
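
A minimal sketch of document-centric top-$k_d$ pruning and top-$k_q$ query selection as described above (pure PyTorch; function names and defaults are illustrative, and the cited pipelines may implement these inside the index rather than on dense vectors):

```python
import torch

def prune_doc_vector(doc_vec: torch.Tensor, k_d: int = 10) -> torch.Tensor:
    """Document-centric pruning: keep only the k_d largest expansion weights."""
    pruned = torch.zeros_like(doc_vec)
    weights, ids = doc_vec.topk(min(k_d, doc_vec.numel()))
    pruned[ids] = weights
    return pruned

def select_query_terms(query_vec: torch.Tensor, k_q: int = 7) -> list:
    """Top-k query selection: keep the k_q highest-impact query term ids."""
    weights, ids = query_vec.topk(min(k_q, query_vec.numel()))
    return [int(i) for i, w in zip(ids, weights) if w > 0]
```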

5. Indexing, Inference, and System Integration

Documents are encoded via SPLADE-doc into sparse expansion-weight dictionaries, which are indexed in standard inverted posting list engines (e.g., Apache Solr v9, PISA, Pyserini/Lucene). Queries are represented as literal token sets (or optionally expanded by a query encoder), and retrieval proceeds by:

  • Fetching document posting lists for query surface tokens.
  • Accumulating document scores by sum of document expansion weights on matched query terms:

$s(q, d) = \sum_{j \in q} w_j^d$

  • Returning the top-$k$ scored documents.
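
A toy version of this retrieval loop, with plain Python dictionaries standing in for a production engine such as PISA or Lucene (purely illustrative):

```python
from collections import defaultdict

# Inverted index: term id -> postings list of (doc id, expansion weight w_j^d).
index = defaultdict(list)

def add_document(doc_id: str, doc_terms: dict) -> None:
    """Index a document's non-zero (term id, weight) pairs."""
    for term_id, weight in doc_terms.items():
        index[term_id].append((doc_id, weight))

def search(query_terms, k: int = 10):
    """Score documents as the sum of their expansion weights on matched query terms."""
    scores = defaultdict(float)
    for term_id in query_terms:
        for doc_id, weight in index.get(term_id, []):
            scores[doc_id] += weight
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

add_document("d1", {101: 1.2, 205: 0.4})
add_document("d2", {101: 0.3, 777: 1.1})
print(search({101, 777}))   # d2 (0.3 + 1.1) ranks above d1 (1.2)
```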

Because SPLADE-doc vectors are highly sparse, query latency on large corpora is competitive with classic bag-of-words systems (BM25), with average query times on benchmark datasets often in the $10$–$100$ ms range, index sizes reduced by $2$–$3\times$ via thresholding, and negligible loss in retrieval metrics (Qiao et al., 2023, Porco et al., 21 May 2025).

6. Empirical Performance and Production Trade-offs

On MS MARCO dev and TREC DL 2019, SPLADE-doc substantially outperforms BM25 and prior lexical expansion models:

Model           MRR@10 (dev)   R@1000 (dev)   NDCG@10 (DL19)   R@1000 (DL19)
BM25            0.184          0.853          0.506            0.745
doc2query-T5    0.277          0.947          0.642            0.827
SPLADE-doc      0.322          0.946          0.667            0.747

Expanded-SPLADE delivers effective retrieval on billion-scale corpora, narrowing latency gaps to BM25 and demonstrating superior semantic matching for complex queries. Pruning, thresholding, and DF-FLOPS regularization jointly facilitate matching BM25-level latency in large-scale engines with major quality gains (e.g., S@10, nDCG@10) (Won et al., 27 Nov 2025, Porco et al., 21 May 2025).

A notable finding is that SPLADE-doc’s performance is robust to aggressive vocabulary constraints; even sparse models restricted to stopwords or random tokens exceed BM25, as activation slots encode latent semantic signals (Mackenzie et al., 2023).

7. Advanced SPLADE-doc Variants and Future Directions

Recent developments include SPLADE-v3 (Lassance et al., 11 Mar 2024), which achieves higher retrieval metrics via superior distillation, multi-hard-negative mining, and advanced optimization. SPLADE-doc (document-only variant) supports pure bag-of-words querying (no query encoder, zero GPU at query time), suitable for latency-critical production. Cross-encoder re-ranking of SPLADE-v3 results yields marginal additional gains.

The latest adaptation—Echo-Mistral-SPLADE (Doshi et al., 20 Aug 2024)—utilizes a causal decoder-only LLM backbone with echo embeddings for sparse expansion, surpassing previous SPLADE variants in nDCG@10 on BEIR while maintaining efficient retrieval.

Continued research is focused on improved regularization (DF-FLOPS), sophisticated sparsification (hybrid thresholding, joint regularization), and expansions to broader vocabularies, balancing interpretability and semantic capacity. Practitioners should carefully tune regularization strengths, monitor posting list statistics, and consider tailored pruning to suit deployment and scaling requirements (Porco et al., 21 May 2025, Qiao et al., 2023).


SPLADE-doc stands as a highly effective, efficient, and flexible paradigm for learned sparse lexical retrieval. Its architecture allows for deep semantic expansion, tunable sparsity, and integration into standard search engines, supporting both academic research and real-world search deployment (Formal et al., 2021, Formal et al., 2021, Lassance et al., 11 Mar 2024).
