
Document-Centric Pruning

Updated 3 December 2025
  • Document-centric pruning is a set of techniques that remove redundant components from document representations to optimize retrieval efficiency.
  • It employs methods such as top-K term selection, PCA reduction, and attention-guided token or patch filtering to balance storage, latency, and quality.
  • The approach delivers substantial memory and computation savings with minimal accuracy loss, making it ideal for web-scale and resource-constrained applications.

Document-centric pruning refers to a family of static or dynamic techniques that aggressively reduce the size and computational footprint of document representations in retrieval and document understanding models, by removing less useful or redundant components at the document level. These approaches, employed for both neural and symbolic indexes, balance storage, latency, and retrieval quality—a crucial advantage for web-scale information retrieval, vision-language pipelines, and context-efficient retrieval-augmented generation. Document-centric pruning encompasses static, per-document index trimming in sparse neural retrievers (e.g., SPLADE, DeepImpact, ColBERT), adaptive patch or token selection in vision-LLMs for OCR and VQA, and static projection of irrelevant subtrees in semi-structured data such as XML.

1. Static Document-Centric Pruning in Sparse and Dense Retrieval

Sparse neural retrievers, such as SPLADE, DeepImpact, and uniCOIL, represent each document as a sparse vector of term importances. Document-centric pruning in this context is a one-time, document-level offline operation that retains only the top-K most salient terms per document. Given a term-weight map $w^d : V \to \mathbb{R}_+$ for document $d$, where $V$ is the vocabulary, one sorts all nonzero terms by descending weight and keeps the top $K$; all others are zeroed. This operation is formalized as:

$$\hat{w}_{d,i} = \begin{cases} w_{d,i}, & w_{d,i} \geq T_d \\ 0, & \text{otherwise} \end{cases}$$

with $T_d$ the $K$th largest weight in $d$ (or $0$ if $|d| < K$). The pruned posting list is used for subsequent retrieval (Won et al., 27 Nov 2025, Lassance et al., 2023).
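As an illustration, the following sketch applies this top-$K$ rule to a document given as a term-to-weight dictionary; the function name and example data are illustrative, not taken from the cited systems.

```python
def prune_top_k(term_weights: dict[str, float], k: int) -> dict[str, float]:
    """Keep only the K highest-weighted terms of a document's sparse vector.

    Implements the rule above: T_d is the K-th largest weight, so every
    retained term satisfies w_{d,i} >= T_d; if |d| < K, nothing is pruned.
    """
    if len(term_weights) <= k:
        return dict(term_weights)
    top = sorted(term_weights.items(), key=lambda kv: kv[1], reverse=True)[:k]
    return dict(top)

# Example: keep the 3 strongest terms of a SPLADE-style expansion.
doc = {"retrieval": 2.7, "pruning": 2.1, "index": 1.4, "the": 0.2, "of": 0.1}
print(prune_top_k(doc, k=3))  # {'retrieval': 2.7, 'pruning': 2.1, 'index': 1.4}
```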

A closely related formulation is used in late-interaction models (ColBERT, COIL), which index each document as a collection of token or patch vectors. Here, document-centric pruning discards a user-defined fraction $\alpha$ of tokens per document, where selection is performed via heuristics (first-$k$, top-IDF), learned attention, or global statistical criteria (Lassance et al., 2021, Liu et al., 20 Mar 2024). In dense retrieval, static PCA-based dimension pruning provides similar storage reduction: each document embedding is projected offline onto the $k$ leading principal components, yielding $D' = D V_k$ for the global embedding matrix $D$, where $k$ is chosen to preserve $\gtrsim 90\%$ of total variance (Siciliano et al., 13 Dec 2024).
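The PCA variant admits a compact sketch. The following NumPy code is a hedged reconstruction under the stated variance criterion, not the cited authors' implementation; it derives the projection $V_k$ from a corpus embedding matrix and applies $D' = D V_k$:

```python
import numpy as np

def pca_prune(D: np.ndarray, var_target: float = 0.90):
    """Project document embeddings D (n x d) onto the k leading principal
    components, with k chosen to retain >= var_target of total variance."""
    D_c = D - D.mean(axis=0)                      # center the embeddings
    _, s, Vt = np.linalg.svd(D_c, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(explained, var_target)) + 1
    V_k = Vt[:k].T                                # d x k projection matrix
    return D_c @ V_k, V_k                         # D' = D V_k, plus V_k itself

# Queries must be projected with the same V_k before inner-product scoring.
```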

Typical tradeoffs are summarized in the following table:

Model / Method | Pruning Level | Storage | Retrieval Drop | Speedup
Sparse (SPLADE) | top-$K = 64$ | −50% | ≤2% nDCG@10 loss | 2–4×
Late-interaction (ColBERT) | $\alpha = 0.75$ | <1 pp MRR@10 loss | −25% | 1.2–1.4×
Dense (PCA) | $k = d/2$ | −50% | ≤5% nDCG@10 loss | ∼2×

2. Dynamic, Attention-Guided Pruning in Vision-Language Retrieval

For document images and multi-modal retrieval, document-centric pruning operates at the patch or region level using model-internal attention. In "Hierarchical Patch Compression for ColPali," dynamic query-time pruning computes VLM attention weights $\alpha_i$ for each image patch $i$. The $\lceil pM \rceil$ patches with highest attention ($p$ being the retention ratio, $M$ the number of patches) are selected, discarding the remainder. The pruned set $S_p$ is defined by:

$$S_p = \arg\max_{|S| = \lceil pM \rceil} \sum_{i \in S} \alpha_i$$

Scoring restricts the late-interaction computation to these patches, reducing complexity from $O(N_q M)$ to $O(N_q \lceil pM \rceil)$. For $p = 0.6$, one observes a 40% reduction in computation with only 1% nDCG@10 loss (Bach, 19 Jun 2025). This same design pattern is used for token pruning in VLM document OCR: lightweight foreground–background classifiers filter out non-text patches, followed by index-preserving selection and possible max-pooling refinement to recover fragmented text lines. Critical to effectiveness is the preservation of original grid indices for spatial structure (Son et al., 8 Sep 2025).
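A minimal sketch of this selection step, assuming `attn` holds the per-patch attention weights $\alpha_i$ and that original grid indices must survive pruning (all names are illustrative):

```python
import numpy as np

def prune_patches(patch_vecs: np.ndarray, attn: np.ndarray, p: float = 0.6):
    """Keep the ceil(p*M) patches with the highest attention weights.

    Returns the kept vectors together with their original grid indices,
    since downstream heads rely on spatial positions being preserved.
    """
    M = attn.shape[0]
    k = int(np.ceil(p * M))
    keep = np.argsort(attn)[-k:]   # indices of the k largest alpha_i
    keep.sort()                    # restore original (grid) order
    return patch_vecs[keep], keep

# Late interaction then runs over ceil(pM) instead of M patches:
# score(q, d) = sum_j max_{i in kept} q_j . d_i
```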

3. Adaptive, Offline Pruning via Intra-Document Attention

Adaptive document-centric pruning can be query-agnostic and performed entirely offline. "DocPruner" uses the attention distribution between each patch and a global (end-of-sequence, EOS) token obtained from the final transformer layer. Each patch $d_j$ receives an importance score $I(d_j)$, the head-averaged attention from EOS. The document-specific threshold

$$\tau_d = \mu_d + k\,\sigma_d$$

is used, with $\mu_d$ and $\sigma_d$ the mean and standard deviation of patch attentions, and $k$ a sensitivity hyperparameter. Patches scoring $I(d_j) \leq \tau_d$ are pruned. This approach yields storage reductions of 44–62% with $<1.5\%$ nDCG@5 degradation across diverse visual-document benchmarks (Yan et al., 28 Sep 2025). Comparisons to static thresholding, fixed-ratio pruning, and noise filtering show substantial superiority for DocPruner, due to its document-adaptive threshold selection and model-agnostic design.
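In code, the adaptive rule reduces to a few lines. The sketch below follows the stated thresholding, with `k` the sensitivity hyperparameter; the variable names are illustrative rather than taken from DocPruner:

```python
import numpy as np

def adaptive_prune(patch_vecs: np.ndarray, eos_attn: np.ndarray,
                   k: float = 0.5):
    """Offline, query-agnostic pruning with a document-adaptive threshold.

    eos_attn[j] plays the role of I(d_j), the head-averaged attention from
    the EOS token to patch j; patches with I(d_j) <= mu_d + k * sigma_d
    are dropped.
    """
    tau_d = eos_attn.mean() + k * eos_attn.std()
    keep = eos_attn > tau_d
    return patch_vecs[keep], np.flatnonzero(keep)
```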

4. Lossless and Near-Lossless Pruning in Late-Interaction Models

A principled extension is lossless document-centric pruning, wherein only those document vectors that are guaranteed never to affect any query score under the (ReLU-max) ColBERT similarity are removed. This is formalized as finding vectors $d^-$ that are locally dominated by the other document vectors: for every query vector $q$, either $q \cdot d^- \leq 0$ or $\exists\, d^+ : q \cdot d^+ > q \cdot d^-$. The characterization is accomplished via linear programming, and training-time regularizers (nuclear norm, token similarity, $L_1$ norm) are employed to increase redundancy among token vectors, thus maximizing the prunable proportion. Empirically, up to 68% of document tokens can be removed with $<1$ point effectiveness loss in MRR@10 or nDCG@10 across in- and out-of-domain settings (Zong et al., 17 Apr 2025).
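The exact test requires solving a linear program per candidate vector. As a hedged approximation, one can search for counterexamples by sampling query directions: if some sampled $q$ makes $d^-$ the strict ReLU-max winner, $d^-$ is certainly not prunable, while finding no counterexample is only suggestive. The sketch below is an illustration of the domination condition, not the LP procedure of the cited paper:

```python
import numpy as np

def maybe_dominated(d_minus: np.ndarray, others: np.ndarray,
                    n_samples: int = 10_000, seed: int = 0) -> bool:
    """Monte-Carlo search for a query on which d_minus would win the max.

    A counterexample is a q with q.d_minus > 0 and q.d_minus >= q.d_plus
    for every other vector d_plus. Returning True means no counterexample
    was found among the samples (the exact certificate needs an LP).
    """
    rng = np.random.default_rng(seed)
    q = rng.standard_normal((n_samples, d_minus.shape[0]))
    q /= np.linalg.norm(q, axis=1, keepdims=True)   # unit query directions
    cand = q @ d_minus                              # candidate's scores
    best_other = (q @ others.T).max(axis=1)         # best competing scores
    wins = (cand > 0) & (cand >= best_other)
    return not bool(wins.any())
```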

5. Practical Tradeoffs, Guidelines, and Boundary Conditions

Empirical evaluation across the literature consistently highlights a Pareto tradeoff between memory/compute reduction and retrieval effectiveness. For sparse neural indexes, keeping the top $K = 32$–$64$ tokens/terms per document enables $2$–$4\times$ first-stage latency reductions with ≤2–8% relative loss. In vision and OCR tasks, pruning 40–55% of input tokens/patches using attention masks, token classifiers, or hybrid morphological postprocessing enables a $2\times$–$3\times$ reduction in total FLOPs; conversely, failure to preserve index ordering or overaggressive pruning leads to catastrophic accuracy drops (Son et al., 8 Sep 2025). In variable-length transformers, dynamically adjusting pruning thresholds (per-document, sentence-level, or patch-entropy-based) achieves greater robustness across diverse domains. Dynamic, context-aware pruning can be seamlessly unified with reranking heads, as in the Provence Q&A context pruner (Chirkova et al., 27 Jan 2025).

6. Document Projection in Structured and Semi-Structured Data

Document-centric pruning also arises formally in semi-structured systems. In XML engines, type-based document projection computes, by static analysis, the minimal set of element types or schema rules traversed by a given XPath or XQuery, and prunes the input document by removing all unreachable subtrees before execution. This streaming, bufferless algorithm is both sound and (under single-type, local schemas) complete; for most queries it reduces documents to 1–5% of their original size, with 3–20$\times$ query-time speedups (Benzaken et al., 2011).
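A toy, in-memory illustration of the projection step follows. The cited algorithm is streaming and bufferless and derives the reachable type set from the schema and the query; here that set is simply hard-coded:

```python
import xml.etree.ElementTree as ET

def project(elem: ET.Element, keep_tags: set[str]) -> bool:
    """Remove, in place, every subtree whose types are unreachable.

    Returns True if `elem` must be kept, i.e. its own tag is needed or
    some descendant's is.
    """
    needed = [c for c in list(elem) if project(c, keep_tags)]
    for c in list(elem):
        if c not in needed:
            elem.remove(c)
    return elem.tag in keep_tags or bool(needed)

# For a query like //book/title, static analysis might yield these types:
doc = ET.fromstring("<lib><book><title>T</title><blob>x</blob></book></lib>")
project(doc, {"lib", "book", "title"})
print(ET.tostring(doc).decode())  # <lib><book><title>T</title></book></lib>
```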

7. Limitations, Challenges, and Emerging Directions

Document-centric pruning—whether based on static salience, model-internal attention, or linear redundancy—has robustly reduced storage and computation across a spectrum of retrieval and understanding models. Remaining challenges include corner cases with uniform attention (limiting savings), adaptation to very long documents, query distribution drift (static pruners may lose rare but critical details), and efficient joint optimization of inter-token/patch diversity. Future work focuses on differentiable, in-training pruning, hybrid dynamic-static schemes, and extending information-of-interest–driven pruning to audio, video, and arbitrary graph structures (Yan et al., 28 Sep 2025, Zong et al., 17 Apr 2025, Chirkova et al., 27 Jan 2025).


Document-centric pruning, as articulated across text, vision, and semi-structured modalities, offers a tractable, model-agnostic control for compressing document representations. It enables scalable, efficient retrieval with tightly bounded effectiveness loss, and is foundational for high-throughput, resource-limited, and retrieval-augmented applications.
