
Query-Agnostic Pruning in Model Compression

Updated 19 November 2025
  • Query-agnostic pruning is a compression technique that uses task-independent, global criteria to remove less important parameters while preserving broad model utility.
  • It employs methods such as global thresholding, activation-based masking, and latent divergence scoring to optimize efficiency without retraining for specific tasks.
  • Empirical evidence shows that these techniques achieve substantial sparsity and speedups (up to $16.6\times$ in sparse retrieval) with minimal performance loss, making them well suited to scalable systems.

Query-agnostic pruning refers to a class of model or index compression methods that operate independently of any particular query, task, or downstream objective. Instead of responding to the demands of a specific input distribution, these methods apply generic, globally computed criteria (often aggregate measures of importance or quality) to filter or remove parameters, representations, or data elements. The fundamental motivation for query-agnostic pruning lies in its ability to reduce resource consumption (memory, computation, latency) while maintaining broad utility, transferability, and core model performance across a spectrum of unknown future tasks.

1. Foundational Principles and Formal Definitions

Query-agnostic pruning diverges from task-specific or query-aware pruning by its reliance on global, non-adaptive criteria. In the context of neural models and retrieval systems, the approach is characterized by applying a universal threshold or importance signal computed independently of the end-task. Consider the following formalizations:

  • Static Index Pruning (Retrieval): Given a posting set $P = \{(t,d)\}$ for term $t$ in document $d$, and weights $w_{t,d}$, query-agnostic pruning removes all entries below a global threshold $\tau_A$:

$$P' = \{ (t,d) \in P : w_{t,d} \geq \tau_A \}$$

where $\tau_A$ is chosen to meet a target resource budget or percentile cutoff (Lassance et al., 2023).

  • Model Pruning (Neural Networks): For a neural network $f$ with parameters $\Theta$, query-agnostic pruning seeks a single mask $m$ that optimizes a generic criterion $C_g$ over a calibration set $D_g$:

$$m^* = \arg\max_m C_g(f(\Theta \odot m), D_g) \quad \text{s.t.} \quad \|m\|_0 = k$$

where $C_g$ is agnostic to any downstream task and $D_g$ is not drawn from specific target domains (Farina et al., 8 Apr 2024).

The core purpose is to preserve transferable representations and keep generic effectiveness high without repeated re-optimization for each new query or application scenario; the sketch below illustrates both formalizations.
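The following minimal sketch makes the two formalizations concrete, using a toy posting set, a percentile-derived global threshold $\tau_A$, and plain weight magnitude as a stand-in for the generic criterion $C_g$; all data and variable names are illustrative.

```python
import numpy as np

# --- Static index pruning: drop postings whose weight w_{t,d} falls below
# --- a global threshold tau_A chosen as a percentile over ALL weights.
postings = [("car", 0, 3.2), ("car", 1, 0.4), ("tire", 0, 1.1),
            ("tire", 2, 0.2), ("road", 1, 2.7)]  # (term, doc_id, w_{t,d})
weights = np.array([w for _, _, w in postings])
tau_A = np.quantile(weights, 0.40)               # prune the bottom 40% of entries
pruned_index = [(t, d, w) for t, d, w in postings if w >= tau_A]

# --- Model pruning: one query-agnostic mask m with ||m||_0 = k, using
# --- weight magnitude as a simple generic criterion C_g (no task data).
theta = np.random.randn(1000)           # stand-in for model parameters
k = 200                                 # number of parameters to keep
keep = np.argsort(np.abs(theta))[-k:]   # indices of the k largest magnitudes
m = np.zeros_like(theta)
m[keep] = 1.0
theta_pruned = theta * m                # Theta ⊙ m
```

The percentile passed to np.quantile is the single knob that fixes the resource budget; the procedure never consults a query or task, which is precisely what makes it query-agnostic.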

2. Methodologies and Algorithms

Techniques in query-agnostic pruning are typically data-free or employ minimal, generic calibration data, eschewing gradient-based, adaptive, or label-driven approaches. Representative methodologies include:

  • Global Thresholding (Retrieval): Gather all entry weights and prune by percentile or absolute value, with no differentiation by term or document (Lassance et al., 2023).
  • Passage Quality Estimation: Learn a scalar function $Q(p)$ predicting the likelihood that passage $p$ will be relevant to any future query; prune passages below an $\alpha$-quantile threshold, formalized as

$$P' = \{ p \in P : Q(p) \geq t \}$$

where $t$ is chosen such that $|P'| = (1-\alpha)|P|$ (Chang et al., 16 Jul 2024).

  • Activation and Magnitude-Based Masking: In neural models, parameter importance is computed from weight magnitude and calibration-set neuron activations, as in MULTIFLOW, where

$$S(\theta_{lr}) = S(l) \cdot |\theta_{lr}| \cdot S(r)$$

with $S(l), S(r)$ derived from average activation norms (Farina et al., 8 Apr 2024); a sketch of this scoring follows the list below.

  • Latent-Divergence Operator Scoring (Diffusion): LD-Pruner quantifies operator contribution by measuring changes in latent distribution moments upon operator removal:

$$S_\text{op} = \| \text{avg}_\text{orig} - \text{avg}_\text{mod} \|_2 + \| \text{std}_\text{orig} - \text{std}_\text{mod} \|_2$$

(Castells et al., 18 Apr 2024); a second sketch after this list illustrates this score.

  • Feature-Space Expert Grouping (MoE): Task-agnostic expert pruning in Mixture-of-Experts (MoE) models relies on clustering based on output similarity (e.g., centered kernel alignment or mean squared error over generic calibration inputs), followed by merging similar experts to reduce redundancy (Zhang et al., 12 Jul 2024).
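As a concrete rendering of the activation-and-magnitude scoring above, the sketch below computes a MULTIFLOW-style score for one linear layer from a small generic calibration batch. The function name, shapes, and top-k masking are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def multiflow_style_scores(W: torch.Tensor, calib_in: torch.Tensor,
                           calib_out: torch.Tensor) -> torch.Tensor:
    """W: (out_dim, in_dim); calib_in: (N, in_dim) layer inputs;
    calib_out: (N, out_dim) layer outputs on generic calibration data."""
    s_in = calib_in.abs().mean(dim=0)    # S(l): avg |activation| per input neuron
    s_out = calib_out.abs().mean(dim=0)  # S(r): avg |activation| per output neuron
    return s_out[:, None] * W.abs() * s_in[None, :]  # S(l)·|θ_lr|·S(r)

# Keep the top-k weights by score -> a single task-agnostic mask.
W = torch.randn(64, 128)
x = torch.randn(256, 128)               # small generic calibration batch
scores = multiflow_style_scores(W, x, x @ W.T)
k = int(0.1 * W.numel())                # e.g. 90% sparsity
thresh = scores.flatten().topk(k).values.min()
mask = (scores >= thresh).float()
W_pruned = W * mask
```

Because the scores depend only on calibration activations and weight magnitudes, the mask is computed once, with no gradients and no task labels.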

These algorithms are executed offline; compute-intensive steps (sorting, clustering, or measuring latent shifts) are restricted to a single pass or calibration cycle.
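Consistent with that single-pass pattern, here is a hedged sketch of the LD-Pruner-style score referenced above: collect latents from the original model and from a copy with one operator bypassed, then compare the first two moments of the two latent distributions. The collection helpers named in the usage comment are hypothetical placeholders.

```python
import torch

def latent_divergence(latents_orig: torch.Tensor,
                      latents_mod: torch.Tensor) -> float:
    """Each tensor: (N, D) latents collected over N generations with a
    fixed set of generic prompts and seeds."""
    d_avg = torch.linalg.norm(latents_orig.mean(0) - latents_mod.mean(0))
    d_std = torch.linalg.norm(latents_orig.std(0) - latents_mod.std(0))
    return (d_avg + d_std).item()  # S_op: small => operator is prunable

# Usage (placeholder helpers): rank all candidate operators offline, e.g.
# scores = {op: latent_divergence(collect(model),
#                                 collect(bypass(model, op)))
#           for op in candidate_operators}
```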

3. Domains and Example Frameworks

Query-agnostic pruning has been systematically explored across multiple domains:

| Domain / Model Family | Representative Method | Citation |
| --- | --- | --- |
| Sparse Neural Retrieval | Static Index Pruning | (Lassance et al., 2023) |
| Passage Corpora (Search Engines) | QualT5, Static Pruning | (Chang et al., 16 Jul 2024) |
| Vision-LLMs | MULTIFLOW | (Farina et al., 8 Apr 2024) |
| Latent Diffusion Models | LD-Pruner | (Castells et al., 18 Apr 2024) |
| Sparse MoE LLMs | Feature-Guided Grouping | (Zhang et al., 12 Jul 2024) |

In information retrieval, query-agnostic index pruning enables speedups of $2\times$–$5\times$ with minimal effectiveness loss. In neural architectures, frameworks such as MULTIFLOW and LD-Pruner explicitly avoid retraining or adaptation, promoting broad post-hoc transfer.

4. Quantitative Impact and Effectiveness

Empirical studies demonstrate that query-agnostic pruning can yield significant resource savings while bounding performance degradation:

  • Retrieval Models: On the MS MARCO dev set, query-agnostic pruning of DeepImpact, uniCOIL-Tilde, and SPLADE-L achieves at least a $2\times$ reduction in latency with an absolute loss of $\leq 2.5\%$ in MRR@10, and up to a $16.6\times$ speedup with a moderate drop (Lassance et al., 2023).
  • Passage Pruning (QualT5): The supervised T5-based approach prunes $25\%$–$30\%$ of passages with no loss of nDCG@10 across several retrieval pipelines (BM25, TAS-B, SPLADEv2, MonoELECTRA) (Chang et al., 16 Jul 2024).
  • Vision-LLM Pruning (MULTIFLOW): At $63\%$–$90\%$ sparsity, MULTIFLOW matches or exceeds state-of-the-art alternatives across image-text retrieval, captioning, and VQA, outperforming combinatorial and magnitude-only approaches with only a small calibration set and zero gradients (Farina et al., 8 Apr 2024).
  • Diffusion Models (LD-Pruner): Pruning $32\%$–$48\%$ of parameters in Stable Diffusion yields a $19\%$–$34.9\%$ inference speedup, with FID improved by $5.2\%$ on MS-COCO T2I, indicating minimal or even reversed performance loss (Castells et al., 18 Apr 2024).
  • Sparse MoE Pruning: Grouping and merging experts saves $25\%$–$50\%$ of per-layer memory (e.g., reducing Mixtral-8×7B to 6 or 4 experts) with a $2.8\%$–$7.8\%$ accuracy drop, versus larger losses for router- or usage-based criteria (Zhang et al., 12 Jul 2024).

These empirical results suggest that query-agnostic pruning strategies are competitive with task-specific methods in both efficiency and generalizability.

5. Limitations, Controversies, and Open Directions

Query-agnostic pruning methods present the following key limitations and open questions:

  • Edge Cases and Failure Modes: In passage pruning, repetitive content (e.g., legitimate channel listings, certain scientific formats) may be erroneously discarded due to low generic quality scores (Chang et al., 16 Jul 2024).
  • Operator and Expert Dependencies: LD-Pruner and MoE feature-based pruning ignore inter-operator or inter-expert interaction effects, possibly missing synergistic redundancy; future work may consider group-wise or dependency-aware criteria (Castells et al., 18 Apr 2024, Zhang et al., 12 Jul 2024).
  • Quality Versus Fairness: Passage pruning may disproportionately affect longer documents or specific data genres, raising fairness and producer-side concerns (Chang et al., 16 Jul 2024).
  • Loss of Task Adaptivity: By construction, query-agnostic methods cannot exploit unique characteristics of emerging tasks; optimal transfer for highly specialized domains may require hybrid methods.
  • Efficiency-Performance Trade-offs: Break-even analysis (e.g., QualT5-Tiny requires a prune fraction of $\gtrsim 14\%$ to yield a net TAS-B indexing benefit) can limit applicability in high-throughput environments (Chang et al., 16 Jul 2024).

A plausible implication is that next-generation query-agnostic pruning may benefit from soft masking, learnable sparsity, or mutual information metrics to refine the balance between generality and performance.

6. Practical Guidelines and Implementation Considerations

Best practices for deploying query-agnostic pruning include:

  • Calibration Data: Employ a small, representative, task-neutral calibration set (128–512 samples) for feature-based similarity metrics, activation statistics, or latent moment estimation (Farina et al., 8 Apr 2024, Zhang et al., 12 Jul 2024); a hook-based sketch follows this list.
  • Threshold Selection: Percentile- or quantile-based thresholding offers robust resource control and is easily parameterized offline (Lassance et al., 2023).
  • Representation Choices: For expert pruning, choose among output features, vectorized weights, and surrogate mappings depending on computational constraints and model structure (Zhang et al., 12 Jul 2024).
  • Post-Pruning Tuning: Though most schemes are entirely post-hoc, optional fine-tuning (especially in high-prune or performance-critical regimes) may recover residual effectiveness (Zhang et al., 12 Jul 2024).
  • Scalability: Sorting, clustering, masking, and latent scoring algorithms are designed for low-overhead, single-pass execution, compatible with very large-scale models and indices (Chang et al., 16 Jul 2024, Castells et al., 18 Apr 2024).
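For the calibration-data guideline above, a minimal sketch of one offline pass that accumulates per-layer activation statistics via forward hooks, a common way to gather the activation signals used by magnitude-activation scoring; the model and loader are placeholders, and outputs are assumed 2-D (batch, features) for simplicity.

```python
import torch
import torch.nn as nn

def collect_activation_norms(model: nn.Module, loader, device="cpu"):
    """Single calibration pass: running mean of per-neuron |activation|
    for every nn.Linear layer, computed over a small task-neutral set."""
    stats, handles = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            batch_norm = output.detach().abs().mean(dim=0)  # per-neuron
            n, prev = stats.get(name, (0, 0.0))
            stats[name] = (n + 1, prev + (batch_norm - prev) / (n + 1))
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            handles.append(module.register_forward_hook(make_hook(name)))

    model.eval()
    with torch.no_grad():
        for batch in loader:            # e.g. 128-512 task-neutral samples
            model(batch.to(device))     # assumes each batch is one tensor

    for h in handles:                   # remove hooks; model is unchanged
        h.remove()
    return {name: mean for name, (_, mean) in stats.items()}
```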

These guidelines serve both index designers and neural model practitioners aiming to maximize efficiency without sacrificing transferability or relevance.

Query-agnostic pruning is central to scalable, sustainable machine learning deployment. Instead of exhaustive retraining for every new downstream query or task, a single, efficient pruning step produces a broadly transferable, resource-optimized model or retrieval index. Recent work in vision-language, generative diffusion, sparse MoE, and neural retrieval demonstrates the generality and practical benefits of the paradigm.

Anticipated future directions include integration with passage segmentation and selection, blending task-agnostic and adaptive metrics, and deepening statistical or information-theoretic criteria for importance. The demonstrated ability to prune $20\%$–$50\%$ of parameters or data with negligible or even improved performance positions query-agnostic pruning as a cornerstone for post-hoc model compression, transfer learning, and efficient search at scale (Farina et al., 8 Apr 2024, Chang et al., 16 Jul 2024, Castells et al., 18 Apr 2024, Zhang et al., 12 Jul 2024, Lassance et al., 2023).
