
Learned Soft Queries in Neural Systems

Updated 27 January 2026
  • Learned Soft Queries are adaptive, trainable query embeddings that capture nuanced semantics for improved retrieval, preference alignment, and cache management in Transformers.
  • They integrate LLM-augmented teacher–student distillation, soft aggregation of multi-judge outputs, and trainable soft tokens to enhance model robustness and global attention.
  • Empirical results indicate marginal in-domain gains but significant out-of-domain improvements and performance boosts in long-context generation tasks.

Learned Soft Queries (Judge Q) cover a spectrum of neural mechanisms in which queries—whether for retrieval, evaluation, or attention—are not restricted to hard-coded objects, tokens, or pooling strategies, but are themselves trainable or synthesized to better capture the intended semantics or utility. The “Judge Q” designation has been applied in three distinct research contexts: dense retrieval with LLM-augmented teacher–student distillation, soft aggregation of multi-rubric judge outputs for preference modeling, and trainable soft queries for key–value (KV) cache eviction in Transformer LLMs. Each instantiation leverages the learnability and adaptability of queries to improve efficiency, robustness, or generalization in modern AI systems.

1. Soft Queries in Dense Retrieval with LLM Expansion

In dense retrieval, “soft queries” denote a learned embedding space into which input queries are mapped, with the goal of inheriting the expressive semantics of LLM-expanded queries without incurring inference-time cost. The SoftQE framework (Pimpalkhute et al., 2024) establishes this paradigm by mapping a vanilla query $q$ (from space $Q$) via a student encoder directly into the embedding space of the teacher's LLM-augmented expansions.

Technical Formulation

  • Define $h_p(q) \in \mathbb{R}^d$ as the initial query embedding from a dual encoder.
  • Use an LLM $g_{(\phi)}$ (e.g., text-davinci-003) and prompt $\mathcal{I}$ to generate a pseudo-document $d' = g_{(\phi)}(\mathcal{I}, q)$.
  • The expanded query is $q^+ = q \oplus d'$.
  • The teacher encoder $f_\mathrm{teacher}$ embeds $q^+$ as $h_\mathrm{teacher}(q^+) \in \mathbb{R}^d$.
  • The student encoder $f_\mathrm{student}$ learns to produce $\tilde{q} = f_\mathrm{student}(q) \approx h_\mathrm{teacher}(q^+)$, known as the “soft query” embedding (see the sketch below).
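
A minimal PyTorch-style sketch of this pipeline; `llm_expand`, `teacher_encoder`, and `student_encoder` are hypothetical wrappers standing in for components not specified here, not the paper's actual API:

```python
import torch

def build_distillation_targets(queries, llm_expand, teacher_encoder):
    """Precompute teacher embeddings h_teacher(q+) for each training query."""
    targets = []
    for q in queries:
        d_prime = llm_expand(q)        # pseudo-document d' = g_phi(I, q)
        q_plus = q + " " + d_prime     # expanded query q+ = q (+) d'
        targets.append(teacher_encoder(q_plus))
    return torch.stack(targets)        # shape: (N, d)

def soft_query(q, student_encoder):
    """The student maps the raw query alone: q~ = f_student(q) ~ h_teacher(q+)."""
    return student_encoder(q)
```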

Model Architecture and Optimization

Both teacher and student encoders are standard Transformer-based dual encoders (e.g., BERT$_\mathrm{base}$), producing a pooled [CLS] token vector, possibly $L_2$-normalized. The training objective is a convex combination of contrastive retrieval loss and mean-squared-error (MSE) distillation:

$$L_\mathrm{SoftQE} = \alpha\, L_\mathrm{dist} + (1-\alpha)\, L_\mathrm{cont}$$

where $L_\mathrm{dist} = \|f_\mathrm{student}(q) - f_\mathrm{teacher}(q^+)\|_2^2$ and $L_\mathrm{cont}$ is the standard contrastive loss over (query, positive passage, negatives) triples. Empirically, a “warm-up” schedule ($\alpha = 1$ for the first three epochs, then $\alpha = 0.2$ for three more) yields the best results.
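
A hedged sketch of this objective and schedule, assuming passage scores are computed elsewhere in the training loop (all names are illustrative):

```python
import torch
import torch.nn.functional as F

def softqe_loss(student_emb, teacher_emb, pos_scores, neg_scores, alpha):
    """Convex combination of MSE distillation and contrastive retrieval loss.

    student_emb: (B, d) student embeddings q~ of the raw queries
    teacher_emb: (B, d) precomputed teacher embeddings of q+
    pos_scores:  (B,)   inner products with the positive passages
    neg_scores:  (B, n) inner products with negative passages
    """
    l_dist = F.mse_loss(student_emb, teacher_emb)
    # Softmax contrastive loss: the positive passage is class 0.
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    l_cont = F.cross_entropy(logits, labels)
    return alpha * l_dist + (1 - alpha) * l_cont

def alpha_schedule(epoch: int) -> float:
    """Warm-up recipe reported above: distill only for 3 epochs, then 0.2."""
    return 1.0 if epoch < 3 else 0.2
```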

Inference and Impact

At inference, no LLM expansion is used: queries are mapped by the student encoder, and documents are ranked by inner product against precomputed document embeddings. Notably, SoftQE yields only marginal in-domain gains (+0.13 absolute on MS-MARCO MRR@10), but consistently improves out-of-domain BEIR tasks by an average of +2.83 nDCG@10, indicating enhanced robustness to domain shift (Pimpalkhute et al., 2024).
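
For illustration, a minimal sketch of this inference path, again assuming a hypothetical `student_encoder` and precomputed `doc_embs`:

```python
import torch

def rank_documents(query, student_encoder, doc_embs, k=10):
    """Rank precomputed document embeddings by inner product with the soft query.

    doc_embs: (num_docs, d) tensor, computed once offline.
    Note: no LLM call occurs here; only the student encoder runs.
    """
    q_tilde = student_encoder(query)   # (d,)
    scores = doc_embs @ q_tilde        # (num_docs,)
    return torch.topk(scores, k).indices
```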

2. Learned Judge Q Aggregators for Preference Modeling

In the context of aligning LLM-based judges with human preferences, “Judge Q” refers to a learned soft aggregation mechanism that synthesizes preference predictions from multiple rubric-conditioned LLM judges. The key objective is to approximate (potentially diverse or conflicting) human-like persona-based preferences, enabling more robust reward modeling for RLHF or LLM routing decisions (Sprejer et al., 29 Oct 2025).

Multi-Judge Setup and Aggregation

  • $K$ rubric-conditioned LLM judges $J^{(1)}, \ldots, J^{(K)}$ yield scalar scores $r_k(x)$ for input $x$ (typically prompt–answer pairs).
  • $M$ personas $P_1, \ldots, P_M$ serve as synthetic human raters, each outputting ground-truth preference labels $y \in \{0, \ldots, 10\}$, to simulate human heterogeneity.
  • For each $x$, collect the score vector $s(x) = [r_1(x), \ldots, r_K(x)]^\top \in \mathbb{R}^K$ (see the sketch below).
  • A parametric aggregator $f_\theta : \mathbb{R}^K \to \mathbb{R}$ is trained to match persona outputs: $y \approx f_\theta(s)$.
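
A minimal sketch of the score-vector construction, assuming a hypothetical list of judge callables:

```python
import torch

def score_vector(x, judges):
    """Collect s(x) = [r_1(x), ..., r_K(x)] for one prompt-answer pair.

    `judges` is a list of K callables wrapping rubric-conditioned LLM
    judges, each returning a scalar score (hypothetical interface).
    """
    return torch.tensor([judge(x) for judge in judges])  # shape: (K,)
```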

Aggregator Implementations

  • Generalized Additive Model (GAM):

$$f_\theta(s) = \sum_{j=1}^K g_j(s_j) + b$$

Each $g_j$ is a learned spline that recalibrates judge $j$'s output.

  • Multi-Layer Perceptron (MLP):

$$h = \mathrm{ReLU}(W_1 s + b_1), \quad f_\theta(s) = W_2 h + b_2$$

with $W_1 \in \mathbb{R}^{H \times K}$ and hidden width $H$ chosen via hyperparameter search.
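
A minimal PyTorch sketch of the MLP variant (the hidden width default is an assumption, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class MLPAggregator(nn.Module):
    """f_theta: R^K -> R, mapping K judge scores to one preference score."""

    def __init__(self, num_judges: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_judges, hidden),  # h = ReLU(W1 s + b1)
            nn.ReLU(),
            nn.Linear(hidden, 1),           # f_theta(s) = W2 h + b2
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s).squeeze(-1)
```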

Objective, Evaluation, and Robustness

The core objective is MSE regression:

$$L(\theta) = \frac{1}{N} \sum_{i=1}^N \left( f_\theta(s(x_i)) - y_i \right)^2$$
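
A short training sketch for this regression, assuming the `MLPAggregator` above and pre-collected tensors `S` (judge scores) and `y` (persona labels):

```python
import torch

def fit_aggregator(model, S, y, epochs=200, lr=1e-3):
    """Minimize MSE between f_theta(s(x_i)) and persona labels y_i.

    S: (N, K) matrix of judge scores; y: (N,) persona preference labels.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(S), y.float())
        loss.backward()
        opt.step()
    return model
```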

Robustness is assessed under both persona-label and judge-rubric perturbations. Notably, GAM and MLP aggregators maintain $R^2 > 0.5$ even under substantial noise, where naive means deteriorate. Judge importance (as $1-p$ of the spline term) reveals dimensions such as Truthfulness and Logical Consistency to be most influential on the synthetic preference metric (Sprejer et al., 29 Oct 2025).

3. Trainable Soft Queries for KV Cache Eviction in Transformers

The “Judge Q” approach in Transformer LLMs addresses the challenge of efficient KV cache eviction during long-context sequence generation. Traditional strategies select the last $w$ tokens to compute importance scores for cache eviction, an approach biased toward local context. Judge Q instead learns a set of $n$ soft-token embeddings that, once appended to the prompt, compute attention over all positions, yielding improved importance estimation for global information retention.

Methodology

  • Define $n$ learnable soft tokens; their embeddings $Q_\mathrm{soft} \in \mathbb{R}^{n \times d}$ are the only tunable parameters.
  • At each training step, two sequences are used:
    • Input$_\mathrm{soft}$: (prompt, soft tokens)
    • Input$_\mathrm{resp}$: (prompt, response tokens)
  • For both, attention maps from the query tokens (soft or response) to each prompt token are extracted, yielding $A_\mathrm{soft}, A_\mathrm{resp} \in \mathbb{R}^L$ for a prompt of length $L$.
  • Objective: minimize the MSE between $A_\mathrm{soft}$ and $A_\mathrm{resp}$ (sketched in code after the formula):

$$\mathcal{L} = \frac{1}{L} \|A_\mathrm{soft} - A_\mathrm{resp}\|_2^2$$
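
A schematic sketch of this objective, reduced to a single head and layer for clarity; the actual method trains $Q_\mathrm{soft}$ inside a frozen LLM, so the explicit projections and pooling here are simplifying assumptions:

```python
import torch
import torch.nn.functional as F

def soft_token_loss(prompt_hidden, resp_hidden, q_soft, w_q, w_k):
    """MSE between soft-token and response-token attention over the prompt.

    prompt_hidden: (L, d) prompt hidden states (frozen)
    resp_hidden:   (T, d) response-token hidden states (frozen)
    q_soft:        (n, d) trainable soft-token states (only gradients here)
    w_q, w_k:      (d, d_k) frozen query/key projections
    """
    keys = prompt_hidden @ w_k                                  # (L, d_k)
    scale = keys.size(-1) ** 0.5
    # A_soft: attention of the n soft tokens over prompt positions, pooled.
    a_soft = F.softmax((q_soft @ w_q) @ keys.T / scale, dim=-1).mean(0)
    with torch.no_grad():  # reference distribution from real response tokens
        a_resp = F.softmax((resp_hidden @ w_q) @ keys.T / scale, dim=-1).mean(0)
    return F.mse_loss(a_soft, a_resp)  # equals (1/L) * ||A_soft - A_resp||^2
```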

Inference and Efficacy

During prefill inference:

  • The $n$ soft tokens are appended and the attention scores $A_\mathrm{soft}$ are computed.
  • All key–value pairs are ranked by their importance scores; the top-$k$ under the token budget are retained (see the sketch below).
  • Soft tokens are discarded before decoding continues.
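
A minimal sketch of the eviction step under a fixed budget (tensor shapes and names are illustrative):

```python
import torch

def evict_kv(keys, values, a_soft, budget):
    """Retain only the top-k prompt KV pairs under the cache budget.

    keys, values: (L, d) cached tensors for one layer/head.
    a_soft: (L,) importance scores from the appended soft tokens,
            which are themselves dropped before decoding.
    """
    k = min(budget, a_soft.numel())
    keep = torch.topk(a_soft, k).indices.sort().values  # keep positional order
    return keys[keep], values[keep]
```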

Judge Q achieves a higher “critical KV hit rate” than the best windowed methods and smaller performance drops under tight memory constraints, yielding +1 to +3 points on LongBench and RULER across budgets (e.g., at a 512-token budget, Judge Q achieves 39.17 vs. SnapKV's 38.31 on LongBench, and 74.12 vs. 68.21 on RULER) (Liu et al., 13 Sep 2025).

4. Comparative Summary of Judge Q Approaches

| Application Domain | Mechanism | Notable Outcome |
|---|---|---|
| Dense Retrieval (Pimpalkhute et al., 2024) | Student soft query matches LLM-expanded teacher embedding | +2.83 nDCG@10 on BEIR |
| Preference Modeling (Sprejer et al., 29 Oct 2025) | Soft aggregation via GAM/MLP over rubric judges | $R^2 \approx 0.58$, robust to bias |
| KV Cache Eviction (Liu et al., 13 Sep 2025) | Trainable soft tokens attend globally for KV scoring | +1–3 points on long-context benchmarks |

All implementations share reliance on gradient-based learning of query objects that mediate between source input and a desired utility function—either semantic richness, preference alignment, or information coverage.

5. Implementation Considerations and Limitations

Integration of Learned Soft Queries is notably efficient:

  • In dense retrieval, only query encoder weights are affected, with no change to latency, as LLM expansion is not required at inference (Pimpalkhute et al., 2024).
  • In preference aggregation, only the aggregator is trained, and the interpretability of GAM splines aids in auditability and fairness analysis (Sprejer et al., 29 Oct 2025).
  • For KV cache eviction, only the $n$ soft-token embeddings are updated; no model-wide fine-tuning is needed, with memory/compute overhead dominated by the $O(nLd)$ per-layer attention during prefill (Liu et al., 13 Sep 2025).

Known caveats include the synthetic nature of persona-based labels in preference modeling, potential LLM circularity, limitation to single-vector encoders or scalar aggregations in some settings, and the need for broader human calibration.

6. Broader Impact and Future Directions

Learned Soft Queries, both in “Judge Q” and related forms, directly enable:

  • Improved zero-shot retrieval transfer, as LLM-driven expansions expose paraphrastic and rare-event knowledge (Pimpalkhute et al., 2024).
  • Fine-grained, robust, and interpretable aggregation of LLM judges for RLHF, with enhanced resistance to rubric-induced bias and instability (Sprejer et al., 29 Oct 2025).
  • Globally-aware cache retention in long-context LLMs, crucial for efficient generation under resource pressure (Liu et al., 13 Sep 2025).

Promising directions include extending soft queries to multi-vector and late-interaction architectures, refining persona label distributions beyond uniform sampling, incorporating rank-based losses into aggregator training, and large-scale human validation of preference alignment.
