
GRCF: Two-Stage Group-wise Ranking & Calibration

Updated 21 January 2026
  • The paper introduces a novel two-stage framework that leverages hierarchical residual vector quantization to mine hard negatives for enhanced ranking and calibration.
  • It implements group-wise listwise loss with auxiliary log-loss optimization, yielding improved GAUC and AUC metrics especially for cold-start scenarios.
  • The framework simplifies scalability by drawing negatives via in-batch group assignments and is extendable to regression and classification tasks.

The Two-Stage Group-wise Ranking and Calibration Framework (GRCF) is a modular approach designed to enhance ranking accuracy and probability calibration in large-scale predictive modeling settings. It systematically partitions data entities—such as users in recommender systems—into hierarchical clusters using residual vector quantization, then applies specialized listwise losses within these clusters to more effectively mine informative (hard) negative samples. Distinctive for its scalability and generality, GRCF achieves improvements in both ranking performance and calibration metrics without reliance on complex real-time negative mining infrastructure, and is extensible to diverse tasks including regression and classification (Yan et al., 15 Jun 2025).

1. Motivation and Background

Conventional ranking models in domains such as recommendation and sentiment analysis frequently adopt point-wise or in-batch negative sampling. While computationally convenient, these approaches predominantly use easy negatives, leading to suboptimal learning signals especially for samples that are difficult to differentiate. Pairwise and listwise ordinal approaches address order preservation but either neglect sample hardness or suffer from static loss hyperparameters. This results in models with unstable predictions, limited correlation alignment, and weakened generalization on hard or rare cases (Gao et al., 14 Jan 2026, Yan et al., 15 Jun 2025). GRCF was introduced to systematically address these limitations by adaptively mining hard negatives through a hierarchical group-wise loss design anchored on residual vector quantization.

2. Hierarchical Group Construction via Residual Vector Quantization

The foundational stage of GRCF implements residual vector quantization (RVQ) on entity (e.g., user) embeddings. Each embedding $e_u \in \mathbb{R}^d$ undergoes an $L$-stage quantization cascade, iteratively assigning codebook entries at each level to produce a discrete code sequence $c_u = [c_{u,1}, \dots, c_{u,L}]$, corresponding to a path in an $L$-level trie structure. Mathematically:

  • $r_u^{(1)} = e_u$
  • $c_{u,l} = \arg\min_k \|r_u^{(l)} - C_k^{(l)}\|^2$
  • $r_u^{(l+1)} = r_u^{(l)} - C_{c_{u,l}}^{(l)}$
  • $\hat{e}_u = \sum_{l=1}^{L} C_{c_{u,l}}^{(l)}$

At each depth $l$, users sharing the same code prefix $[c_{u,1}, \dots, c_{u,l}]$ are assigned to group $G_m^{(l)}$. This trie-based clustering captures increasingly fine-grained semantic similarity as $l$ increases, enabling adaptive escalation of negative difficulty within the learning process (Yan et al., 15 Jun 2025).
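The RVQ cascade above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the codebook sizes and dimensions are arbitrary, and real codebooks would be learned during training rather than sampled randomly.

```python
import numpy as np

def rvq_encode(e_u, codebooks):
    """Assign one embedding to a hierarchical code path via residual
    vector quantization. `codebooks` is a list of L arrays of shape
    (K, d). Returns the code sequence c_u and reconstruction e_hat."""
    residual = e_u.copy()                      # r_u^(1) = e_u
    codes, e_hat = [], np.zeros_like(e_u)
    for C in codebooks:
        # nearest codeword at this level: argmin_k ||r - C_k||^2
        k = int(np.argmin(((residual[None, :] - C) ** 2).sum(axis=1)))
        codes.append(k)
        e_hat += C[k]
        residual = residual - C[k]             # r^(l+1) = r^(l) - C_c^(l)
    return codes, e_hat

# Users sharing a code prefix [c_1, ..., c_l] fall into the same
# level-l group; a dict keyed by the prefix tuple recovers the trie.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 8)) for _ in range(3)]  # L=3, K=4, d=8
embeddings = rng.normal(size=(16, 8))
groups = {}
for u, e in enumerate(embeddings):
    codes, _ = rvq_encode(e, codebooks)
    for level in range(1, 4):
        groups.setdefault(tuple(codes[:level]), []).append(u)
```

Note that the reconstruction $\hat{e}_u$ is simply the sum of the selected codewords across levels, which is what the auxiliary calibration path later consumes.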

3. Group-wise Listwise Ranking Objectives

Within each hierarchical group $G_m^{(l)}$, a regression-compatible listwise cross-entropy loss (ListCE) is employed. Labels are normalized to within-group soft targets as $\tilde{y}_i^{(l,m)} = y_i / (\sum_{j \in G_m^{(l)}} y_j + \epsilon)$. The listwise loss at level $l$ is expressed as:

$$L_{\text{listce}}^{(l)} = \frac{1}{M_l} \sum_{m=1}^{M_l} \sum_{i \in G_m^{(l)}} -\tilde{y}_i^{(l,m)} \log\left( \frac{\sigma(s_i)}{\sum_{j \in G_m^{(l)}} \sigma(s_j)} \right)$$

where $\sigma(\cdot)$ denotes the sigmoid function, $s_i$ the pre-activation score of sample $i$, and $M_l$ the number of groups at level $l$. Listwise ranking losses are combined across levels with learnable uncertainty weights $\sigma_l$:

$$L_{\text{hier}} = \sum_{l=1}^{L} \left[ \frac{1}{2\sigma_l^2} L_{\text{listce}}^{(l)} + \log \sigma_l \right]$$

This multi-level loss construct enables the model to focus on negatives of increasing hardness across the hierarchy, closely approximating hard negative mining based on local group structure without external retrieval (Yan et al., 15 Jun 2025).
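The per-level ListCE and its uncertainty-weighted combination can be sketched as follows. This is an illustrative PyTorch translation of the two formulas above, with hypothetical function names; for numerical stability the learnable weights are parameterized as $\log \sigma_l$, which is an implementation choice on our part, not stated in the paper.

```python
import torch

def listce_level(scores, labels, group_ids, eps=1e-8):
    """Regression-compatible listwise CE at one hierarchy level.
    scores: pre-activation logits s_i; labels: y_i >= 0;
    group_ids: integer group index per in-batch example."""
    p = torch.sigmoid(scores)
    loss, n_groups = scores.new_tensor(0.0), 0
    for g in group_ids.unique():
        mask = group_ids == g
        if mask.sum() < 2:
            continue  # singleton groups carry no ranking signal
        y = labels[mask]
        y_soft = y / (y.sum() + eps)               # within-group soft targets
        log_softmax = torch.log(p[mask] / p[mask].sum())
        loss = loss - (y_soft * log_softmax).sum()
        n_groups += 1
    return loss / max(n_groups, 1)

def hierarchical_loss(scores, labels, codes, log_sigmas):
    """Combine per-level ListCE losses with learnable uncertainty
    weights sigma_l, stored as log(sigma_l)."""
    total = scores.new_tensor(0.0)
    for l, log_sigma in enumerate(log_sigmas):
        # level-l group id = index of the unique code prefix [c_1..c_{l+1}]
        _, gids = torch.unique(codes[:, : l + 1], dim=0, return_inverse=True)
        L_l = listce_level(scores, labels, gids)
        # 1/(2 sigma^2) * L_l + log sigma, with sigma^2 = exp(2 log_sigma)
        total = total + L_l / (2 * torch.exp(2 * log_sigma)) + log_sigma
    return total
```

In practice `log_sigmas` would be an `nn.Parameter` optimized jointly with the model, so the framework learns how much weight each hierarchy level receives.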

4. Calibration Strategies and Multi-Objective Training

To overcome the tendency of ranking objectives to distort prediction probabilities, GRCF incorporates direct calibration objectives. Primary log-loss is applied to outputs from the original embeddings, and an auxiliary log-loss is computed using the quantized embeddings $\hat{e}_u$. The overall multi-task optimization is:

$$L_{\text{total}} = L_{\text{logloss}}(\hat{y}, y) + \lambda\, L_{\text{logloss}}(\hat{y}^q, y) + L_{\text{hier}}$$

where $\hat{y} = \sigma(s)$, $\hat{y}^q = \sigma(s^q)$, and a straight-through estimator enables backpropagation through the otherwise non-differentiable quantization process ($e_u^q = e_u + \mathrm{stop\_gradient}(\hat{e}_u - e_u)$). Empirical evaluation demonstrates that this arrangement effectively improves ranking (GAUC, AUC) and preserves or enhances standard calibration metrics (LogLoss) (Yan et al., 15 Jun 2025).
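The straight-through trick and the total objective are compact enough to sketch directly. The helper names and the placeholder $\lambda = 0.1$ below are illustrative, not taken from the paper:

```python
import torch

def straight_through(e_u, e_hat):
    """STE: the forward pass uses the quantized embedding e_hat, while
    the backward pass routes gradients to the original e_u unchanged,
    i.e. e_u^q = e_u + stop_gradient(e_hat - e_u)."""
    return e_u + (e_hat - e_u).detach()

def total_loss(s, s_q, y, L_hier, lam=0.1):
    """Primary log-loss on the original-embedding logits s, auxiliary
    log-loss on the quantized-path logits s_q, plus the hierarchical
    ranking term."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    return bce(s, y) + lam * bce(s_q, y) + L_hier
```

Because `straight_through` returns `e_hat` in value but `e_u` in gradient, codebook lookups can sit inside an otherwise end-to-end differentiable model.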

5. Optimization and Implementation Considerations

Mini-batch training is conducted with batches (typically of size 4096) where each example carries its group codes $c_u$. No additional approximate nearest neighbor infrastructure is required, as all negatives are drawn implicitly through in-batch group assignments, substantially simplifying scaling considerations. The ranking losses at each hierarchy level can be computed efficiently through segmented operations, and the auxiliary calibration path is implemented with minimal additional overhead. Optimal performance is contingent on careful tuning of codebook depth $L$ and capacity $K$; too deep or fine-grained codes can fragment the data and dilute gradient signals, while overly coarse codes fail to capture sufficient negative hardness (Yan et al., 15 Jun 2025).
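The segmented computation mentioned above can be sketched with scatter-style reductions, which avoid a Python loop over groups and scale to large batches. This is an assumed implementation pattern, not code from the paper; `segmented_listce` is a hypothetical name.

```python
import torch

def segmented_listce(scores, labels, gids, eps=1e-8):
    """One level of the group-wise listwise CE, vectorized with
    segmented (index_add) reductions over in-batch group ids."""
    G = int(gids.max()) + 1
    p = torch.sigmoid(scores)
    # segmented denominators: sum of sigmoid(s_j) and of y_j per group
    p_sum = torch.zeros(G).index_add_(0, gids, p)
    y_sum = torch.zeros(G).index_add_(0, gids, labels)
    y_soft = labels / (y_sum[gids] + eps)           # within-group soft targets
    per_example = -y_soft * torch.log(p / p_sum[gids] + eps)
    # sum per group, then average over groups (the 1/M_l normalization)
    group_loss = torch.zeros(G).index_add_(0, gids, per_example)
    return group_loss.mean()
```

The same pattern (per-group `index_add_` for denominators, gather for per-example terms) applies at every trie level, so the full hierarchical loss costs only a few scatter passes per batch.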

6. Empirical Evaluation and Comparative Results

GRCF consistently outperforms baseline point-wise, pairwise, and listwise approaches—including methods with context-aware calibration—on large-scale industrial benchmarks such as KuaiRand and Taobao. Gains are observed in standard ranking and calibration objectives (LogLoss, AUC, GAUC). Notably, GRCF yields improved robustness for cold-start users (≤20 impressions), with greater than 0.005 GAUC improvement over closest competitors. Ablation studies highlight the necessity of hierarchical losses and quantized calibration; both components contribute directly to performance. Codebook sensitivity experiments reveal an optimal range for $K$ and $L$, confirming the importance of balanced quantizer design (Yan et al., 15 Jun 2025).

7. Limitations and Future Directions

GRCF’s performance is contingent on the stability of embedding distributions and the judicious selection of codebook depth and size. Excessive fragmentation due to overly granular codebooks results in undersized groups and noisy gradients, while lack of an explicit commitment loss can lead to codebook drift (mitigated via exponential moving average updates and code expiration heuristics). The framework assumes static entity embeddings during training; domain shifts may necessitate reinitialization. A plausible implication is that future research could focus on dynamic codebook adaptation and commitment strategies to further enhance stability, or on extending the framework to domains with rapidly evolving input distributions (Yan et al., 15 Jun 2025).
