ConvRec-R1: Rank-Aware Conversational RecSys

Updated 24 October 2025
  • The paper introduces an end-to-end ConvRec-R1 framework that uses a two-stage pipeline to align LLM outputs with catalog constraints and improve ranking quality.
  • It employs a Remap–Reflect–Adjust pipeline to transform teacher LLM recommendations into catalog-constrained examples, ensuring proper output formats.
  • Rank-GRPO refines per-rank credit assignment, boosting Recall and NDCG, particularly improving tail performance in recommendation lists.

ConvRec-R1 refers to an end-to-end framework for training LLM-based conversational recommender systems using a two-stage pipeline that combines catalog-grounded supervised fine-tuning with rank-aware reinforcement learning. The approach is intended to address persistent challenges in LLM recommendation systems, including out-of-catalog errors, output format violations, and listwise ranking degradation. The framework and its Rank-GRPO training procedure are detailed in (Zhu et al., 23 Oct 2025).

1. Motivation and Problem Context

LLM-based conversational recommender systems allow users to express preferences and receive recommended items in natural language dialogues. However, direct deployment of pretrained LLMs is problematic due to their tendency to generate items not present in the target catalog, to violate required output formatting (e.g., missing fields), and to exhibit poor ranking quality, particularly in the tail of the recommendation list. Previous alignment strategies relied either on human demonstrations or on sequence-level reinforcement learning objectives, neither of which reliably addresses these failure modes at the ranks that drive Recall and NDCG.

ConvRec-R1 addresses these gaps by integrating two core algorithmic innovations:

  • Behavior cloning from remapped, catalog-constrained teacher LLM demonstrations.
  • Rank-aware policy optimization (Rank-GRPO) where reinforcement learning updates are computed at the rank level rather than per token or per sequence.

2. Supervised Warm-Start: Remap–Reflect–Adjust Pipeline

Stage 1 of the ConvRec-R1 framework is designed to produce high-quality, catalog-grounded training data for supervised fine-tuning (SFT). The Remap–Reflect–Adjust (“RRA”; Editor's term) pipeline aggregates multiple signals for effective behavioral cloning:

  • Remap: Teacher LLM-generated recommendations, which may include out-of-catalog items, are projected onto the target catalog $\mathcal{C}$ using scores derived from positional weighting, a semantic similarity matrix $S_{\text{item-item}}$ mapping teacher items to catalog items, an indicator matrix $I_{\text{ic}}$ for known matches, and conversation–item similarities $s_{\text{conv-item}}$. The initial score vector is $s_{\text{remap}} = p \cdot (S_{\text{item-item}} + I_{\text{ic}}) + \lambda \cdot s_{\text{conv-item}}$.
  • Reflect: The teacher LLM is further queried to “judge” the relevance of the top candidates given the dialogue context, assigning a reflection score $r_{\text{reflect}}$ (a numerical rating per item). The combined score becomes $s_{\text{reflect}} = s_{\text{remap}} + \gamma \cdot r_{\text{reflect}}$.
  • Adjust: Learned multiplicative ($w$) and additive ($b$) bias vectors are applied for empirical calibration: $s_{\text{final}} = w \odot s_{\text{reflect}} + b$, where $\odot$ denotes the Hadamard (elementwise) product.

After applying all three steps, items are selected and formatted as catalog-constrained recommendation lists, with the SFT loss $\mathcal{L}_{\mathrm{SFT}}(\theta) = -\mathbb{E}_{(x, y) \in \mathcal{D}_{\mathrm{SFT}}} [\log \pi_\theta(y \mid x)]$. This produces a model policy aligned with catalog membership and format correctness before RL.
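A minimal sketch of the scoring arithmetic behind the three steps is given below (NumPy, for illustration only; the variable names, array shapes, and hyperparameter conventions are assumptions, not the paper's released implementation):

```python
import numpy as np

def rra_scores(p, S_item_item, I_ic, s_conv_item, r_reflect,
               lam, gamma, w, b, top_k=20):
    """Score catalog items for one conversation via Remap-Reflect-Adjust.

    p           : (T,)   positional weights over the T teacher-ranked items
    S_item_item : (T, C) teacher-item x catalog-item semantic similarities
    I_ic        : (T, C) indicator matrix of known teacher-to-catalog matches
    s_conv_item : (C,)   conversation-to-catalog-item similarities
    r_reflect   : (C,)   teacher "judge" relevance ratings (0 where unjudged)
    lam, gamma  : scalar mixing weights
    w, b        : (C,)   learned multiplicative / additive calibration vectors
    """
    # Remap: project teacher recommendations onto the catalog.
    s_remap = p @ (S_item_item + I_ic) + lam * s_conv_item

    # Reflect: blend in the teacher LLM's relevance judgments.
    s_reflect = s_remap + gamma * r_reflect

    # Adjust: elementwise (Hadamard) calibration of the combined score.
    s_final = w * s_reflect + b

    # Indices of the top-k catalog items, used to build the SFT target list.
    return np.argsort(-s_final)[:top_k]
```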

3. Rank-GRPO: Rank-Structured Reinforcement Learning

Stage 2 applies reinforcement learning with a rank-level variant of Group Relative Policy Optimization (GRPO):

  • Rank-wise Action Units: Each rank in the output list is treated as a distinct “action,” unlike typical RL for sequence generation which applies the reward per token or per output sequence.
  • Importance Ratio Computation: For rank $k$, the effective probability is the geometric mean of the item’s token probabilities: $\bar{\pi}_{\theta}(y_i^{(k)} \mid x) = \exp\left(\frac{1}{|y_i^{(k)}|}\sum_{t=1}^{|y_i^{(k)}|} \log \pi_\theta(y_{i,k,t} \mid x, y_{i,k,<t})\right)$, giving the importance ratio $w_{i,k}(\theta) = \bar{\pi}_{\theta}(y_i^{(k)} \mid x) / \bar{\pi}_{\theta_{\text{old}}}(y_i^{(k)} \mid x)$.
  • Rank-level Reward Shaping: The sequence-level DCG reward is decomposed so each rank receives its “causal” credit: $r(x, y_i^{(k)}) = \mathrm{DCG}@\{k{:}N\} = \sum_{j=k}^{N} \mathrm{rel}_j / \log_2(j+1)$. Alternatively, an exponential-discount variant $r_{\exp_\Gamma}(x, y_i^{(k)}) = \sum_{j=k}^{N} \mathrm{rel}_j / \Gamma^{\,j-k}$ gives more granular control.
  • Surrogate Objective: The RL objective is $\mathcal{J}_{\mathrm{Rank\text{-}GRPO}}(\theta) = \mathbb{E}\left[ \frac{1}{GN} \sum_{i=1}^{G} \sum_{k=1}^{N} \min\left(w_{i,k}(\theta) A_{i,k},\ \mathrm{clip}(w_{i,k}(\theta), 1-\epsilon, 1+\epsilon) A_{i,k}\right) \right]$, where $A_{i,k}$ is the rank-normalized advantage.

This approach eliminates non-causal credit assignment: each rank is credited only for relevance at its own and later positions, never for items placed earlier in the list. Policy updates are thereby aligned with the ranking criteria, mitigating the tail-ranking degradation observed with sequence-level RL.
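The following NumPy sketch illustrates the rank-level quantities defined above. It is illustrative only: the array shapes, the group-normalization of advantages, and the binary-relevance convention are assumptions, and gradient flow through the policy is omitted (in practice this would be handled by an autodiff framework).

```python
import numpy as np

def rank_rewards(rel, Gamma=None):
    """Per-rank reward r(x, y^(k)) = sum_{j>=k} rel_j / discount(j) for one list.

    rel   : (N,) relevance labels for ranks 1..N
    Gamma : if given, use the exponential-discount variant rel_j / Gamma**(j - k);
            otherwise use the DCG tail sum rel_j / log2(j + 1).
    """
    N = len(rel)
    if Gamma is not None:
        return np.array([sum(rel[j] / Gamma ** (j - k) for j in range(k, N))
                         for k in range(N)])
    gains = rel / np.log2(np.arange(2, N + 2))      # rel_j / log2(j + 1), 1-based j
    return gains[::-1].cumsum()[::-1]               # tail sums DCG@{k:N}

def rank_grpo_loss(logp_new, logp_old, tok_counts, rel, eps=0.2):
    """Clipped Rank-GRPO surrogate over a group of G sampled lists of length N.

    logp_new, logp_old : (G, N) summed token log-probs of the item at each rank
                         under the current / old policy
    tok_counts         : (G, N) number of tokens in each item string
    rel                : (G, N) relevance labels per rank
    """
    # Geometric-mean item probabilities give per-rank importance ratios w_{i,k}.
    ratios = np.exp((logp_new - logp_old) / tok_counts)

    # Rank-level rewards and rank-normalized advantages A_{i,k} within the group.
    rewards = np.stack([rank_rewards(r) for r in rel])
    adv = (rewards - rewards.mean(axis=0)) / (rewards.std(axis=0) + 1e-8)

    # Clipped surrogate, averaged over the G * N rank-level actions
    # (negated so that minimizing this value maximizes the objective).
    clipped = np.clip(ratios, 1 - eps, 1 + eps) * adv
    return -np.minimum(ratios * adv, clipped).mean()
```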

4. Empirical Evaluation

ConvRec-R1 was evaluated on the Reddit-v2 conversational recommendation benchmark using several backbone LLMs (Qwen2.5-0.5B-Instruct, Llama-3.2-1B, Llama-3.2-3B). Key results:

  • The SFT stage rapidly improves both catalog grounding (in-catalog recommendation rate exceeding 99%) and NDCG.
  • Rank-GRPO yields faster convergence and higher Recall and NDCG at $K=20$ across all model scales compared to vanilla GRPO and GSPO baselines.
  • Rank-level updates specifically bolster tail performance in the recommendation list, reducing ranking quality degradation—a persistent failure mode for both token-wise and sequence-wise RL methods.
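For reference, the metrics quoted above follow the standard binary-relevance definitions of Recall@K and NDCG@K; a minimal sketch is given below (not necessarily the exact evaluation script used in the paper):

```python
import numpy as np

def recall_at_k(recommended, relevant, k=20):
    """Fraction of ground-truth items retrieved in the top-k recommendations."""
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / max(len(relevant), 1)

def ndcg_at_k(recommended, relevant, k=20):
    """Binary-relevance NDCG@k: DCG of the ranked list over the ideal DCG."""
    dcg = sum(1.0 / np.log2(rank + 2)
              for rank, item in enumerate(recommended[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0
```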

5. Algorithmic Distinctions

ConvRec-R1’s distinctive features include:

  • Use of black-box teacher LLMs (e.g., GPT-4o) for demonstration mining, avoiding expensive human annotation.
  • Remap–Reflect–Adjust produces catalog-constrained, contextually relevant examples.
  • Rank-GRPO’s credit assignment resolves per-rank non-causality, facilitating stable and targeted policy improvements under metrics such as NDCG and Recall.
  • The objective formulation via geometric mean importance ratios moderates update instability that commonly arises in high-dimensional action spaces.

6. Implications and Future Research

The ConvRec-R1 design points to several research directions:

  • Extension of rank-wise RL objectives to other sequential decision domains, including search, dialogue, and listwise predictions.
  • Further refinement of reward shaping, potentially incorporating diversity or user-centric objectives.
  • Broadening the paradigm to leverage smaller open-source LLMs, increasing generalizability and cost-efficiency for production deployment.
  • Anticipated advances in multi-objective RL could also enable balancing relevance, diversity, popularity bias, and user satisfaction in conversational systems.

7. Significance and Technical Legacy

By introducing a pipeline that aligns LLM recommendation outputs to catalog and formatting constraints and refining ranking via causal, rank-aware RL, ConvRec-R1 represents a comprehensive solution to several chronic technical problems in conversational recommender systems. Its empirical success on standard datasets such as Reddit-v2 and effective utilization of rank-wise policy optimization establish it as a reference architecture for future LLM-based recommendation system research and development.
