CardRewriter: Query Rewriting Framework
- CardRewriter is an LLM-driven framework that uses multi-source knowledge cards to reformulate long-tail queries on short-video platforms, enhancing relevance and retrieval.
- It employs a two-stage pipeline that aggregates multi-modal signals and uses dedicated models for knowledge card construction and query rewriting.
- Deployed at scale on Kuaishou, CardRewriter demonstrates significant improvements in retrieval metrics, user experience, and content matching through tailored training and reward strategies.
CardRewriter is an LLM-driven framework engineered for domain-specific long-tail query rewriting on short-video platforms, featuring the construction of multi-source knowledge cards to guide query reformulation. It directly addresses the challenges posed by the mismatch between user intent and proprietary content retrieval, circumventing limitations in LLM pretraining by incorporating platform-native heterogeneous signals. Since September 2025, CardRewriter has been deployed at scale on Kuaishou, serving hundreds of millions of users, and demonstrating significant improvements in user experience and retrieval metrics (Gong et al., 11 Oct 2025).
1. Architecture and High-Level Workflow
CardRewriter operates in a two-stage pipeline: knowledge card construction and query rewriting, each handled by a dedicated model. Given a user-issued query $q$, the system aggregates multi-source platform knowledge $\mathcal{K}$ (videos, live streams, micro dramas, and external documents), then invokes a card generation model $\mathcal{M}_c$ to summarize $\mathcal{K}$ as a single knowledge card $c$. This card and the original query are subsequently input to the rewriting model $\mathcal{M}_r$, yielding a rewritten query $q_r$ that serves as the final input to the retrieval engine. The formal process:

$$c = \mathcal{M}_c(q, \mathcal{K}), \qquad q_r = \mathcal{M}_r(q, c)$$
This mechanism injects platform-specific signals, enabling better correction of spelling errors, resolution of query ambiguity, and normalization toward retrievable proprietary content.
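A minimal sketch of this two-stage flow, assuming hypothetical `card_model` and `rewrite_model` LLM wrappers with a `generate` method (names, prompts, and signatures are illustrative, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class KnowledgeItem:
    source: str   # "video", "live", "micro_drama", "doc"
    text: str     # extracted textual signal (title, OCR text, caption, ...)

def build_knowledge_card(query: str, items: list[KnowledgeItem], card_model) -> str:
    """Summarize multi-source knowledge into a single compact card."""
    context = "\n".join(f"[{it.source}] {it.text}" for it in items)
    prompt = (
        f"Query: {query}\n"
        f"Platform knowledge:\n{context}\n"
        "Summarize the salient, query-relevant facts as a short knowledge card."
    )
    return card_model.generate(prompt)

def rewrite_query(query: str, card: str, rewrite_model) -> str:
    """Rewrite the long-tail query, guided by the knowledge card."""
    prompt = (
        f"Original query: {query}\n"
        f"Knowledge card: {card}\n"
        "Rewrite the query so it better matches retrievable platform content."
    )
    return rewrite_model.generate(prompt)
```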
2. Multi-Source Knowledge Card Construction
The knowledge aggregation step encompasses:
- Platform Retrieval: Top-$k$ relevant videos are gathered via in-platform search.
- Multi-Modal Extraction: For each video, both visual content and textual components (title, caption, OCR text, author, background music) are extracted.
- High-Supply Query Expansion: The system retrieves similar queries using Q2Q (rule-based) and EMB (embedding-based) approaches, collecting associated videos for context expansion.
- Open-Domain Augmentation: Relevant documents are fetched when proprietary data is sparse.
After duplicate elimination, the resultant knowledge set is summarized by the card generation model into a compact knowledge card, distilling salient signals, resolving conflicting information, and producing a clean semantic context for rewriting guidance.
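A hedged sketch of the aggregation and deduplication step, reusing the `KnowledgeItem` type from the earlier sketch; the retrieval helpers (`platform_search`, `similar_queries`, `open_domain_search`) are assumed interfaces, not documented APIs:

```python
def aggregate_knowledge(query: str, k: int, platform_search, similar_queries,
                        open_domain_search, min_items: int = 5) -> list[KnowledgeItem]:
    """Collect multi-source knowledge for a query, then deduplicate."""
    items: list[KnowledgeItem] = []

    # 1. Platform retrieval: top-k in-platform videos for the query itself.
    items += platform_search(query, k)

    # 2. High-supply query expansion: borrow context from similar queries.
    for q_sim in similar_queries(query):          # Q2Q / EMB candidates
        items += platform_search(q_sim, k)

    # 3. Open-domain augmentation when proprietary data is sparse.
    if len(items) < min_items:
        items += open_domain_search(query, k)

    # Duplicate elimination keyed on normalized text.
    seen, deduped = set(), []
    for it in items:
        key = it.text.strip().lower()
        if key not in seen:
            seen.add(key)
            deduped.append(it)
    return deduped
```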
3. Two-Stage Training Pipeline
Both the card generation and rewriting models are trained via a staged approach:
A. Supervised Fine-Tuning (SFT):
- Training data is curated from platform search logs, with the conditioning input $x$ denoting either the multi-source knowledge $\mathcal{K}$ (for the card model) or the generated card $c$ (for the rewriting model).
- Quality filtering uses a relevance judge and system preference signals.
- The SFT loss is standard cross-entropy over the target sequence $y$ (the knowledge card or the rewritten query):

$$\mathcal{L}_{\text{SFT}} = -\,\mathbb{E}_{(q,\,x,\,y)}\left[\sum_{t=1}^{|y|} \log \pi_\theta\big(y_t \mid y_{<t},\, q,\, x\big)\right]$$
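An illustrative PyTorch computation of this loss, assuming prompt and knowledge-context tokens are masked with the conventional `-100` label (a sketch of standard SFT practice, not the paper's exact implementation):

```python
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over target tokens only.

    logits: (batch, seq_len, vocab) from the causal LM.
    labels: (batch, seq_len) with prompt/context positions set to -100.
    """
    # Shift so each position predicts the next token.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,   # exclude query/knowledge context from the loss
    )
```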
B. Group Relative Policy Optimization (GRPO):
- Post-SFT, GRPO applies reinforcement learning. For each query $q$ in dataset $\mathcal{D}$, the policy generates a group of $G$ rollout trajectories $\{o_1, \dots, o_G\}$.
- The objective maximizes an advantage-weighted probability ratio, penalized by KL divergence from the reference policy:

$$\mathcal{J}_{\text{GRPO}}(\theta) = \mathbb{E}\left[\frac{1}{G}\sum_{i=1}^{G} \min\!\Big(\rho_i A_i,\ \operatorname{clip}(\rho_i,\, 1-\epsilon,\, 1+\epsilon)\, A_i\Big)\right] - \beta\, \mathbb{D}_{\mathrm{KL}}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big)$$

with importance ratio $\rho_i = \pi_\theta(o_i \mid q) / \pi_{\theta_{\mathrm{old}}}(o_i \mid q)$ and group-normalized advantage $A_i = \big(r_i - \operatorname{mean}(\{r_j\})\big) / \operatorname{std}(\{r_j\})$.
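The group-normalized advantage at the heart of GRPO can be sketched as below (illustrative, following the standard formulation rather than production training code):

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize rewards within each rollout group.

    rewards: (num_queries, group_size) rewards for the G rollouts per query.
    Returns advantages of the same shape, zero-mean and unit-variance per group.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 queries, 4 rollouts each.
r = torch.tensor([[0.1, 0.8, 0.3, 0.6],
                  [1.0, 0.2, 0.2, 0.9]])
print(group_relative_advantages(r))
```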
4. Tailored Reward System
Training optimization relies on a composite reward $R$, balancing:
- Semantic Relevance ($R_{\text{rel}}$): Binary judge-based scoring for alignment of rewritten queries and knowledge cards with the original intent.
- System-Level Retrieval Effectiveness ($R_{\text{sys}}$): Quantifies improvements in retrieval outcomes (e.g., hitrate, clicks).
When immediate system feedback is unavailable, a Bradley-Terry reward model approximates preference probabilities between candidate rewrites:

$$P(q_a \succ q_b \mid q) = \sigma\big(r_\phi(q, q_a) - r_\phi(q, q_b)\big)$$

where $r_\phi$ is the learned reward model and $\sigma$ the sigmoid function.
The overall reward is defined piecewise over these signals: semantic relevance acts as a gate, so rewrites judged irrelevant are penalized regardless of retrieval gains, while relevant rewrites are differentiated by their system-level retrieval effectiveness (or its reward-model approximation).
This design ensures that rewriting is not only semantically faithful but also tuned for improved retrieval efficacy.
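A hedged sketch of such a relevance-gated composite reward; the concrete case values and thresholds are not given in the source, so the constants and function names below are illustrative:

```python
import math

def bradley_terry_pref(score_a: float, score_b: float) -> float:
    """P(rewrite_a preferred over rewrite_b) under a Bradley-Terry model."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

def composite_reward(is_relevant: bool, sys_gain: float | None,
                     rm_score: float | None = None, baseline: float = 0.0) -> float:
    """Relevance-gated reward (illustrative constants, not the paper's exact values).

    is_relevant: binary judge verdict for the rewritten query.
    sys_gain:    observed retrieval improvement (e.g., hitrate delta), or None.
    rm_score:    reward-model score used when system feedback is unavailable.
    """
    if not is_relevant:
        return -1.0                      # irrelevant rewrites are penalized outright
    if sys_gain is not None:
        return 1.0 if sys_gain > 0 else 0.0
    if rm_score is None:
        return 0.0                       # no signal available
    # Fall back to the Bradley-Terry preference vs. a baseline rewrite.
    return bradley_terry_pref(rm_score, baseline)
```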
5. Performance Metrics and Experimental Outcomes
Both offline and online evaluations employ multi-faceted metrics:
Offline:
- Relevance for knowledge cards (QC-Rel) and rewritten queries (QR-Rel), judged by advanced LLMs (e.g., Qwen3-235B-A22B).
- Retrieval increment: the gain in hitrate obtained by retrieving with the rewritten query rather than the original.
- Hitrate@K: Fraction of queries for which the ground-truth video appears in the top-K results.
Online:
- Long-View Rate (LVR): Proportion of queries that result in long video views.
- Click-Through Rate (CTR): Click ratio per query.
- Initiative Query Reformulation Rate (IQRR): Percentage of queries users manually reformulate.
Reported results include gains in QR-Rel and substantial increases in hitrate. A/B tests yield +1.853% LVR, +3.729% CTR, and -2.630% IQRR on covered traffic.
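For concreteness, Hitrate@K can be computed as follows (a generic sketch, not tied to Kuaishou's evaluation harness):

```python
def hitrate_at_k(results: dict[str, list[str]], ground_truth: dict[str, str], k: int) -> float:
    """Fraction of queries whose ground-truth video appears in the top-K results.

    results:      query -> ranked list of retrieved video ids.
    ground_truth: query -> id of the relevant (ground-truth) video.
    """
    hits = sum(1 for q, gt in ground_truth.items() if gt in results.get(q, [])[:k])
    return hits / len(ground_truth) if ground_truth else 0.0

# Example
res = {"q1": ["v3", "v1", "v9"], "q2": ["v5", "v2"]}
gt = {"q1": "v1", "q2": "v7"}
print(hitrate_at_k(res, gt, k=3))  # 0.5
```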
6. Deployment Strategy and System Impact
Due to strict latency requirements, CardRewriter adopts a near-line deployment. Targeted queries—those with moderate search volume, ambiguous intent, and low retrieval performance—undergo offline processing. The corresponding knowledge cards and rewritten queries (or pre-fetched video results) are cached in an online key-value store. When such a query occurs in real time, the system serves cached results for immediate response. This architecture facilitates large-scale deployment without compromising latency or relevance.
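A minimal sketch of this near-line serving pattern, assuming a generic key-value cache interface and hypothetical field names (the actual store and schema are not specified in the source):

```python
def serve_query(query: str, kv_store, retrieval_engine):
    """Serve a query, using a cached rewrite when one has been precomputed near-line."""
    cached = kv_store.get(query)          # e.g., {"rewrite": ..., "videos": [...]}
    if cached is None:
        # Query not in the targeted long-tail set: fall back to the original query.
        return retrieval_engine.search(query)
    if cached.get("videos"):
        return cached["videos"]           # pre-fetched results for immediate response
    return retrieval_engine.search(cached["rewrite"])
```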
CardRewriter has tangibly improved query rewriting and retrieval effectiveness on Kuaishou, enhancing user satisfaction and reducing the burden of manual query reformulation. The methodology demonstrates the feasibility of incorporating multi-modal, domain-specific knowledge for robust query rewriting in environments where user intent and content distribution are misaligned with generic LLM pretraining.
7. Technical Significance and Future Directions
CardRewriter’s principal innovation lies in the use of knowledge cards—a distilled, query-relevant summary of platform-specific data—to steer LLM-driven query rewriting. Combined with a principled two-stage training pipeline and a tailored reward design, it achieves strong results for proprietary content retrieval.
A plausible implication is that the approach is extensible beyond short-video platforms to other retrieval-intensive domains where user queries are long-tailed and platform content falls outside conventional LLM coverage. Future work may further refine knowledge aggregation, explore low-latency online rewriting, or integrate real-time user feedback to adapt cards and rewrite policies dynamically.