Conditional Retrieval in Recommendation Systems

Updated 1 July 2025
  • Conditional Retrieval (CR) is a retrieval approach that conditions user embeddings on explicit interests to personalize and diversify candidate recommendations.
  • It constructs condition embeddings from user-declared interests and combines them with behavioral data to generate targeted, condition-aware representations.
  • CR improves recommendation performance by addressing cold-start challenges and enhancing the retrieval of diverse, long-tail content.

Conditional Retrieval (CR) is a class of retrieval and recommendation methodologies in which candidates are produced or ranked not solely according to user and context signals, but explicitly conditioned on additional variables or constraints, often representing user-declared interests, business objectives, or situational factors. Recent industrial deployments, most notably at Pinterest, frame CR as a form of conditioned user representation learning that enhances diversity and long-tail coverage in candidate retrieval.

1. Conditional Retrieval: Definition, Context, and Motivation

In the multi-stage architecture of industrial recommendation systems (retrieval, ranking, and blending), the retrieval stage is responsible for assembling a high-recall, diverse set of candidate items. Conventional models, especially two-tower architectures, often encode user preferences in a single embedding derived from recent behavioral history. However, these models systematically struggle with representing and retrieving content relevant to a user’s explicit interests (such as self-declared topics), especially for new, low-activity, or multi-faceted users. Conditional Retrieval (CR) addresses these limitations by generating user embeddings conditioned on explicit user interests—such as followed topics—enabling more effective, personalized, and diverse candidate retrieval.

2. CR Mechanisms: Condition Construction and Association

CR systems, as implemented at Pinterest, operationalize conditioned user representation learning through two principal mechanisms:

  • Condition Construction: For each explicit user interest (e.g., a "topic" the user follows), a unique condition embedding is sourced from a topic embedding table. At retrieval time, a set of $K_{ex}$ explicit interests (topics) is sampled for each user. Each becomes a condition $c$ and is incorporated into the user tower's input. The remainder of the user's features (behavioral, demographic, etc.) are combined with $c$, usually via feature-crossing layers, yielding a conditional user embedding $\phi(u, c)$. This enables the representation to adapt specifically to each explicit interest (see the tower sketch after this list).
  • Condition Association: Training data is carefully constructed to bind user actions to their source condition. When an item $i$ is engaged (e.g., repinned) after being recommended due to a specific explicit interest $c$, a triplet $(c, u, i)$ is logged. This association ensures that the resulting user+condition embedding is trained to retrieve items genuinely relevant to the triggering explicit interest.
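The condition-construction step can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration under stated assumptions: the `ConditionalUserTower` class, the layer sizes, and the simple concatenate-then-MLP feature crossing are hypothetical stand-ins, not Pinterest's production architecture.

```python
import torch
import torch.nn as nn

class ConditionalUserTower(nn.Module):
    """Minimal sketch of a condition-aware user tower: a topic-condition
    embedding is concatenated with the user's other features and passed
    through a small feature-crossing MLP to produce phi(u, c)."""

    def __init__(self, num_topics: int, user_feat_dim: int, embed_dim: int = 64):
        super().__init__()
        # One learned embedding per explicit interest (topic): the
        # "condition embedding table" described above.
        self.topic_table = nn.Embedding(num_topics, embed_dim)
        # Stand-in for production feature-crossing layers: a simple MLP
        # over the concatenated [user features, condition] vector.
        self.cross = nn.Sequential(
            nn.Linear(user_feat_dim + embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, user_feats: torch.Tensor, topic_ids: torch.Tensor) -> torch.Tensor:
        c = self.topic_table(topic_ids)              # (B, embed_dim)
        x = torch.cat([user_feats, c], dim=-1)       # condition injection
        phi = self.cross(x)                          # phi(u, c)
        return nn.functional.normalize(phi, dim=-1)  # unit norm for ANN search

tower = ConditionalUserTower(num_topics=10_000, user_feat_dim=128)
user_feats = torch.randn(1, 128).expand(2, -1)      # same user, repeated per condition
phis = tower(user_feats, torch.tensor([42, 873]))   # two distinct conditional embeddings
```

Because each sampled topic yields its own $\phi(u, c)$, a single user contributes several distinct query embeddings per request, which is what makes the retrieved slices condition-specific.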

A relevance filter at inference time ensures that items retrieved for a condition $c$ are truly related to it (e.g., via item-topic annotation), preventing drift.

3. Mathematical Formulation and Algorithmic Workflow

Letting $u$ represent a user, $c$ an explicit interest condition, and $i$ an item:

  • The condition-aware user embedding is $\phi(u, c)$.
  • The item embedding is $\psi(i)$.
  • The retrieval affinity is:

$$f(i \mid u, c) \propto \exp\big(\phi(u, c)^\top \psi(i)\big)$$

At serving time, for each sampled explicit interest, $\phi(u, c)$ is computed and an ANN (approximate nearest neighbor) search retrieves items maximizing this score. Only items known to be relevant to $c$ (e.g., carrying topic annotation $c$) are considered for that condition.
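As a concrete illustration of this serving path, the sketch below scores items by brute-force dot product (a stand-in for a production ANN index) and then applies the relevance filter. The toy corpus, the `item_topics` annotations, and the value of `k` are all assumptions for illustration:

```python
import numpy as np

def retrieve_for_condition(phi_uc, item_embs, item_topics, condition, k=100):
    """Brute-force stand-in for an ANN index: score every item by dot
    product with phi(u, c), keep the top-k, then apply the relevance
    filter that keeps only items annotated with the triggering topic c."""
    scores = item_embs @ phi_uc                  # phi(u,c)^T psi(i) for every item
    top = np.argsort(-scores)[:k]                # top-k items by affinity
    # Post-filter: drop retrieved items not annotated with condition c.
    return [i for i in top if condition in item_topics[i]]

# Toy corpus: 1,000 items with 64-d embeddings and random topic annotations.
rng = np.random.default_rng(0)
item_embs = rng.normal(size=(1000, 64)).astype(np.float32)
item_topics = [set(rng.choice(50, size=3)) for _ in range(1000)]
phi_uc = rng.normal(size=64).astype(np.float32)
candidates = retrieve_for_condition(phi_uc, item_embs, item_topics, condition=7)
```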

Training uses triplets $(c, u, i)$ either directly from logged data or, for cold-start, via augmentation.
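The source does not spell out the training objective; a standard choice for two-tower retrieval trained on such triplets is an in-batch sampled softmax, sketched below. The temperature value, batch size, and unit normalization are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def in_batch_softmax_loss(phi, psi, temperature: float = 0.07):
    """In-batch sampled-softmax loss over a batch of (c, u, i) triplets,
    a common two-tower retrieval objective (the deployed loss may differ).
    phi: (B, d) conditional user embeddings phi(u, c)
    psi: (B, d) embeddings of the engaged items psi(i)
    Row b's positive is item b; the other items in the batch act as negatives."""
    logits = phi @ psi.T / temperature   # (B, B) affinity matrix
    labels = torch.arange(phi.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

phi = F.normalize(torch.randn(32, 64, requires_grad=True), dim=-1)
psi = F.normalize(torch.randn(32, 64, requires_grad=True), dim=-1)
loss = in_batch_softmax_loss(phi, psi)
loss.backward()  # gradients flow back through both embedding towers
```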

This approach allows the system to generate $K_{ex}$ per-condition user embeddings for each user, enabling retrieval slices tailored to various explicit interests.

4. Complementarity: Synergy with Implicit Interest Modeling (DCM)

CR is deployed jointly with implicit multi-interest modeling. Pinterest's Differentiable Clustering Module (DCM) generates $K_{im}$ user embeddings by clustering recent engagement history, modeling implicit interests responsive to short-term signals.
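DCM's internals are not detailed here, so the following is only a generic differentiable soft-clustering sketch in the same spirit: engagement-history embeddings are softly assigned to $K_{im}$ centers via a few k-means-style updates. The initialization, update rule, and all shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def soft_cluster_interests(history, k_im: int = 4, iters: int = 3):
    """Generic differentiable soft-clustering sketch (not the actual DCM
    design). Summarizes a user's engagement-history embeddings into k_im
    implicit-interest embeddings via soft k-means-style updates.
    history: (T, d) embeddings of recently engaged items."""
    centers = history[:k_im].clone()                     # naive init: first k items
    for _ in range(iters):
        # Soft assignment of each history item to each cluster center.
        assign = F.softmax(history @ centers.T, dim=-1)  # (T, k_im)
        # Differentiable center update: assignment-weighted mean.
        centers = (assign.T @ history) / assign.sum(dim=0, keepdim=True).T
    return F.normalize(centers, dim=-1)                  # k_im implicit embeddings

history = F.normalize(torch.randn(50, 64), dim=-1)       # 50 recent engagements
implicit_embs = soft_cluster_interests(history)          # (4, 64)
```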

  • CR (explicit interest embedding): Covers long-term, explicit, or under-represented user interests (e.g., topics followed, but not currently active), excelling for “cold” or non-core users.
  • DCM (implicit interest embedding): Captures recency, adaptively reflects emergent or trending interests, and excels for highly active users.

The final candidate pool is the round-robin merge (with deduplication) of candidates retrieved by each explicit and implicit user embedding; a minimal merge sketch follows the bullet below:

  • Overlap between DCM and CR candidate sets is typically low (e.g., 3.2% on Pinterest home feed), confirming that they retrieve complementary slices of the user interest space.
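A minimal version of the round-robin merge with deduplication might look as follows; the `limit` parameter and the layout (one candidate list per embedding) are assumptions:

```python
def round_robin_merge(candidate_lists, limit: int = 400):
    """Round-robin merge with deduplication across per-embedding candidate
    lists (one list per explicit CR condition or implicit DCM interest)."""
    seen, merged = set(), []
    cursors = [0] * len(candidate_lists)
    while len(merged) < limit and any(
        cur < len(lst) for cur, lst in zip(cursors, candidate_lists)
    ):
        for slot, lst in enumerate(candidate_lists):
            if cursors[slot] < len(lst):
                item = lst[cursors[slot]]
                cursors[slot] += 1
                if item not in seen:       # deduplicate across sources
                    seen.add(item)
                    merged.append(item)
    return merged[:limit]

# Two explicit (CR) slices and one implicit (DCM) slice, interleaved fairly.
pool = round_robin_merge([[1, 2, 3], [2, 4, 5], [6, 1, 7]], limit=6)
# -> [1, 2, 6, 4, 3, 5]
```

Taking one candidate from each source per pass keeps any single embedding from dominating the pool, which preserves the diversity the multi-embedding design is meant to provide.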

5. Empirical Impact on Engagement and Diversity

Deployment of the joint multi-embedding retrieval framework shows significant gains in both offline experiments and live A/B testing:

  • Explicit (CR) modeling: Outperforms inverted-index and alternative CR baselines, increasing home feed repins by up to +0.98% and diversity (A-Pincepts, i.e., unique interests adopted) by +1.03% for non-core users.
  • Implicit (DCM): Improves home feed repins by +0.86% across all users and by +1.23% for core segments, with parallel increases in diversity metrics.
  • Combining CR and DCM: Realizes additive gains (+1.09% home feed repins, +0.81% A-Pincepts), with improvements robust across user segments. For non-core (low-signal) users in particular, explicit CR lifts reach by +3.04%.
  • Gains translate into improved site-wide engagement, demonstrating ecosystem-level impact, not just retrieval-stage improvements.

This suggests that CR is particularly effective in surfacing long-tail content and reviving under-served or dormant user interests, addressing key cold-start and diversity challenges in industrial recommendation.

6. Detailed Algorithmic Steps of CR

The CR workflow proceeds through the following steps:

  • Condition construction: select explicit topics from the user profile and embed each as $c$.
  • Association for training: log each user engagement and assign it to its source topic $c$.
  • User embedding: compute $\phi(u, c)$ with feature crossing.
  • Training triplets: use $(c, u, i)$ for loss computation and backpropagation.
  • Inference/retrieval: for each $c$, retrieve items by $\phi(u, c)^\top \psi(i)$, then post-filter for relevance.
  • Serving/merging: merge results across all explicit (CR) and implicit (DCM) interests.
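Tying these steps together, the sketch below composes the hypothetical helpers from the earlier sketches (`ConditionalUserTower`, `retrieve_for_condition`, `round_robin_merge`) into one serving pass; it is illustrative glue over assumed components, not a self-contained or production implementation:

```python
import random
import torch

def cr_serving_pass(user_feats, followed_topics, tower, item_embs, item_topics,
                    dcm_candidate_lists, k_ex: int = 3, k: int = 100):
    """End-to-end serving sketch: sample K_ex explicit interests, build
    phi(u, c) per condition, retrieve and relevance-filter per condition,
    then round-robin merge with the implicit (DCM) candidate lists.
    Reuses the hypothetical helpers defined in the sketches above."""
    conditions = random.sample(followed_topics, min(k_ex, len(followed_topics)))
    cr_lists = []
    for c in conditions:
        # One conditional user embedding per sampled explicit interest.
        phi = tower(user_feats, torch.tensor([c]))[0].detach().numpy()
        cr_lists.append(retrieve_for_condition(phi, item_embs, item_topics, c, k))
    # Explicit (CR) and implicit (DCM) slices are merged with deduplication.
    return round_robin_merge(cr_lists + dcm_candidate_lists)
```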

7. Significance and Broader Implications

CR, as realized in the Pinterest framework, demonstrates that explicit condition injection into user embeddings unlocks new performance regimes for retrieval systems, particularly by:

  • Enhancing coverage of both active and long-tail/off-cycle user interests.
  • Providing personalization options even for users with sparse behavioral history.
  • Enabling robust, fine-grained retrieval structure adaptable to a diverse user base.
  • Allowing comprehensive experimentation with condition granularity and association, as well as fusion strategies with other embedding approaches.

A plausible implication is that similar CR methodologies could be extended to other verticals (news, commerce, social media), wherever fine-grained, high-diversity candidate generation is critical. The success of joint explicit-implicit modeling supports the view that modern large-scale retrieval systems will increasingly rely on a portfolio of conditional, personalized retrieval strategies calibrated for different user behaviors and business goals.