
Diffusion Preference-based Reward (DPR)

Updated 17 February 2026
  • Diffusion Preference-based Reward (DPR) is a framework that uses diffusion models to capture complex, multi-modal human preferences through denoising signals.
  • It replaces traditional scalar reward models by leveraging step-wise, context-aware reward estimation in both offline RL and image synthesis tasks.
  • Empirical results show DPR improves sample efficiency and alignment, outperforming MLP and Transformer-based approaches in benchmark settings.

Diffusion Preference-based Reward (DPR) denotes a class of techniques that leverage diffusion models to learn reward or preference functions from data involving human or proxy comparisons, with the goal of aligning generative models (for control, image synthesis, etc.) to nuanced or implicit objectives. DPR architectures replace traditional reward modeling approaches (e.g., MLP, Transformer, or Vision-LLM-based surrogates) by training the diffusion model itself—either directly as a reward discriminator or implicitly via preference-driven fine-tuning—yielding step-wise, temporally and contextually adaptive reward signals. These models have demonstrated superior alignment and sample efficiency in both offline reinforcement learning and large-scale preference-optimized generative modeling.

1. Core Principles and Motivation

Preference-based reinforcement learning (PbRL) obviates the need for hand-specified reward functions by leveraging pairwise comparison data—i.e., which of two trajectories, images, or sequences a human or oracle prefers. Traditional approaches fit a scalar reward model $r_\phi(s,a)$ using the Bradley-Terry model, which defines

$$P(\tau^i \succ \tau^j) = \frac{\exp\left(\sum_{h} r_\phi(s^i_h, a^i_h)\right)}{\exp\left(\sum_{h} r_\phi(s^i_h, a^i_h)\right) + \exp\left(\sum_{h} r_\phi(s^j_h, a^j_h)\right)}$$

and then minimizes a cross-entropy loss over ground-truth preference labels. However, scalar or MLP/Transformer-based reward models poorly capture the high-dimensional, multi-modal structure of preference distributions over state-action or model output space; they are especially limited when preference labels are sparse, high-variance, or reflect non-additive objectives (Pang et al., 3 Mar 2025).
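The Bradley-Terry probability above reduces to a sigmoid of the difference in summed trajectory returns, which is how it is usually implemented. A minimal NumPy sketch (the function names are illustrative, not from any cited codebase):

```python
import numpy as np

def bradley_terry_prob(return_i, return_j):
    """P(tau_i > tau_j) under the Bradley-Terry model.

    return_i / return_j are the summed per-step rewards
    sum_h r_phi(s_h, a_h) of the two trajectories.
    exp(R_i) / (exp(R_i) + exp(R_j)) == sigmoid(R_i - R_j),
    which is the numerically stable form used here.
    """
    return 1.0 / (1.0 + np.exp(-(return_i - return_j)))

def preference_loss(return_i, return_j, label):
    """Cross-entropy on a preference label (1 if tau_i preferred, else 0)."""
    p = bradley_terry_prob(return_i, return_j)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))
```

Fitting $r_\phi$ then amounts to minimizing `preference_loss` over the labeled pairs, which is exactly the cross-entropy objective described above.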

DPR leverages diffusion models' capacity to model complex distributions in high-dimensional data, both for reward learning and for aligning generative/policy models. Diffusion-based reward models can capture fine-grained, temporally local, and multi-modal preference structure—enabling robust reward estimation even under noisy, incomplete, or ambiguous human feedback.

2. Methodological Frameworks for DPR

DPR appears in several concrete algorithmic instantiations:

2.1. Diffusion Discriminator/Reward Model

Given state-action pairs $(s, a)$ (or, in generative modeling, image latents $x_t$), a forward diffusion process adds noise over $T$ steps,

$$q(x_t \mid x_{t-1}) = \mathcal{N}\left(x_t ;\, \sqrt{\alpha_t}\, x_{t-1},\, (1-\alpha_t)I\right),$$

and a neural network $\epsilon_\phi(x_t, t)$ is trained to denoise:

$$\mathcal{L}_{\mathrm{DM}} = \mathbb{E}\, \left\|\epsilon - \epsilon_\phi\!\left(\sqrt{\bar\alpha_t}\, x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,\; t\right)\right\|^2$$

The learned diffusion scores $D_\phi(s,a;\epsilon,t) = \exp\left(-\|\epsilon - \epsilon_\phi((s,a), t)\|^2\right)$ assign high values to points on the high-preference manifold. Aggregating these scores over timesteps yields a scalar reward, e.g.,

$$r_D(s, a) = -\frac{1}{T} \sum_{t=1}^{T} \log\left(1 - D_\phi(s, a; \epsilon, t)\right)$$

Conditional DPR (C-DPR) extends this to relative preference labels by training separate reward heads for "preferred" and "non-preferred" classes and recombining them into a preference probability (Pang et al., 3 Mar 2025).
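To make the score-aggregation step concrete, the sketch below computes $r_D(s,a)$ from a trained denoiser following the equations above. The denoiser interface `eps_phi`, the `alpha_bar` schedule, and the small constant inside the log are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_reward(x0, eps_phi, alpha_bar, T=10):
    """Aggregate denoising scores into a scalar reward (Sec. 2.1 sketch).

    x0        : flattened (s, a) vector on the data manifold
    eps_phi   : trained denoiser, eps_phi(x_t, t) -> predicted noise
    alpha_bar : cumulative noise-schedule values, len >= T (hypothetical)
    """
    total = 0.0
    for t in range(1, T + 1):
        eps = rng.standard_normal(x0.shape)                   # sample noise
        a_bar = alpha_bar[t - 1]
        x_t = np.sqrt(a_bar) * x0 + np.sqrt(1 - a_bar) * eps  # forward process
        err = np.sum((eps - eps_phi(x_t, t)) ** 2)            # denoising error
        D = np.exp(-err)                                      # score in (0, 1]
        total += -np.log(1 - D + 1e-8)                        # 1e-8 avoids log(0)
    return total / T
```

A denoiser that tracks the sampled noise well yields scores $D$ near 1 and hence a large reward; a poor denoiser drives $D$ toward 0 and the reward toward zero.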

2.2. Latent Reward Modeling and Step-Level Optimization

Recent image synthesis frameworks repurpose latent diffusion backbones to operate natively in latent space, learning timestep-aware reward models $r_\theta(x_t, t, c)$ and optimizing preference objectives directly at each denoising step (Zhang et al., 3 Feb 2025, Liu et al., 11 Feb 2026). Noise-calibrated likelihoods, e.g., using Thurstone's model with variance scaled to the timestep $t$, further adapt the reward to stepwise uncertainty.
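As an illustration of a noise-calibrated likelihood, the sketch below evaluates a Thurstone (probit) preference probability whose comparison variance grows with the timestep, so the model is less confident at high noise levels. The linear variance schedule and `sigma0` are hypothetical choices, not taken from the cited papers:

```python
import math

def thurstone_pref_prob(r_w, r_l, t, sigma0=1.0, T=1000):
    """Thurstone preference likelihood Phi((r_w - r_l) / sigma_t).

    r_w, r_l : rewards of the preferred and non-preferred samples at step t
    sigma_t  : comparison noise, grown linearly with t (illustrative schedule)
    """
    sigma_t = sigma0 * (1.0 + t / T)            # hypothetical variance schedule
    z = (r_w - r_l) / (math.sqrt(2.0) * sigma_t)
    return 0.5 * (1.0 + math.erf(z))            # standard normal CDF
```

The effect is that the same reward margin yields a flatter (less confident) likelihood at large $t$, which is the calibration behavior described above.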

2.3. Direct Preference Optimization and Surrogate Objectives

DPR can be instantiated as direct preference optimization (DPO) in the diffusion setting. At each step, one computes per-sample denoising errors under the current and reference models ($F_\theta(x, t)$), then maximizes the margin between preferred and non-preferred samples:

$$\mathcal{L}_{\text{DPR}}(\theta) = -\mathbb{E}_{(x^l, x^w), t}\, \log \sigma\left(A_t \left[F_\theta(x^w, t) - F_\theta(x^l, t)\right]\right)$$

This yields stable, sample-efficient training and directly enforces preference margins on the data manifold (Huh et al., 16 Feb 2025).
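A minimal numerical sketch of this margin loss follows. Parameterizing $F_\theta$ as the reference-minus-current denoising error is one common choice in diffusion DPO and is an assumption here; the exact form varies across papers:

```python
import numpy as np

def dpr_dpo_loss(err_ref_w, err_cur_w, err_ref_l, err_cur_l, A_t=1.0):
    """DPO-style loss on one (winner, loser) pair at a single timestep.

    err_ref_* / err_cur_* : per-sample denoising errors under the frozen
    reference model and the current model (assumed parameterization).
    """
    F_w = err_ref_w - err_cur_w   # > 0 if current model denoises the winner better
    F_l = err_ref_l - err_cur_l
    margin = A_t * (F_w - F_l)
    # -log sigmoid(margin), written in a numerically stable form
    return np.logaddexp(0.0, -margin)
```

At zero margin the loss is $\log 2$, and it decreases monotonically as the preferred sample's denoising advantage over the non-preferred one grows.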

3. Algorithmic Integration and Workflow

A typical DPR-based pipeline involves:

  1. Reward/Discriminator Pretraining: Train a diffusion model (or latent/conditional variant) on labeled preference pairs, using an objective as above, to learn a denoising-based reward or score function.
  2. Reward Assignment: Assign scalar rewards to new state-action pairs or diffusion outputs by aggregating diffusion-based scores over timesteps (or noising levels).
  3. Offline RL Loop or Fine-Tuning: Plug the DPR rewards into downstream RL algorithms (e.g., CQL, IQL, TD3BC) or generative fine-tuning (LoRA, DPO), updating the policy or generator with standard objectives but using DPR-derived feedback.

Pseudocode for reward extraction and offline RL with DPR is as follows:

# Phase 1: pretrain the diffusion reward model on the labeled preference set D_L
for k in range(K):
    (tau0, tau1) = sample_preference_pair(D_L)  # trajectory pair with preference label
    t = sample_uniform(1, T)                    # random diffusion timestep
    eps = sample_noise()                        # Gaussian noise for the forward process
    update_phi_with_DPR_or_C_DPR_loss(...)      # as per above equations

# Phase 2: label the unlabeled set D_U with DPR rewards and run offline RL
while not_converged:
    (s, a) = sample_unlabeled(D_U)
    r = r_D(s, a)                               # or r_C(s, a) for the conditional variant
    perform_offline_RL_update((s, a, r))        # e.g., CQL, IQL, or TD3BC
(Pang et al., 3 Mar 2025)

4. Empirical Results and Comparative Performance

DPR and C-DPR have been benchmarked on classic control (Gym-MuJoCo, Adroit) and image generation tasks, with consistent improvements:

  • In offline RL with crowd-sourced (human) labels, DPR/C-DPR improved MuJoCo normalized scores from roughly 73.3 (MLP) and 71.5 (Transformer) to roughly 78.8 (DPR) and 79.3 (C-DPR); on Adroit, scores improved from roughly 21.1/22.1 to 25.2/29.0 (Pang et al., 3 Mar 2025).
  • For preference-optimized diffusion models, DPR and variants (e.g., LRM, LPO, DiNa-LRM, SmPO) achieve higher pairwise win rates, sample quality (PickScore, HPSv2, Aesthetic), and computational efficiency compared to VLM reward-based pipelines (Zhang et al., 3 Feb 2025, Liu et al., 11 Feb 2026, Lu et al., 3 Jun 2025).
  • Stepwise latent reward modeling (e.g., DiNa-LRM) narrows the gap to state-of-the-art VLM-based reward performance while reducing compute and memory by roughly 50% (Liu et al., 11 Feb 2026).
  • For discrete data, DPR adapts to continuous-time Markov chain diffusion (D2-DPO), successfully aligning structured sequence models by direct preference optimization without explicit reward (Borso et al., 11 Mar 2025).

5. Extensions and Variants

DPR methodology supports various extensions:

  • Conditional and Multidimensional Reward: C-DPR conditions explicitly on preference context. Multi-dimensional extensions (MCDPO) address reward conflict by injecting outcome vectors and supporting dynamic tradeoffs among preference axes at inference (Jang et al., 11 Dec 2025).
  • Personalized and Multi-User Models: Personalized DPR extracts user embeddings from VLMs and conditions the diffusion model via cross-attention, enabling few-shot generalization to new individual preferences (Dang et al., 11 Jan 2025).
  • Smoothed and Calibration Techniques: SmPO-Diffusion introduces a smoothed likelihood mixture (replacing binary preference) and ReNoise inversion to better match the true preference distribution and mitigate over-optimization (Lu et al., 3 Jun 2025).
  • Dense Reward and Temporal Discounting: Dense DPR strategies inject temporal discounting into the preference loss, emphasizing early denoising steps critical for global structure in diffusion models (especially T2I generation) (Yang et al., 2024).
  • Preference-based Policy Alignment: FKPD applies DPR-style DPO with forward KL constraints for policy diffusion models, regularizing against out-of-distribution actions (Shan et al., 2024).

6. Limitations and Open Questions

Although DPR-based rewards offer expressivity and alignment beyond what shallow MLP/Transformer reward models can achieve, several issues remain open:

  • Binarization of Preference: Most current DPR variants partition data into preferred/dispreferred. Extensions to finer gradations, multi-class, or continuous scales remain under exploration (Pang et al., 3 Mar 2025, Jang et al., 11 Dec 2025).
  • Theoretical Understanding: The connection between diffusion denoising errors, learned score margins, and actual human preferences is empirically strong but lacks a full theoretical characterization (Pang et al., 3 Mar 2025).
  • Computational Cost: Multi-step denoising/score evaluation is more costly than scalar reward models, though some studies find $T=10$ steps are often sufficient (Pang et al., 3 Mar 2025). Latent and step-level approaches (e.g., LPO, DiNa-LRM) offer significant efficiency gains (Liu et al., 11 Feb 2026, Zhang et al., 3 Feb 2025).
  • Reward Model Bias and Data Dependence: Use of proxy or synthetic reward models, or offline human preferences, may introduce alignment bias; methods robust to noisy or inconsistent preference data are being developed (Lu et al., 3 Jun 2025, Deng et al., 2024).

7. Summary and Outlook

Diffusion Preference-based Reward mechanisms replace classical scalar reward modeling pipelines in both RL/control and generative modeling, training diffusion models to directly capture human or task preferences over state-action pairs or generated outputs. By leveraging the inherent expressiveness and multi-modality of diffusion architectures, DPR yields more robust, granular, and calibrated reward signals, enabling higher-quality preference alignment. Extensions to personalized, multi-attribute, and discrete data settings, as well as efficiency improvements, have established DPR as a central paradigm for learning from preferences in high-dimensional generative and decision spaces (Pang et al., 3 Mar 2025, Zhang et al., 3 Feb 2025, Jang et al., 11 Dec 2025, Liu et al., 11 Feb 2026). Future work is focused on multi-class or continuous preference learning, further computational scaling, theoretical analysis, and integration with real-time human feedback.
