
Compositional Diffusion with Guided Search (CDGS)

Updated 7 January 2026
  • The paper introduces CDGS, a framework that intertwines guided search with diffusion denoising to overcome mode-averaging in structured generative tasks.
  • It employs batch-based sampling, population pruning, and iterative local-to-global message passing to cohesively compose outputs from overlapping local models.
  • Empirical results demonstrate CDGS's superior performance in robotic planning, layout generation, and video synthesis compared to traditional diffusion methods.

Compositional Diffusion with Guided Search (CDGS) is a framework for structured generative modeling that synthesizes globally coherent outputs—such as long-horizon robotic plans, complex multi-object layouts, panoramas, or videos—by composing and coordinating the outputs of locally trained diffusion models via an embedded search procedure within the denoising process. CDGS addresses the breakdowns of naïve compositional diffusion methods when faced with multimodal local distributions, achieving robust, globally consistent synthesis by coupling denoising with batch-based selection, population-based pruning, and iterative local-to-global message passing.

1. Foundational Principles of Compositional Diffusion

Classical diffusion models, such as those based on DDPM, provide a flexible generative backbone but are typically monolithic, sampling entire data instances directly. In compositional settings, the generative process aims to assemble a global configuration from overlapping, locally valid factors (e.g., local state transitions in planning or object relations in layout synthesis). Given a set of local generative models $p(y_k)$, the goal is to sample global structures $t = (x_1, \dots, x_N)$ with strong local and global consistency. The Bethe-approximate factor graph posterior takes the form

$$p(t) \approx \prod_k p(y_k) \Big/ \prod_i p(x_i)^{d_i - 1}$$

where $y_k$ are local (possibly overlapping) subsequences, and $d_i$ is the number of factors involving $x_i$ (Mishra et al., 31 Dec 2025).
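To make the factorization concrete, the Bethe-style overlap correction can be sketched in a few lines of Python (a minimal sketch; the function name and the toy log-probabilities are illustrative, not from the paper):

```python
def bethe_log_prob(factor_log_probs, marginal_log_probs, degrees):
    """Bethe-approximate log posterior: sum the log-probabilities of the
    overlapping local factors p(y_k), then subtract (d_i - 1) copies of
    each variable's marginal log p(x_i) to correct for double counting."""
    factor_term = sum(factor_log_probs)
    correction = sum((d - 1) * lp for lp, d in zip(marginal_log_probs, degrees))
    return factor_term - correction
```

With two overlapping factors sharing one variable of degree 2, the shared marginal is counted once rather than twice.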

CDGS was introduced to overcome the failure of naïve composition of diffusion models, which, when faced with multimodal $p(y_k)$, averages over incompatible local modes, producing incoherent or infeasible global samples. The key innovation is intertwining guided search operations with each denoising step, enabling the selective exploration and reinforcement of compatible configurations throughout the sampling trajectory (Mishra et al., 31 Dec 2025, Fan et al., 24 Sep 2025).

2. Mathematical Formulation and Denoising Algorithms

For a given compositional task—plan synthesis, layout generation, or panoramic construction—the CDGS framework defines specific local models and their integration:

  • Layout Planning Example: Let $N$ be the number of objects, $S = \{s_0, \dots, s_{N-1}\}$ their sizes, $R = \{r_1, \dots, r_M\}$ the relationships, and $P = \{p_0, \dots, p_{N-1}\}$ the positions. Each relationship $r$ is modeled with an energy function $E_r(P \mid S_r)$, leading to the joint:

$$p(P \mid S, R) \propto \exp\left( -\sum_{r \in R} E_r(P \mid S_r) \right)$$

Denoising proceeds using an annealed Unadjusted Langevin Algorithm (ULA) or a DDPM-style update, where at each timestep $t$ the predicted noise vectors for all relations are summed:

$$\epsilon_\text{combined}(P^{(t)}, t) = \sum_{r \in R} \epsilon_{t,r}(P^{(t)}, S_r)$$

Reverse update:

$$P^{(t-1)} \leftarrow \frac{1}{\sqrt{\alpha_t}} \left[ P^{(t)} - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}} \epsilon_\text{combined}(P^{(t)}, t) \right] + \sigma_t \eta$$

The denoising networks are trained with an MSE loss between the true noise and the summed noise prediction (Fan et al., 24 Sep 2025).
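The summed-noise reverse update can be sketched as follows (a minimal NumPy sketch; `eps_fns` stands in for trained per-relation noise predictors, which are hypothetical here):

```python
import numpy as np

def ddpm_reverse_step(P_t, eps_fns, S_rels, t, alpha_t, alpha_bar_t, sigma_t,
                      rng=np.random.default_rng(0)):
    """One reverse step: sum the per-relation noise predictions, then apply
    the standard DDPM mean update plus Gaussian noise scaled by sigma_t."""
    eps_combined = sum(f(P_t, S_r, t) for f, S_r in zip(eps_fns, S_rels))
    mean = (P_t - (1 - alpha_t) / np.sqrt(1 - alpha_bar_t) * eps_combined) / np.sqrt(alpha_t)
    return mean + sigma_t * rng.standard_normal(P_t.shape)
```

With zero predicted noise and $\sigma_t = 0$, the step reduces to rescaling $P^{(t)}$ by $1/\sqrt{\alpha_t}$, which makes the update easy to sanity-check.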

  • Long-Horizon Planning Example: For structured planning, the process divides the trajectory into overlapping segments, each denoised individually using local models and then reconciled. The forward diffusion and reverse denoising are:

    • Forward: $y(t) = \sqrt{\alpha_t}\, y(0) + \sqrt{1-\alpha_t}\, \epsilon$, with $\epsilon \sim \mathcal{N}(0, I)$
    • Reverse (DDIM):

    $$\hat{y}(0)_t = \frac{y(t) - \sqrt{1-\alpha_t}\, \epsilon_\theta(y(t), t)}{\sqrt{\alpha_t}}$$

    $$y(t-1) = \sqrt{\alpha_{t-1}}\, \hat{y}(0)_t + \sqrt{1-\alpha_{t-1}}\, \epsilon_\theta(y(t), t)$$

(Mishra et al., 31 Dec 2025).
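The two DDIM equations translate directly into code (a minimal NumPy sketch; the noise prediction is passed in as a plain array rather than computed by a network):

```python
import numpy as np

def ddim_step(y_t, eps_pred, alpha_t, alpha_prev):
    """Deterministic DDIM update: first recover the clean estimate y^(0)_t,
    then re-project it to noise level t-1 using the same predicted noise."""
    y0_hat = (y_t - np.sqrt(1 - alpha_t) * eps_pred) / np.sqrt(alpha_t)
    return np.sqrt(alpha_prev) * y0_hat + np.sqrt(1 - alpha_prev) * eps_pred
```

At $\alpha_{t-1} = 1$ the step returns the clean estimate itself, and with $\alpha_{t-1} = \alpha_t$ it is the identity, matching the equations above term by term.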

3. Guided Search: Batch-Based Sampling and Pruning

CDGS interleaves population-based search and pruning within the denoising procedure at each diffusion timestep. At time $t$, a batch of $B$ candidate global samples $\{z^{(b)}(t)\}$ is evolved:

  • Each $z^{(b)}(t-1)$ is generated from $z^{(b)}(t)$ using compositional denoising updates.
  • A global cost $J(z(0))$ is defined, often in terms of DDIM-inversion curvature or a surrogate for log-likelihood across segments.
  • Guided proposal density:

$$p_s(z(t-1) \mid z(t)) \propto p_0(z(t-1) \mid z(t)) \cdot \exp\left( -J(z(t-1)) / (2\alpha) \right)$$

where $p_0$ is the default reverse diffusion transition and $\alpha$ controls the exploration strength (Mishra et al., 31 Dec 2025).

  • Candidates with the lowest global cost are retained (“elite” selection, often top-$K$), and the batch is repopulated by duplicating these elite samples.
  • Iterative forward and backward resampling within the batch propagates information via overlapping segments, akin to belief-propagation, thereby enforcing global coherence.

This approach mitigates the “mode-averaging” phenomenon found in naïve compositional diffusion by explicitly favoring candidates that respect both local multimodality and global feasibility.
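The selection-and-repopulation step can be sketched as follows (a sketch; costs are supplied as a precomputed list and candidates are plain Python objects, so the `plan_cost` evaluation itself is outside this snippet):

```python
import numpy as np

def prune_and_repopulate(candidates, costs, K):
    """Keep the K lowest-cost candidates ("elites") and tile them back up
    to the original batch size B by duplication."""
    B = len(candidates)
    elite_idx = np.argsort(costs)[:K]          # indices of the K best costs
    elites = [candidates[i] for i in elite_idx]
    return [elites[b % K] for b in range(B)]   # cycle elites to refill the batch
```

Each surviving elite then seeds several denoising trajectories at the next timestep, concentrating the batch on mutually compatible modes.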

4. Integration with Symbolic Reasoning and Vision-Language Agents

In tasks such as spatial layout synthesis, CDGS is tightly coupled with symbolic representations and VLMs:

  • A vision-language agent preprocesses input instances, extracting object instances, estimating physical sizes, and constructing scene graphs $G = (O, R)$, where $O = \{o_i\}$ and $R$ encodes relations as logical predicates.
  • Each predicate $h_r(P)$ indicates satisfaction of relationship $r$ for a given layout $P$.
  • During denoising, hard constraints based on $h_r$ can be enforced by pruning or by injecting penalty gradients into the reverse update:

$$g_\text{guidance}(P^{(t)}) = \lambda \nabla_P \sum_{r \in R} \log h_r(P^{(t)})$$

Often, hard satisfaction is preferred, i.e., candidates are rejected if $h_r(P^{(t)}) = 0$ for any $r$ (Fan et al., 24 Sep 2025).

  • Ultimately, the output is a set of valid bounding boxes or trajectory states, which serve as input (e.g., via inpainting) to downstream conditional generative models.
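Hard-constraint rejection amounts to filtering the candidate batch through all predicates (a sketch; the predicate here is an illustrative lambda, not one of the paper's scene-graph relations):

```python
def prune_by_predicates(candidates, predicates):
    """Reject any layout P for which some relation predicate h_r(P) fails."""
    return [P for P in candidates if all(h(P) for h in predicates)]
```

For example, with a hypothetical "left-of" predicate on two coordinates, only layouts satisfying the relation survive the denoising step.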

5. Implementation Details and Pseudocode

CDGS is implemented with domain-adaptive score networks and clearly prescribed training regimes:

$$L(\theta) = \mathbb{E}_{y(0), \epsilon, t} \left\| \epsilon - \epsilon_\theta\big(\sqrt{\alpha_t}\, y(0) + \sqrt{1-\alpha_t}\, \epsilon, t\big) \right\|^2$$
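This is the standard noise-prediction objective; a single-sample Monte-Carlo estimate looks like the following (a sketch; `eps_theta` is any callable standing in for the score network, and the signature is illustrative):

```python
import numpy as np

def denoising_loss(eps_theta, y0, t, alpha_t, rng=np.random.default_rng(0)):
    """One-sample estimate of E || eps - eps_theta(sqrt(a) y0 + sqrt(1-a) eps, t) ||^2."""
    eps = rng.standard_normal(y0.shape)                      # sample the true noise
    y_t = np.sqrt(alpha_t) * y0 + np.sqrt(1 - alpha_t) * eps  # forward diffusion
    return float(np.sum((eps - eps_theta(y_t, t)) ** 2))     # squared-norm residual
```

An oracle predictor that recovers the exact noise drives the loss to zero, which is a quick correctness check for the forward-noising convention.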

A high-level pseudocode for the CDGS main loop, abridged from Mishra et al. (31 Dec 2025):

Initialize B candidates Z_b(T) ~ N(0, I)
for t = T, ..., 1:
    # Compositional denoising of each candidate
    for b = 1, ..., B:
        Z_b(t-1) <- compositional_DDIM(Z_b(t), net, alpha_t)
    # Iterative resampling propagates information across overlapping segments
    for u = 1, ..., U-1:
        Z <- forward_noising(Z, alpha_t)
        Z <- compositional_DDIM(Z, net, alpha_t)
    # Score and prune the population
    for b = 1, ..., B:
        J_b <- plan_cost(Z_b(0))
    select top-K candidates by J_b
    repopulate to B by duplicating elites
return best Z_b(0)

Compositional segment updates and aggregation are managed as per the compositional_DDIM routine, segmenting and denoising each local window, then merging results to form the next global sample (Mishra et al., 31 Dec 2025).
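A minimal version of that merge step, which averages overlapping denoised windows back into a single global sample, could look like this (a sketch assuming 1-D trajectories and uniform averaging over overlaps; the paper's exact aggregation rule may differ):

```python
import numpy as np

def merge_segments(segments, starts, total_len):
    """Average overlapping local windows into one global trajectory."""
    acc = np.zeros(total_len)
    count = np.zeros(total_len)
    for seg, s in zip(segments, starts):
        acc[s:s + len(seg)] += seg       # accumulate each window's values
        count[s:s + len(seg)] += 1       # track how many windows cover each index
    return acc / np.maximum(count, 1)    # uniform average; guard uncovered gaps
```

Positions covered by two windows receive the mean of both local denoising results, which is how overlap acts as the communication channel between segments.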

6. Applications and Empirical Results

CDGS demonstrates versatility and high performance across several domains (Mishra et al., 31 Dec 2025, Fan et al., 24 Sep 2025):

  • Robot Manipulation Planning: On OGbench Maze and Scene tasks, CDGS matches or exceeds diffusion-based and RL baselines without long-horizon training data. For example, on PointMaze stitch, CDGS achieves 82% versus Diffuser's 29%, evidencing successful composition of short-horizon skills.
  • Task-and-Motion Planning (TAMP): CDGS outperforms no-PDDL baselines and rivals privileged PDDL+CEM methods, e.g., 0.64 for Hook Reach Task 1 vs 0.66 for the best baseline.
  • Panoramic Image Generation: By composing Stable Diffusion 2.0 patches into large panoramas, CDGS outperforms Multi-Diffusion and Sync-Diffusion on global coherence and prompt alignment metrics.
  • Long Video Synthesis: CDGS produces coherent 350-frame CogVideoX-2B samples, surpassing baselines on subject consistency, although with minor trade-offs in aesthetic quality.

In spatial layout generation, as realized in LayoutAgent, CDGS produces object layouts that respect geometric and semantic constraints, outperforming prior models on criteria such as layout coherence and aesthetic alignment (Fan et al., 24 Sep 2025).

7. Limitations and Future Development

Reported limitations include:

  • Requires explicit specification of start/goal states (for planning) or scene constraints (for layouts); generalization to variable-goal or unconstrained synthesis is not yet fully established.
  • The compositional horizon $H$ must be selected in advance, with no automated mechanism for optimizing sequence length or resizing domains.
  • While overlapping segments and iterative resampling communicate local information, global consistency is ultimately limited by the factorization structure; more expressive message passing or attention across non-local, long-range dependencies could further enhance performance (Mishra et al., 31 Dec 2025).

A plausible implication is that future advances may focus on adaptive horizon selection, automated factor graph construction, and richer integration of symbolic reasoning with end-to-end generative architectures.
