
InfiniteDiffusion: Infinite-Domain Generative Modeling

Updated 11 December 2025
  • InfiniteDiffusion is a class of generative algorithms that models infinite-dimensional signals using stochastic differential equations for rigorous function synthesis.
  • It employs a two-phase approach with training via denoising score matching and sampling via Euler–Maruyama discretization to efficiently generate unbounded outputs.
  • The framework guarantees seed-consistency, unlimited spatial extent, and constant-time random access, making it ideal for real-time procedural content generation.

InfiniteDiffusion is a class of algorithms for generative modeling in infinite-dimensional or unbounded domains, designed to combine the high fidelity of diffusion models with properties essential for procedural synthesis: seamless infinite extent, deterministic seed-consistency, and constant-time random access. The framework was formalized in the context of infinite-dimensional stochastic differential equations (SDEs) for general function modeling (Pidstrigach et al., 2023), and instantiated algorithmically for real-time, infinite, and coherent terrain generation by extending windowed MultiDiffusion schemes to unbounded spatial domains (Goslin, 9 Dec 2025).

1. Mathematical Foundations of InfiniteDiffusion

InfiniteDiffusion is grounded in a rigorous infinite-dimensional SDE formalism. The data law $\mu_{\rm data}$ is defined on a separable Hilbert space $(H,\langle\cdot,\cdot\rangle_H)$, capturing infinite-dimensional signals such as images or functions. The forward process is given by the SDE

$$\text{(Forward)}\qquad X_0\sim\mu_{\rm data},\qquad dX_t = -\tfrac12\,X_t\,dt + dW_t^U,$$

where $W_t^U$ is a $C$-Wiener process with trace-class covariance operator $C:H\to H$ and Cameron–Martin space $U$. The time-reversal $Y_t = X_{T-t}$ yields the reverse SDE

$$\text{(Reverse)}\qquad Y_0\sim\mathbb{P}_T,\qquad dY_t = \tfrac12\,Y_t\,dt + s(T-t,Y_t)\,dt + dW_t^U,$$

where the "score" $s(t,x)$ is a Hilbert-space analog of the finite-dimensional score:
$$s(t,x) = -\frac{1}{1-e^{-t}}\left( x - e^{-t/2}\,\mathbb{E}[X_0\mid X_t=x] \right).$$
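In a finite discretization, the forward SDE admits the closed-form marginal $X_t = e^{-t/2}X_0 + \sqrt{1-e^{-t}}\,\xi$ with $\xi\sim\mathcal{N}(0,C)$, which the following sketch simulates. The grid size, the diagonal decaying spectrum for $C$, and all names are illustrative assumptions, not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # grid resolution of the discretized signal

# A trace-class-like covariance: diagonal with a rapidly decaying spectrum.
eigvals = 1.0 / (1.0 + np.arange(D)) ** 2

def sample_noise(rng, eigvals):
    """Draw xi ~ N(0, C) for a diagonal C with the given spectrum."""
    return np.sqrt(eigvals) * rng.standard_normal(len(eigvals))

def forward_marginal(x0, t, rng, eigvals):
    """Closed-form marginal of dX_t = -1/2 X_t dt + dW_t^U:
    X_t = e^{-t/2} x0 + sqrt(1 - e^{-t}) xi,  xi ~ N(0, C)."""
    xi = sample_noise(rng, eigvals)
    return np.exp(-t / 2) * x0 + np.sqrt(1 - np.exp(-t)) * xi

x0 = np.ones(D)
xt = forward_marginal(x0, t=1.0, rng=rng, eigvals=eigvals)
```

At $t=0$ the marginal returns the data sample unchanged; as $t$ grows, it contracts toward the stationary $\mathcal{N}(0,C)$ reference.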

Existence and uniqueness results hold under mild regularity assumptions:

  • Time-reversal is well-defined in infinite dimensions if both $\mu_{\rm data}$ and $\mathcal{N}(0,C)$ have support in $H$.
  • Uniqueness is guaranteed if $\mu_{\rm data}$ is supported in a $U$-ball or if it is absolutely continuous with respect to a Gaussian reference with $\Phi\in C^1(H)$ and Lipschitz gradient.
  • Dimension-independent Wasserstein bounds formalize convergence and provide explicit error guarantees that scale independently of the discretization dimension.

These theoretical contributions establish InfiniteDiffusion as a principled generative framework for functions, images, and other infinite-dimensional objects (Pidstrigach et al., 2023).

2. Algorithmic Structure and Implementation

The InfiniteDiffusion algorithm consists of training and sampling phases analogous to finite-dimensional diffusion, with a focus on scalability and rigorous loss formulations. In practice, infinite-dimensional operations are discretized via basis projections or finite grids:

Training Phase

  1. Sample a mini-batch $\{x_0^i\}$ from the data.
  2. For each sample, pick a time $t^i$ and sample noise $\xi^i\sim\mathcal{N}(0,C)$.
  3. Form noisy inputs: $x_t^i = e^{-t^i/2} x_0^i + \sqrt{1-e^{-t^i}}\,\xi^i$.
  4. Predict the denoising score $\tilde s_\theta(t^i, x_t^i)$.
  5. Compute the denoising-score-matching loss, whose target is the conditional score $-\frac{x_t^i - e^{-t^i/2}x_0^i}{1-e^{-t^i}}$: $\ell = \frac{1}{B}\sum_i \big\|\tilde s_\theta(t^i,x_t^i) + \frac{x_t^i - e^{-t^i/2}x_0^i}{1-e^{-t^i}}\big\|_K^2$.
  6. Update the parameters $\theta$ by gradient descent.
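The training loop above can be sketched in a finite discretization. The linear placeholder "model", the diagonal weights defining $\|\cdot\|_K$, and the use of $C=I$ noise are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
D, B = 16, 8
k_weights = np.ones(D)    # diagonal weights defining the ||.||_K norm
theta = np.zeros((D, D))  # toy linear score model: s_theta(x) = theta @ x

def dsm_loss(theta, x0_batch, t_batch, rng):
    """Denoising-score-matching loss for one mini-batch (steps 2-5)."""
    total = 0.0
    for x0, t in zip(x0_batch, t_batch):
        xi = rng.standard_normal(D)  # xi ~ N(0, I), i.e. C = I here
        xt = np.exp(-t / 2) * x0 + np.sqrt(1 - np.exp(-t)) * xi
        pred = theta @ xt            # stand-in for \tilde s_theta(t, x_t)
        # Conditional-score target: -(x_t - e^{-t/2} x_0) / (1 - e^{-t}).
        target = -(xt - np.exp(-t / 2) * x0) / (1 - np.exp(-t))
        total += np.sum(k_weights * (pred - target) ** 2)  # ||.||_K^2
    return total / len(x0_batch)

x0_batch = rng.standard_normal((B, D))
t_batch = rng.uniform(0.1, 2.0, size=B)
loss = dsm_loss(theta, x0_batch, t_batch, rng)
```

In a real implementation `theta @ xt` would be a neural network evaluation and step 6 would apply a stochastic-gradient update to its parameters.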

Sampling Phase (Euler–Maruyama)

  1. Initialize $\tilde Y_{t_M} \sim \mathcal{N}(0,C)$.
  2. For $m=M,\ldots,1$:
    • $\Delta t = t_m - t_{m-1}$
    • $\xi \sim \mathcal{N}(0,C)$
    • Update: $\tilde Y_{t_{m-1}} = \tilde Y_{t_m} + \Delta t\left(\tfrac12\,\tilde Y_{t_m} + \tilde s_\theta(t_m,\tilde Y_{t_m})\right) + \sqrt{\Delta t}\,\xi$
  3. Output $\tilde Y_{t_0}$ as the generated sample.
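A minimal sketch of this sampler, assuming $C=I$ and a toy analytic score: when $\mu_{\rm data}=\mathcal{N}(0,I)$, the score formula of Section 1 reduces to $s(t,y)=-y$, so the sampler should reproduce a standard Gaussian. The grid of timesteps and all names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
D, M, T = 16, 100, 5.0

def score(t, y):
    # Exact score when mu_data = N(0, I) and C = I: s(t, y) = -y.
    return -y

def em_sample(score, rng, D=D, M=M, T=T):
    """Euler-Maruyama discretization of the reverse SDE."""
    ts = np.linspace(0.0, T, M + 1)
    y = rng.standard_normal(D)  # Y_{t_M} ~ N(0, C) with C = I
    for m in range(M, 0, -1):
        dt = ts[m] - ts[m - 1]
        xi = rng.standard_normal(D)
        drift = 0.5 * y + score(ts[m], y)  # reverse-SDE drift
        y = y + dt * drift + np.sqrt(dt) * xi
    return y

samples = np.stack([em_sample(score, rng) for _ in range(200)])
```

With the exact score the chain stays close to $\mathcal{N}(0,I)$ at every step, so the empirical mean and variance of the output should be near 0 and 1 up to discretization error.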

In discretized implementations for high-dimensional data (e.g., images), $C$ becomes a $D\times D$ covariance matrix and Hilbert-space norms reduce to weighted Euclidean norms.

3. Hierarchical and Infinite Domain Extensions

InfiniteDiffusion supports generation over unbounded spatial domains by combining window-based denoising with a recursive, lazy evaluation scheme (Goslin, 9 Dec 2025). Windowed denoising operators $\Phi$ act locally; at each timestep, only the finite set of windows overlapping the query region is evaluated and memoized in infinite tensor accumulators $A_{t-1}, B_{t-1}$. The key update for a query region $R$ at diffusion timestep $t$ is
$$J_{t-1}[R] = \frac{A_{t-1}[R]}{B_{t-1}[R]},$$
with window updates

$$A_{t-1}[R_i] \mathrel{+}= W_i \odot x_i,\qquad B_{t-1}[R_i] \mathrel{+}= W_i,\qquad x_i = \Phi(J_t[R_i] \mid y_i),\qquad i\in\kappa(R),$$

where $W_i$ is a weight map, $R_i$ is a spatial window, and $\kappa(R)$ is the set of windows overlapping $R$.
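The accumulator updates can be sketched with dictionaries keyed by pixel coordinate standing in for the infinite tensors; the denoiser `phi`, the window size, and the uniform weight maps are placeholders, not the paper's actual operators:

```python
import numpy as np

def phi(patch):
    """Placeholder for the windowed denoiser Phi(J_t[R_i] | y_i)."""
    return patch * 0.5

def splat(A, B, origin, x, w):
    """A[R_i] += W_i * x_i ; B[R_i] += W_i for a window at `origin`."""
    oy, ox = origin
    h, w_ = x.shape
    for dy in range(h):
        for dx in range(w_):
            key = (oy + dy, ox + dx)
            A[key] = A.get(key, 0.0) + w[dy, dx] * x[dy, dx]
            B[key] = B.get(key, 0.0) + w[dy, dx]

A, B = {}, {}
win = np.ones((4, 4))
weight = np.ones((4, 4))
# Two overlapping windows covering a query region.
splat(A, B, (0, 0), phi(win), weight)
splat(A, B, (0, 2), phi(win), weight)

# J_{t-1}[R] = A[R] / B[R] on the queried pixels.
J = {k: A[k] / B[k] for k in A}
```

Where windows overlap, `B` counts the accumulated weight, so `J` is the weighted average of the per-window predictions; pixels touched by a single window pass through unchanged. Only keys that some window has touched ever exist, mirroring the lazy, memoized evaluation.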

Generation is organized hierarchically:

  1. A coarse planetary diffusion refines low-resolution global maps.
  2. Mid-scale latent diffusion synthesizes large tiles conditioned on planetary context.
  3. A high-fidelity consistency decoder upsamples latents to high-resolution outputs, with Laplacian encoding for stabilization.

No global image is materialized, and only the visible regions are maintained in memory.
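A minimal sketch of a Laplacian-style encoding as mentioned in step 3; the 2x box down/upsampling and exact residual form are assumptions, and the paper's decoder may use a different filter:

```python
import numpy as np

def down(x):
    """2x box downsampling (average of each 2x2 block)."""
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def up(x):
    """2x nearest-neighbor upsampling."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def laplacian_encode(x):
    """Split a tile into a coarse band and a high-frequency residual."""
    coarse = down(x)
    detail = x - up(coarse)
    return coarse, detail

def laplacian_decode(coarse, detail):
    """Exact inverse of laplacian_encode."""
    return up(coarse) + detail

rng = np.random.default_rng(3)
tile = rng.standard_normal((8, 8))
c, d = laplacian_encode(tile)
```

The encode/decode pair is lossless by construction, which is what lets a coarse band be generated (or conditioned on) separately from the high-frequency detail.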

4. Properties: Seed-Consistency, Infinite Extent, and Random Access

The InfiniteDiffusion framework satisfies critical properties for procedural world generation:

  • Seed-Consistency: For any finite region $R$ and seed $s$, the output is a deterministic function of $s$ and $R$, independent of query order [(Goslin, 9 Dec 2025), Appendix A.1].
  • Seamless Infinite Extent: The model supports generation over $\mathbb{Z}^2$, ensuring outputs remain globally seamless and coherent as new regions are synthesized.
  • Constant-Time Random Access: Because each window overlaps at most $M$ others, any finite query region can be answered with $O(1)$ window evaluations, independent of absolute position [(Goslin, 9 Dec 2025), Appendix A.2].
  • Parallelization: Window evaluations are mutually independent at each timestep, supporting parallel processing across tiles [(Goslin, 9 Dec 2025), Appendix A.3].
  • Resource Efficiency: Memory usage scales as $O(\#\,\text{visible tiles})$ and is independent of the total domain size.
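Seed-consistency can be realized by deriving each window's randomness purely from the global seed, window coordinates, and timestep, so that results never depend on query order. The SHA-256-based derivation below is an illustrative assumption, not the scheme used in the paper:

```python
import hashlib

import numpy as np

def window_rng(global_seed: int, wx: int, wy: int, t: int):
    """Deterministic RNG for one (window, timestep) pair, derived by
    hashing (global_seed, window coords, timestep)."""
    msg = f"{global_seed}:{wx}:{wy}:{t}".encode()
    digest = hashlib.sha256(msg).digest()
    return np.random.default_rng(int.from_bytes(digest[:8], "little"))

def window_noise(global_seed, wx, wy, t, shape=(4, 4)):
    """Noise tile for a window; identical on every re-query."""
    return window_rng(global_seed, wx, wy, t).standard_normal(shape)

a = window_noise(42, wx=10, wy=-3, t=5)
b = window_noise(42, wx=10, wy=-3, t=5)  # same query later, any order
```

Because no mutable RNG state is shared across windows, window evaluations remain mutually independent and can be dispatched in parallel.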

5. Practical Design and Theoretical Guidelines

The design of InfiniteDiffusion algorithms is guided by the infinite-dimensional analysis:

  • Noise Covariance Selection ($C$): $C$ should be matched to $\mu_{\rm data}$ to obtain favorable Wasserstein contraction and to ensure both laws share maximal common support. For image data the canonical choice is $C=I$ (white noise); for structured functional data, smoother kernels (e.g., Matérn) and Sobolev norms are preferred.
  • Norm for Score Matching ($\|\cdot\|_K$): Must balance finiteness and stability. Two main regimes:

    1. Choose $C$ rough enough that $\operatorname{supp}(\mu_{\rm data})\subset U$, and use the Cameron–Martin norm $\|\cdot\|_U$ (IDDM1).
    2. Match $C$ to $\mu_{\rm data}$ and pick a common training norm $\|\cdot\|_K$ (IDDM2).
  • Losses: Include denoising score matching, mean absolute error (L1), perceptual similarity (LPIPS), and Kullback–Leibler divergence, as relevant to each stage.

  • Data Handling: Discretization maps the infinite-dimensional SDE to finite computations with explicit guarantees of stability as the dimension increases (Pidstrigach et al., 2023).
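The contrast between white noise ($C=I$) and a smoother covariance can be sketched on a 1-D grid. The squared-exponential kernel here stands in for, e.g., a Matérn choice; the grid size, lengthscale, and jitter are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 128
xs = np.linspace(0.0, 1.0, n)

def se_cov(xs, lengthscale=0.1, jitter=1e-6):
    """Squared-exponential covariance matrix on a 1-D grid; the small
    jitter keeps the Cholesky factorization numerically stable."""
    d = xs[:, None] - xs[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2) + jitter * np.eye(len(xs))

def sample_gaussian(cov, rng):
    """Draw xi ~ N(0, C) via the Cholesky factor of C."""
    L = np.linalg.cholesky(cov)
    return L @ rng.standard_normal(cov.shape[0])

white = rng.standard_normal(n)             # C = I (white noise)
smooth = sample_gaussian(se_cov(xs), rng)  # smooth, trace-class-like C
```

The smooth draw has far smaller increments between neighboring grid points than the white-noise draw, which is the property that makes such covariances preferable for structured functional data.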

6. Applications, Extensions, and Validation

InfiniteDiffusion has been empirically validated in several domains:

  • Image Formation: Canonical infinite-dimensional white-noise diffusion reproduces standard image diffusion approaches, with rigorous dimension-independent performance and stability.
  • Manifolds: Sampling from distributions on the Cameron–Martin sphere demonstrates the retention of smoothness properties at all scales, outperforming schemes that degrade with increasing resolution.
  • Bayesian Inverse Problems: The framework supports posterior sampling under Gaussian-process priors, matching the accuracy of Hamiltonian MCMC while producing smooth and consistent samples.

Terrain Diffusion instantiates InfiniteDiffusion for real-time, infinite terrain synthesis:

  • Hierarchical design couples planetary, mid-scale, and local structures using Laplacian encodings and consistency-distilled diffusion decoders.
  • Open-source infinite-tensor runtimes provide constant-memory generation.
  • Real-time performance is demonstrated: $\approx 7.6$ s to the first $512\times512$ tile, with subsequent tiles in $2.4$ s on an RTX 3090 Ti.
  • Extensions support richer conditioning (land cover, climate), higher spatial fidelity, and transfer to other procedural domains such as textures or urban layouts (Goslin, 9 Dec 2025).

7. Limitations and Theoretical Guarantees

InfiniteDiffusion’s architecture is accompanied by formal guarantees:

  • Time-reversal SDEs admit unique strong solutions subject to regularity; drift approximations and discretization error are bounded via dimension-independent Wasserstein estimates.
  • Practical performance is robust to discretization refinement, in contrast to strictly finite-dimensional pipelines that deteriorate with grid refinement.
  • The seed-consistency and parallel window evaluation guarantee determinism, spatial coherence, and scalability to planetary or larger domains.

A strong implication of the theory is that constructing models directly in infinite-dimensional spaces enables provable fidelity and scalability, laying foundational methodology for learned synthesis in scientific computing and procedural content generation (Pidstrigach et al., 2023, Goslin, 9 Dec 2025).
