
Neural Jump ODEs as Generative Models (2510.02757v1)

Published 3 Oct 2025 in stat.ML and cs.LG

Abstract: In this work, we explore how Neural Jump ODEs (NJODEs) can be used as generative models for Itô processes. Given (discrete observations of) samples of a fixed underlying Itô process, the NJODE framework can be used to approximate the drift and diffusion coefficients of the process. Under standard regularity assumptions on the Itô processes, we prove that, in the limit, we recover the true parameters with our approximation. Hence, using these learned coefficients to sample from the corresponding Itô process generates, in the limit, samples with the same law as the true underlying process. Compared to other generative machine learning models, our approach has the advantage that it does not need adversarial training and can be trained solely as a predictive model on the observed samples without the need to generate any samples during training to empirically approximate the distribution. Moreover, the NJODE framework naturally deals with irregularly sampled data with missing values as well as with path-dependent dynamics, allowing to apply this approach in real-world settings. In particular, in the case of path-dependent coefficients of the Itô processes, the NJODE learns their optimal approximation given the past observations and therefore allows generating new paths conditionally on discrete, irregular, and incomplete past observations in an optimal way.

Summary

  • The paper introduces Neural Jump ODEs to estimate drift and diffusion from irregular, discrete data, enabling generation of sample paths that match the true process.
  • It employs a predictive, non-adversarial training framework with bias correction, ensuring convergence and robustness even under missing data and path-dependence.
  • Empirical evaluations on GBM and OU processes demonstrate the method's ability to replicate both marginal and pathwise distributions accurately.

Neural Jump ODEs as Generative Models: Theory and Practice

Overview

"Neural Jump ODEs as Generative Models" (2510.02757) presents a framework for learning the law of Itô processes from discrete, potentially irregular and incomplete observations, using Neural Jump ODEs (NJODEs). The approach enables the estimation of drift and diffusion coefficients directly from data, facilitating the generation of new sample paths that match the true underlying process in law. The method is non-adversarial, prediction-based, and robust to missing data and path-dependent dynamics, with theoretical guarantees of convergence under standard regularity assumptions.

Problem Formulation and Motivation

The central problem is to learn the law of a $d$-dimensional Itô process $X$ governed by the SDE:

$$dX_t = \mu_t(X_{\cdot \wedge t})\,dt + \sigma_t(X_{\cdot \wedge t})\,dW_t,$$

where $\mu_t$ and $\sigma_t$ are unknown, possibly path-dependent drift and diffusion coefficients, and $W_t$ is an $m$-dimensional Brownian motion. The only available data are discrete, possibly irregular and incomplete, observations of independent sample paths of $X$.

The objective is to generate new, independent trajectories of $X$ by learning estimators $\hat\mu_t$ and $\hat\sigma_t$ for the coefficients, such that the generated process $\tilde X$ matches the law of $X$ as closely as possible.
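To make the setup concrete, the following minimal sketch simulates the kind of training data assumed here: discrete, irregular, incomplete observations of an Itô process. It is illustrative, not from the paper's code; geometric Brownian motion is chosen because it is one of the paper's test processes, and the parameter values and the 30% observation rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gbm_paths(n_paths=1000, n_steps=100, T=1.0, x0=1.0,
                       mu=2.0, sigma=0.3):
    """Euler-Maruyama simulation of dX_t = mu * X_t dt + sigma * X_t dW_t."""
    dt = T / n_steps
    X = np.empty((n_paths, n_steps + 1))
    X[:, 0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X[:, k + 1] = X[:, k] * (1.0 + mu * dt + sigma * dW)
    return X

X = simulate_gbm_paths()
# Irregular, incomplete data: each grid point is observed with probability 0.3.
observed_mask = rng.random(X.shape) < 0.3
```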

NJODE Framework for Coefficient Estimation

NJODEs are continuous-time models designed to optimally predict stochastic processes from discrete, irregular, and incomplete observations. The key insight is that, by training NJODEs to approximate the conditional expectations of $X_t$ and $X_t X_t^\top$ given the available information, one can construct estimators for the drift and diffusion coefficients.

Drift Estimation

Given observations up to time $t$, the drift is estimated via:

$$\hat{\mu}_t^\Delta = \frac{E[X_{t+\Delta} \mid \mathcal{A}_t] - X_t}{\Delta},$$

where $\mathcal{A}_t$ is the $\sigma$-algebra generated by the available information up to $t$. The NJODE is trained to predict $E[X_{t+\Delta} \mid \mathcal{A}_t]$.
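A minimal sketch of how this estimator could be evaluated with a trained predictor is given below. The `model(history, s)` interface, returning the NJODE's prediction of the conditional expectation at time `s` given the observations in `history`, is an assumed wrapper, not the paper's actual API.

```python
import torch

def drift_estimate(model, history, x_t, t, delta):
    """Finite-difference drift estimator: (E[X_{t+delta} | A_t] - X_t) / delta.

    `model(history, s)` is assumed to return the trained NJODE's prediction
    of E[X_s | A_t] for s >= t, where `history` holds the discrete (possibly
    incomplete) observations up to time t and `x_t` is the value at time t.
    """
    with torch.no_grad():
        cond_exp = model(history, t + delta)  # predicted E[X_{t+delta} | A_t]
    return (cond_exp - x_t) / delta
```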

Diffusion Estimation

The diffusion is estimated using the conditional expectation of squared increments:

$$\hat{\Sigma}_t^\Delta = \frac{1}{\Delta}\, E\left[(X_{t+\Delta} - X_t)(X_{t+\Delta} - X_t)^\top \,\middle|\, \mathcal{A}_t\right].$$

To ensure positive semi-definiteness, the NJODE is trained to output a matrix $G$, and the estimator is taken as $S = GG^\top$.
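This parametrization is PSD by construction: for any real $G$ and any vector $x$, $x^\top GG^\top x = \lVert G^\top x\rVert^2 \ge 0$. A minimal sketch (the flat-output convention for the network is an assumption):

```python
import torch

def psd_from_output(g_flat: torch.Tensor, d: int) -> torch.Tensor:
    """Map an unconstrained network output of length d*d to a PSD matrix.

    Reshape to G (d x d) and return S = G @ G.T, which is positive
    semi-definite for any real-valued G.
    """
    G = g_flat.reshape(d, d)
    return G @ G.T
```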

Instantaneous Estimation

To reduce the bias from a finite $\Delta$, the paper introduces direct estimation of the instantaneous coefficients: NJODEs are trained to predict increments (for the drift) and squared increments (for the diffusion), each divided by the time step, and right-limits at observation times are used.
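Concretely, the instantaneous variant changes only the regression targets. A hypothetical sketch of building these targets from consecutive observations of one path (the array conventions and shapes are assumptions):

```python
import numpy as np

def instantaneous_targets(times, values):
    """Regression targets for the instantaneous estimators.

    For consecutive observations (t_k, X_k), the drift target at t_k is
    (X_{k+1} - X_k) / dt_k and the diffusion target is the scaled outer
    product (X_{k+1} - X_k)(X_{k+1} - X_k)^T / dt_k, dt_k = t_{k+1} - t_k.
    `times` has shape (K+1,) and `values` shape (K+1, d).
    """
    dt = np.diff(times)                                   # (K,)
    dX = np.diff(values, axis=0)                          # (K, d)
    drift_t = dX / dt[:, None]                            # (K, d)
    diff_t = np.einsum('ki,kj->kij', dX, dX) / dt[:, None, None]  # (K, d, d)
    return drift_t, diff_t
```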

Generative Procedure

Once the NJODEs are trained, the generative procedure is as follows:

  1. Initialization: Start from an initial observation or a sequence of observations (possibly with missing values).
  2. Iterative Generation: At each step, use the current history to compute $\hat\mu_t$ and $\hat\sigma_t$ via the NJODEs, then sample the next point using the Euler-Maruyama scheme:

$$\tilde X_{t+\Delta} = \tilde X_t + \hat\mu_t\,\Delta + \hat\sigma_t\,\Delta W_t.$$

  3. Continuation: Repeat until the desired time horizon is reached.

This procedure can generate unconditional sample paths or paths conditioned on arbitrary observed histories.
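A minimal sketch of this loop follows, assuming `mu_hat(history, t)` and `sigma_hat(history, t)` wrap the trained NJODE estimators and return arrays of shapes `(d,)` and `(d, m)`; the interfaces are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_path(mu_hat, sigma_hat, history, T, delta):
    """Euler-Maruyama generation conditional on an observed `history`.

    `history` is a list of (time, value) pairs; it is extended with each
    generated point so that path-dependent estimators can condition on
    the generated past as well as on the original observations.
    """
    t, x = history[-1]
    path = [x]
    while t < T:
        mu = mu_hat(history, t)           # estimated drift, shape (d,)
        sig = sigma_hat(history, t)       # estimated diffusion, shape (d, m)
        dW = rng.normal(0.0, np.sqrt(delta), size=sig.shape[1])
        x = x + mu * delta + sig @ dW     # Euler-Maruyama step
        t += delta
        history.append((t, x))
        path.append(x)
    return np.array(path)
```

Starting from a single initial point yields unconditional samples; starting from a longer observed history yields conditional path continuations.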

Theoretical Guarantees

The paper provides rigorous convergence results:

  • Consistency: Under standard regularity and identifiability assumptions, as the NJODEs are trained to optimality and $\Delta \to 0$, the estimators $\hat\mu_t$ and $\hat\sigma_t$ converge in $L^2$ to the true coefficients (or their $L^2$-optimal projections given the available information).
  • Law Convergence: The law of the generated process $\tilde X$ converges to the law of the true process $X$ as the estimators improve and the discretization is refined.
  • Robustness: The approach is robust to irregular sampling, missing data, and path-dependent coefficients.

Empirical Evaluation

The framework is validated on synthetic datasets, including geometric Brownian motion (GBM) and Ornstein-Uhlenbeck (OU) processes. Four estimation strategies are compared: baseline, joint baseline with bias reduction, instantaneous, and joint instantaneous with bias reduction.

Figure 1: True training paths and generated paths (joint instantaneous method), 1000 samples each.

Figure 2: Distribution of $X_t$ at $t=0.5$ and $t=T=1$ for true training paths and generated paths (joint instantaneous method).

Figure 3: True and estimated (joint instantaneous method) drift and diffusion coefficients along one generated path.

Figure 4: 1000 generated path continuations (joint instantaneous method), starting from the history of the first training path until $t=0.55$.

Figure 5: Distribution of $X_t$ at $t=0.5$ and $t=T=1$ for true training paths and generated paths (joint instantaneous method).

Key empirical findings:

  • The joint instantaneous method with bias reduction yields generated samples whose estimated parameters closely match those of the training data, with negligible invalid paths.
  • Marginal and pathwise distributions of generated samples are visually and quantitatively indistinguishable from the true process.
  • The method is effective even when the underlying process violates some theoretical assumptions (e.g., unbounded coefficients in GBM).

Implementation Considerations

  • Model Architecture: The NJODE is implemented as a neural ODE with jump updates at observation times, using feedforward networks for the drift, jump, and output mappings (a schematic sketch follows this list).
  • Training: Models are trained with MSE-type losses on conditional expectations, with optional bias correction for diffusion estimation.
  • Computational Requirements: Training is efficient due to the non-adversarial, prediction-based loss, and does not require sample generation during training.
  • Scalability: The approach is scalable to high-dimensional and path-dependent processes, and naturally accommodates missing and irregular data.
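The following PyTorch sketch shows the overall structure these bullets describe: a latent state evolved by a neural ODE between observations, a jump network that updates it whenever a new observation arrives, and a readout producing the prediction. The widths, the Euler integration of the latent ODE, and the input encodings are assumptions; this is a schematic, not the paper's implementation.

```python
import torch
import torch.nn as nn

class NJODESketch(nn.Module):
    """Schematic NJODE: neural ODE between observations, jumps at them."""

    def __init__(self, d=1, latent=32, width=64):
        super().__init__()
        # Drift of the latent state h between observations.
        self.ode_func = nn.Sequential(nn.Linear(latent + 1, width), nn.Tanh(),
                                      nn.Linear(width, latent))
        # Jump update applied to h whenever a new observation arrives.
        self.jump = nn.Sequential(nn.Linear(latent + d, width), nn.Tanh(),
                                  nn.Linear(width, latent))
        # Readout of the prediction, e.g. E[X_t | A_t].
        self.readout = nn.Linear(latent, d)

    def forward(self, obs, grid, h0):
        """Evolve h with Euler steps over `grid` (a list of floats),
        applying the jump network at grid points carrying an observation.
        `obs` maps observation times to value tensors of shape (d,)."""
        h, outputs = h0, []
        for k in range(len(grid) - 1):
            t, dt = grid[k], grid[k + 1] - grid[k]
            if t in obs:                          # jump at observation time
                h = self.jump(torch.cat([h, obs[t]], dim=-1))
            t_enc = torch.full_like(h[..., :1], t)
            h = h + dt * self.ode_func(torch.cat([h, t_enc], dim=-1))
            outputs.append(self.readout(h))
        return torch.stack(outputs)
```

Training would then minimize an MSE-type loss between the readout and the observed values at observation times, matching the prediction-based objective described above.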

Comparison with Related Approaches

Unlike adversarial generative models (e.g., neural SDE-GANs), the NJODE approach is purely predictive, avoids mode collapse and training instability, and provides theoretical convergence guarantees. It also improves upon direct coefficient learning in neural SDEs by handling incomplete data and providing law-level convergence, not just marginal matching.

Implications and Future Directions

The NJODE-based generative modeling framework offers a principled, efficient, and robust approach for learning and simulating complex stochastic processes from partial, irregular data. Its ability to handle path-dependence and missingness makes it particularly suitable for real-world applications in finance, physics, and biology.

Potential future developments include:

  • Extension to jump-diffusion and Lévy processes.
  • Integration with control and reinforcement learning for stochastic systems.
  • Application to high-dimensional, multivariate time series with complex dependencies.
  • Further improvements in estimator bias correction and uncertainty quantification.

Conclusion

The paper establishes NJODEs as a theoretically sound and practically effective tool for generative modeling of Itô processes from discrete, irregular, and incomplete observations. The framework's convergence guarantees, empirical performance, and flexibility position it as a strong alternative to adversarial and marginal-matching generative models for stochastic processes.
