
Approximation schemes for viscosity solutions of fully nonlinear stochastic partial differential equations (1802.04740v6)

Published 13 Feb 2018 in math.AP

Abstract: The aim of this paper is to develop a general method for constructing approximation schemes for viscosity solutions of fully nonlinear pathwise stochastic partial differential equations, and for proving their convergence. Our results apply to approximations such as explicit finite difference schemes and Trotter-Kato type mixing formulas. The irregular time dependence disrupts the usual methods from the classical viscosity theory for creating schemes that are both monotone and convergent, an obstacle that cannot be overcome by incorporating higher order correction terms, as is done for numerical approximations of stochastic or rough ordinary differential equations. The novelty here is to regularize those driving paths with non-trivial quadratic variation in order to guarantee both monotonicity and convergence. We present qualitative and quantitative results, the former covering a wide variety of schemes for second-order equations. An error estimate is established in the Hamilton-Jacobi case, its merit being that it depends on the path only through the modulus of continuity, and not on the derivatives or total variation. As a result, it is possible to choose a regularization of the path so as to obtain efficient rates of convergence. This is demonstrated in the specific setting of equations with multiplicative white noise in time, in which case the convergence holds with probability one. We also present an example using scaled random walks that exhibits convergence in distribution.
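
To make the regularize-then-discretize idea concrete, the following is a minimal Python sketch (not code from the paper) for a model one-dimensional pathwise Hamilton-Jacobi equation du = H(u_x) ∘ dξ(t) with H(p) = sqrt(1 + p²), driven by a sampled Brownian path. The piecewise-linear regularization, the Lax-Friedrichs discretization, and all names below are illustrative choices; the paper's framework covers a far wider class of schemes, including second-order equations and Trotter-Kato type splittings.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_path(T, n):
    """Sample a Brownian driving path xi at n + 1 equally spaced times in [0, T]."""
    dt = T / n
    return np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

def piecewise_linear_slopes(xi, T, delta):
    """Knots and slopes of the piecewise-linear interpolation of xi at scale delta.

    One concrete regularization: the smoothed path has slopes of size roughly
    delta**-0.5 for Brownian xi, while the path-approximation error is
    controlled by the modulus of continuity of xi, as in the paper's estimate.
    """
    n = len(xi) - 1
    stride = max(1, round(delta * n / T))
    knot_t = np.arange(0, n + 1, stride) * (T / n)
    slopes = np.diff(xi[::stride]) / np.diff(knot_t)
    return knot_t, slopes

def lax_friedrichs(u0, T=1.0, delta=1e-2, nx=200, cfl=0.9):
    """Monotone explicit scheme for du = H(u_x) o dxi on the torus [0, 1),
    applied to the delta-regularized path.  Since |H'| <= 1 here, the scheme
    is monotone under the CFL condition dt * |xi_delta'| <= cfl * dx."""
    dx = 1.0 / nx
    x = np.arange(nx) * dx
    u = u0(x)
    knot_t, slopes = piecewise_linear_slopes(brownian_path(T, 1 << 16), T, delta)
    H = lambda p: np.sqrt(1.0 + p**2)
    t = 0.0
    while t < T:
        k = min(np.searchsorted(knot_t, t, side="right") - 1, len(slopes) - 1)
        a = slopes[k]                        # slope of the regularized driver
        dt = min(cfl * dx / max(abs(a), 1e-12), T - t)
        p = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
        # Classic Lax-Friedrichs update; the averaging of the neighbors
        # supplies the numerical diffusion that makes the scheme monotone.
        u = 0.5 * (np.roll(u, -1) + np.roll(u, 1)) + dt * a * H(p)
        t += dt
    return x, u

x, u = lax_friedrichs(lambda x: np.sin(2 * np.pi * x))
```

The CFL restriction dt <= cfl * dx / |xi_delta'| is exactly where the obstruction described in the abstract appears: a raw Brownian path has no derivative, so no admissible time step exists without smoothing. Regularizing at scale delta caps the slope at roughly delta**-0.5, and since the error estimate depends on the path only through its modulus of continuity, delta can be tuned against dx to balance scheme error against path-approximation error.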



