Parallel optimized sampling for stochastic equations (1502.07186v2)

Published 23 Feb 2015 in math.NA

Abstract: Stochastic equations play an important role in computational science, due to their ability to treat a wide variety of complex statistical problems. However, current algorithms are strongly limited by their sampling variance, which scales in proportion to 1/N_S for N_S samples. In this paper, we introduce a new class of variance reduction methods for treating stochastic equations, called parallel optimized sampling. The objective of parallel optimized sampling is to reduce the sampling variance in the observables of an ensemble of stochastic trajectories. This is achieved by calculating a finite set of observables, typically statistical moments, in parallel, and minimizing the errors compared to known values. The algorithm is both numerically efficient and unbiased. Importantly, it does not increase the errors in higher-order moments, and generally reduces such errors as well. The same procedure is applied both to initial ensembles and to changes over a finite time-step. Results of these methods show that errors in initially optimized moments can be reduced to the machine precision level, typically around 10^-16 on current hardware. For nonlinear stochastic equations, sampled moment errors during time evolution are larger than this, due to error propagation effects. Even so, we provide evidence for error reductions of up to two orders of magnitude in a nonlinear equation example, for low-order moments, which is a large practical benefit. The sampling variance typically scales as 1/N_S, but with the advantage of a much smaller prefactor than for standard, non-optimized methods.
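
To make the moment-matching idea concrete, here is a minimal Python/NumPy sketch under stated assumptions: the helper optimize_moments, the two-moment (mean and variance) projection, and the Ornstein-Uhlenbeck test equation are illustrative choices, not the paper's exact constrained-minimization algorithm. It enforces known values of the first two moments exactly on both the initial ensemble and each step's Wiener increments, mirroring the two places where the abstract says the optimization is applied.

import numpy as np

def optimize_moments(samples, target_mean, target_var):
    # Shift and rescale the ensemble so its first two sample moments
    # match the known targets exactly (to machine precision). This is
    # a simple two-moment projection standing in for the paper's more
    # general minimization over a set of observables.
    centered = samples - samples.mean()
    return target_mean + np.sqrt(target_var) * centered / centered.std()

rng = np.random.default_rng(0)
n_samples = 10_000          # N_S stochastic trajectories, treated in parallel
dt, n_steps = 1e-3, 1_000

# Initial ensemble, optimized so mean = 0 and variance = 1 hold exactly.
x = optimize_moments(rng.standard_normal(n_samples), 0.0, 1.0)

# Euler-Maruyama for the Ornstein-Uhlenbeck equation dx = -x dt + dW
# (a linear test case; the paper also treats nonlinear equations).
# Each step's Wiener increments are optimized as well, applying the
# same projection to changes over a finite time-step.
for _ in range(n_steps):
    dw = optimize_moments(rng.standard_normal(n_samples), 0.0, dt)
    x = x - x * dt + dw

# Reference: the variance of the exact Euler-Maruyama recursion,
# so the printed error isolates sampling error from time-step bias.
ref_var = 1.0
for _ in range(n_steps):
    ref_var = (1.0 - dt) ** 2 * ref_var + dt

print("error in sampled mean:    ", abs(x.mean()))
print("error in sampled variance:", abs(np.var(x) - ref_var))

In this linear example the sampled mean stays exact at machine precision at every step, while the sampled variance drifts slightly because the correlation between x and the increments dW is not constrained; this illustrates the error-propagation effect the abstract describes for moments during time evolution.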
