
Some variance reduction methods for numerical stochastic homogenization (1509.02389v1)

Published 8 Sep 2015 in math.NA and math.PR

Abstract: We overview a series of recent works devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires solving a set of problems at the micro scale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte-Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behavior. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts of the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
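The Monte-Carlo workflow described in the abstract, and the payoff of variance reduction, can be illustrated on a toy problem. In one dimension the corrector problem is solvable in closed form and the apparent homogenized coefficient of a box of N cells is the harmonic mean of the random coefficients; the sketch below compares plain Monte-Carlo averaging against the antithetic-variables technique (one of the classical engineering-science approaches the paper alludes to). The coefficient law, box size, and sample counts are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def apparent_coeff(u, a_min=1.0, a_max=10.0):
    """Apparent homogenized coefficient of a 1D box (toy model):
    the harmonic mean of the random cell coefficients, which is
    exact for 1D homogenization. u holds uniform(0,1) variates."""
    a = a_min + u * (a_max - a_min)  # random coefficient field in [a_min, a_max]
    return 1.0 / np.mean(1.0 / a)

N = 50    # cells per realization (size of one micro "corrector" problem)
M = 2000  # number of Monte-Carlo realizations of the medium

# Plain Monte-Carlo: M independent configurations, empirical average.
u = rng.random((M, N))
std_est = np.array([apparent_coeff(ui) for ui in u])

# Antithetic variables: pair each configuration u with its mirror 1 - u.
# Since apparent_coeff is monotone in each u component, the two members
# of a pair are negatively correlated and their average has lower variance.
half = u[: M // 2]
anti_est = 0.5 * (np.array([apparent_coeff(ui) for ui in half])
                  + np.array([apparent_coeff(1.0 - ui) for ui in half]))

print("plain MC:      mean=%.4f  var=%.5f" % (std_est.mean(), std_est.var()))
print("antithetic MC: mean=%.4f  var=%.5f" % (anti_est.mean(), anti_est.var()))
```

Both estimators target the same effective coefficient, but the antithetic estimator attains a noticeably smaller empirical variance at the same number of corrector solves, which is exactly the cost/accuracy trade-off motivating the techniques surveyed in the paper.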
