Stochastic Algorithmic Differentiation of (Expectations of) Discontinuous Functions (Indicator Functions) (1811.05741v5)

Published 14 Nov 2018 in q-fin.CP, cs.NA, and math.NA

Abstract: In this paper, we present a method for the accurate estimation of the derivative (a.k.a. sensitivity) of expectations of functions involving an indicator function, combining stochastic algorithmic differentiation with a regression. The method improves on the approach presented in [Risk Magazine, April 2018]. The finite-difference approximation of a partial derivative of a Monte-Carlo integral of a discontinuous function is known to exhibit a high Monte-Carlo error; the issue is evident, since the Monte-Carlo approximation of a discontinuous function is just a finite sum of discontinuous functions and, as such, not even differentiable. Algorithmic differentiation of a discontinuous function is likewise problematic. A natural approach is to replace the discontinuity by a continuous function, which is equivalent to replacing path-wise automatic differentiation by a (local) finite-difference approximation. We present an improvement (in terms of variance reduction) that decouples the integration of the Dirac delta from the remaining conditional expectation and estimates the two parts by separate regressions. For the algorithmic differentiation, we derive an operator that can be injected seamlessly, with minimal code changes, into the algorithm, yielding the exact result.
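To make the abstract's argument concrete, the following is a minimal sketch, not the paper's reference implementation, using JAX and a Black-Scholes digital option as a stand-in example. It shows why path-wise AD of E[1(S_T > K)] returns a useless zero derivative, how smoothing the indicator (equivalently, a local finite-difference approximation of the Dirac delta) repairs it, and the decoupled estimator density × conditional expectation that the abstract describes, here with a crude local average in place of the paper's two separate regressions. All names and parameter values (s0, K, sigma, eps, h) are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def terminal_value(s0, z, sigma=0.2, r=0.0, T=1.0):
    # Geometric Brownian motion terminal value driven by a standard normal z.
    return s0 * jnp.exp((r - 0.5 * sigma**2) * T + sigma * jnp.sqrt(T) * z)

def digital_raw(s0, z, K=1.0):
    # Monte-Carlo estimate of E[ 1(S_T > K) ]: a finite sum of step
    # functions, hence AD yields a derivative of zero almost surely.
    return jnp.mean(jnp.where(terminal_value(s0, z) > K, 1.0, 0.0))

def digital_smoothed(s0, z, K=1.0, eps=0.05):
    # Replace the indicator by a continuous ramp of width 2*eps around K;
    # differentiating it amounts to a (local) finite-difference
    # approximation of the Dirac delta, as noted in the abstract.
    s = terminal_value(s0, z)
    return jnp.mean(jnp.clip((s - K) / (2.0 * eps) + 0.5, 0.0, 1.0))

key = jax.random.PRNGKey(0)
z = jax.random.normal(key, (200_000,))
s0, K, h = 1.0, 1.0, 0.05

print(jax.grad(digital_raw)(s0, z))       # ~0.0: path-wise AD fails
print(jax.grad(digital_smoothed)(s0, z))  # finite; bias/variance set by eps

# Decoupled estimator, following the identity
#   d/ds0 E[ 1(S_T > K) ] = f_{S_T}(K) * E[ dS_T/ds0 | S_T = K ].
# The paper estimates the two factors by separate regressions; this sketch
# uses a local average over a window of half-width h as a simple stand-in.
s = terminal_value(s0, z)
dS = jax.vmap(jax.grad(terminal_value), in_axes=(None, 0))(s0, z)
mask = jnp.abs(s - K) < h
density = jnp.mean(mask) / (2.0 * h)                  # estimate of f_{S_T}(K)
cond = jnp.sum(jnp.where(mask, dS, 0.0)) / jnp.maximum(jnp.sum(mask), 1)
print(density * cond)                                 # decoupled sensitivity
```

Under these assumptions, the raw estimator's gradient is exactly zero, the smoothed estimator trades bias against variance through eps, and the decoupled estimator separates the density estimate from the conditional expectation of the path-wise derivative, which is the variance-reduction mechanism the abstract attributes to the two separate regressions.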
