
Automatic Backward Differentiation for American Monte-Carlo Algorithms (Conditional Expectation) (1707.04942v1)

Published 16 Jul 2017 in q-fin.CP, cs.DS, and math.NA

Abstract: In this note we derive the backward (automatic) differentiation (adjoint [automatic] differentiation) for an algorithm containing a conditional expectation operator. As an example we consider the backward algorithm as it is used in Bermudan product valuation, but the method is applicable in full generality. The method relies on three simple properties:

1. a forward or backward (automatic) differentiation of an algorithm containing a conditional expectation operator results in a linear combination of conditional expectation operators;
2. the differential of an expectation is the expectation of the differential, $\frac{d}{dx} E(Y) = E(\frac{d}{dx} Y)$;
3. if we are only interested in the expectation of the final result (as we are in all valuation problems), we may use $E(A \cdot E(B \vert \mathcal{F})) = E(E(A \vert \mathcal{F}) \cdot B)$, i.e., instead of applying the (conditional) expectation operator to a function of the underlying random variable (the continuation values), it may be applied to the adjoint differential.

The methodology not only allows for a very clean and simple implementation, but also offers the ability to use different conditional expectation estimators in the valuation and in the differentiation.
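
The identity in property 3 is what lets the adjoint absorb the conditional expectation operator. With a least-squares regression estimator, as typically used in American Monte-Carlo, the estimated conditional expectation is an orthogonal projection and hence self-adjoint, so the identity even holds exactly on the sample. The following is a minimal numerical sketch, not taken from the paper; the variables `x`, `a`, `b` and the helper `cond_exp` are illustrative assumptions standing in for the filtration and the regression-based estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# x generates the conditioning sigma-algebra F; a and b depend on x
# but each carries independent extra noise, so neither is F-measurable.
x = rng.standard_normal(n)
a = np.sin(x) + 0.5 * rng.standard_normal(n)
b = x**2 + 0.5 * rng.standard_normal(n)

def cond_exp(y, x, degree=4):
    """Least-squares estimator of E[y | x] (Longstaff-Schwartz style):
    orthogonal projection of y onto polynomials in x up to `degree`."""
    basis = np.vander(x, degree + 1)
    beta, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return basis @ beta

lhs = np.mean(a * cond_exp(b, x))  # E( A * E(B|F) )
rhs = np.mean(cond_exp(a, x) * b)  # E( E(A|F) * B )
print(f"E(A * E(B|F)) ~ {lhs:.6f}")
print(f"E(E(A|F) * B) ~ {rhs:.6f}")
```

Because the projection matrix of the regression is symmetric, the two sample averages agree to floating-point precision here; for other conditional expectation estimators the identity holds up to the estimator's approximation error, which is consistent with the paper's point that the valuation and the differentiation may use different estimators.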

Citations (3)
