A note on the strong formulation of stochastic control problems with model uncertainty (1402.4415v2)
Abstract: We consider a Markovian stochastic control problem with model uncertainty. The controller (intelligent player) observes only the state and, therefore, uses feed-back (closed-loop) strategies. The adverse player (nature), who does not have a direct interest in the pay-off, chooses open-loop controls that parametrize Knightian uncertainty. This creates a two-step optimization problem (like half of a game) over feed-back strategies and open-loop controls. The main result is to show that, under some assumptions, this provides the same value as (the half of) the zero-sum symmetric game where the adverse player also plays feed-back strategies and actively tries to minimize the pay-off. The value function is independent of the filtration accessible to the adverse player. Aside from the modeling issue, the present note is a technical companion to [Sî3b].
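As a rough schematic of the comparison described in the abstract (the notation below is illustrative and not taken from the paper; the value functions, running and terminal costs, and strategy classes are assumptions for exposition), the asymmetric "half game" optimizes over feed-back strategies of the controller against open-loop controls of nature, while the symmetric game has both players using feed-back strategies:

```latex
% Asymmetric problem: feed-back controller vs. open-loop adverse controls
% (V, W, f, g, X, \mathcal{A}_{\mathrm{fb}}, \mathcal{U}_{\mathrm{ol}},
%  \mathcal{B}_{\mathrm{fb}} are illustrative notation, not the paper's)
V(t,x) \;=\; \sup_{\alpha \in \mathcal{A}_{\mathrm{fb}}} \;
             \inf_{\nu \in \mathcal{U}_{\mathrm{ol}}} \;
  \mathbb{E}^{t,x}\!\left[ \int_t^T f\big(s, X_s, \alpha(s, X_s), \nu_s\big)\,ds
                           + g(X_T) \right]

% Symmetric zero-sum game: both players use feed-back strategies
W(t,x) \;=\; \sup_{\alpha \in \mathcal{A}_{\mathrm{fb}}} \;
             \inf_{\beta \in \mathcal{B}_{\mathrm{fb}}} \;
  \mathbb{E}^{t,x}\!\left[ \int_t^T f\big(s, X_s, \alpha(s, X_s), \beta(s, X_s)\big)\,ds
                           + g(X_T) \right]
```

In this notation, the main result of the note corresponds to the equality $V = W$ under the paper's assumptions, with the value unaffected by the filtration available to the adverse player.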