Reinforcement learning for irreversible reinsurance problems: the randomized singular control approach (2512.02769v1)
Abstract: This paper studies continuous-time reinforcement learning for stochastic singular control, with an application to an infinite-horizon irreversible reinsurance problem. The singular control is equivalently characterized as a pair of regions of time and the augmented state, called the singular control law. To encourage exploration in the learning procedure, we propose a randomization method for singular control laws, new to the literature, based on an auxiliary singular control and entropy regularization. The exploratory singular control problem is formulated as a two-stage optimal control problem, in which a time-inconsistency issue arises in the outer problem. In the specific model setup with known coefficients, we provide a full characterization of the time-consistent equilibrium singular controls for the two-stage problem. Exploiting this solution structure, we choose a suitable parameterization of the randomized equilibrium policy and the value function when the model is unknown, and devise actor-critic reinforcement learning algorithms accordingly. Numerical experiments demonstrate the convergence of the parameter iterations towards the true values under the randomized equilibrium policy and illustrate how exploration can improve learning performance in the context of singular controls.
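To make the actor-critic idea concrete, the following is a minimal, hypothetical Python sketch of learning a randomized barrier-type singular control law by simulation. It is not the paper's algorithm: the reflected-diffusion dynamics, the Gaussian randomization of the barrier (standing in for the entropy-regularized control law), the linear critic, and names such as `barrier_mean` and `temperature` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative environment: a 1-D surplus process controlled by a singular
# control that reflects the state at a barrier (dividend-style payout).
mu, sigma, discount = 0.5, 1.0, 0.05   # drift, volatility, discount rate (assumed)
dt, horizon = 0.01, 50.0               # Euler step and truncation horizon

def simulate_episode(barrier, x0=1.0):
    """Simulate the reflected diffusion and return the discounted payout."""
    x, t, payout = x0, 0.0, 0.0
    while t < horizon and x > 0.0:           # ruin at 0 ends the episode
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if x > barrier:                      # singular control: push back to barrier
            payout += np.exp(-discount * t) * (x - barrier)
            x = barrier
        t += dt
    return payout

# Randomized singular control law: the barrier is sampled around a mean,
# with the spread acting as entropy-regularized exploration.
barrier_mean, temperature = 2.0, 0.3

# Critic: linear value function V(x) ~ w0 + w1 * x (purely illustrative).
w = np.zeros(2)

def features(x):
    return np.array([1.0, x])

actor_lr, critic_lr, n_iters, batch = 0.05, 0.01, 200, 32

for it in range(n_iters):
    grads, returns = [], []
    for _ in range(batch):
        eps = rng.standard_normal()
        barrier = barrier_mean + temperature * eps      # sampled control law
        G = simulate_episode(barrier)
        returns.append(G)
        # Score-function (REINFORCE-style) gradient w.r.t. the barrier mean,
        # with the critic's value estimate at the initial state as a baseline.
        grads.append(eps / temperature * (G - features(1.0) @ w))
    # Critic update: regress the value at the initial state on sampled returns.
    td = np.mean(returns) - features(1.0) @ w
    w += critic_lr * td * features(1.0)
    # Actor update: move the barrier mean along the estimated gradient.
    barrier_mean += actor_lr * np.mean(grads)

print(f"learned barrier mean: {barrier_mean:.3f}, value estimate: {features(1.0) @ w:.3f}")
```

The sketch only illustrates the interplay of a randomized control law (actor) and a parameterized value function (critic); the paper's method instead works with the equilibrium structure of the two-stage exploratory problem and an irreversible reinsurance model.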