Infinite horizon discounted LQ optimal control problems for mean-field switching diffusions (2506.16033v1)
Abstract: This paper investigates an infinite horizon discounted linear-quadratic (LQ) optimal control problem for stochastic differential equations (SDEs) incorporating regime switching and mean-field interactions. The regime switching is modeled by a finite-state Markov chain acting as common noise, while the mean-field interactions are characterized by the conditional expectation of the state process given the history of the Markov chain. To address system stability in the infinite horizon setting, a discount factor is introduced. Within this framework, the well-posedness of the state equation and adjoint equation -- formulated as infinite horizon mean-field forward and backward SDEs with Markov chains, respectively -- is established, along with the asymptotic behavior of their solutions as time approaches infinity. A candidate optimal feedback control law is formally derived based on two algebraic Riccati equations (AREs), which are introduced for the first time in this context. The solvability of these AREs is proven through an approximation scheme involving a sequence of Lyapunov equations, and the optimality of the proposed feedback control law is rigorously verified using the completion of squares method. Finally, numerical experiments are conducted to validate the theoretical findings, including solutions to the AREs, the optimal control process, and the corresponding optimal (conditional) state trajectory. This work provides a comprehensive framework for solving infinite horizon discounted LQ optimal control problems in the presence of regime switching and mean-field interactions, offering both theoretical insights and practical computational tools.
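For orientation, a generic problem of this type can be sketched as follows; the coefficients, filtration, and weighting matrices below are illustrative assumptions and need not match the paper's exact formulation:
\[
\begin{aligned}
dX(t) &= \big[A(\alpha(t))X(t) + \bar A(\alpha(t))\,\mathbb{E}[X(t)\mid \mathcal{F}^{\alpha}_t] + B(\alpha(t))u(t) + \bar B(\alpha(t))\,\mathbb{E}[u(t)\mid \mathcal{F}^{\alpha}_t]\big]\,dt \\
&\quad + \big[C(\alpha(t))X(t) + \bar C(\alpha(t))\,\mathbb{E}[X(t)\mid \mathcal{F}^{\alpha}_t] + D(\alpha(t))u(t) + \bar D(\alpha(t))\,\mathbb{E}[u(t)\mid \mathcal{F}^{\alpha}_t]\big]\,dW(t), \quad t \ge 0, \\
J(x,i_0;u(\cdot)) &= \mathbb{E}\int_0^{\infty} e^{-\lambda t}\Big(\langle Q(\alpha(t))X(t),X(t)\rangle + \langle \bar Q(\alpha(t))\,\mathbb{E}[X(t)\mid\mathcal{F}^{\alpha}_t],\,\mathbb{E}[X(t)\mid\mathcal{F}^{\alpha}_t]\rangle + \langle R(\alpha(t))u(t),u(t)\rangle\Big)\,dt,
\end{aligned}
\]
where \(\alpha(\cdot)\) is the finite-state Markov chain with natural filtration \(\{\mathcal{F}^{\alpha}_t\}_{t\ge 0}\), \(W(\cdot)\) is a Brownian motion independent of \(\alpha(\cdot)\), and \(\lambda>0\) is the discount rate. In such settings the candidate feedback typically takes the form \(u^{*}(t) = \Theta(\alpha(t))\big(X(t)-\mathbb{E}[X(t)\mid\mathcal{F}^{\alpha}_t]\big) + \bar\Theta(\alpha(t))\,\mathbb{E}[X(t)\mid\mathcal{F}^{\alpha}_t]\), with the gains \(\Theta,\bar\Theta\) assembled from the solutions of the two AREs.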