Beyond KL-divergence: Risk Aware Control Through Cross Entropy and Adversarial Entropy Regularization (2505.11068v1)
Abstract: While the idea of robust dynamic programming (DP) is compelling for systems affected by uncertainty, guarding against worst-case disturbances generally results in excessive conservatism. This paper introduces a method for constructing control policies that are robust to adversarial disturbance distributions related to a given empirical distribution. The character of the adversary is shaped by a regularization term comprising a weighted sum of (i) the cross-entropy between the empirical and the adversarial distributions and (ii) the entropy of the adversarial distribution itself; the regularization weights are interpreted as the likelihood factor and the temperature, respectively. The proposed framework leads to an efficient DP-like algorithm, referred to as the minsoftmax algorithm, for obtaining the optimal control policy, in which the disturbances follow an analytical softmax distribution expressed in terms of the empirical distribution, the temperature, and the likelihood factor. The framework admits a number of control-theoretic interpretations and can thus be understood as a flexible tool for integrating complementary features of related control frameworks. In particular, in the linear-model, quadratic-cost setting with a Gaussian empirical distribution, we draw connections to the well-known $\mathcal{H}_{\infty}$ control. We illustrate our results through a numerical example.
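To make the structure concrete, the sketch below shows one plausible reading of a minsoftmax backup over a finite disturbance set. The abstract only states that the adversarial distribution is a softmax in the empirical distribution, the temperature, and the likelihood factor; the specific form assumed here, $q^*(w) \propto p(w)^{\lambda/\tau}\exp(V(w)/\tau)$ with temperature $\tau$ and likelihood factor $\lambda$ (which follows from maximizing expected cost minus $\lambda$ times cross-entropy plus $\tau$ times entropy), and all names (`minsoftmax_backup`, the `cost` and `dynamics` arrays) are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def adversarial_softmax(p_emp, values, tau, lam):
    """Assumed softmax form of the adversarial disturbance distribution.

    Hypothetical form: q(w) proportional to p_emp(w)**(lam/tau) * exp(values(w)/tau),
    consistent with the abstract's description of a softmax in the empirical
    distribution, temperature, and likelihood factor. Requires p_emp > 0.
    """
    logits = (values + lam * np.log(p_emp)) / tau
    logits -= logits.max()  # shift for numerical stability
    q = np.exp(logits)
    return q / q.sum()

def softmax_value(p_emp, values, tau, lam):
    """Regularized worst-case value: tau * logsumexp((values + lam*log p)/tau).

    This log-sum-exp is the optimal value of the entropy/cross-entropy
    regularized adversary under the assumed form above.
    """
    logits = (values + lam * np.log(p_emp)) / tau
    m = logits.max()
    return tau * (m + np.log(np.exp(logits - m).sum()))

def minsoftmax_backup(V_next, cost, dynamics, p_emp, tau, lam):
    """One DP backup: minimize over actions the softmax (worst-case) value.

    cost[u, w]     -- stage cost of action u under disturbance w (illustrative)
    dynamics[u, w] -- successor state index (finite state space assumed)
    V_next[x]      -- value function at the next stage
    """
    n_actions = cost.shape[0]
    q_values = np.empty(n_actions)
    for u in range(n_actions):
        totals = cost[u] + V_next[dynamics[u]]  # cost-to-go per disturbance
        q_values[u] = softmax_value(p_emp, totals, tau, lam)
    return q_values.min(), int(q_values.argmin())

# Toy usage: 2 actions, 3 disturbances, 3 states.
p_emp = np.array([0.5, 0.3, 0.2])
cost = np.array([[1.0, 2.0, 4.0], [1.5, 1.5, 3.0]])
dynamics = np.array([[0, 1, 2], [0, 0, 1]])
V_next = np.array([0.0, 1.0, 2.5])
print(minsoftmax_backup(V_next, cost, dynamics, p_emp, tau=1.0, lam=1.0))
```

As a sanity check on the assumed form, `lam -> infinity` pins the adversary to the empirical distribution (a risk-neutral expectation), while small `tau` concentrates it on high-cost disturbances, recovering worst-case behavior; this matches the abstract's framing of the weights as interpolating between data-driven and robust control.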