Entropy-Guided Multiplicative Updates: KL Projections for Multi-Factor Target Exposures (2510.24607v1)
Abstract: We develop \emph{Entropy-Guided Multiplicative Updates} (EGMU), a convex optimization framework for constructing multi-factor target-exposure portfolios by minimizing Kullback--Leibler (KL) divergence from a benchmark subject to linear factor constraints. Our contributions are theoretical and algorithmic. (\emph{i}) We formalize feasibility and uniqueness: with a strictly positive benchmark and feasible targets in the convex hull of exposures, the solution is unique and strictly positive. (\emph{ii}) We derive the dual concave program with gradient $t-\mathbb{E}_{w(\theta)}[x]$ and Hessian $-\mathrm{Cov}_{w(\theta)}(x)$, and give precise sensitivity formulas $\partial\theta^*/\partial t=\mathrm{Cov}_{w^*}(x)^{-1}$ and $\partial w^*/\partial t=\mathrm{diag}(w^*)\,(X-\mathbf{1}\mu^\top)\,\mathrm{Cov}_{w^*}(x)^{-1}$. (\emph{iii}) We present two provably convergent solvers: a damped \emph{dual Newton} method with global convergence and a local quadratic rate, and a \emph{KL-projection} scheme based on IPF/Bregman--Dykstra for equalities and inequalities. (\emph{iv}) We further \textbf{generalize EGMU} with \emph{elastic targets} (strongly concave dual) and \emph{robust target sets} (support-function dual), and introduce a \emph{path-following ODE} for solution trajectories, all reusing the same dual-moment structure and solved via Newton or proximal-gradient schemes. (\emph{v}) We detail numerically stable and scalable implementations (LogSumExp, covariance regularization, half-space KL-projections). We emphasize theory and reproducible algorithms; empirical benchmarking is optional.
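The damped dual Newton solver of contribution (\emph{iii}) can be illustrated directly from the dual structure in (\emph{ii}): tilted weights $w_i(\theta)\propto b_i\exp(x_i^\top\theta)$, dual gradient $t-\mathbb{E}_{w(\theta)}[x]$, and Hessian $-\mathrm{Cov}_{w(\theta)}(x)$, with LogSumExp stabilization as in (\emph{v}). The sketch below is illustrative only, not the authors' code; the function name, regularization constant, and backtracking scheme are assumptions.

```python
import numpy as np

def egmu_dual_newton(b, X, t, tol=1e-10, max_iter=100):
    """Damped dual Newton for: min_w KL(w || b) s.t. X^T w = t, sum(w) = 1.

    b : strictly positive benchmark weights, shape (n,), summing to 1.
    X : factor exposures, shape (n, k).
    t : target exposures, shape (k,), assumed inside the convex hull of rows of X.
    Returns (w, theta): the projected weights and dual variables.
    """
    def dual(theta):
        # Concave dual g(theta) = theta.t - logsumexp(log b + X theta).
        v = np.log(b) + X @ theta
        m = v.max()
        return theta @ t - (m + np.log(np.exp(v - m).sum()))

    theta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        # Tilted weights w_i ∝ b_i exp(x_i . theta), stabilized by max-shift.
        z = np.log(b) + X @ theta
        z -= z.max()
        w = np.exp(z)
        w /= w.sum()

        mu = X.T @ w              # E_w[x]
        grad = t - mu             # dual gradient
        if np.linalg.norm(grad) < tol:
            break

        # Hessian magnitude: Cov_w(x) = (X - mu)^T diag(w) (X - mu), plus a
        # small ridge for numerical safety (assumed regularization).
        Xc = X - mu
        H = Xc.T @ (w[:, None] * Xc) + 1e-12 * np.eye(len(t))
        step = np.linalg.solve(H, grad)   # Newton ascent direction

        # Simple damping: halve the step until the concave dual increases.
        alpha, g0 = 1.0, dual(theta)
        while dual(theta + alpha * step) < g0 and alpha > 1e-8:
            alpha *= 0.5
        theta = theta + alpha * step
    return w, theta
```

By construction the output $w$ is strictly positive whenever $b$ is, matching the uniqueness and positivity claim in (\emph{i}).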