Optimization of Bregman Variational Learning Dynamics (2510.20227v1)
Abstract: We develop a general optimization-theoretic framework for Bregman-Variational Learning Dynamics (BVLD), a new class of operator-based updates that unify Bayesian inference, mirror descent, and proximal learning in time-varying environments. Each update is formulated as a variational optimization problem combining a smooth convex loss f_t with a Bregman divergence D_psi. We prove that the induced operator is averaged, contractive, and exponentially stable in the Bregman geometry. Further, we establish Fejér monotonicity, drift-aware convergence, and continuous-time equivalence via an evolution variational inequality (EVI). Together, these results provide a rigorous analytical foundation for well-posed, stability-guaranteed operator dynamics in nonstationary optimization.
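The update the abstract describes has, in its prototypical form, the Bregman-proximal structure x_{t+1} = argmin_x { f_t(x) + (1/eta_t) D_psi(x, x_t) }, where D_psi(x, y) = psi(x) - psi(y) - <grad psi(y), x - y>. The sketch below is illustrative only and not taken from the paper: it assumes the entropic mirror map psi(x) = sum_i x_i log x_i on the probability simplex, a linearized (mirror-descent) variant of the step that admits a closed form, a fixed step size eta, and a toy drifting quadratic loss f_t standing in for the nonstationary environment.

```python
import numpy as np

def bregman_divergence(x, y):
    # Entropic Bregman divergence D_psi(x, y) = sum_i x_i log(x_i / y_i) - sum_i (x_i - y_i)
    # for the assumed mirror map psi(x) = sum_i x_i log x_i (KL geometry on the simplex).
    return float(np.sum(x * np.log(x / y)) - np.sum(x - y))

def bvld_step(x, grad_f, eta=0.2):
    # One linearized Bregman-proximal update
    #   T_t(x) = argmin_z { <grad f_t(x), z> + (1/eta) * D_psi(z, x) },
    # which for the entropic mirror map reduces to the exponentiated-gradient
    # closed form below (an illustrative stand-in, not the paper's exact operator).
    z = x * np.exp(-eta * grad_f(x))
    return z / z.sum()

def drifting_target(t, d=5):
    # Slowly drifting minimizer c_t of the time-varying loss f_t (hypothetical drift model).
    logits = np.sin(np.arange(d) + 0.01 * t)
    e = np.exp(logits)
    return e / e.sum()

d = 5
x = np.full(d, 1.0 / d)                      # start at the uniform distribution
for t in range(500):
    c_t = drifting_target(t, d)
    grad_f = lambda z, c=c_t: z - c          # gradient of f_t(z) = 0.5 * ||z - c_t||^2
    x = bvld_step(x, grad_f, eta=0.2)
    if t % 100 == 0:
        print(f"t={t:3d}  D_psi(c_t, x) = {bregman_divergence(c_t, x):.4f}")
```

In this toy run the Bregman divergence to the current target stays bounded as the target drifts, which is the kind of tracking behavior that drift-aware convergence results typically formalize; the precise constants and conditions are those established in the paper, not in this sketch.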