Convergence under overreaction weights approaching Bayes
Prove that, under the updating rule (ER) with overreaction weights γ_n > 1 satisfying γ_n → 1, the belief process (q_n) converges almost surely to the candidate process in 𝒫 that minimizes the Kullback–Leibler divergence from the true data-generating process p*, equivalently the process that maximizes the expected log-likelihood ∑_{x} p*(x) log p_θ(x), i.e., minimizes the cross-entropy between p* and p_θ.
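A small simulation can illustrate the conjectured limit. The sketch below assumes, for concreteness, that (ER) takes the tempered-Bayes form q_{n+1}(θ) ∝ q_n(θ) · p_θ(x_{n+1})^{γ_n}, a common parameterization of overreaction; the paper's exact rule may differ. With γ_n = 1 + 1/n → 1 and data drawn from a p* outside the candidate class, the belief q_n should concentrate on the KL-minimizing model.

```python
# Minimal sketch, assuming (ER) is the tempered-Bayes update
# q_{n+1}(theta) ∝ q_n(theta) * p_theta(x_{n+1})^{gamma_n}; the paper's rule may differ.
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process p* over 3 outcomes (not in the model class).
p_star = np.array([0.5, 0.3, 0.2])

# Misspecified candidate class P = {p_theta}: two models, neither equal to p*.
models = np.array([
    [0.6, 0.3, 0.1],   # theta = 0
    [0.3, 0.4, 0.3],   # theta = 1
])

# KL(p* || p_theta) for each theta; the conjecture says q_n concentrates on the minimizer.
kl = (p_star * np.log(p_star[None, :] / models)).sum(axis=1)
print("KL divergences:", kl, "-> KL-minimizer: theta =", kl.argmin())

# Belief process q_n with overreaction weights gamma_n = 1 + 1/n > 1, gamma_n -> 1.
q = np.array([0.5, 0.5])                   # uniform prior over the two models
N = 20_000
for n in range(1, N + 1):
    x = rng.choice(3, p=p_star)            # draw data from the true process
    gamma_n = 1.0 + 1.0 / n                # overreaction weight tending to 1 (Bayes)
    q = q * models[:, x] ** gamma_n        # tempered (overreacting) likelihood update
    q /= q.sum()                           # renormalize to a probability vector

print("q_N =", q)                          # mass should concentrate on the KL-minimizer
```

Under this assumed form, the log-ratio of beliefs is a weighted random walk whose drift equals the gap in expected log-likelihood (the KL gap), which is why the KL-minimizing candidate attracts all the mass as γ_n → 1.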
References
We conjecture, however, that if $(\gamma_n)_n$ converges to 1, then we recover the convergence to the closest distribution in $\mathcal{P}$ to $p^*$.
— Non-Bayesian Learning in Misspecified Models (arXiv:2503.18024, Bervoets et al., 23 Mar 2025), Discussion, subsection "Overreaction"