Convergence under overreaction weights approaching Bayes

Prove that, under the updating rule (ER) with overreaction weights (γ_n > 1) satisfying γ_n → 1, the belief process (q_n) converges almost surely to the candidate process in 𝒫 that is closest to the true data-generating process p* in Kullback–Leibler divergence, i.e., the p_θ minimizing KL(p* ‖ p_θ); equivalently, the process that maximizes the expected log-likelihood ∑_{x} p*(x) log p_θ(x) (the negative of the cross-entropy).
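
In symbols, writing $p^*$ for the true process and $p_\theta \in \mathcal{P}$ for the candidate processes (notation introduced here for exposition; the rule (ER) itself is defined in the paper), the conjectured statement is

$$\gamma_n > 1, \quad \gamma_n \to 1 \quad \Longrightarrow \quad q_n \xrightarrow{\text{a.s.}} p_{\theta^*}, \qquad \theta^* \in \arg\min_{\theta} \mathrm{KL}(p^* \,\|\, p_\theta) = \arg\max_{\theta} \sum_{x} p^*(x) \log p_\theta(x).$$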

Background

While the main results establish improved predictive convergence under underreaction, the authors show that overreaction (γ_n > 1) can produce chaotic belief dynamics, with no general convergence guarantees. They nevertheless single out a limiting regime in which the overreaction weights approach the Bayesian weight γ = 1.

In that regime, with weights γ_n > 1 tending to 1, the authors conjecture that the rule (ER) recovers the classical convergence result of Berk (1966): beliefs concentrate on the KL-minimizing member of the misspecified family. A formal proof of this limiting convergence remains open.
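
For intuition only, the following minimal simulation uses a tempered-likelihood ("power posterior") update, $q_{n+1}(\theta) \propto q_n(\theta)\, p_\theta(x_{n+1})^{\gamma_n}$, as a stand-in for (ER); this specific form is an assumption made here for illustration, not necessarily the paper's exact rule. With overreaction weights $\gamma_n = 1 + 1/n$ decreasing to 1 and a misspecified Bernoulli family, beliefs concentrate on the KL-minimizing grid point:

```python
import numpy as np

rng = np.random.default_rng(0)

p_star = 0.7                                  # true process: Bernoulli(0.7)
thetas = np.array([0.2, 0.4, 0.5, 0.6])       # misspecified family (0.7 excluded)
log_q = np.full(len(thetas), -np.log(len(thetas)))  # uniform prior, log scale

for n in range(1, 20001):
    x = rng.random() < p_star                 # draw x_n ~ p*
    gamma_n = 1.0 + 1.0 / n                   # overreaction weight, tending to 1
    lik = thetas if x else 1.0 - thetas       # p_theta(x_n)
    log_q += gamma_n * np.log(lik)            # tempered ("overreacting") update
    log_q -= np.logaddexp.reduce(log_q)       # renormalize in log space

q = np.exp(log_q)
print(dict(zip(thetas.tolist(), q.round(4))))  # mass concentrates on theta = 0.6
```

The mass settles on θ = 0.6, the grid point that maximizes the expected log-likelihood 0.7 log θ + 0.3 log(1 − θ), in line with the conjectured Berk-style limit.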

References

"We conjecture, however, that if $(\gamma_n)_n$ converges to 1, then we recover the convergence to the closest distribution in $\mathcal{P}$ to $p^*$."

Non-Bayesian Learning in Misspecified Models (arXiv:2503.18024, Bervoets et al., 23 Mar 2025), Discussion, subsection Overreaction.