Adaptive LISO with decreasing mixture weight

Develop a theoretical analysis for adaptive Laplace Importance Sampling Optimization (adaptive LISO) using mixture proposals q_n = (1−λ_n) q_{θ_n} + λ_n q_0 with an adaptively decreasing sequence of mixture weights {λ_n}. The goal is to establish variance or mean-squared error bounds that do not deteriorate as λ_n becomes small, thereby enabling the use of a decreasing λ_n while maintaining control of the importance-weight variance.

Background

In the adaptive LISO scheme, proposals are formed as mixtures q_n = (1−λ) q_{θ_n} + λ q_0, where the heavy-tailed component q_0 ensures domination of the target and control of the variance of the importance weights. This choice makes it possible to verify Assumption \ref{hyp:Q} with g_0 = λ q_0, which underpins Corollary \ref{cor:ALISO}.
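The mechanism can be illustrated with a small numerical sketch. The setup below is entirely hypothetical (a 1-D Gaussian target π, a narrow adapted Gaussian component standing in for q_{θ_n}, and a wide Gaussian as the safe component q_0; the paper's actual proposals are Laplace-based), but it shows the structural point: since q ≥ λ q_0 pointwise, the weights are bounded by sup π/(λ q_0), and that bound blows up as λ shrinks when the adapted component is poorly matched to the target.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mu, sigma):
    """Log-density of N(mu, sigma^2), evaluated elementwise."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def sample_mixture(n, lam, mu_theta, sig_theta, sig0=5.0):
    """Draw n samples from q = (1-lam) q_theta + lam q_0 and return
    them with importance weights w = pi / q for a N(0,1) target.
    All densities here are illustrative stand-ins, not the paper's."""
    from_q0 = rng.random(n) < lam           # pick the mixture component
    x = np.where(from_q0,
                 rng.normal(0.0, sig0, n),       # safe component q_0
                 rng.normal(mu_theta, sig_theta, n))  # adapted component
    # Mixture log-density via log-sum-exp for numerical stability.
    log_q = np.logaddexp(np.log1p(-lam) + log_gauss(x, mu_theta, sig_theta),
                         np.log(lam) + log_gauss(x, 0.0, sig0))
    log_pi = log_gauss(x, 0.0, 1.0)          # target density pi = N(0,1)
    return x, np.exp(log_pi - log_q)

# A badly adapted q_theta (centered at 3): weights degrade as lam -> 0.
for lam in (0.5, 0.1, 0.01):
    _, w = sample_mixture(50_000, lam, mu_theta=3.0, sig_theta=0.3)
    print(f"lam={lam:5.2f}  max weight={w.max():8.1f}  weight var={w.var():.2f}")
```

Running this shows the maximum weight growing roughly like 1/λ, which is exactly the degradation the proposed analysis would need to avoid.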

However, the bound in Corollary \ref{cor:ALISO} worsens as λ → 0, effectively preventing the theoretically justified use of an adaptively decreasing sequence for λ in the current framework. Overcoming this limitation would allow the algorithm to gradually reduce exploration and increase exploitation while preserving rigorous guarantees.
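To see where the 1/λ dependence enters, consider the standard domination argument (a generic sketch with target density π, not the paper's exact constants): since q_n = (1−λ) q_{θ_n} + λ q_0 ≥ λ q_0 pointwise, the importance weight satisfies

  w_n(x) = π(x)/q_n(x) ≤ π(x)/(λ q_0(x)),

and hence the second moment obeys

  E_{q_n}[w_n^2] = ∫ π(x)^2 / q_n(x) dx ≤ (1/λ) ∫ π(x)^2 / q_0(x) dx.

Any bound built this way inherits the 1/λ factor, so a decreasing λ_n requires a different argument, e.g. one exploiting that q_{θ_n} itself eventually dominates π well.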

References

However, the bound in Corollary~\ref{cor:ALISO} deteriorates as \lambda becomes too small, which precludes the use of an adaptively decreasing sequence for \lambda. Addressing this limitation is left for future work.

Importance Sampling Optimization with Laplace Principle (2604.02882 - Dragomir et al., 3 Apr 2026), "Policies with mixture step" paragraph, Section 3.4 (Analysis of adaptive LISO)