Validation of the α-selection strategy based on asymptotic inefficiency in MLMC for SNPE
Determine whether selecting the geometric level-distribution parameter α by minimizing the asymptotic inefficiency (defined as the product of estimator variance and average computational cost) consistently improves training stability, convergence, and posterior-approximation accuracy over the conventional choice that minimizes average cost alone, for the multi-level Monte Carlo estimators (RU-MLMC, GRR-MLMC, and TGRR-MLMC) used to estimate the nested Automatic Posterior Transformation (APT) loss and its gradient in sequential neural posterior estimation with intractable likelihoods.
References
In this paper, our choice of the hyper-parameter α deviates from the mainstream one. Values of α closer to r2 are conventionally favored because they minimize the average cost; since variance is our major concern, we instead select α to minimize the asymptotic inefficiency, which accounts for both the variance and the average cost. However, a thorough validation of this strategy through ablation experiments is still lacking, so the improvement it brings remains unclear.
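To make the selection criterion concrete, the following is a minimal sketch that numerically minimizes the variance-cost product over α for a single-term randomized MLMC estimator with a truncated geometric level distribution p_l proportional to 2^(-αl). The decay rate β, cost-growth rate γ, truncation level l_max, and the helper functions geometric_weights and inefficiency are illustrative assumptions following the standard randomized-MLMC (Rhee-Glynn style) analysis, not the paper's exact estimators or rates.

import numpy as np

# Truncated geometric level distribution p_l ∝ 2^(-alpha * l), l = 0..l_max.
def geometric_weights(alpha, l_max):
    w = 2.0 ** (-alpha * np.arange(l_max + 1))
    return w / w.sum()

# Inefficiency = (second-moment proxy for variance) * (average cost) of a
# single-term randomized estimator Z = Delta_N / p_N with random level N ~ p:
# E[Z^2] = sum_l v_l / p_l and E[cost] = sum_l p_l * c_l, under the assumed
# illustrative rates v_l = 2^(-beta*l) (variance decay) and
# c_l = 2^(gamma*l) (cost growth).
def inefficiency(alpha, beta=2.0, gamma=1.0, l_max=30):
    l = np.arange(l_max + 1)
    p = geometric_weights(alpha, l_max)
    second_moment = np.sum(2.0 ** (-beta * l) / p)
    avg_cost = np.sum(p * 2.0 ** (gamma * l))
    return second_moment * avg_cost

# Scan the admissible range gamma < alpha < beta: larger alpha lowers the
# average cost but inflates the variance; the inefficiency criterion picks
# the alpha that balances the two.
alphas = np.linspace(1.05, 1.95, 181)
ineff = np.array([inefficiency(a) for a in alphas])
print("alpha minimizing inefficiency:", round(float(alphas[ineff.argmin()]), 3))

An ablation of the kind called for above would compare the α found by such a minimization against the cost-minimizing choice across repeated training runs of RU-MLMC, GRR-MLMC, and TGRR-MLMC, measuring training stability, convergence speed, and posterior accuracy.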