Efficacy of sequential tempering–style neural-network strategies on hard sampling problems

Determine whether iterative, neural-network-assisted tempering strategies such as Sequential Tempering, in which a neural sampler is learned at high temperature and progressively updated as the temperature is lowered, actually perform well on computationally hard sampling problems and can challenge already-existing algorithms. The goal is to establish clear, rigorous benchmarks across hard instances.

Background

The paper discusses challenges in training neural networks to approximate Gibbs–Boltzmann distributions, in particular minimizing the Kullback–Leibler divergence D_KL(P_GB || P_NN): estimating this divergence requires equilibrium samples from P_GB itself, which is exactly what is hard to obtain when direct sampling from the target distribution is infeasible. One proposed solution is to adopt an iterative schedule, often termed Sequential Tempering, in which the model is trained at high temperature, where sampling is easy, and then gradually adapted to lower temperatures.
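To make the scheme concrete, the following is a minimal Python sketch of such a loop on a toy spin-glass energy. The factorized stand-in model, the coupling matrix J, the temperature schedule betas, and the helpers local_metropolis and model_metropolis are illustrative assumptions, not the setup of the paper; a realistic implementation would replace the factorized model with an expressive neural sampler (e.g., an autoregressive network).

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy target (an assumption for illustration): a small spin glass with
    # energy E(s) = -1/2 s^T J s and random symmetric couplings J.
    N = 12
    J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
    J = np.triu(J, 1)
    J = J + J.T  # symmetric, zero diagonal

    def energy(s):
        return -0.5 * s @ J @ s

    # Stand-in for the neural sampler: a factorized model with parameters
    # p_i = P(s_i = +1). Maximum-likelihood fitting on samples from P_GB
    # minimizes D_KL(P_GB || P_NN) within this (very limited) family.
    class FactorizedModel:
        def __init__(self, n):
            self.p = np.full(n, 0.5)

        def fit(self, samples):
            self.p = np.clip((samples == 1).mean(axis=0), 1e-3, 1 - 1e-3)

        def sample(self):
            return np.where(rng.random(self.p.size) < self.p, 1, -1)

        def log_prob(self, s):
            return float(np.sum(np.where(s == 1, np.log(self.p), np.log(1.0 - self.p))))

    def local_metropolis(s, beta, n_sweeps):
        # Plain single-spin-flip Metropolis; only needed at the highest
        # temperature, where equilibration is easy.
        s = s.copy()
        for _ in range(n_sweeps * N):
            i = rng.integers(N)
            dE = 2.0 * s[i] * (J[i] @ s)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
        return s

    def model_metropolis(s, beta, model, n_steps):
        # Metropolis-Hastings with the learned model as an independent proposal:
        # accept s' with min(1, P_GB(s') q(s) / (P_GB(s) q(s'))) at inverse temperature beta.
        s = s.copy()
        for _ in range(n_steps):
            s_new = model.sample()
            log_acc = (beta * (energy(s) - energy(s_new))
                       + model.log_prob(s) - model.log_prob(s_new))
            if rng.random() < np.exp(min(0.0, log_acc)):
                s = s_new
        return s

    # Sequential Tempering loop: equilibrate and train at high temperature, then
    # alternate between lowering the temperature (refreshing the training set with
    # model-assisted Monte Carlo) and refitting the model.
    betas = np.linspace(0.2, 1.5, 8)   # increasing beta = decreasing temperature
    n_chains = 100
    model = FactorizedModel(N)

    samples = np.array([local_metropolis(rng.choice([-1, 1], size=N), betas[0], 50)
                        for _ in range(n_chains)])
    for k, beta in enumerate(betas):
        model.fit(samples)             # (re)train on samples at the current temperature
        if k + 1 < len(betas):
            samples = np.array([model_metropolis(s, betas[k + 1], model, 50)
                                for s in samples])
            mean_E = np.mean([energy(s) for s in samples])
            print(f"beta={betas[k + 1]:.2f}  mean energy per spin={mean_E / N:.3f}")

Whether this alternation of model-assisted sampling and retraining keeps producing well-equilibrated training data as the temperature drops on genuinely hard instances is precisely the question to be benchmarked.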

The authors note conflicting evidence in the literature regarding the performance of such approaches on genuinely hard problems, citing both positive and negative results. This motivates a precise and systematic assessment of whether these strategies can reliably match or surpass established baselines on hard instances.

References

It is still unclear whether this kind of strategy actually performs well on hard problems and can challenge already-existing algorithms, with both positive and negative results reported in the literature.