Feasibility of a self-consistent simulated-tempering training procedure for 4N without external data

Determine whether an iterative, self-consistent procedure can train the Nearest-Neighbours Neural Network (4N) to sample low-temperature Gibbs–Boltzmann distributions without relying on externally generated training configurations, by progressively lowering the temperature and updating the model within a simulated-tempering framework.
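Stated concretely, the loop would alternate two operations: retrain the model at the current temperature using only its own samples, then lower the temperature, warm-starting from the previous parameters. The sketch below illustrates this under strong simplifying assumptions: a factorized (mean-field) ansatz stands in for the 4N architecture, a 1D ferromagnetic Ising chain stands in for the disordered energy function, and training minimizes the variational free energy with a REINFORCE-style gradient. All names, the energy function, and the hyperparameters are illustrative, not the paper’s procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 4N: a factorized ansatz q_theta(s) over N Ising spins,
# with p_i = sigmoid(theta_i) the probability that spin i is +1.
N = 32
J = 1.0  # ferromagnetic 1D chain coupling (illustrative energy function)

def energy(s):
    # 1D Ising chain with open boundaries: E(s) = -J * sum_i s_i s_{i+1}
    return -J * np.sum(s[:, :-1] * s[:, 1:], axis=1)

def sample(theta, n):
    p = np.clip(1.0 / (1.0 + np.exp(-theta)), 1e-9, 1 - 1e-9)
    s = np.where(rng.random((n, N)) < p, 1.0, -1.0)
    return s, p

def log_q(s, p):
    return np.sum(np.where(s > 0, np.log(p), np.log(1 - p)), axis=1)

# Self-consistent annealing: at each T, minimize the variational free energy
# F(theta; T) = <E>_q + T <log q>_q on the model's own samples, warm-starting
# from the parameters obtained at the previous (higher) temperature.
theta = np.zeros(N)
for T in np.linspace(2.5, 0.5, 9):           # progressively lower temperatures
    for step in range(200):                   # retrain the model at this T
        s, p = sample(theta, 512)
        f = energy(s) + T * log_q(s, p)       # per-sample free-energy estimator
        baseline = f.mean()                   # variance reduction (REINFORCE)
        glq = (s + 1.0) / 2.0 - p             # grad of log q for the factorized ansatz
        grad = ((f - baseline)[:, None] * glq).mean(axis=0)
        theta -= 0.05 * grad                  # descend the free energy
    print(f"T={T:.2f}  F/N={f.mean()/N:+.3f}  <E>/N={energy(s).mean()/N:+.3f}")
```

Because the only inputs are samples drawn from the model itself, no externally generated configurations are needed; the open question raised by the authors is whether such a loop remains on the equilibrium measure as the temperature decreases in genuinely hard, disordered energy landscapes.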

Background

In the present work, the 4N model is trained on equilibrium configurations generated by parallel tempering at each temperature, enabling a best-case assessment of the architecture’s expressivity. The authors identify, as the next step, removing this reliance on an external sampler by introducing a self-consistent training scheme that lowers the temperature iteratively while adapting the model.
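For contrast with the proposed scheme, the current supervised phase amounts to maximum-likelihood training on externally generated equilibrium configurations. A minimal sketch under the same toy assumptions as above, with a factorized model standing in for 4N and a hypothetical `configs` array holding parallel-tempering samples at one fixed temperature:

```python
import numpy as np

def fit_by_likelihood(configs, lr=0.1, steps=500):
    # configs: array of shape (n_samples, N) with spins in {-1, +1},
    # assumed to be equilibrium configurations from an external sampler
    # (in the paper, parallel tempering). Training maximizes the mean
    # log-likelihood, i.e. minimizes the forward KL from data to model.
    n, N = configs.shape
    theta = np.zeros(N)                  # logits of p(s_i = +1)
    b = (configs + 1.0) / 2.0            # 0/1 encoding of the spins
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-theta))
        grad = (b - p).mean(axis=0)      # gradient of the mean log-likelihood
        theta += lr * grad               # ascend the likelihood
    return theta

# Illustrative usage with fake data standing in for PT samples:
# configs = np.sign(np.random.default_rng(1).standard_normal((1000, 32)))
# theta = fit_by_likelihood(configs)
```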

Such a procedure, if feasible, could yield a simulated-tempering scheme whose cost scales favorably with the number of variables, with potential impact on both simulations of disordered systems and optimization problems. The authors explicitly flag the feasibility of this approach as a topic for future investigation.
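One standard ingredient such a scheme could reuse, however the model is trained, is a Metropolis-Hastings correction with the network as an independence proposal: the acceptance rule guarantees that the chain targets the exact Boltzmann distribution at inverse temperature beta, with the model’s quality affecting only the acceptance rate. The following is a generic sketch of that acceptance step, not the paper’s specific update rule:

```python
import numpy as np

rng = np.random.default_rng(2)

def neural_mh_step(s_old, log_q, sample_q, energy, beta):
    # Propose a fresh configuration from the trained model and accept with
    # the Metropolis-Hastings probability targeting exp(-beta * E):
    #   A = min(1, exp(-beta * (E_new - E_old)) * q(s_old) / q(s_new))
    # log_q, sample_q, energy are user-supplied callables (hypothetical API).
    s_new = sample_q()
    log_a = (-beta * (energy(s_new) - energy(s_old))
             + log_q(s_old) - log_q(s_new))
    if np.log(rng.random()) < log_a:
        return s_new, True
    return s_old, False
```

In a simulated-tempering setting the same acceptance rule would apply at each temperature rung, so the overall cost is dominated by the per-sample cost of the architecture; this is the quantity for which, per the excerpt below, 4N achieves the best possible scaling with system size.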

References

Having tested that $4N$ achieves the best possible scaling with system size while being expressive enough, the next step is to check whether it is possible to apply an iterative, self-consistent procedure to automatically train the network at lower and lower temperatures without the aid of training data generated with a different algorithm (here, parallel tempering). Whether setting up such a procedure is actually possible will be the focus of future work.