The long road to calibrated prediction uncertainty in computational chemistry (2201.01511v2)
Abstract: Uncertainty quantification (UQ) in computational chemistry (CC) is still in its infancy. Very few CC methods are designed to provide a confidence level on their predictions, and most users still rely, improperly, on the mean absolute error as an accuracy metric. The development of reliable UQ methods is essential, notably for computational chemistry to be used confidently in industrial processes. A review of the CC-UQ literature shows that there is no common standard procedure to report or validate prediction uncertainty. I consider here analysis tools based on concepts (calibration and sharpness) developed in meteorology and machine learning for the validation of probabilistic forecasters. These tools are adapted to CC-UQ and applied to datasets of prediction uncertainties provided by composite methods, Bayesian ensemble methods, machine learning, and a posteriori statistical methods.
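The calibration/sharpness pair mentioned in the abstract can be illustrated with a minimal sketch (not taken from the paper; the function name and synthetic data are illustrative). Calibration is checked here as the empirical coverage of nominal 95% prediction intervals built from each prediction's reported uncertainty, and sharpness as the mean reported uncertainty:

```python
import numpy as np

def calibration_and_sharpness(errors, uncertainties, z=1.96):
    """Return (coverage, sharpness) for a set of predictions.

    coverage  -- fraction of |error| falling within z * uncertainty,
                 i.e. the empirical coverage of nominal ~95% intervals
                 (a calibrated forecaster gives coverage close to 0.95)
    sharpness -- mean predicted uncertainty (smaller is sharper)
    """
    errors = np.asarray(errors, dtype=float)
    uncertainties = np.asarray(uncertainties, dtype=float)
    coverage = float(np.mean(np.abs(errors) <= z * uncertainties))
    sharpness = float(np.mean(uncertainties))
    return coverage, sharpness

# Synthetic check: if each error is drawn from N(0, u_i^2), the reported
# uncertainties u_i are perfectly calibrated by construction.
rng = np.random.default_rng(0)
u = rng.uniform(0.5, 2.0, size=10_000)   # reported standard uncertainties
e = rng.normal(0.0, u)                   # simulated prediction errors
cov, sharp = calibration_and_sharpness(e, u)
print(f"coverage = {cov:.3f} (target ~0.95), sharpness = {sharp:.3f}")
```

Among two calibrated forecasters, the sharper one (smaller mean uncertainty) is preferred; sharpness alone is meaningless without calibration.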