
Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification (2111.14844v5)

Published 29 Nov 2021 in cs.LG, cs.AI, and nlin.CD

Abstract: Ensemble forecasting is, so far, the most successful approach to producing relevant forecasts with an estimation of their uncertainty. Its main limitations are the high computational cost and the difficulty of capturing and quantifying different sources of uncertainty, particularly those associated with model errors. In this work we perform toy-model and state-of-the-art model experiments to analyze to what extent artificial neural networks (ANNs) are able to model the different sources of uncertainty present in a forecast, in particular those associated with the accuracy of the initial conditions and those introduced by model error. We also compare different training strategies: one based on direct training, using the mean and spread of an ensemble forecast as targets; the others rely on indirect training, using an analyzed state as target, in which the uncertainty is implicitly learned from the data. Experiments using the Lorenz'96 model show that ANNs are able to emulate some of the properties of ensemble forecasts, such as the filtering of the most unpredictable modes and a state-dependent quantification of the forecast uncertainty. Moreover, ANNs provide a reliable estimation of the forecast uncertainty in the presence of model error. Preliminary experiments conducted with a state-of-the-art forecasting system also confirm the ability of ANNs to produce a reliable quantification of the forecast uncertainty.
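The ensemble forecasts that serve as the baseline here are produced by perturbing the initial condition of the Lorenz'96 model and integrating each member forward; the ensemble mean and spread then give a state-dependent uncertainty estimate. A minimal sketch of that procedure is below. The parameter choices (K=40 variables, forcing F=8, RK4 with dt=0.05, 20 members, 0.1 initial-condition spread) are common conventions for Lorenz'96 experiments, not values taken from this paper:

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Lorenz'96 tendency: dx_k/dt = (x_{k+1} - x_{k-2}) * x_{k-1} - x_k + F.

    Rolling along the last axis lets the same function handle a single
    state vector or a whole (members, K) ensemble array.
    """
    return (np.roll(x, -1, axis=-1) - np.roll(x, 2, axis=-1)) * np.roll(x, 1, axis=-1) - x + forcing

def step_rk4(x, dt=0.05, forcing=8.0):
    """Advance the state one step with fourth-order Runge-Kutta."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def ensemble_forecast(x0, n_members=20, n_steps=40, ic_spread=0.1, seed=0):
    """Perturb the initial condition, integrate every member forward,
    and return the ensemble mean and spread (std) of the final states."""
    rng = np.random.default_rng(seed)
    members = x0 + ic_spread * rng.standard_normal((n_members, x0.size))
    for _ in range(n_steps):
        members = step_rk4(members)
    return members.mean(axis=0), members.std(axis=0)

# Spin up from a slightly perturbed rest state to reach the attractor,
# then launch an ensemble forecast from the resulting state.
K = 40
x0 = 8.0 * np.ones(K)
x0[0] += 0.01
for _ in range(500):
    x0 = step_rk4(x0)
mean, spread = ensemble_forecast(x0)
```

In the direct training strategy described above, pairs (mean, spread) computed this way would serve as regression targets for the ANN; in the indirect strategies, the network sees only analyzed states and must learn a comparable spread implicitly, e.g. through a probabilistic loss.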

Citations (9)
