- The paper introduces an AutoML tool that integrates statistical and machine learning models for probabilistic time series forecasting.
- The evaluation on 29 benchmark datasets shows that the ensemble approach consistently outperforms established statistical, deep-learning, and AutoML baselines in forecast accuracy while keeping training times moderate.
- The framework democratizes advanced forecasting: it requires minimal code and offers customizable presets for diverse forecasting scenarios.
Overview of AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting
The paper presents AutoGluon–TimeSeries (AG–TS), an open-source AutoML library designed for probabilistic time series forecasting. AG–TS aims to provide ease of use and robustness, enabling users to generate accurate point and quantile forecasts with minimal coding effort. Building on the design philosophy of AutoGluon, AG–TS utilizes ensembles of diverse forecasting models to achieve high accuracy efficiently.
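In practice, the intended workflow takes only a few lines of Python. The sketch below uses the library's TimeSeriesDataFrame and TimeSeriesPredictor interface; the file name, column names, horizon, and preset are illustrative assumptions rather than values from the paper.

```python
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

# Long-format data: one row per (item_id, timestamp) pair with a "target" column.
df = pd.read_csv("train.csv")  # hypothetical file with item_id, timestamp, target columns
df["timestamp"] = pd.to_datetime(df["timestamp"])
train_data = TimeSeriesDataFrame.from_data_frame(
    df, id_column="item_id", timestamp_column="timestamp"
)

# Fit an ensemble of statistical and deep learning models within a time budget.
predictor = TimeSeriesPredictor(
    prediction_length=48,   # forecast horizon, e.g. 48 steps ahead
    target="target",
    eval_metric="MASE",     # metric used for validation and ensemble weighting
)
predictor.fit(train_data, presets="medium_quality", time_limit=600)

# Probabilistic forecasts: point predictions plus quantile columns for each series.
predictions = predictor.predict(train_data)
print(predictions.head())
```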
Key Features
AG–TS integrates both conventional statistical models and modern machine-learning-based forecasting techniques. It automatically assembles an ensemble of models to enhance prediction accuracy while keeping training times within a user-specified budget. The framework produces both point and probabilistic (quantile) forecasts for collections of univariate time series, and accommodates static and time-varying covariates, making it versatile for a wide range of forecasting scenarios.
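As a sketch of how covariates might be supplied (the series names, covariate columns, and preset below are illustrative assumptions, not details from the paper): time-varying covariates are extra columns in the training frame, covariates whose future values are known are declared via known_covariates_names, and static per-series features are attached as a separate table.

```python
import numpy as np
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

# Toy long-format data: two series with a time-varying covariate "price".
timestamps = pd.date_range("2024-01-01", periods=200, freq="h")
rng = np.random.default_rng(1)
df = pd.concat([
    pd.DataFrame({
        "item_id": item,
        "timestamp": timestamps,
        "price": rng.uniform(1.0, 2.0, len(timestamps)),
        "target": rng.poisson(10, len(timestamps)).astype(float),
    })
    for item in ["store_A", "store_B"]
])
train_data = TimeSeriesDataFrame.from_data_frame(
    df, id_column="item_id", timestamp_column="timestamp"
)

# Static (per-series) features, indexed by item_id.
train_data.static_features = pd.DataFrame(
    {"region": ["north", "south"]},
    index=pd.Index(["store_A", "store_B"], name="item_id"),
)

prediction_length = 24
predictor = TimeSeriesPredictor(
    prediction_length=prediction_length,
    target="target",
    known_covariates_names=["price"],  # covariates whose future values are known
)
predictor.fit(train_data, presets="fast_training", time_limit=300)

# Future values of known covariates must be supplied for the forecast horizon.
future_timestamps = pd.date_range(
    timestamps[-1] + pd.Timedelta(hours=1), periods=prediction_length, freq="h"
)
future_index = pd.MultiIndex.from_product(
    [["store_A", "store_B"], future_timestamps], names=["item_id", "timestamp"]
)
known_covariates = TimeSeriesDataFrame(
    pd.DataFrame({"price": rng.uniform(1.0, 2.0, len(future_index))}, index=future_index)
)
predictions = predictor.predict(train_data, known_covariates=known_covariates)
```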
Empirical Evaluation
The paper evaluates AG–TS on 29 benchmark datasets, comparing it against established forecasting methods and other AutoML systems. The results indicate that AG–TS consistently outperforms existing approaches in both point and probabilistic forecast accuracy, and it often surpasses the best-in-hindsight combination of prior methods. AG–TS also degrades gracefully: if individual models fail during training, it still returns valid forecasts from the models that trained successfully, further underscoring its robustness.
Design Principles
AG–TS follows key design principles of the AutoGluon framework, such as favoring ensembling over hyperparameter optimization (HPO) and neural architecture search. While HPO is supported, AG–TS typically relies on a wide array of models combined via ensemble selection to achieve strong performance without excessive training times. User-friendly presets let users adjust the trade-off between accuracy and training duration, while advanced users can customize model choices and hyperparameters for specific forecasting needs, as in the sketch below.
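The following sketch reuses `train_data` from the earlier example and contrasts the two modes of use; the "WQL" metric name reflects recent versions of the library, and the preset, horizon, and model choices are illustrative assumptions.

```python
from autogluon.timeseries import TimeSeriesPredictor

# Preset-driven use: trade accuracy for training time with a single argument.
predictor = TimeSeriesPredictor(prediction_length=24, target="target", eval_metric="WQL")
predictor.fit(train_data, presets="best_quality", time_limit=3600)

# Advanced use: pick specific models instead of the default preset roster.
custom = TimeSeriesPredictor(prediction_length=24, target="target", eval_metric="WQL")
custom.fit(
    train_data,
    hyperparameters={
        "ETS": {},      # local statistical model with default settings
        "DeepAR": {},   # global neural model with default settings
    },
    time_limit=1200,
)
```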
Forecasting Models and Techniques
AG–TS incorporates both local models such as ARIMA and ETS, which are fitted individually to each time series, and global models such as DeepAR and PatchTST, which learn from all series jointly. This combination broadens applicability and improves efficiency, particularly when scaling to larger datasets. The ensembling step uses a forward-selection algorithm to compute a convex combination of model predictions that optimizes the chosen evaluation metric, such as weighted quantile loss (wQL) or mean absolute scaled error (MASE); combining probabilistic forecasts by averaging quantiles in this way is known as Vincentization.
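The forward-selection procedure is essentially Caruana-style ensemble selection applied to validation forecasts. The standalone sketch below is not AG–TS internals; it uses a simple mean-absolute-error loss and synthetic forecasts to illustrate how greedy selection with replacement yields convex combination weights.

```python
import numpy as np

def ensemble_selection(predictions, y_true, loss_fn, n_iterations=100):
    """Greedy forward selection (with replacement) over model forecasts.

    predictions: dict mapping model name -> array of validation forecasts.
    Returns convex weights approximating the loss-minimizing combination.
    """
    names = list(predictions)
    counts = {name: 0 for name in names}
    blend = np.zeros_like(next(iter(predictions.values())))
    total = 0
    for _ in range(n_iterations):
        # Try adding one more copy of each model; keep the one that helps most.
        best_name, best_loss = None, np.inf
        for name in names:
            candidate = (blend * total + predictions[name]) / (total + 1)
            loss = loss_fn(y_true, candidate)
            if loss < best_loss:
                best_name, best_loss = name, loss
        counts[best_name] += 1
        total += 1
        blend = (blend * (total - 1) + predictions[best_name]) / total
    return {name: count / total for name, count in counts.items()}

# Toy usage: blend two forecasters under mean absolute error; weights sum to 1.
rng = np.random.default_rng(0)
y_val = rng.normal(size=100)
preds = {
    "model_a": y_val + rng.normal(0.0, 0.5, 100),
    "model_b": y_val + rng.normal(0.0, 1.0, 100),
}
mae = lambda y, f: float(np.mean(np.abs(y - f)))
print(ensemble_selection(preds, y_val, mae))
```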
Implications and Future Directions
The introduction of AG–TS has significant implications for practitioners seeking to implement time series forecasting without extensive domain expertise. By providing an effective AutoML tool, AG–TS democratizes access to advanced forecasting capabilities, facilitating better decision-making across industries that rely on time-dependent data.
For theoretical development, AG–TS's seamless integration of diverse models opens avenues for exploring enhanced ensemble strategies, incorporating calibration techniques such as conformal prediction, and addressing novel forecasting challenges such as cold-start conditions or generating forecast explanations.
As demand for large-scale, automated forecasting solutions grows, future work could focus on improving AG–TS's scalability to even larger datasets and on integrating it with broader AI systems, such as LLMs, to augment its utility in complex, real-world applications.