High-Resolution Peak Demand Estimation Using Generalized Additive Models and Deep Neural Networks (2203.03342v2)

Published 7 Mar 2022 in cs.LG, cs.SY, econ.EM, eess.SY, stat.AP, and stat.ML

Abstract: This paper covers predicting high-resolution electricity peak demand features given lower-resolution data. This is a relevant setup as it answers whether limited higher-resolution monitoring helps to estimate future high-resolution peak loads when the high-resolution data is no longer available. That question is particularly interesting for network operators considering replacing high-resolution monitoring predictive models due to economic considerations. We propose models to predict half-hourly minima and maxima of high-resolution (every minute) electricity load data while model inputs are of a lower resolution (30 minutes). We combine predictions of generalized additive models (GAM) and deep artificial neural networks (DNN), which are popular in load forecasting. We extensively analyze the prediction models, including the input parameters' importance, focusing on load, weather, and seasonal effects. The proposed method won a data competition organized by Western Power Distribution, a British distribution network operator. In addition, we provide a rigorous evaluation study that goes beyond the competition frame to analyze the models' robustness. The results show that the proposed methods are superior to the competition benchmark concerning the out-of-sample root mean squared error (RMSE). This holds regarding the competition month and the supplementary evaluation study, which covers an additional eleven months. Overall, our proposed model combination reduces the out-of-sample RMSE by 57.4% compared to the benchmark.
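The abstract describes combining GAM and DNN forecasts of half-hourly load extrema and scoring them by out-of-sample RMSE. The sketch below illustrates the general idea with a minimal equal-weight forecast combination on toy data; the function names, the weighting scheme, and all numbers are illustrative assumptions, not the paper's actual models or results.

```python
# Hypothetical sketch of a forecast combination scored by RMSE.
# The toy values and the equal 0.5/0.5 weights are assumptions for
# illustration; the paper's GAM and DNN models are not reproduced here.
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error, the evaluation metric named in the abstract."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy half-hourly maxima (kW) and predictions from two stand-in models.
y_true = np.array([5.2, 6.1, 7.3, 6.8])
gam_pred = np.array([5.0, 6.4, 7.0, 6.5])  # stand-in for GAM forecasts
dnn_pred = np.array([5.5, 5.9, 7.6, 7.1])  # stand-in for DNN forecasts

# Simple equal-weight combination of the two forecasts.
combo_pred = 0.5 * (gam_pred + dnn_pred)

print(f"GAM RMSE:   {rmse(y_true, gam_pred):.4f}")
print(f"DNN RMSE:   {rmse(y_true, dnn_pred):.4f}")
print(f"Combo RMSE: {rmse(y_true, combo_pred):.4f}")
```

On these toy numbers the two models err in opposite directions, so their average scores a lower RMSE than either one alone, which is the usual motivation for combining forecasters of different families.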

Citations (4)
