
Transformer Training Strategies for Forecasting Multiple Load Time Series (2306.10891v3)

Published 19 Jun 2023 in cs.LG and cs.AI

Abstract: In the smart grid of the future, accurate load forecasts on the level of individual clients can help to balance supply and demand locally and to prevent grid outages. While the number of monitored clients will increase with the ongoing smart meter rollout, the amount of data per client will always be limited. We evaluate whether a Transformer load forecasting model benefits from a transfer learning strategy, where a global univariate model is trained on the load time series from multiple clients. In experiments with two datasets containing load time series from several hundred clients, we find that the global training strategy is superior to the multivariate and local training strategies used in related work. On average, the global training strategy results in 21.8% and 12.8% lower forecasting errors than the two other strategies, measured across forecasting horizons from one day to one month into the future. A comparison to linear models, multi-layer perceptrons and LSTMs shows that Transformers are effective for load forecasting when they are trained with the global training strategy.

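The abstract's central comparison is between three ways of assembling training data from a collection of client load time series: local (one univariate model per client), multivariate (one model that forecasts all clients jointly), and global (one univariate model trained on pooled windows from every client). The sketch below is a minimal illustration of that distinction, not the authors' implementation; the `history` and `horizon` window parameters and the array shapes are assumptions chosen for clarity.

```python
# Minimal sketch (illustrative only, not the paper's code) of how the local,
# multivariate and global training strategies build samples from a matrix
# `loads` of shape (time_steps, n_clients).
import numpy as np

def sliding_windows(series, history, horizon):
    """Split one univariate series into (input, target) windows."""
    X, y = [], []
    for t in range(len(series) - history - horizon + 1):
        X.append(series[t:t + history])
        y.append(series[t + history:t + history + horizon])
    return np.array(X), np.array(y)

def local_strategy(loads, history, horizon):
    """One univariate dataset per client -> one model trained per client."""
    return {c: sliding_windows(loads[:, c], history, horizon)
            for c in range(loads.shape[1])}

def multivariate_strategy(loads, history, horizon):
    """One dataset whose samples cover all clients jointly -> one multivariate model."""
    X, y = [], []
    for t in range(loads.shape[0] - history - horizon + 1):
        X.append(loads[t:t + history, :])                      # (history, n_clients)
        y.append(loads[t + history:t + history + horizon, :])  # (horizon, n_clients)
    return np.array(X), np.array(y)

def global_strategy(loads, history, horizon):
    """Pool univariate windows from all clients -> one univariate model
    trained on every client's data."""
    Xs, ys = zip(*(sliding_windows(loads[:, c], history, horizon)
                   for c in range(loads.shape[1])))
    return np.concatenate(Xs), np.concatenate(ys)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    loads = rng.random((24 * 60, 5))  # 60 days of hourly load for 5 clients (synthetic)
    X_glob, y_glob = global_strategy(loads, history=24 * 7, horizon=24)
    print(X_glob.shape, y_glob.shape)  # far more samples than any single client provides
```

The pooling in the global strategy is what the abstract points to: each individual client contributes only a limited amount of data, but the single shared model is trained on the union of all clients' windows.
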
Authors (7)
  1. Matthias Hertel (5 papers)
  2. Maximilian Beichter (8 papers)
  3. Benedikt Heidrich (10 papers)
  4. Oliver Neumann (12 papers)
  5. Benjamin Schäfer (47 papers)
  6. Ralf Mikut (55 papers)
  7. Veit Hagenmeyer (72 papers)
Citations (11)
