
Electricity Price Prediction for Energy Storage System Arbitrage: A Decision-focused Approach (2305.00362v1)

Published 30 Apr 2023 in cs.LG, cs.SY, and eess.SY

Abstract: Electricity price prediction plays a vital role in energy storage system (ESS) management. Current prediction models focus on reducing prediction errors but overlook their impact on downstream decision-making. This paper therefore proposes a decision-focused electricity price prediction approach for ESS arbitrage to bridge the gap from the downstream optimization model to the prediction model. The decision-focused approach aims at utilizing the downstream arbitrage model for training prediction models. It measures the difference between actual decisions under the predicted price and oracle decisions under the true price, i.e., decision error, by regret, transforms it into the tractable surrogate regret, and then derives the gradients to predicted price for training prediction models. Based on the prediction and decision errors, this paper proposes the hybrid loss and corresponding stochastic gradient descent learning method to learn prediction models for prediction and decision accuracy. The case study verifies that the proposed approach can efficiently bring more economic benefits and reduce decision errors by flattening the time distribution of prediction errors, compared to prediction models that only minimize prediction errors.

Citations (21)

Summary

  • The paper introduces a decision-focused method that minimizes both prediction and decision errors by integrating the ESS optimization model into the training loop.
  • It quantifies decision error using surrogate regret and combines it with MSE in a hybrid loss function to balance accuracy with economic performance.
  • The approach achieves up to 47% higher average daily benefits over standard MSE-based models, highlighting its practical value in ESS arbitrage.

This paper proposes a decision-focused approach for electricity price prediction specifically tailored to optimizing Energy Storage System (ESS) arbitrage (2305.00362). It addresses the limitation of traditional prediction models that focus solely on minimizing prediction errors (such as Mean Squared Error, MSE) without considering how these errors affect the downstream decision-making process (i.e., the ESS charging/discharging schedule that maximizes profit).

The core idea is to integrate the downstream ESS arbitrage optimization model into the training loop of the upstream price prediction model. The goal is to minimize not just the prediction error but also the decision error, which is the difference in profitability between decisions made using predicted prices and decisions made using the actual (oracle) prices.

Methodology:

  1. Quantifying Decision Error (Regret): The paper uses regret to measure the decision error. Regret is defined as the difference between the optimal profit achievable with perfect knowledge of future prices (oracle profit) and the actual profit obtained using decisions based on predicted prices.

    $$\mathrm{regret}(\hat{\lambda}, \lambda) = \lambda^T P^*(\lambda) - \lambda^T P^*(\hat{\lambda})$$

    where $\lambda$ is the true price, $\hat{\lambda}$ is the predicted price, $P^*(\lambda)$ is the optimal ESS schedule under the true prices, and $P^*(\hat{\lambda})$ is the optimal schedule under the predicted prices.

  2. Tractable Surrogate Regret: Directly optimizing the prediction model to minimize regret is difficult because the regret function is discontinuous and non-differentiable with respect to the predicted price $\hat{\lambda}$, especially for optimization problems involving binary variables such as the ESS arbitrage model (a Mixed-Integer Linear Program, MILP). To overcome this, the paper derives a tractable, differentiable upper bound called the surrogate regret ($L^{regret}$), inspired by the Smart Predict-then-Optimize (SPO) framework [Elmachtoub2020].

    $$L^{regret}(\hat{\lambda}, \lambda) = (\lambda - 2\hat{\lambda})^T P^*(\lambda - 2\hat{\lambda}) - 2\hat{\lambda}^T P^*(\lambda) + c^*(\lambda)$$

    where $c^*(\lambda)$ is the optimal oracle profit and $P^*(\lambda - 2\hat{\lambda})$ is the optimal ESS schedule computed with the modified cost vector $(\lambda - 2\hat{\lambda})$.

  3. Gradient Calculation: A key contribution is deriving the gradient of this surrogate regret with respect to the predicted price:

    $$\frac{\partial L^{regret}(\hat{\lambda}, \lambda)}{\partial \hat{\lambda}} = -2\left(P^*(\lambda) + P^*(\lambda - 2\hat{\lambda})\right)$$

    This gradient requires solving the ESS arbitrage MILP twice: once with the true price $\lambda$ and once with the modified price $(\lambda - 2\hat{\lambda})$.

  4. Hybrid Loss Function: To balance prediction accuracy and decision quality, a hybrid loss function ($L^{comb}$) is proposed, combining the surrogate regret with the traditional MSE loss, weighted by a hyperparameter $\epsilon$:

    $$L^{comb}(\hat{\lambda}, \lambda) = L^{regret}(\hat{\lambda}, \lambda) + \epsilon L^{MSE}(\hat{\lambda}, \lambda)$$

    The hyperparameter $\epsilon$ controls the trade-off: a higher $\epsilon$ emphasizes prediction accuracy (lower MSE), while a lower $\epsilon$ emphasizes decision quality (lower regret).

  5. Hybrid SGD Learning Method: A specialized Stochastic Gradient Descent (SGD) training procedure is introduced (Algorithm 1). In each training step for a batch of data:
    • The gradient of the weighted MSE term ($\epsilon L^{MSE}$) is calculated using standard back-propagation (e.g., via PyTorch's Autograd).
    • The gradient of the surrogate regret term ($L^{regret}$) is calculated explicitly using the formula derived above (requiring MILP solves).
    • Both gradients are accumulated.
    • The prediction model's parameters are updated once using the combined gradient. This involves two separate back-propagation passes before a single parameter update (a PyTorch sketch of this step follows the list).
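
Below is a minimal PyTorch sketch of one such hybrid SGD step. It assumes a prediction model that maps daily features to a vector of hourly prices and a hypothetical helper `solve_ess_arbitrage(prices)` that wraps the ESS arbitrage MILP and returns the optimal schedule $P^*$ for a given price vector; the names, shapes, and batch handling are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def hybrid_sgd_step(model, optimizer, x_batch, lam_batch, solve_ess_arbitrage, eps):
    """One hybrid-loss training step: surrogate regret + eps * MSE.

    solve_ess_arbitrage(prices) is a hypothetical helper wrapping the ESS
    arbitrage MILP; it returns the optimal schedule P*(prices) as a 1-D array.
    """
    optimizer.zero_grad()
    lam_hat = model(x_batch)                      # predicted prices, shape (batch, T)

    # MSE term: standard back-propagation of the weighted MSE loss.
    mse_loss = eps * F.mse_loss(lam_hat, lam_batch)
    mse_loss.backward(retain_graph=True)          # accumulates eps * dMSE/dtheta

    # Surrogate regret term: explicit gradient w.r.t. the predicted prices,
    # dL^regret/dlam_hat = -2 * (P*(lam) + P*(lam - 2*lam_hat)),
    # which requires two MILP solves per sample.
    grad_rows = []
    for lam_hat_i, lam_i in zip(lam_hat.detach(), lam_batch):
        p_true = solve_ess_arbitrage(lam_i.cpu().numpy())                     # P*(lambda)
        p_mod = solve_ess_arbitrage((lam_i - 2.0 * lam_hat_i).cpu().numpy())  # P*(lambda - 2*lambda_hat)
        grad_rows.append(-2.0 * (torch.as_tensor(p_true) + torch.as_tensor(p_mod)))
    grad_lam_hat = torch.stack(grad_rows).to(lam_hat) / len(grad_rows)

    # Second back-propagation pass: chain the explicit regret gradient through
    # the prediction model, then apply a single parameter update.
    lam_hat.backward(grad_lam_hat)
    optimizer.step()
```

Averaging the regret gradient over the batch here simply mirrors the mean reduction of `F.mse_loss`; whether the paper averages or sums per batch is a detail not recoverable from this summary.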

Implementation Details:

  • Prediction Models: The approach is demonstrated using both a simple Linear Regression model and a more complex Residual Neural Network (ResNet), showing applicability across models with different representational capacities.
  • ESS Arbitrage Model: A standard MILP formulation for ESS arbitrage is used, considering charging/discharging efficiencies, power limits, energy capacity limits, and constraints preventing simultaneous charging and discharging (using binary variables and the big-M method).
    • Maximize $\sum_{t=1}^T \lambda_t P_t \Delta_t$
    • Subject to energy balance, capacity limits ($E_{min}$, $E_{max}$), power limits ($P_{ch}^{max}$, $P_{dis}^{max}$), efficiencies ($\eta_{ch}$, $\eta_{dis}$), and binary constraints (a code sketch of this formulation follows the list).
  • Data & Features: Six years of hourly PJM market data (price, load, temperature) are used. Features include historical load/price, future temperature forecasts, and calendar features (day of week, holiday). Input features are standardized, and the output (price) is predicted in log scale.
  • Software: Implemented using Python with PyTorch for neural networks and Cvxpy (likely with a MILP solver like Gurobi or CPLEX) for the ESS optimization.
  • Hyperparameters: Key hyperparameters include the ResNet architecture (hidden layers, dropout), optimizer settings (Adam, learning rate), batch size, the hybrid loss weight $\epsilon$ (tuned via experiments; found to be 25 for the ResNet in the case study), and ESS parameters (capacity, power ratings, efficiencies).
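
For reference, here is an illustrative CVXPY sketch of the ESS arbitrage MILP described above; it could also serve as the `solve_ess_arbitrage` helper assumed in the earlier training-step sketch. The parameter defaults and the sign convention (positive net power = discharging/selling) are placeholder assumptions, not the paper's exact settings.

```python
import cvxpy as cp
import numpy as np

def solve_ess_arbitrage(prices, dt=1.0, e0=250.0, e_min=50.0, e_max=500.0,
                        p_ch_max=100.0, p_dis_max=100.0, eta_ch=0.95, eta_dis=0.95):
    """Maximize arbitrage profit sum_t lambda_t * P_t * dt over one price horizon.

    Returns the net schedule P = P_dis - P_ch (positive = discharging/selling).
    All parameter values are illustrative placeholders.
    """
    prices = np.asarray(prices, dtype=float)
    T = len(prices)

    p_ch = cp.Variable(T, nonneg=True)    # charging power
    p_dis = cp.Variable(T, nonneg=True)   # discharging power
    u = cp.Variable(T, boolean=True)      # 1 while charging, 0 while discharging
    e = cp.Variable(T + 1)                # stored energy

    constraints = [
        e[0] == e0,
        # Energy balance with charging/discharging efficiencies.
        e[1:] == e[:T] + (eta_ch * p_ch - p_dis / eta_dis) * dt,
        e >= e_min, e <= e_max,            # energy capacity limits
        p_ch <= p_ch_max * u,              # binary variable forbids simultaneous
        p_dis <= p_dis_max * (1 - u),      # charging and discharging
    ]
    profit = prices @ (p_dis - p_ch) * dt
    cp.Problem(cp.Maximize(profit), constraints).solve()  # requires a MILP-capable solver
    return p_dis.value - p_ch.value
```

In this sketch the rated power limits double as the big-M constants in the mutual-exclusion constraints, which is one common way the big-M construction appears for this model.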

Results and Practical Implications:

  • The decision-focused prediction (DFP) model trained with the hybrid loss significantly outperforms models trained solely on MSE, especially in terms of realized arbitrage profit and reduced regret, even if its raw prediction accuracy (RMSE, MAPE) might be slightly worse than that of some highly tuned prediction-focused models (such as MLP or Random Forest).
  • The DFP model achieved ~47% higher average daily benefits than the MSE-based ResNet model and ~6% higher benefits than a tuned MLP model for a 500 kWh ESS example.
  • The key reason for improved performance is that the hybrid loss encourages the prediction model to capture the timing of price fluctuations more accurately, even if the exact price magnitude prediction error is slightly higher. It flattens the distribution of prediction errors across the day, reducing large errors during critical high/low price periods crucial for arbitrage decisions.
  • The optimal value for $\epsilon$ depends on the prediction model's capacity and the data characteristics, representing a trade-off. For simple models (Linear), a higher $\epsilon$ might be needed to maintain reasonable prediction accuracy while improving decisions. For complex models (ResNet), a smaller $\epsilon$ can effectively guide the model towards better decisions without significantly sacrificing prediction accuracy.
  • The approach provides a practical framework for aligning prediction model training directly with the economic objectives of the downstream task, leading to tangible improvements in application performance (higher profits for ESS operators).