General Slope Attack in Forecasting Models
- General Slope Attack is a gradient-based adversarial method that optimizes forecast endpoint slopes in financial deep learning models.
- It employs a specialized loss function and iterative optimization to double, reverse, or flatten predictive trends with minimal input alteration.
- Empirical evaluations show that GSA achieves effective trend manipulation with marginal error increase and improved stealth compared to methods like FGSM and BIM.
The General Slope Attack (GSA) is a targeted, gradient-based adversarial method designed to manipulate the trend of forecasts generated by deep learning models in financial time-series analysis. Specifically targeting N-HiTS forecasting architectures, the GSA optimizes the endpoint slope of multi-step predictions, doubling or reversing the trajectory with minimal perturbation to the original input sequence. This method leverages a loss formulation focused on the forecast trend, enabling highly stealthy manipulation that circumvents conventional discriminators and traditional adversarial defense mechanisms (Luszczynski, 24 Nov 2025).
1. Attack Objective and Problem Formulation
The GSA is constructed for a white-box scenario in which the attacker possesses full access to the forecasting model $f$ and its gradients. Given a clean input time series $x$ (e.g., prices across 300 trading days), the model outputs a multi-step forecast $\hat{y} = f(x)$ over a horizon of $H$ future days. The adversary crafts a perturbed input $x'$ bounded by $\|x' - x\|_\infty \le \epsilon$, with the goal that the model's forecast slope at the final endpoints is doubled or reversed compared to the original prediction. The perturbation magnitude $\epsilon$ is constrained to remain inconspicuous, typically set to a low percentage of the median input value.
2. Mathematical Formulation and Optimization
The empirical slope is defined as $m(\hat{y}) = (\hat{y}_H - \hat{y}_1)/(H - 1)$, where $\hat{y}_1$ and $\hat{y}_H$ are the initial and final forecast outputs over the horizon $H$. The target direction $d \in \{+1, -1, 0\}$ encodes whether the adversary wishes to enforce an upward, downward, or flattened slope, respectively.
The GSA employs the following loss function:
- For slope augmentation/reversal ($d = \pm 1$): $\mathcal{L}(x') = -\lambda \, d \, m(f(x'))$, which steepens or reverses the forecast slope in the direction $d$.
- For slope neutralization ($d = 0$): $\mathcal{L}(x') = \lambda \, m(f(x'))^2$, which penalizes any residual slope.
The adversary minimizes $\mathcal{L}(x')$ subject to the perturbation constraint $\|x' - x\|_\infty \le \epsilon$.
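The slope and loss definitions above can be sketched in plain Python. The exact functional form of the loss is an assumption consistent with the stated objectives, and `empirical_slope`, `gsa_loss`, and the `lam` weighting are illustrative names, not the paper's identifiers:

```python
import numpy as np

def empirical_slope(y_hat):
    """Endpoint slope of a multi-step forecast: (y_H - y_1) / (H - 1)."""
    H = len(y_hat)
    return (y_hat[-1] - y_hat[0]) / (H - 1)

def gsa_loss(y_hat, d, lam=1.0):
    """Sketch of a GSA-style loss (assumed form).

    d = +1 / -1 steers the slope upward / downward (minimizing -lam * d * m);
    d = 0 penalizes any nonzero slope (lam * m**2) to flatten the trend.
    """
    m = empirical_slope(y_hat)
    if d == 0:
        return lam * m ** 2
    return -lam * d * m
```

Minimizing this loss pushes the forecast endpoints apart in the chosen direction (or together, for $d = 0$), which is exactly the quantity the attack optimizes.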
3. Iterative Attack Procedure
GSA is realized via iterative gradient-based optimization. Initialization sets $x'_0 = x$ and proceeds for $n$ iterations, each comprising:
- Gradient enablement for the current perturbed input $x'_t$.
- Forward pass to obtain the forecast $\hat{y}' = f(x'_t)$.
- Computation of the empirical slope $m(\hat{y}')$.
- Loss evaluation $\mathcal{L}$ per the GSA formulation.
- Backpropagation to compute the input gradient $\nabla_{x'} \mathcal{L}$.
- Update step $\delta = \alpha \cdot \mathrm{sign}(\nabla_{x'} \mathcal{L})$.
- Perturbed input update: $x'_{t+1} = x'_t - \delta$.
- Projection: clamp $x'_{t+1}$ to $[x - \epsilon, x + \epsilon]$.
- Gradient detachment for the next iteration.
Final output after $n$ steps is the adversarial series $x' = x'_n$.
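The iterative procedure resembles a PGD-style loop and can be sketched in PyTorch. `general_slope_attack` is a hypothetical name, the sign-based descent step and loss form are assumptions consistent with the steps above, and the model is any differentiable forecaster mapping an input series to a multi-step forecast:

```python
import torch

def general_slope_attack(model, x, d, eps, alpha, n_iters, lam=1.0):
    """PGD-style sketch of the General Slope Attack (assumed signature).

    model : maps an input series of shape (1, T) to a forecast (1, H)
    d     : +1 (steepen upward), -1 (reverse downward), 0 (flatten)
    eps   : L-inf perturbation budget; alpha : step size
    """
    x_adv = x.clone().detach()
    for _ in range(n_iters):
        x_adv.requires_grad_(True)           # gradient enablement
        y_hat = model(x_adv)                 # forward pass
        # empirical endpoint slope of the forecast
        m = (y_hat[..., -1] - y_hat[..., 0]) / (y_hat.shape[-1] - 1)
        # GSA-style loss: flatten (d == 0) or steer in direction d
        loss = lam * m.pow(2).sum() if d == 0 else -lam * d * m.sum()
        loss.backward()                      # gradient w.r.t. the input
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()    # descent step
            x_adv = torch.clamp(x_adv, x - eps, x + eps) # projection
        x_adv = x_adv.detach()               # gradient detachment
    return x_adv
```

With an identity "model" (forecast equals input), the loop pushes the first and last points of the series apart inside the $\epsilon$-ball, steepening the endpoint slope as intended.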
4. Attack Hyperparameters and Their Roles
Hyperparameters critically control attack effectiveness and stealth:
- $\epsilon$: Maximum perturbation (commonly 2% of the median adjusted price); modulates adversarial inconspicuity.
- $n$: Number of iterations; governs convergence and computational load (typically up to $50$).
- $\alpha$: Step size; higher values may induce oscillation, while smaller values promote stability.
- $\lambda$: Loss scaling and amplification; heightens slope sensitivity and balances the loss magnitude against the model's gradient landscape.
- $d$: Trend target; $+1$ for upward, $-1$ for downward, $0$ for slope flattening.
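A hedged configuration sketch tying the hyperparameters together. Only the 2%-of-median rule for $\epsilon$ comes from the text; the remaining defaults are illustrative placeholders, not the paper's settings:

```python
import numpy as np

def gsa_hyperparameters(prices, d=1, n_iters=50, alpha=1e-3, lam=1.0):
    """Assemble GSA settings for a price series (illustrative helper).

    eps follows the 2%-of-median rule from the text; n_iters, alpha,
    and lam are placeholder values, not the paper's reported choices.
    """
    eps = 0.02 * np.median(prices)  # 2% of the median input value
    return {"eps": eps, "n_iters": n_iters, "alpha": alpha,
            "lam": lam, "d": d}
```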
5. Empirical Evaluation and Stealth
GSA efficacy is validated under controlled conditions, with $\epsilon$ set to 2% of the median adjusted price and the remaining hyperparameters ($n$, $\alpha$, $\lambda$) held fixed. The attack is benchmarked against classical approaches (FGSM, BIM, TIM) and unattacked predictions. The key outcomes are:
| Attack | MAE | RMSE | MAPE | Gen.Slope |
|---|---|---|---|---|
| Normal | 2.15 | 2.72 | 3.82e-2 | 3.37e-2 |
| FGSM | 2.57 | 3.21 | 4.51e-2 | 3.22e-2 |
| BIM | 3.38 | 3.99 | 5.68e-2 | 3.48e-2 |
| TIM (Up) | 2.49 | 3.21 | 4.52e-2 | 3.72e-2 |
| GSA (Up) | 2.26 | 2.88 | 4.03e-2 | 6.76e-2 |
| GSA (Down) | 2.23 | 2.83 | 3.89e-2 | -1.68e-4 |
GSA (Up) achieves a doubling of the endpoint slope (from $0.0337$ to $0.0676$) with a marginal increase in MAE (from $2.15$ to $2.26$). Downward slope reversal is also realized, with GSA (Down) driving the generalized slope slightly negative ($-1.68 \times 10^{-4}$). Slope changes follow an approximately linear relationship with the perturbation budget. Stealth evaluation reveals that CNN-based discriminators achieve only near-chance accuracy and low specificity, indicating limited detectability and robust adversarial obfuscation.
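The tabulated metrics can be reproduced from a forecast pair with a short helper, assuming standard definitions of MAE, RMSE, and MAPE and the endpoint-slope definition from Section 2 (`forecast_metrics` is an illustrative name):

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Metrics as used in the evaluation table (assumed definitions)."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))                 # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))          # root mean squared error
    mape = np.mean(np.abs(err / y_true))       # mean absolute pct. error
    # endpoint slope of the forecast, as in Section 2
    gen_slope = (y_pred[-1] - y_pred[0]) / (len(y_pred) - 1)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "Gen.Slope": gen_slope}
```

Note that MAE/RMSE/MAPE measure pointwise deviation while Gen.Slope is a property of the forecast alone, which is why GSA can shift the latter sharply while the former barely move.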
6. Comparison With Established Adversarial Methods
Classical adversarial algorithms such as FGSM (Fast Gradient Sign Method), BIM (Basic Iterative Method), and TIM (Trend Induction Method) primarily optimize pointwise forecast error (MAE) without explicit trend manipulation. TIM achieves a shift in the temporal pattern, but via ad hoc means. In contrast, GSA directly enforces a specified slope within the loss layer, obviating the need for synthetic forecast generation. The net benefits are:
- Fine-grained trend control, enabling explicit slope manipulation (doubling, reversal, flattening).
- Minimized disturbance in the central forecast interval, with perturbation focused on endpoints.
- Enhanced stealth, since the forecast shape is largely preserved except near the endpoints.
7. Applications and Implications
GSA has demonstrated practical utility as both an attack evaluation benchmark and a tool for assessing adversarial robustness in ML-driven financial forecasting systems. This suggests broader relevance for adversarial research in time-series contexts beyond finance, particularly in security-sensitive domains where trend manipulation is consequential. Its lightweight framework enables rapid deployment and investigation of model resilience. A plausible implication is the necessity for holistic ML security, extending defensive focus from model internals to the complete data processing pipeline, particularly in view of malware-driven adversarial injections (Luszczynski, 24 Nov 2025).