Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models (2401.16521v1)
Abstract: This work evaluates interpretability methods for time-series deep learning models. Sensitivity analysis assesses how changes to the input affect the output and is a key component of interpretation. Among post-hoc interpretation methods such as back-propagation, perturbation, and approximation, my work investigates perturbation-based sensitivity analysis methods on modern Transformer models to benchmark their performance. Specifically, my work answers three research questions: 1) Do different sensitivity analysis (SA) methods yield comparable outputs and attribute importance rankings? 2) Using the same sensitivity analysis method, do different deep learning (DL) models affect the output of the sensitivity analysis? 3) How well do the results from sensitivity analysis methods align with the ground truth?
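All three research questions hinge on computing perturbation-based sensitivity scores and the feature rankings they induce. As a minimal sketch (not the paper's implementation), the following occlusion-style example replaces one input feature at a time with a baseline and ranks features by the resulting change in the model output; `model_fn`, the window shape, and the baseline choice are illustrative assumptions.

```python
# Minimal sketch of perturbation-based sensitivity analysis (feature occlusion)
# for a time-series model. The model, data shapes, and baseline choice are
# illustrative assumptions, not the paper's code.
import numpy as np

def occlusion_sensitivity(model_fn, x, baseline="mean"):
    """Rank input features by how much occluding them changes the prediction.

    model_fn : callable mapping an array of shape (time, features) to a scalar
               or vector prediction.
    x        : input window of shape (time, features).
    baseline : value used to replace an occluded feature ("mean" or "zero").
    """
    reference = model_fn(x)
    n_features = x.shape[1]
    scores = np.zeros(n_features)
    for j in range(n_features):
        x_pert = x.copy()
        # Replace the j-th feature with a neutral baseline over the whole window.
        x_pert[:, j] = x[:, j].mean() if baseline == "mean" else 0.0
        # Sensitivity score = mean absolute change in the model output.
        scores[j] = np.mean(np.abs(model_fn(x_pert) - reference))
    ranking = np.argsort(scores)[::-1]  # most important feature first
    return scores, ranking

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(36, 4))  # e.g., 36 time steps, 4 input features
    toy_model = lambda w: w[:, 0].sum() + 0.1 * w[:, 2].sum()  # stand-in forecaster
    scores, ranking = occlusion_sensitivity(toy_model, x)
    print("sensitivity scores:", scores)
    print("feature ranking  :", ranking)
```

Comparing the rankings produced by different SA methods (RQ1) or by the same method across different DL models (RQ2) then reduces to comparing these ranking vectors, e.g., with a rank-correlation measure.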