
Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models (2401.16521v1)

Published 29 Jan 2024 in cs.LG and cs.AI

Abstract: This work evaluates interpretability methods for time-series deep learning. Sensitivity analysis assesses how input changes affect the output, constituting a key component of interpretation. Among post-hoc interpretation methods such as back-propagation, perturbation, and approximation, my work investigates perturbation-based sensitivity analysis methods on modern Transformer models to benchmark their performance. Specifically, my work answers three research questions: 1) Do different sensitivity analysis (SA) methods yield comparable outputs and attribute importance rankings? 2) Using the same SA method, do different deep learning (DL) models affect the output of the sensitivity analysis? 3) How well do the results from SA methods align with the ground truth?
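The perturbation-based sensitivity analysis the abstract refers to can be illustrated with a simple feature-occlusion loop: replace one input feature with a baseline value, re-run the model, and score the feature by how much the output changes. The `model`, the zero baseline, and the mean-absolute-difference score below are illustrative assumptions for this sketch, not the paper's exact setup.

```python
import numpy as np

def perturbation_sensitivity(model, x, baseline=0.0):
    """Score each input feature by the output change observed when
    that feature is replaced with a baseline value (occlusion)."""
    y0 = model(x)
    scores = np.empty(x.shape[-1])
    for j in range(x.shape[-1]):
        xp = x.copy()
        xp[..., j] = baseline              # occlude feature j at all time steps
        scores[j] = np.abs(model(xp) - y0).mean()
    return scores

# Toy stand-in "model": a linear readout over the last time step,
# so the recovered importance should track the absolute weights.
rng = np.random.default_rng(0)
w = np.array([3.0, 0.0, 1.0])
model = lambda x: x[-1] @ w

x = rng.normal(size=(10, 3))               # 10 time steps, 3 features
scores = perturbation_sensitivity(model, x)
ranking = np.argsort(-scores)              # most important feature first
```

With this toy model, the feature with zero weight receives a zero sensitivity score and lands last in the ranking; comparing such rankings across SA methods and across DL models is the kind of check the paper's three research questions describe.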




