Inherently Interpretable Time Series Classification via Multiple Instance Learning (2311.10049v3)
Abstract: Conventional Time Series Classification (TSC) methods are often black boxes that obscure any inherent interpretation of their decision-making processes. In this work, we leverage Multiple Instance Learning (MIL) to overcome this issue, and propose a new framework called MILLET: Multiple Instance Learning for Locally Explainable Time series classification. We apply MILLET to existing deep learning TSC models and show how they become inherently interpretable without compromising (and in some cases, even improving) predictive performance. We evaluate MILLET on 85 UCR TSC datasets and also present a novel synthetic dataset that is specially designed to facilitate interpretability evaluation. On these datasets, we show that MILLET quickly produces sparse explanations of higher quality than those of other well-known interpretability methods. To the best of our knowledge, our work with MILLET, which is available on GitHub (https://github.com/JAEarly/MILTimeSeriesClassification), is the first to develop general MIL methods for TSC and apply them to an extensive variety of domains.
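The core idea behind applying MIL to TSC — treating each time step as an instance in a bag whose label is the series label, so that per-instance weights double as an explanation — can be sketched with a simple attention-based MIL pooling head. The snippet below is an illustrative NumPy sketch under assumed shapes and parameter names (`w_attn`, `w_clf` are hypothetical), not the MILLET implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_mil_pool(instance_feats, w_attn, w_clf):
    """Pool per-time-step features into a bag-level prediction.

    instance_feats: (T, D) features, one row per time step (instance).
    w_attn:         (D,)  attention scoring vector (hypothetical parameter).
    w_clf:          (D, C) bag-level classifier weights (hypothetical).

    Returns bag-level class scores (C,) and per-time-step attention
    weights (T,), which serve as an inherent, local explanation.
    """
    scores = instance_feats @ w_attn        # (T,) unnormalised attention
    alpha = softmax(scores)                 # (T,) weights over time steps
    bag_embedding = alpha @ instance_feats  # (D,) attention-weighted average
    bag_logits = bag_embedding @ w_clf      # (C,) class scores
    return bag_logits, alpha

# Toy usage: a length-50 series with 8-dim features and 2 classes.
rng = np.random.default_rng(0)
T, D, C = 50, 8, 2
feats = rng.normal(size=(T, D))
logits, alpha = attention_mil_pool(feats, rng.normal(size=D), rng.normal(size=(D, C)))
```

Here `alpha` is a distribution over time steps: high-weight steps are the regions the model attended to when classifying the series, which is the kind of sparse, per-time-step explanation the abstract describes.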