Interpretable deep learning for spatio-temporal data mining

Develop interpretable deep learning models for spatio-temporal data mining that provide human-understandable explanations of model behavior across diverse spatio-temporal data types and representations, beyond current attention-based approaches.

Background

Deep learning models used for spatio-temporal data mining are often treated as black boxes, which limits trust and hinders deployment in sensitive domains. The complexity of spatio-temporal data types and their representations (e.g., sequences, graphs, tensors) makes interpretability especially challenging compared to image or text data.

Prior work has incorporated attention mechanisms to expose aspects such as periodicity and local spatial dependency, but a general approach to building more interpretable deep models for spatio-temporal tasks has not been established.
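To make the attention-based interpretability mentioned above concrete, the sketch below implements a minimal scaled dot-product temporal attention in NumPy and returns its weights for inspection. Everything here is a hypothetical illustration, not code from the survey: the function names, the toy 48-step hourly embeddings, and the planted daily-lag pattern are all assumptions chosen to show how attention weights can surface periodicity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def temporal_attention(query, keys, values):
    """Scaled dot-product attention over T past time steps.

    Returning the weights alongside the output is what makes the
    mechanism inspectable: a weight peak at a 24-step lag in hourly
    data, for instance, suggests the model has learned daily
    periodicity.
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # (T,) similarity per time step
    weights = softmax(scores)            # attention distribution over time
    return weights @ values, weights     # weighted summary + explanation

# Toy setup (hypothetical): 48 hourly embeddings with a planted
# daily-lag signal at steps 23 and 47.
rng = np.random.default_rng(0)
T, d = 48, 8
keys = rng.normal(size=(T, d))
keys[[23, 47]] += 2.0                    # inject the periodic pattern
values = rng.normal(size=(T, d))
query = keys[[23, 47]].mean(axis=0)      # current state resembles those steps

out, w = temporal_attention(query, keys, values)
print("top attended steps:", np.argsort(w)[-2:])
```

Plotting `w` against time lags is the usual way such attention weights are presented as an explanation; the open problem is that this only exposes what a single attention layer attends to, not why the full model behaves as it does.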

References

Although attention mechanisms are used in some previous works to increase the model interpretability such as periodicity and local spatial dependency, how to build a more interpretable deep learning model for STDM tasks is still not well studied and remains an open problem.

Deep Learning for Spatio-Temporal Data Mining: A Survey (arXiv:1906.04928, Wang et al., 2019), Section VI, Open Problems (Interpretable models)