
Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing

Published 21 Feb 2024 in cs.LG (arXiv:2402.13791v2)

Abstract: In recent years, black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing. Despite the potential benefits of uncovering the inner workings of these models with explainable AI, a comprehensive overview summarizing the explainable AI methods used and their objectives, findings, and challenges in remote sensing applications is still missing. In this paper, we address this gap by performing a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches and emerging directions that tackle specific remote sensing challenges. We also reveal the common patterns of explanation interpretation, discuss the extracted scientific insights, and reflect on the approaches used for the evaluation of explainable AI methods. As such, our review provides a complete summary of the state-of-the-art of explainable AI in remote sensing. Further, we give a detailed outlook on the challenges and promising research directions, representing a basis for novel methodological development and a useful starting point for new researchers in the field.

  293. “Local Attention Networks for Occluded Airplane Detection in Remote Sensing Images” In IEEE Geoscience and Remote Sensing Letters 17.3, 2020, pp. 381–385 DOI: 10.1109/LGRS.2019.2924822
  294. “SHAP-Based Interpretable Object Detection Method for Satellite Imagery” In Remote Sensing 14.9, 2022 DOI: 10.3390/rs14091970
  295. Mandeep, Husanbir Singh Pannu and Avleen Malhi “Deep Learning-Based Explainable Target Classification for Synthetic Aperture Radar Images” In 2020 13th International Conference on Human System Interaction (HSI), 2020, pp. 34–39 DOI: 10.1109/HSI49210.2020.9142658
  296. “LIME-Assisted Automatic Target Recognition with SAR Images: Toward Incremental Learning and Explainability” In IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 16, 2023, pp. 9175–9192 DOI: 10.1109/JSTARS.2023.3318675
  297. “SAR Target Classification Based on Integration of ASC Parts Model and Deep Learning Algorithm” In IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, 2021, pp. 10213–10225 DOI: 10.1109/JSTARS.2021.3116979
  298. “Explainable Identification and Mapping of Trees Using UAV RGB Image and Deep Learning” In Scientific Reports 11.1 Nature Publishing Group, 2021, pp. 903 DOI: 10.1038/s41598-020-79653-9
  299. “Studying and Exploiting the Relationship between Model Accuracy and Explanation Quality” In Machine Learning and Knowledge Discovery in Databases. Research Track Cham: Springer International Publishing, 2021, pp. 699–714
  300. T.-A. Nguyen, B. Kellenberger and D. Tuia “Mapping Forest in the Swiss Alps Treeline Ecotone with Explainable Deep Learning” In Remote Sensing of Environment 281, 2022 DOI: 10.1016/j.rse.2022.113217
  301. “Assessment of the Regeneration of Landslides Areas Using Unsupervised and Supervised Methods and Explainable Machine Learning Models” In Landslides, 2023 DOI: 10.1007/s10346-023-02154-z
  302. “Widespread Increasing Vegetation Sensitivity to Soil Moisture” In Nature Communications 13.1 Nature Publishing Group, 2022, pp. 3959 DOI: 10.1038/s41467-022-31667-9
  303. “Toward a Better Understanding of Coastal Salt Marsh Mapping: A Case from China Using Dual-Temporal Images” In Remote Sensing of Environment 295, 2023 DOI: 10.1016/j.rse.2023.113664
  304. “Identifying Mangroves through Knowledge Extracted from Trained Random Forest Models: An Interpretable Mangrove Mapping Approach (IMMA)” In ISPRS Journal of Photogrammetry and Remote Sensing 201, 2023, pp. 209–225 DOI: 10.1016/j.isprsjprs.2023.05.025
  305. “Higher Crop Diversity in Less Diverse Landscapes”, 2023 DOI: 10.21203/rs.3.rs-3410387/v1
  306. “Features Predisposing Forest to Bark Beetle Outbreaks and Their Dynamics during Drought” In Forest Ecology and Management 523, 2022, pp. 120480 DOI: 10.1016/j.foreco.2022.120480
  307. Kyle A. Hilburn, Imme Ebert-Uphoff and Steven D. Miller “Development and Interpretation of a Neural-Network-Based Synthetic Radar Reflectivity Estimator Using GOES-R Satellite Observations” In Journal of Applied Meteorology and Climatology 60.1 American Meteorological Society, 2020, pp. 3–21 DOI: 10.1175/JAMC-D-20-0084.1
  308. Zane K. Martin, Elizabeth A. Barnes and Eric Maloney “Using Simple, Explainable Neural Networks to Predict the Madden-Julian Oscillation” In Journal of Advances in Modeling Earth Systems 14.5, 2022 DOI: 10.1029/2021MS002774
  309. Kirsten J. Mayer and Elizabeth A. Barnes “Subseasonal Forecasts of Opportunity Identified by an Explainable Neural Network” In Geophysical Research Letters 48.10, 2021 DOI: 10.1029/2020GL092092
  310. “Predictor Selection for CNN-based Statistical Downscaling of Monthly Precipitation” In Advances in Atmospheric Sciences, 2023 DOI: 10.1007/s00376-022-2119-x
  311. “Classifying Precipitation from GEO Satellite Observations: Diagnostic Model” In Quarterly Journal of the Royal Meteorological Society 147.739, 2021, pp. 3318–3334 DOI: 10.1002/qj.4130
  312. “Understanding the Drivers of Drought Onset and Intensification in the Canadian Prairies: Insights from Explainable Artificial Intelligence (XAI)” In Journal of Hydrometeorology -1.aop American Meteorological Society, 2023 DOI: 10.1175/JHM-D-23-0036.1
  313. “Comparison of Data-Driven Methods for Linking Extreme Precipitation Events to Local and Large-Scale Meteorological Variables” In Stochastic Environmental Research and Risk Assessment 37.11, 2023, pp. 4337–4357 DOI: 10.1007/s00477-023-02511-3
  314. “MIIDAPS-AI: An Explainable Machine-Learning Algorithm for Infrared and Microwave Remote Sensing and Data Assimilation Preprocessing - Application to LEO and GEO Sensors” In IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, 2021, pp. 8566–8576 DOI: 10.1109/JSTARS.2021.3104389
  315. “AN EXPLAINABLE CONVOLUTIONAL AUTOENCODER MODEL for UNSUPERVISED CHANGE DETECTION” 43, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, 2020, pp. 1513–1519 DOI: 10.5194/isprs-archives-XLIII-B2-2020-1513-2020
  316. “True Global Error Maps for SMAP, SMOS, and ASCAT Soil Moisture Data Based on Machine Learning and Triple Collocation Analysis” In Remote Sensing of Environment 298, 2023 DOI: 10.1016/j.rse.2023.113776
  317. Dae-Seong Lee, Da-Yeong Lee and Young-Seuk Park “Interpretable Machine Learning Approach to Analyze the Effects of Landscape and Meteorological Factors on Mosquito Occurrences in Seoul, South Korea” In Environmental Science and Pollution Research, 2022 DOI: 10.1007/s11356-022-22099-5
  318. “Using Explainable Machine Learning to Understand How Urban Form Shapes Sustainable Mobility” In Transportation Research Part D: Transport and Environment 111, 2022, pp. 103442 DOI: 10.1016/j.trd.2022.103442
  319. “Data-Driven and Interpretable Machine-Learning Modeling to Explore the Fine-Scale Environmental Determinants of Malaria Vectors Biting Rates in Rural Burkina Faso” In Parasites and Vectors 14.1, 2021 DOI: 10.1186/s13071-021-04851-x
  320. “Pitfalls to Avoid When Interpreting Machine Learning Models” In XXAI: Extending Explainable AI beyond Deep Models and Classifiers, ICML 2020 Workshop, 2020
  321. “The Disagreement Problem in Explainable Machine Learning: A Practitioner’s Perspective” arXiv, 2022 DOI: 10.48550/arXiv.2202.01602
  322. Cynthia Rudin “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead” In Nature Machine Intelligence 1.5 Nature Publishing Group, 2019, pp. 206–215 DOI: 10.1038/s42256-019-0048-x
  323. Leon Sixt, Maximilian Granz and Tim Landgraf “When Explanations Lie: Why Many Modified BP Attributions Fail” In 37th International Conference on Machine Learning, ICML 2020 PartF16814 International Machine Learning Society (IMLS), 2020, pp. 8993–9004 DOI: 10.48550/arxiv.1912.09818
  324. Antonios Mamalakis, Elizabeth A. Barnes and Imme Ebert-Uphoff “Investigating the Fidelity of Explainable Artificial Intelligence Methods for Applications of Convolutional Neural Networks in Geoscience” In Artificial Intelligence for the Earth Systems 1.4, 2022, pp. e220012 DOI: 10.1175/AIES-D-22-0012.1
  325. “Finding the Right XAI Method – A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science” arXiv, 2023 DOI: 10.48550/arXiv.2303.00652
  326. “On Baselines for Local Feature Attributions” arXiv, 2021 DOI: 10.48550/arXiv.2101.00905
  327. Antonios Mamalakis, Elizabeth A. Barnes and Imme Ebert-Uphoff “Carefully Choose the Baseline: Lessons Learned from Applying XAI Attribution Methods for Regression Tasks in Geoscience” In Artificial Intelligence for the Earth Systems 2.1 American Meteorological Society, 2023 DOI: 10.1175/AIES-D-22-0058.1
  328. “How Cognitive Biases Affect XAI-assisted Decision-making: A Systematic Review” In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’22 New York, NY, USA: Association for Computing Machinery, 2022, pp. 78–91 DOI: 10.1145/3514094.3534164
  329. Raymond S. Nickerson “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises” In Review of General Psychology 2.2, 1998, pp. 175–220 DOI: 10.1037/1089-2680.2.2.175
  330. “What Do We Want from Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research” In Artificial Intelligence 296, 2021, pp. 103473 DOI: 10.1016/j.artint.2021.103473
  331. “Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark” arXiv, 2022 DOI: 10.48550/arXiv.2207.14160
  332. “OpenXAI: Towards a Transparent Evaluation of Model Explanations” arXiv, 2023 DOI: 10.48550/arXiv.2206.11104
  333. “Transitioning to Human Interaction with AI Systems: New Challenges and Opportunities for HCI Professionals to Enable Human-Centered AI” In International Journal of Human–Computer Interaction 39.3 Taylor & Francis, 2023, pp. 494–518 DOI: 10.1080/10447318.2022.2041900
  334. Leila Arras, Ahmed Osman and Wojciech Samek “CLEVR-XAI: A Benchmark Dataset for the Ground Truth Evaluation of Neural Network Explanations” In Information Fusion 81, 2022, pp. 14–40 DOI: 10.1016/j.inffus.2021.11.008
  335. Antonios Mamalakis, Imme Ebert-Uphoff and Elizabeth A. Barnes “Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset” In Environmental Data Science 1, 2022, pp. e8 DOI: 10.1017/eds.2022.7
  336. “Detecting Climate Signals Using Explainable AI with Single-Forcing Large Ensembles” In Journal of Advances in Modeling Earth Systems 13.6, 2021 DOI: 10.1029/2021MS002464
  337. “Synthetic Data Generation to Mitigate the Low/No-Shot Problem in Machine Learning” In 2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2019, pp. 1–7 DOI: 10.1109/AIPR47015.2019.9174596
  338. “RarePlanes: Synthetic Data Takes Flight” In 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 207–217 DOI: 10.1109/WACV48630.2021.00025
  339. “Efficient Generation of Image Chips for Training Deep Learning Algorithms” In Automatic Target Recognition XXVII 10202 SPIE, 2017, pp. 15–23 DOI: 10.1117/12.2261702
  340. “SyntEO: Synthetic Dataset Generation for Earth Observation and Deep Learning – Demonstrated for Offshore Wind Farm Detection” In ISPRS Journal of Photogrammetry and Remote Sensing 189, 2022, pp. 163–184 DOI: 10.1016/j.isprsjprs.2022.04.029
  341. “Contrastive-Regulated CNN in the Complex Domain: A Method to Learn Physical Scattering Signatures from Flexible PolSAR Images” In IEEE Transactions on Geoscience and Remote Sensing 57.12, 2019, pp. 10116–10135 DOI: 10.1109/TGRS.2019.2931620
  342. “But Are You Sure? An Uncertainty-Aware Perspective on Explainable AI”
  343. “Reliable Post Hoc Explanations: Modeling Uncertainty in Explainability” In Advances in Neural Information Processing Systems 34 Curran Associates, Inc., 2021, pp. 9391–9404 URL: https://proceedings.neurips.cc/paper/2021/hash/4e246a381baf2ce038b3b0f82c7d6fb4-Abstract.html
  344. “Uncertainty Exploration: Towards Explainable SAR Target Detection” In IEEE Transactions on Geoscience and Remote Sensing, 2023, pp. 1–1 DOI: 10.1109/TGRS.2023.3247898
  345. “Feedback-Assisted Automatic Target and Clutter Discrimination Using a Bayesian Convolutional Neural Network for Improved Explainability in SAR Applications” In Remote Sensing 14.23 MDPI, 2022 DOI: 10.3390/rs14236096
  346. Judea Pearl “Causality” Cambridge university press, 2009
  347. “On causal and anticausal learning” In International Conference on Machine Learning, 2012 URL: https://api.semanticscholar.org/CorpusID:17675972
  348. “Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 4088–4099
  349. Gunnar Carlsson “Topology and Data” In Bulletin of the American Mathematical Society 46.2, 2009, pp. 255–308 DOI: 10.1090/S0273-0979-09-01249-X
  350. Gunnar Carlsson and Rickard Brüel Gabrielsson “Topological Approaches to Deep Learning” In Topological Data Analysis 15 Cham: Springer International Publishing, 2020, pp. 119–146 DOI: 10.1007/978-3-030-43408-3˙5
  351. Felix Hensel, Michael Moor and Bastian Rieck “A Survey of Topological Machine Learning Methods” In Frontiers in Artificial Intelligence 4, 2021
  352. Ludovic Duponchel “When Remote Sensing Meets Topological Data Analysis” In Journal of Spectral Imaging, 2018, pp. a1 DOI: 10.1255/jsi.2018.a1
  353. “Topological Learning for Semi-Supervised Anomaly Detection in Hyperspectral Imagery” 2019-July, Proceedings of the IEEE National Aerospace Electronics Conference, NAECON, 2019, pp. 560–564 DOI: 10.1109/NAECON46414.2019.9058127
  354. “Benchmarking Deep Learning Interpretability in Time Series Predictions” In Advances in Neural Information Processing Systems 2020-Decem.NeurIPS, 2020, pp. 1–12 arXiv:2010.13924
  355. “Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions” In IEEE Access 10, 2022, pp. 100700–100724 DOI: 10.1109/ACCESS.2022.3207765
  356. “From” where” to” what”: Towards human-understandable explanations through concept relevance propagation” In arXiv preprint arXiv:2206.03208, 2022
  357. “xxAI-beyond explainable artificial intelligence” In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, 2020, pp. 3–10 Springer
  358. “On completeness-aware concept-based explanations in deep neural networks” In Advances in neural information processing systems 33, 2020, pp. 20554–20565
  359. Ying Ji, Yu Wang and Jien Kato “Spatial-temporal Concept based Explanation of 3D ConvNets” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 15444–15453
  360. “Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 10932–10941
  361. “Earthformer: Exploring Space-Time Transformers for Earth System Forecasting” arXiv, 2023 DOI: 10.48550/arXiv.2207.05833
  362. Michail Tarasiou, Erik Chavez and Stefanos Zafeiriou “ViTs for SITS: Vision Transformers for Satellite Image Time Series” arXiv, 2023 DOI: 10.48550/arXiv.2301.04944
  363. “ViViT: A Video Vision Transformer” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 6836–6846
  364. “Multiscale Vision Transformers” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 6824–6835
  365. “Video Swin Transformer” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 3202–3211
  366. “Video Transformer Network” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3163–3172
  367. Petar Veličković “Everything is connected: Graph neural networks” In Current Opinion in Structural Biology 79 Elsevier, 2023, pp. 102538
  368. “Graph neural networks: A review of methods and applications” In AI open 1 Elsevier, 2020, pp. 57–81
  369. Thomas N Kipf and Max Welling “Semi-supervised classification with graph convolutional networks” In arXiv preprint arXiv:1609.02907, 2016
  370. “Graph convolutional networks for hyperspectral image classification” In IEEE Transactions on Geoscience and Remote Sensing 59.7 IEEE, 2020, pp. 5966–5978
  371. “Dual interactive graph convolutional networks for hyperspectral image classification” Publisher Copyright: IEEE Copyright: Copyright 2021 Elsevier B.V., All rights reserved. In IEEE Transactions on Geoscience and Remote Sensing 60 IEEE, Institute of ElectricalElectronics Engineers, 2021 DOI: 10.1109/TGRS.2021.3075223
  372. “Two-Branch Deeper Graph Convolutional Network for Hyperspectral Image Classification” In IEEE Transactions on Geoscience and Remote Sensing 61 IEEE, 2023, pp. 1–14
  373. “Nonlocal graph convolutional networks for hyperspectral image classification” In IEEE Transactions on Geoscience and Remote Sensing 58.12 IEEE, 2020, pp. 8246–8257
  374. “Hyperspectral Image Classification With Contrastive Graph Convolutional Network” In IEEE Transactions on Geoscience and Remote Sensing 61 IEEE, 2023, pp. 1–15
  375. Ninghao Liu, Qizhang Feng and Xia Hu “Interpretability in Graph Neural Networks” In Graph Neural Networks: Foundations, Frontiers, and Applications Singapore: Springer Singapore, 2022, pp. 121–147
  376. “Explainability in graph neural networks: A taxonomic survey” In IEEE transactions on pattern analysis and machine intelligence 45.5 IEEE, 2022, pp. 5782–5799
  377. “Self-supervised learning in remote sensing: A review” In arXiv preprint arXiv:2206.13188, 2022
  378. “Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models” In Medical Image Computing and Computer Assisted Intervention – MICCAI 2023, Lecture Notes in Computer Science Cham: Springer Nature Switzerland, 2023, pp. 596–606 DOI: 10.1007/978-3-031-43895-0˙56
  379. Sina Mohseni, Niloofar Zarei and Eric D. Ragan “A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems” In ACM Transactions on Interactive Intelligent Systems 11.3-4, 2021, pp. 24:1–24:45 DOI: 10.1145/3387166

Summary

  • The paper identifies a growing trend in applying xAI methods, notably SHAP and CAM, to enhance transparency in remote sensing applications.
  • The paper highlights how adaptations in xAI techniques address RS-specific challenges such as scale, spectral properties, and temporal dependencies.
  • The paper calls for standardized, quantitative evaluation frameworks to improve trust and validate xAI methods in ML-driven Earth Observation.


Introduction to the Study

Recent years have seen a surge in the application of machine learning (ML) methods, notably black-box models, across a wide range of Earth Observation (EO) tasks. These models, while powerful, lack inherent transparency, making it difficult to fully interpret or trust their decisions. This gap underscores the need for explainable AI (xAI) methods within the remote sensing (RS) domain, which this paper addresses through a comprehensive systematic review.

Key Findings and Methodological Insights

The analysis, which covers method utilization, development trends, and practical implications, reveals a sharp increase in publications applying xAI to RS, indicating growing interest in the field. Among the xAI methods applied, SHapley Additive exPlanations (SHAP) stands out as the most frequently used, owing to its model-agnostic nature and its ability to provide both local and global explanations.
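To make the typical SHAP workflow concrete, the hedged sketch below applies the `shap` package to a tree-ensemble regressor trained on synthetic tabular features; the feature names, model choice, and data are illustrative assumptions, not taken from any of the reviewed studies.

```python
# Minimal, illustrative SHAP workflow for a tabular RS-style regression task.
# Feature names and synthetic data are assumptions for demonstration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical predictors: NDVI, land-surface temperature, elevation
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)  # synthetic target

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)           # local explanations, one row per sample
global_importance = np.abs(shap_values).mean(0)  # global ranking via mean |SHAP|
print(dict(zip(["NDVI", "LST", "elevation"], global_importance)))
```

Averaging absolute SHAP values over samples, as in the last step, is one common way to aggregate local attributions into a global ranking of predictors.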

One pivotal finding is the adoption of model-specific xAI methods, particularly Class Activation Mapping (CAM) variants, which have seen tailored modifications to address unique RS challenges such as scale, topology, spectral properties, and temporal dependencies. These adaptations highlight the need for xAI methods that account for the characteristic complexities of RS data.
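As a point of reference for the CAM family, the sketch below implements vanilla Grad-CAM with PyTorch forward/backward hooks; the ResNet-18 backbone, the hooked layer, and the three-band input are assumptions for illustration and do not correspond to any specific RS adaptation discussed in the review.

```python
# Illustrative Grad-CAM for a CNN scene classifier (assumed ResNet-18 backbone).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()       # untrained backbone, illustration only
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block, a common choice for Grad-CAM.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)             # stand-in for an RGB image patch
scores = model(x)
scores[0, scores.argmax()].backward()        # gradient of the predicted class score

# Weight each feature map by its spatially averaged gradient, then apply ReLU.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) saliency map over the input
```

RS-specific variants would typically alter the input channels and the hooked layer to accommodate multispectral or temporal data; this baseline only illustrates the core mechanism.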

Challenges in Applying xAI to RS

Critical challenges remain in fully integrating xAI within RS, particularly concerning data properties unique to RS such as scale and temporal dependencies. These challenges underscore the limitations of current xAI methods, which are primarily designed for natural images and do not transfer directly to RS imagery. Furthermore, the review identifies a gap in methodological developments for handling the temporal and spectral richness of RS data.

Evaluation and Validation of xAI Methods

A noteworthy aspect of the existing literature is its reliance on anecdotal evidence for evaluating xAI outcomes, in the absence of a standardized and robust evaluation framework. This review calls attention to the necessity of quantitative metrics and user studies to validate and compare xAI methods objectively. Fostering the development of such evaluation frameworks therefore emerges as a critical need for moving the field forward.
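One family of quantitative checks used in the broader xAI evaluation literature is perturbation-based faithfulness testing. The hedged sketch below computes a simple deletion curve by progressively masking the highest-attribution pixels and recording the drop in the predicted score; the `model`, `image`, and `attribution` inputs are assumed placeholders, and zero-masking is only one of several possible baselines.

```python
# Illustrative deletion (pixel-flipping) test: mask the most important pixels
# first and track how the model's confidence degrades. A faithful attribution
# should produce a fast drop. All inputs here are assumed placeholders.
import torch

def deletion_curve(model, image, attribution, target_class, steps=10):
    """image: (1, C, H, W); attribution: (H, W) importance map."""
    model.eval()
    order = attribution.flatten().argsort(descending=True)  # most important first
    masked = image.clone()
    scores = []
    chunk = max(1, order.numel() // steps)
    with torch.no_grad():
        for i in range(steps + 1):
            logits = model(masked)
            scores.append(torch.softmax(logits, dim=1)[0, target_class].item())
            idx = order[i * chunk:(i + 1) * chunk]
            masked.view(1, image.shape[1], -1)[..., idx] = 0.0  # zero-baseline masking
    return scores  # decreasing scores indicate a faithful attribution
```

The area under such a curve (or under the complementary insertion curve) can then serve as one comparable score across attribution methods, complementing qualitative visual inspection.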

Practical Implications and Future Directions

This review illuminates the practical value of xAI in RS, extending beyond model interpretation to include enhancing model reliability, facilitating scientific discovery, and ensuring compliance with regulatory standards. Furthermore, it signals promising research directions, particularly in developing xAI approaches tailored to the RS domain, standardizing evaluation methodologies, and integrating xAI with physics-aware ML and uncertainty quantification.

Concluding Remarks

In sum, this study provides a foundational overview of the state-of-the-art in xAI application within the RS domain. By highlighting key trends, challenges, and potential research directions, it sets the stage for future methodological advancements that could significantly impact how RS data is analyzed and interpreted. Bridging the identified gaps and developing xAI methods attuned to the nuances of RS data promises to unlock deeper insights and foster trust in ML-driven EO applications.
