Interactive Mars Image Content-Based Search with Interpretable Machine Learning
Abstract: The NASA Planetary Data System (PDS) hosts millions of images of planets, moons, and other bodies collected across many missions. The ever-growing volume of data and user engagement demands an interpretable content classification system to support scientific discovery and individual curiosity. In this paper, we leverage a prototype-based architecture that enables users to understand and validate the evidence used by a classifier trained on images from the Mars Science Laboratory (MSL) Curiosity rover mission. In addition to providing explanations, we investigate the diversity and correctness of the evidence used by the content-based classifier. The work presented in this paper will be deployed on the PDS Image Atlas, replacing its non-interpretable counterpart.