Taking a PEEK into YOLOv5 for Satellite Component Recognition via Entropy-based Visual Explanations (2311.01703v2)
Abstract: The escalating risk of collisions and the accumulation of space debris in Low Earth Orbit (LEO) have become critical concerns due to the ever-increasing number of spacecraft. Addressing this crisis, especially in dealing with non-cooperative and unidentified space debris, is of paramount importance. This paper contributes to efforts in enabling autonomous swarms of small chaser satellites for target geometry determination and safe flight trajectory planning for proximity operations in LEO. Our research explores the on-orbit use of the You Only Look Once v5 (YOLOv5) object detection model trained to detect satellite components. While this model has shown promise, its inherent lack of interpretability hinders human understanding, a critical aspect of validating algorithms for use in safety-critical missions. To analyze its decision processes, we introduce Probabilistic Explanations for Entropic Knowledge extraction (PEEK), a method that applies information-theoretic analysis to the latent representations within the hidden layers of the model. Through both synthetic and hardware-in-the-loop experiments, PEEK illuminates the decision-making processes of the model, helping identify its strengths, limitations, and biases.
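The abstract describes PEEK as an information-theoretic analysis of hidden-layer representations. A minimal sketch of one way to turn a convolutional feature map into an entropy map is shown below; this is an illustrative approximation in NumPy, and the function name, histogram binning, and per-location treatment of channels are assumptions for exposition, not the authors' actual PEEK implementation:

```python
import numpy as np

def activation_entropy_map(feature_map, num_bins=16):
    """Per-location Shannon entropy of channel activations.

    feature_map: array of shape (C, H, W) taken from a hidden layer.
    Returns an (H, W) map; higher values mark spatial locations whose
    channel responses are spread out rather than concentrated.
    """
    C, H, W = feature_map.shape
    entropy = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            acts = feature_map[:, i, j]
            # Estimate the activation distribution at this location
            # with a simple histogram over the channel dimension.
            hist, _ = np.histogram(acts, bins=num_bins)
            p = hist / hist.sum()
            p = p[p > 0]  # 0 * log(0) is taken as 0
            entropy[i, j] = -np.sum(p * np.log2(p))
    return entropy
```

A map like this can be upsampled to the input resolution and overlaid on the image as a heat map, in the spirit of the visual explanations the paper describes: locations where channel responses carry more information stand out against uniform background regions.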
- Johnson, N. L., “Operation Burnt Frost: A View From Inside,” Space Policy, Vol. 56, 2021, p. 101411. 10.1016/j.spacepol.2021.101411.
- Kestenbaum, D., “Chinese Missile Destroys Satellite in 500-Mile Orbit,” NPR, Jan. 2007. URL https://www.npr.org/2007/01/19/6923805/chinese-missile-destroys-satellite-in-500-mile-orbit.
- Henry, C., “India ASAT debris spotted above 2,200 kilometers, will remain a year or more in orbit,” SpaceNews, Apr. 2019. URL https://spacenews.com/india-asat-debris-spotted-above-2200-kilometers-will-last-a-year-or-more/.
- U.S. Space Command Public Affairs Office, “Russian direct-ascent anti-satellite missile test creates significant, long-lasting space debris,” Nov. 2021. URL https://www.spacecom.mil/Newsroom/News/Article-Display/Article/2842957/russian-direct-ascent-anti-satellite-missile-test-creates-significant-long-last/.
- United Nations General Assembly, “Resolution: Destructive direct-ascent anti-satellite missile testing,” Dec. 2022. URL http://digitallibrary.un.org/record/3996915.
- Cheng, A. F., Rivkin, A. S., Michel, P., Atchison, J., Barnouin, O., Benner, L., Chabot, N. L., Ernst, C., Fahnestock, E. G., Kueppers, M., Pravec, P., Rainey, E., Richardson, D. C., Stickle, A. M., and Thomas, C., “AIDA DART asteroid deflection test: Planetary defense and science objectives,” Planetary and Space Science, Vol. 157, 2018, pp. 104–115. 10.1016/j.pss.2018.02.015.
- Davis, T., Baker, M. T., Belchak, T., and Larsen, W., “XSS-10 micro-satellite flight demonstration program,” 2003.
- Air Force Research Laboratory, “XSS-11 Micro Satellite,” Tech. rep., 2011. URL https://www.kirtland.af.mil/Portals/52/documents/AFD-111103-035.pdf?ver=2016-06-28-110256-797.
- Air Force Research Laboratory, “Automated Navigation and Guidance Experiment for Local Space (ANGELS),” Tech. rep., 2014.
- Sheetz, M., “For the first time ever, a robotic spacecraft caught an old satellite and extended its life,” CNBC, Apr. 2020. URL https://www.cnbc.com/2020/04/17/northrop-grumman-mev-1-spacecraft-services-intelsat-901-satellite.html.
- Northrop Grumman, “Northrop Grumman and Intelsat Make History with Docking of Second Mission Extension Vehicle to Extend Life of Satellite,” Apr. 2021.
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L., “ImageNet: A large-scale hierarchical image database,” 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255. 10.1109/CVPR.2009.5206848.
- Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L., “Microsoft COCO: Common objects in context,” Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, Springer, 2014, pp. 740–755.
- Krizhevsky, A., Sutskever, I., and Hinton, G. E., “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, Vol. 60, No. 6, 2017, pp. 84–90. 10.1145/3065386.
- Simonyan, K., and Zisserman, A., “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
- Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A., “Going deeper with convolutions,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1–9. 10.1109/CVPR.2015.7298594.
- He, K., Zhang, X., Ren, S., and Sun, J., “Deep residual learning for image recognition,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- Szegedy, C., Toshev, A., and Erhan, D., “Deep Neural Networks for Object Detection,” Advances in Neural Information Processing Systems, Vol. 26, edited by C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, Curran Associates, Inc., 2013. URL https://proceedings.neurips.cc/paper_files/paper/2013/file/f7cade80b7cc92b991cf4d2806d6bd78-Paper.pdf.
- Girshick, R., Donahue, J., Darrell, T., and Malik, J., “Rich feature hierarchies for accurate object detection and semantic segmentation,” arXiv preprint arXiv:1311.2524, 2013. URL http://arxiv.org/abs/1311.2524.
- Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y., “OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks,” arXiv preprint arXiv:1312.6229, 2013. 10.48550/ARXIV.1312.6229.
- He, K., Zhang, X., Ren, S., and Sun, J., “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE transactions on pattern analysis and machine intelligence, Vol. 37, No. 9, 2015, pp. 1904–1916.
- Ren, S., He, K., Girshick, R. B., and Sun, J., “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” arXiv preprint arXiv:1506.01497, 2015. URL http://arxiv.org/abs/1506.01497.
- Redmon, J., Divvala, S. K., Girshick, R. B., and Farhadi, A., “You Only Look Once: Unified, Real-Time Object Detection,” arXiv preprint arXiv:1506.02640, 2015. URL http://arxiv.org/abs/1506.02640.
- Redmon, J., and Farhadi, A., “YOLO9000: Better, Faster, Stronger,” arXiv preprint arXiv:1612.08242, 2016.
- Redmon, J., and Farhadi, A., “YOLOv3: An Incremental Improvement,” arXiv preprint arXiv:1804.02767, 2018.
- Bochkovskiy, A., Wang, C.-Y., and Liao, H.-Y. M., “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
- Ge, Z., Liu, S., Wang, F., Li, Z., and Sun, J., “YOLOX: Exceeding YOLO series in 2021,” arXiv preprint arXiv:2107.08430, 2021.
- Jocher, G., Chaurasia, A., Stoken, A., Borovec, J., NanoCode012, Kwon, Y., Michael, K., TaoXie, Fang, J., Imyhxy, Lorna, Yifu, Z., Wong, C., Abhiram V, Montes, D., Wang, Z., Fati, C., Nadar, J., Laughing, UnglvKitDe, Sonck, V., Tkianai, YxNONG, Skalski, P., Hogan, A., Nair, D., Strobel, M., and Jain, M., “ultralytics/yolov5: v7.0 - YOLOv5 SOTA Realtime Instance Segmentation,” Zenodo, 2022. 10.5281/ZENODO.7347926.
- Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., Ke, Z., Li, Q., Cheng, M., Nie, W., et al., “YOLOv6: A single-stage object detection framework for industrial applications,” arXiv preprint arXiv:2209.02976, 2022.
- Wang, C.-Y., Bochkovskiy, A., and Liao, H.-Y. M., “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv preprint arXiv:2207.02696, 2022.
- Jocher, G., Chaurasia, A., and Qiu, J., “YOLO by Ultralytics,” Jan. 2023. URL https://github.com/ultralytics/ultralytics.
- Caruso, B., Mahendrakar, T., Nguyen, V. M., White, R. T., and Steffen, T., “3D Reconstruction of Non-cooperative Resident Space Objects using Instant NGP-accelerated NeRF and D-NeRF,” arXiv preprint arXiv:2301.09060, 2023.
- Mahendrakar, T., White, R. T., Wilde, M., Kish, B., and Silver, I., “Real-time Satellite Component Recognition with YOLO-V5,” 35th Annual Small Satellite Conference, 2021a.
- Mahendrakar, T., Attzs, M. N., Tisaranni, A. L., Duarte, J. M., White, R. T., and Wilde, M., “Impact of Intra-class Variance on YOLOv5 Model Performance for Autonomous Navigation around Non-Cooperative Targets,” AIAA SCITECH 2023 Forum, 2023a, p. 2374.
- Mahendrakar, T., Ekblad, A., Fischer, N., White, R., Wilde, M., Kish, B., and Silver, I., “Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets,” 2022 IEEE Aerospace Conference (AERO), IEEE, 2022, pp. 1–12.
- Mahendrakar, T., Wilde, M., and White, R., “Use of artificial intelligence for feature recognition and flightpath planning around non-cooperative resident space objects,” ASCEND 2021, 2021b, p. 4123.
- Mahendrakar, T., Holmberg, S., Ekblad, A., Conti, E., White, R. T., Wilde, M., and Silver, I., “Autonomous Rendezvous with Non-Cooperative Target Objects with Swarm Chasers and Observers,” arXiv preprint arXiv:2301.09059, 2023b.
- Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D., “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626.
- Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. A., “Striving for Simplicity: The All Convolutional Net,” 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings, edited by Y. Bengio and Y. LeCun, 2015. URL http://arxiv.org/abs/1412.6806.
- Chattopadhay, A., Sarkar, A., Howlader, P., and Balasubramanian, V. N., “Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks,” 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 2018, pp. 839–847. 10.1109/WACV.2018.00097.
- Mopuri, K. R., Garg, U., and Babu, R. V., “CNN fixations: An unraveling approach to visualize the discriminative image regions,” IEEE Transactions on Image Processing, Vol. 28, No. 5, 2018, pp. 2116–2125.
- Shrikumar, A., Greenside, P., and Kundaje, A., “Learning important features through propagating activation differences,” International conference on machine learning, PMLR, 2017, pp. 3145–3153.
- Muhammad, M. B., and Yeasin, M., “Eigen-CAM: Class activation map using principal components,” 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, 2020, pp. 1–7.
- Meni, M. J., White, R. T., Mayo, M., and Pilkiewicz, K., “Entropy-based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance,” arXiv preprint arXiv:2308.14938, 2023.
- Marr, D., and Hildreth, E., “Theory of edge detection,” Proceedings of the Royal Society of London. Series B. Biological Sciences, Vol. 207, No. 1167, 1980, pp. 187–217.
- 10.1017/CBO9780511804441.
- Attzs, M. N. J., Mahendrakar, T., Meni, M. J., White, R. T., and Silver, I., “Comparison of Tracking-By-Detection Algorithms for Real-Time Satellite Component Tracking,” 37th Annual Small Satellite Conference, 2023.
- Wilde, M., Kaplinger, B., Go, T., Gutierrez, H., and Kirk, D., “ORION: A simulation environment for spacecraft formation flight, capture, and orbital robotics,” 2016 IEEE Aerospace Conference, IEEE, 2016, pp. 1–14.
- Wang, C.-Y., Liao, H.-Y. M., Wu, Y.-H., Chen, P.-Y., Hsieh, J.-W., and Yeh, I.-H., “CSPNet: A new backbone that can enhance learning capability of CNN,” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 2020, pp. 390–391.
- Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., and Belongie, S., “Feature pyramid networks for object detection,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2117–2125.
- Liu, S., Qi, L., Qin, H., Shi, J., and Jia, J., “Path aggregation network for instance segmentation,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 8759–8768.