An Attentional Recurrent Neural Network for Occlusion-Aware Proactive Anomaly Detection in Field Robot Navigation (2309.16826v1)

Published 28 Sep 2023 in cs.RO

Abstract: The use of mobile robots in unstructured environments like the agricultural field is becoming increasingly common. The ability of such field robots to proactively identify and avoid failures is thus crucial for ensuring efficiency and avoiding damage. However, the cluttered field environment introduces various sources of noise (such as sensor occlusions) that make proactive anomaly detection difficult. Existing approaches can show poor performance in sensor occlusion scenarios, as they typically do not explicitly model occlusions and only leverage current sensory inputs. In this work, we present an attention-based recurrent neural network architecture for proactive anomaly detection that fuses current sensory inputs and planned control actions with a latent representation of prior robot state. We enhance our model with an explicitly-learned model of sensor occlusion that modulates the use of our latent representation of prior robot state. Our method shows improved anomaly detection performance and makes mobile field robots more resilient against false positive predictions of navigation failure during periods of sensor occlusion, particularly in cases where all sensors are briefly occluded. Our code is available at: https://github.com/andreschreiber/roar
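The core idea in the abstract, a recurrent predictor whose reliance on current sensory input versus its latent memory of prior robot state is modulated by a learned occlusion estimate, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that): the cell structure, the single-layer occlusion and anomaly heads, and all names here are hypothetical, and the weights are random and untrained. It shows only the gating pattern: when the predicted occlusion score is high, the current observation is down-weighted and the update leans on the prior hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OcclusionGatedCell:
    """Toy recurrent cell (hypothetical, for illustration only).

    An occlusion score o_t in [0, 1] is predicted from the current input x_t;
    x_t is then attenuated by (1 - o_t) before the recurrent update, so the
    hidden state (the latent representation of prior robot state) dominates
    while sensors are occluded.
    """

    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(hid_dim)
        self.W_x = rng.uniform(-s, s, (hid_dim, in_dim))   # input weights
        self.W_h = rng.uniform(-s, s, (hid_dim, hid_dim))  # recurrent weights
        self.w_o = rng.uniform(-s, s, in_dim)   # occlusion head (would be learned)
        self.w_a = rng.uniform(-s, s, hid_dim)  # anomaly head (would be learned)

    def step(self, x, h):
        o = sigmoid(self.w_o @ x)          # predicted occlusion score in [0, 1]
        x_eff = (1.0 - o) * x              # trust current sensors less when occluded
        h_new = np.tanh(self.W_x @ x_eff + self.W_h @ h)  # recurrent fusion
        anomaly = sigmoid(self.w_a @ h_new)  # probability of an upcoming failure
        return h_new, o, anomaly

# Roll the cell over a short sequence of random "sensor" readings.
cell = OcclusionGatedCell(in_dim=6, hid_dim=8)
h = np.zeros(8)
for t in range(5):
    x = rng.normal(size=6)
    h, o, p = cell.step(x, h)
```

In the paper's full architecture the fusion additionally attends over multiple sensor modalities and planned control actions; this sketch collapses all of that into a single input vector to keep the gating mechanism visible.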
