
Experimental Insights Towards Explainable and Interpretable Pedestrian Crossing Prediction

Published 5 Dec 2023 in cs.LG, cs.AI, cs.NE, cs.SY, and eess.SY | arXiv:2312.02872v1

Abstract: In the context of autonomous driving, pedestrian crossing prediction is a key component for improving road safety. Presently, the focus of these predictions extends beyond achieving trustworthy results; it is shifting towards the explainability and interpretability of these predictions. This research introduces a novel neuro-symbolic approach that combines deep learning and fuzzy logic for an explainable and interpretable pedestrian crossing prediction. We have developed an explainable predictor (ExPedCross), which utilizes a set of explainable features and employs a fuzzy inference system to predict whether the pedestrian will cross or not. Our approach was evaluated on both the PIE and JAAD datasets. The results offer experimental insights into achieving explainability and interpretability in the pedestrian crossing prediction task. Furthermore, the testing results yield a set of guidelines and recommendations regarding the process of dataset selection, feature selection, and explainability.

Summary

  • The paper presents ExPedCross, a neuro-symbolic predictor merging deep feature extraction with a Takagi-Sugeno fuzzy inference system.
  • It leverages the JAAD and PIE datasets, applying fuzzy rule learning to extract and interpret key pedestrian behavior features.
  • Experimental results show that careful data selection and multifaceted feature analysis enhance both prediction generalization and transparency in autonomous driving.

In the field of autonomous driving, predicting pedestrian behavior is critical for the safety of both drivers and pedestrians. A recent study introduces ExPedCross, a neuro-symbolic approach for predicting whether a pedestrian will cross the road. The approach pairs deep learning techniques for feature extraction with a fuzzy logic inference system to make the prediction explainable.

The research emphasizes the importance of explainability and interpretability in autonomous driving systems, highlighting that a vehicle's decision-making process must be transparent and understandable, especially when it involves vulnerable road users such as pedestrians. Previous machine learning models often operated as "black boxes," offering little insight into how decisions were made. This study breaks that mold by extracting multiple explainable features from two widely used datasets, Joint Attention for Autonomous Driving (JAAD) and Pedestrian Intention Estimation (PIE), to build a system that is not only accurate but also able to clearly explain its predictions.

In developing the explainable predictor, the researchers evaluated multiple fuzzy rule learning algorithms and ultimately adopted the one that proved most effective at generating meaningful fuzzy rules from a balanced meta-dataset, ensuring that the rules are informed by scenarios involving both crossing and non-crossing pedestrians. The fuzzy logic system follows a Takagi-Sugeno model: the crossing prediction is made spatiotemporally, analyzing pedestrian features over successive frames to anticipate a future crossing action.
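
To make the inference step concrete, below is a minimal Python sketch of a zero-order Takagi-Sugeno predictor. The rule base, membership parameters, and feature names here are illustrative assumptions, not the rules learned in the paper:

```python
import numpy as np

def gauss_mf(x, mean, sigma):
    """Gaussian membership: degree to which x belongs to a fuzzy set."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Illustrative rule base (NOT the paper's learned rules): each rule maps
# per-feature fuzzy sets (mean, sigma) to a crisp consequent, here a
# crossing score in [0, 1]. Features are assumed normalized to [0, 1].
RULES = [
    # IF proximity is NEAR AND orientation is FACING THEN crossing likely
    {"sets": {"proximity": (0.2, 0.1), "orientation": (0.9, 0.15)}, "out": 0.95},
    # IF proximity is FAR AND orientation is AWAY THEN crossing unlikely
    {"sets": {"proximity": (0.8, 0.2), "orientation": (0.1, 0.15)}, "out": 0.05},
]

def ts_predict(features):
    """Zero-order Takagi-Sugeno inference: weighted average of rule outputs."""
    weights, outputs = [], []
    for rule in RULES:
        # Firing strength: product t-norm over the antecedent memberships.
        w = np.prod([gauss_mf(features[name], m, s)
                     for name, (m, s) in rule["sets"].items()])
        weights.append(w)
        outputs.append(rule["out"])
    weights = np.asarray(weights)
    return float(weights @ np.asarray(outputs) / (weights.sum() + 1e-12))

# A pedestrian close to the ego vehicle and facing the road:
print(ts_predict({"proximity": 0.25, "orientation": 0.85}))  # close to 1.0
```

Each rule's firing strength weights its consequent, so the final score is a smooth blend of the rules that apply; being able to read the active rules back out is what makes this style of prediction explainable.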

The effectiveness of the ExPedCross model was extensively tested across various configurations and strategies. The researchers underscored that, for explainable systems, more data does not necessarily translate into better prediction generalization: careful selection and analysis of the source videos proved crucial, as did data selection and filtering strategies for learning meaningful fuzzy rules. The experiments also revealed that features such as proximity, orientation, and action contributed significantly to prediction capability, though they needed to be complemented with additional features to convey a more comprehensive picture of pedestrian behavior.
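
As one concrete example of the data selection and filtering described above, the following sketch balances crossing and non-crossing samples before rule learning. The sample format and helper name are hypothetical, not taken from the paper:

```python
import random

def balance_meta_dataset(samples, seed=0):
    """Downsample the majority class so crossing (label 1) and
    non-crossing (label 0) scenarios contribute equally to rule learning."""
    rng = random.Random(seed)
    crossing = [s for s in samples if s["label"] == 1]
    non_crossing = [s for s in samples if s["label"] == 0]
    n = min(len(crossing), len(non_crossing))
    balanced = rng.sample(crossing, n) + rng.sample(non_crossing, n)
    rng.shuffle(balanced)
    return balanced
```

Balancing before rule induction keeps the learned rule base from being dominated by whichever class happens to be more frequent in the raw data.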

This study provides valuable guidelines for both dataset and feature selection, which are crucial in the development of explainable and interpretable machine learning models. The guidelines encourage a focus on quality over quantity, a deep understanding and evaluation of datasets, and careful consideration of feature preprocessing and combination for more informative and transparent systems.

Moving forward, the researchers aim to explore more sophisticated strategies to further refine their explainable and interpretable pedestrian crossing action predictor. They also stress the importance of integrating features that encapsulate a pedestrian's history, which may offer richer contextual information for future predictions.

In conclusion, this study represents a significant step towards more transparent AI systems in the autonomous driving industry, with the potential to improve safety outcomes and user trust through increased explainability and interpretability of machine actions.
