An Explainable Artificial Intelligence Approach for Unsupervised Fault Detection and Diagnosis in Rotating Machinery (2102.11848v1)

Published 23 Feb 2021 in cs.AI and cs.LG

Abstract: The monitoring of rotating machinery is an essential task in today's production processes. Currently, several machine learning and deep learning-based modules have achieved excellent results in fault detection and diagnosis. Nevertheless, to further increase user adoption and diffusion of such technologies, users and human experts must be provided with explanations and insights by the modules. Another issue is related, in most cases, with the unavailability of labeled historical data that makes the use of supervised models unfeasible. Therefore, a new approach for fault detection and diagnosis in rotating machinery is here proposed. The methodology consists of three parts: feature extraction, fault detection and fault diagnosis. In the first part, the vibration features in the time and frequency domains are extracted. Secondly, in the fault detection, the presence of fault is verified in an unsupervised manner based on anomaly detection algorithms. The modularity of the methodology allows different algorithms to be implemented. Finally, in fault diagnosis, Shapley Additive Explanations (SHAP), a technique to interpret black-box models, is used. Through the feature importance ranking obtained by the model explainability, the fault diagnosis is performed. Two tools for diagnosis are proposed, namely: unsupervised classification and root cause analysis. The effectiveness of the proposed approach is shown on three datasets containing different mechanical faults in rotating machinery. The study also presents a comparison between models used in machine learning explainability: SHAP and Local Depth-based Feature Importance for the Isolation Forest (Local-DIFFI). Lastly, an analysis of several state-of-the-art anomaly detection algorithms in rotating machinery is included.

Authors (4)
Citations (170)

Summary

An Explainable Artificial Intelligence Approach for Fault Detection in Rotating Machinery

The paper presents a methodology for fault detection and diagnosis in rotating machinery, addressing a critical need in mechanical condition monitoring through Explainable Artificial Intelligence (XAI). It builds on existing machine learning models while emphasizing explainability and unsupervised operation. The approach not only detects faults but also supports diagnosis without reliance on labeled data, making it highly relevant to industries where annotating historical data is impractical.

The methodology is framed within three stages: feature extraction, fault detection using unsupervised anomaly detection models, and fault diagnosis based on XAI techniques such as SHAP and Local-DIFFI. Feature extraction leverages established knowledge and techniques from vibration analysis, focusing on time- and frequency-domain features that have been extensively studied for fault detection in machinery. Notably, the paper stresses the use of features tailored to specific fault types, reinforcing the approach's adaptability to various types of machinery and fault modes.
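
To make the feature extraction stage concrete, the sketch below computes a handful of common time- and frequency-domain vibration features with NumPy and SciPy. The specific feature set and the helper name are illustrative assumptions, not the exact features used by the authors.

```python
import numpy as np
from scipy import stats

def extract_vibration_features(signal, fs):
    """Compute a few common time- and frequency-domain features from a
    1-D vibration signal sampled at fs Hz (illustrative feature set)."""
    # Time-domain statistics frequently used in machinery monitoring
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    features = {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,
        "kurtosis": stats.kurtosis(signal),
        "skewness": stats.skew(signal),
    }
    # Frequency-domain features from the magnitude spectrum
    # (mean removed so the DC component does not dominate)
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    features["dominant_freq"] = freqs[np.argmax(spectrum)]
    features["spectral_centroid"] = np.sum(freqs * spectrum) / np.sum(spectrum)
    return features
```

Each vibration segment is thereby reduced to one feature vector, and these vectors form the input matrix for the anomaly detection stage.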

For fault detection, the paper evaluates a comprehensive set of state-of-the-art anomaly detection models, including Isolation Forest (IF), Local Outlier Factor (LOF), Minimum Covariance Determinant (MCD), and Histogram-Based Outlier Score (HBOS). These models are applied in an unsupervised setting, where no prior labeling of data is required, and their effectiveness is demonstrated on laboratory test datasets. The results show robust fault detection performance, with models such as IF and HBOS identifying faults at early stages. The unsupervised nature of the approach is particularly valuable in real-world industrial settings where labeled datasets are scarce.
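
As an illustration of the unsupervised detection stage, the sketch below fits scikit-learn's Isolation Forest, one of the evaluated detectors, on feature vectors from presumed-healthy operation and then scores new segments. The random data, shapes, and hyperparameters are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Placeholder feature matrices: one row per vibration segment
# (in practice, vectors produced by the feature extraction step).
X_baseline = rng.normal(size=(500, 7))  # assumed-healthy operation
X_new = rng.normal(size=(50, 7))        # newly acquired segments to screen

# Fit on baseline data only; no fault labels are needed.
detector = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
detector.fit(X_baseline)

# predict() returns -1 for anomalous segments and +1 for normal ones;
# decision_function() gives a continuous anomaly score.
flags = detector.predict(X_new)
scores = detector.decision_function(X_new)
print(f"{(flags == -1).sum()} of {len(X_new)} segments flagged as potential faults")
```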

The final fault diagnosis stage leverages XAI models to interpret the machine learning outputs through feature importance rankings. SHAP and Local-DIFFI are employed to indicate which features are most indicative of a fault. This interpretable approach supports classifying the type of fault or identifying its root cause from the feature importance ranking, offering practical value for planning maintenance interventions. Importantly, Local-DIFFI, developed specifically for IF, offers an efficient alternative to SHAP, providing comparable results at a fraction of the computational cost; this advantage matters in industrial applications where fast diagnostics are required.
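
The sketch below shows one way this diagnosis step can be realized: computing SHAP values for an Isolation Forest's anomaly score with shap's TreeExplainer and ranking features by the magnitude of their contribution. The feature names and data are illustrative, the Local-DIFFI alternative is not shown, and the code assumes a shap version whose TreeExplainer supports scikit-learn's Isolation Forest.

```python
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

feature_names = ["rms", "peak", "crest_factor", "kurtosis",
                 "skewness", "dominant_freq", "spectral_centroid"]

rng = np.random.default_rng(0)
X_baseline = rng.normal(size=(500, len(feature_names)))  # placeholder features
x_flagged = rng.normal(size=(1, len(feature_names)))     # segment flagged as anomalous

detector = IsolationForest(n_estimators=200, random_state=0).fit(X_baseline)

# TreeExplainer handles tree ensembles such as Isolation Forest.
explainer = shap.TreeExplainer(detector)
shap_values = explainer.shap_values(x_flagged)

# Rank features by the magnitude of their contribution to the anomaly score;
# the top-ranked features hint at the fault type or root cause.
ranking = np.argsort(-np.abs(shap_values[0]))
for idx in ranking:
    print(f"{feature_names[idx]:>18}: {shap_values[0][idx]:+.4f}")
```

The resulting per-segment ranking is the input to the two diagnosis tools described in the abstract, unsupervised classification and root cause analysis.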

The implications of this research are manifold. Practically, it proposes an AI-driven framework that can transform machinery condition monitoring from a labor-intensive task requiring expert annotation into an intelligent system enabling early fault detection and well-informed predictive maintenance decisions. Theoretically, it contributes to the ongoing dialogue on the necessity of model interpretability, reinforcing the role of XAI in bridging the gap between black-box machine learning models and actionable insights for industry professionals. Moreover, it opens pathways for future research into domain adaptation and transfer learning within machinery diagnostics, allowing further refinement and broader applicability of the proposed methodology across industrial domains.

Overall, while this research does not emphasize novelty in its individual models, its coherent integration of them into a practical, explainable, unsupervised system deserves attention for its potential impact on condition monitoring technologies in industrial settings.