Fault Detection and Classification of Aerospace Sensors using a VGG16-based Deep Neural Network (2207.13267v1)

Published 27 Jul 2022 in cs.CV and cs.LG

Abstract: Compared with traditional model-based fault detection and classification (FDC) methods, deep neural networks (DNN) prove to be effective for aerospace sensor FDC problems. However, the time consumed in training the DNN is excessive, and explainability analysis for the FDC neural network is still underwhelming. A concept known as imagefication-based intelligent FDC has been studied in recent years. This concept advocates stacking the sensor measurement data into an image format; the sensor FDC issue is then transformed into an abnormal-region detection problem on the stacked image, which may well borrow recent advances in the machine vision realm. Although promising results have been claimed in imagefication-based intelligent FDC research, due to the small size of the stacked image, small convolutional kernels and shallow DNN layers were used, which hinders the FDC performance. In this paper, we first propose a data augmentation method which inflates the stacked image to a larger size (corresponding to the VGG16 net developed in the machine vision realm). The FDC neural network is then trained via fine-tuning the VGG16 directly. To truncate and compress the FDC net size (hence its running time), we perform model pruning on the fine-tuned net. The class activation mapping (CAM) method is also adopted for explainability analysis of the FDC net to verify its internal operations. Via data augmentation, fine-tuning from VGG16, and model pruning, the FDC net developed in this paper achieves an FDC accuracy of 98.90% across 4 aircraft at 5 flight conditions (running time 26 ms). The CAM results also verify the FDC net w.r.t. its internal operations.

Citations (1)

Summary

  • The paper introduces an innovative imagefication technique that converts sensor data into images for enhanced fault detection using deep learning.
  • It employs a fine-tuned VGG16 model with data augmentation and pruning to achieve 98.90% accuracy with a 26 ms inference time suitable for real-time use.
  • The study uses Class Activation Mapping to provide clear insights into model decisions, ensuring reliable fault detection in aerospace applications.

The paper "Fault Detection and Classification of Aerospace Sensors using a VGG16-based Deep Neural Network" presents an innovative approach to enhance the robustness and efficiency of fault detection and classification (FDC) in aerospace sensors using advanced deep learning techniques.

Key Contributions:

  1. Transformation to Image-based Data Representation: The authors propose converting sensor measurement data into an image format, a step they call imagefication. This recasts the FDC problem as one of detecting abnormal regions within these images, so that powerful image recognition techniques from the machine vision domain can be brought to bear (a minimal stacking sketch follows this list).
  2. Addressing Image Size Limitations: Prior imagefication-based FDC work typically produced low-resolution stacked images, forcing the use of small convolutional kernels and shallow neural networks and thereby limiting detection performance. To address this, the paper introduces a data augmentation method that inflates the stacked image to match the input size expected by deeper networks such as VGG16 (see the resizing sketch below).
  3. VGG16-based Model Training and Fine-Tuning: The paper employs VGG16, a well-known deep neural network from the machine vision field, as the backbone for the FDC task. By fine-tuning the pre-trained VGG16 on the augmented image data from the aerospace sensors, the authors achieve enhanced classification performance (an illustrative fine-tuning sketch is given below).
  4. Model Pruning for Efficiency: To reduce the computational overhead, the authors prune the fine-tuned VGG16 network. Pruning truncates and compresses the network, yielding faster inference without a significant drop in accuracy (see the pruning sketch below).
  5. Explainability with CAM: For interpretability, the paper utilizes Class Activation Mapping (CAM). CAM visualizes which regions of the input image contribute to the network's final decision, providing insight into the internal operations of the FDC net. This explainability is crucial for validating the model's decisions in safety-critical aerospace applications (a CAM-style sketch closes the set of examples below).
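
The imagefication step can be pictured as stacking a sliding window of normalised sensor channels into a 2-D array. The sketch below only illustrates that idea; the channel count, window length, and per-channel min-max normalisation are assumptions, not values taken from the paper.

```python
import numpy as np

def stack_to_image(sensor_window: np.ndarray) -> np.ndarray:
    """Turn a (n_sensors, window_len) block of measurements into a 2-D
    'image' by min-max normalising each sensor channel to [0, 1]."""
    lo = sensor_window.min(axis=1, keepdims=True)
    hi = sensor_window.max(axis=1, keepdims=True)
    return ((sensor_window - lo) / (hi - lo + 1e-8)).astype(np.float32)

# Hypothetical example: 12 sensor channels, 64 samples per sliding window.
window = np.random.randn(12, 64)
stacked = stack_to_image(window)   # shape (12, 64), values in [0, 1]
```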
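
Since this summary does not spell out the paper's augmentation scheme, the sketch below shows one plausible way to inflate a small stacked image to the 224x224x3 input that VGG16 expects: bilinear resizing plus grey-to-RGB channel replication. Both choices are assumptions and may differ from the paper's own method.

```python
import numpy as np
from PIL import Image

def inflate_to_vgg_input(stacked: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize a small stacked-sensor image to (size, size, 3) for VGG16.
    Bilinear interpolation and channel replication are assumptions."""
    img = Image.fromarray((stacked * 255).astype(np.uint8))
    img = img.resize((size, size), resample=Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return np.stack([arr, arr, arr], axis=-1)   # replicate grey to 3 channels
```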
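
Fine-tuning a pretrained VGG16 for a small number of fault classes typically amounts to swapping the final classifier layer and training with a low learning rate. The torchvision-based sketch below assumes six fault classes and a frozen convolutional backbone; neither detail is stated in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6   # assumed number of fault classes, for illustration only

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():                 # freeze conv backbone
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)    # new FDC head

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of (N, 3, 224, 224) images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```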
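
Pruning to cut inference time is commonly done by removing low-importance convolutional filters. The sketch below uses PyTorch's built-in pruning utilities with an L2-norm criterion and a 50% ratio; both the criterion and the ratio are illustrative assumptions rather than the paper's settings.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_conv_filters(model: nn.Module, amount: float = 0.5) -> nn.Module:
    """Zero out the lowest-L2-norm output filters of every Conv2d layer.
    Structured filter pruning is assumed because the paper targets
    running time; the exact criterion and ratio are illustrative."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=amount, n=2, dim=0)
            prune.remove(module, "weight")   # bake the mask into the weights
    # Note: this zeroes filters in place; physically shrinking the layers
    # to realise the speed-up requires a separate network-rebuild step.
    return model
```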
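
For explainability, a CAM-style heat map highlights the regions of the stacked image that drive a given class score. The sketch below implements a Grad-CAM-style variant hooked onto the end of the convolutional backbone; the paper states that CAM is used, but the specific variant and target layer chosen here are assumptions.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """image: (1, 3, 224, 224) tensor. Returns an (H, W) heat map in [0, 1]
    showing which regions of the input drive the score of class_idx."""
    activations, gradients = [], []
    target_layer = model.features[-1]          # end of the conv backbone

    h1 = target_layer.register_forward_hook(
        lambda m, inp, out: activations.append(out))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gin, gout: gradients.append(gout[0]))

    model.eval()
    image = image.requires_grad_(True)         # ensure grads reach the hooks
    score = model(image)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()

    acts, grads = activations[0], gradients[0]            # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)         # per-channel weight
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)[0, 0]
    return (cam / (cam.max() + 1e-8)).detach()
```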

Results:

The proposed FDC neural network, developed through data augmentation, fine-tuning of VGG16, and model pruning:

  • Achieves a fault detection and classification accuracy of 98.90% across four aircraft at five flight conditions.
  • Demonstrates a running time of 26 milliseconds, making it suitable for real-time applications.
  • Provides adequate explainability through CAM, effectively verifying the model’s decision-making process.

Conclusion:

This paper introduces a novel and effective approach to FDC in aerospace sensors by leveraging image-based data representation and deep learning. The combination of data augmentation, deep network fine-tuning, and model pruning yields high accuracy and operational efficiency, while CAM provides valuable insight into the model's internal workings. This research not only moves beyond traditional model-based FDC methods but also opens avenues for more capable and reliable fault detection systems in aerospace applications.