- The paper introduces a multisensor fusion digital twin framework that predicts location-specific quality attributes in the LDED process.
- The methodology fuses acoustic, thermal, vision, and laser data into a unified 3D volumetric model using ROS and machine learning techniques.
- Implications include reduced waste, improved efficiency, and advancements toward fully automated, self-adaptive additive manufacturing systems.
Multisensor Fusion-Based Digital Twin in Additive Manufacturing: Enhancing Quality Monitoring and Defect Correction
This paper presents an approach to quality monitoring and defect correction in additive manufacturing (AM) through a multisensor fusion-based digital twin framework, specifically targeting the laser directed energy deposition (LDED) process. The paper focuses on integrating multiple sensing modalities, including acoustic monitoring, infrared thermal imaging, coaxial vision, and laser-based surface scanning, to achieve comprehensive in-situ quality assessment and defect rectification.
The primary contribution of this research lies in its novel spatiotemporal data fusion method, which aligns the multisensor data within a three-dimensional volumetric representation of the manufactured part. This method yields a synchronized, spatially registered dataset that supports the prediction of location-specific quality attributes via machine learning techniques. This capability marks a significant advance over single-sensor systems, which often fail to capture the multifaceted melt pool dynamics and material interactions inherent in AM processes.
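The core idea of spatial registration can be sketched in a few lines: each timestamped sensor reading is tagged with the deposition-head position at that instant and accumulated into a voxel of the part volume. This is a minimal illustrative sketch, not the paper's implementation; the function name, the tuple layout, and the 0.5 mm voxel size are assumptions made for the example.

```python
def fuse_to_voxels(readings, voxel_size=0.5):
    """Accumulate per-location sensor features into a sparse 3D voxel grid.

    readings: iterable of (x, y, z, value) tuples, where (x, y, z) is the
    deposition-head position (mm) at the reading's timestamp and `value`
    is a scalar sensor feature (e.g., acoustic RMS or melt-pool temperature).
    Returns a dict mapping voxel index -> mean feature value in that voxel.
    """
    sums, counts = {}, {}
    for x, y, z, value in readings:
        # Quantize the continuous position to a voxel index.
        idx = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        sums[idx] = sums.get(idx, 0.0) + value
        counts[idx] = counts.get(idx, 0) + 1
    # Average all readings that landed in the same voxel.
    return {idx: sums[idx] / counts[idx] for idx in sums}

# Example: the first two readings fall into the same voxel and are averaged.
grid = fuse_to_voxels([(0.1, 0.2, 0.0, 1.0),
                       (0.3, 0.1, 0.0, 3.0),
                       (1.2, 0.0, 0.0, 5.0)])
```

Averaging is only one possible reduction; a real pipeline might keep per-voxel feature vectors (one entry per modality) as input to the downstream quality model.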
Methodology and System Architecture
The proposed framework involves several stages. Initially, a comprehensive in-situ monitoring system is developed, equipped with diverse sensor modalities. The data captured through these sensors are then subjected to spatiotemporal fusion, synchronizing them with the part's volumetric data. This integration supports location-dependent quality prediction, targeting metrics such as porosity, microhardness, and geometric deviations. After this prediction phase, regions requiring material adjustment, either additive or subtractive, are identified, enabling automatic generation of toolpaths with optimally tuned processing parameters.
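The prediction-then-correction step can be illustrated as follows: a trained model maps each voxel's fused features to a quality metric, and voxels exceeding a tolerance are flagged to seed repair toolpath generation. Everything here is a hypothetical stand-in, including the toy linear predictor, the feature choice, and the 2% porosity tolerance; the paper's actual models and thresholds are not specified in this summary.

```python
POROSITY_TOLERANCE = 0.02  # assumed acceptance limit (volume fraction)

def predict_porosity(features):
    """Stand-in for a trained ML regressor.

    Here a toy linear map from (melt-pool temperature deviation,
    acoustic energy) to a porosity estimate; a real system would load
    a model fitted on registered multisensor data.
    """
    temp_dev, acoustic = features
    return 0.01 * temp_dev + 0.005 * acoustic

def flag_rework_voxels(voxel_features):
    """Return voxel indices whose predicted porosity exceeds tolerance.

    These flagged regions would drive the additive/subtractive repair
    toolpaths described in the framework.
    """
    return [idx for idx, feats in voxel_features.items()
            if predict_porosity(feats) > POROSITY_TOLERANCE]

voxels = {(0, 0, 0): (0.5, 1.0),   # predicted ~0.010 -> within tolerance
          (1, 0, 0): (2.0, 2.0)}   # predicted ~0.030 -> flagged for rework
flagged = flag_rework_voxels(voxels)
```

The same pattern extends to the other metrics the paper targets (microhardness, geometric deviation) by swapping in the corresponding predictor and tolerance.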
The system is built on a robust software platform leveraging the Robot Operating System (ROS), facilitating real-time data acquisition, processing, and feature extraction across the various sensing channels. This software architecture supports visualization of key sensor-derived features in real time, enhancing the transparency and responsiveness of the monitoring process.
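A central chore in such a multi-channel architecture is aligning streams that arrive at different rates. The sketch below shows the idea in plain Python, in the spirit of approximate-time synchronization as provided by ROS message_filters; an actual ROS node would use subscriber callbacks and message_filters rather than lists, and the 50 ms tolerance is an assumed value.

```python
from bisect import bisect_left

def nearest(timestamps, t):
    """Index of the entry in a sorted timestamp list closest to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Choose whichever neighbor is closer in time.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def sync_streams(fast, slow, tol=0.05):
    """Pair each (t, value) sample of the slow stream with the nearest
    fast-stream sample within `tol` seconds; drop unmatched samples."""
    fast_ts = [t for t, _ in fast]
    pairs = []
    for t, v in slow:
        j = nearest(fast_ts, t)
        if abs(fast_ts[j] - t) <= tol:
            pairs.append((t, v, fast[j][1]))
    return pairs

# Example: a fast thermal stream and a slower acoustic stream.
thermal = [(0.00, 'T0'), (0.01, 'T1'), (0.02, 'T2'), (0.10, 'T10')]
acoustic = [(0.011, 'A0'), (0.30, 'A1')]
matched = sync_streams(thermal, acoustic)  # second acoustic sample has no match
```

The matched tuples, once tagged with deposition-head positions, are exactly what the voxel-level fusion stage consumes.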
Implications and Future Directions
The multisensor fusion approach not only augments the reliability of monitoring systems in additive manufacturing but also significantly reduces waste, increases efficiency, and contributes to cleaner production processes. By allowing for real-time identification and correction of defects, this framework supports the move towards fully automated, self-adaptive AM systems within Cyber-Physical Production Systems (CPPSs).
The paper outlines potential avenues for future research, including the application of advanced machine learning models to further refine defect prediction accuracy. The envisioned development of transfer learning techniques across different sensing modalities could enhance the predictive capabilities of individual sensors, broadening the scope of defect recognition to include a wider array of material and process variables.
In conclusion, this paper establishes a substantial framework for integrating multisensor data in additive manufacturing, setting a precedent for future innovations in the field. The methodologies presented here promise to push the boundaries of self-adaptation and autonomous quality control in AM processes, heralding a new era of precision manufacturing technologies.