Abstract
Accurate fault detection in industrial environments with high-dimensional sensor data is challenging. This paper presents an explainable AI framework that combines unsupervised deep representation learning with supervised classification for enhanced quality control in smart manufacturing systems. A fine-tuned deep autoencoder converts raw sensor data into a compressed latent representation, capturing the underlying structure while discarding irrelevant or noisy features. A downstream classifier then predicts faults from these latent representations. Experimental results on a high-dimensional dataset show that the proposed solution outperforms traditional classifiers that operate on the raw features directly. In addition, the framework incorporates an interpretability phase: a game-theory-based technique analyzes the latent space and identifies the most influential features contributing to correct fault predictions.
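The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the data, model sizes, and labels are synthetic, the encoder is a single-hidden-layer MLP trained to reconstruct its input, and a simple permutation-importance score stands in for the game-theoretic (SHAP-style) attribution used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))           # 500 samples, 50 raw sensor features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic fault label

# (1) Autoencoder: train an MLP to reconstruct its own input;
# the hidden layer is the compressed latent representation.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=300, random_state=0)
ae.fit(X, X)

def encode(X):
    """Forward pass through the trained encoder half only (ReLU layer)."""
    return np.maximum(0.0, X @ ae.coefs_[0] + ae.intercepts_[0])

Z = encode(X)  # 8-dimensional latent codes

# (2) Downstream fault classifier operating on the latent space.
clf = LogisticRegression().fit(Z, y)

# (3) Attribution: accuracy drop when one latent dimension is shuffled,
# a lightweight stand-in for the game-theory-based analysis.
base = clf.score(Z, y)
importance = []
for j in range(Z.shape[1]):
    Zp = Z.copy()
    Zp[:, j] = rng.permutation(Zp[:, j])
    importance.append(base - clf.score(Zp, y))

ranked = np.argsort(importance)[::-1]
print("latent dims ranked by importance:", ranked)
```

In a SHAP-based variant, step (3) would instead compute Shapley-value attributions for each latent dimension (e.g. with a kernel or tree explainer), but the overall structure — encode, classify, then explain the latent space — is the same.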
| Original language | English |
|---|---|
| Title of host publication | 2025 IEEE International Conference on Systems, Man, and Cybernetics (SMC) |
| ISBN (Electronic) | 9798331533588 |
| DOIs | |
| Publication status | Published (VoR) - 28 Jan 2026 |
Title: Decoding the Black Box: Shedding Light on Manufacturing Processes with Explainable AI