Enhancing AI transparency in IoT intrusion detection using explainable AI techniques

Yifan Wang (Corresponding / Lead Author), Muhammad Ajmal Azad* (Corresponding / Lead Author), Maham Zafar, Ammara Gul

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Internet of Things (IoT) networks continue to grow and have been integrated into critical applications such as healthcare, industrial control, and national infrastructure. Their interconnected nature and resource-constrained devices create numerous entry points for malicious actors, who can cause data breaches, unauthorised access, service disruptions, and even the compromise of critical infrastructure. Ensuring the security of these networks is essential to maintain the integrity and availability of services whose failure could have serious social, economic, or operational consequences. Automated Intrusion Detection Systems (IDSs) have been widely used to identify threats with high accuracy and reduced detection time. However, the complexity of machine learning and deep learning models poses a serious challenge to the transparency and interpretability of the detection results they produce. This lack of explainability in AI-driven IDSs undermines user confidence and limits their practical deployment, especially among non-expert stakeholders. To address these challenges, this paper investigates the use of Explainable AI (XAI) techniques to enhance the interpretability of AI-based IDSs within IoT ecosystems. Specifically, it applies SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) to different machine learning models. The models’ performance is evaluated using standard metrics such as accuracy, precision, and recall. The results show that incorporating XAI techniques significantly improves the transparency of IDS results, allowing users to understand and trust the reasoning behind AI decisions. This enhanced interpretability not only supports more informed cybersecurity practices but also makes AI systems more accessible to non-specialist users.
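To illustrate the core idea behind SHAP-style attribution that the abstract refers to, the sketch below computes exact Shapley values for a tiny, hypothetical linear "intrusion score" model over three features. This is an illustration of the attribution principle only, not the paper's pipeline; the weights, instance, and baseline are invented for the example (a real study would use the `shap` library on trained models).

```python
from itertools import combinations
from math import factorial
import numpy as np

# Hypothetical linear IDS scoring model: score(x) = w . x
# (weights, instance, and baseline are illustrative, not from the paper)
w = np.array([0.8, -0.5, 0.3])        # feature weights
x = np.array([2.0, 1.0, 4.0])         # instance being explained
baseline = np.array([1.0, 1.0, 1.0])  # background/reference input

def f(mask):
    """Evaluate the model with features in `mask` taken from x, others from baseline."""
    z = np.where(mask, x, baseline)
    return float(w @ z)

n = len(x)
phi = np.zeros(n)  # Shapley attribution for each feature
for i in range(n):
    others = [j for j in range(n) if j != i]
    for r in range(n):
        for S in combinations(others, r):
            mask = np.zeros(n, dtype=bool)
            mask[list(S)] = True
            # Shapley kernel weight for a coalition of size |S|
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            with_i = mask.copy()
            with_i[i] = True
            phi[i] += weight * (f(with_i) - f(mask))

# Efficiency property: attributions sum to f(x) - f(baseline)
assert np.isclose(phi.sum(), f(np.ones(n, bool)) - f(np.zeros(n, bool)))
print(phi)  # for a linear model, phi[i] == w[i] * (x[i] - baseline[i])
```

For a linear model the exact Shapley value of feature i reduces to w[i] * (x[i] - baseline[i]), which makes the exhaustive subset enumeration above easy to verify; real SHAP implementations approximate or exploit model structure to avoid the exponential cost.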
Original language: English
Journal: Internet of Things
Volume: 33
DOIs
Publication status: Published (VoR) - 29 Jul 2025
