TY - JOUR
T1 - Outlier-oriented poisoning attack: a grey-box approach to disturb decision boundaries by perturbing outliers in multiclass learning
AU - Paracha, Anum
AU - Arshad, Junaid
AU - Ismail, Khalid
AU - Ben Farah, Mohamed
PY - 2025/02/26
Y1 - 2025/02/26
AB - Poisoning attacks are a primary threat to machine learning (ML) models, aiming to compromise their performance and reliability by manipulating training datasets. This paper introduces a novel attack, the outlier-oriented poisoning (OOP) attack, which manipulates the labels of the samples most distant from the decision boundaries. To ascertain the severity of the OOP attack at different degrees of poisoning (5–25%), we analyzed variance, accuracy, precision, recall, F1-score, and false positive rate for the chosen ML models. Benchmarking the OOP attack, we analyzed key characteristics of multiclass machine learning algorithms and their sensitivity to poisoning attacks. Our analysis helps to understand the behaviour of multiclass models under data poisoning attacks and contributes to effective mitigation against such attacks. Utilizing three publicly available datasets (IRIS, MNIST, and ISIC), our analysis shows that KNN and GNB are the most affected algorithms, with decreases in accuracy of 22.81% and 56.07%, respectively, on the IRIS dataset at 15% poisoning, whereas, for the same poisoning level and dataset, Decision Trees and Random Forest are the most resilient algorithms with the least accuracy disruption (12.28% and 17.52%, respectively). We also analyzed the correlation between the number of dataset classes and the performance degradation of the models. Our analysis highlighted that the number of classes is inversely proportional to the performance degradation, specifically the decrease in model accuracy, which diminishes as the number of classes increases. Further, our analysis identified that an imbalanced dataset distribution can aggravate the impact of poisoning on machine learning models.
UR - https://www.open-access.bcu.ac.uk/16198/
DO - 10.1007/s10207-025-00998-1
M3 - Article
SN - 1615-5262
VL - 24
JO - International Journal of Information Security
JF - International Journal of Information Security
M1 - 85
ER -