An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments

Weizheng Wang, Xiangqi Wang*, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations; the residual perturbations tend to amplify as they propagate through the deeper layers of a network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of clean examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) architectures. The detector integrates the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on their denoised counterparts, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, thereby avoiding the error amplification caused by denoising. Experimental results show outstanding detection performance against well-known AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
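The abstract's core idea, comparing a model's prediction on an example against its predictions on several denoised copies and voting on the disagreements, can be illustrated with a short sketch. The following is a minimal, illustrative Python sketch, not the authors' implementation: it assumes a trained Keras/TensorFlow classifier `model`, uses OpenCV's standard denoisers in place of whatever denoising algorithms the paper combines, and the function names (`denoisers`, `detect_adversarial`) and the voting threshold are assumptions made for exposition.

```python
# A minimal sketch of denoise-and-compare AE detection with majority voting.
# Assumptions: `model` is a trained Keras classifier over images scaled to
# [0, 1]; the three OpenCV denoisers stand in for the paper's algorithms.
import cv2
import numpy as np


def denoisers(img):
    """Apply several traditional image-denoising algorithms to one image.

    `img` is an HxWx3 uint8 array; each denoiser returns an image of the
    same shape.
    """
    return [
        cv2.GaussianBlur(img, (3, 3), 0),               # Gaussian filtering
        cv2.medianBlur(img, 3),                         # median filtering
        cv2.fastNlMeansDenoisingColored(img, None,
                                        10, 10, 7, 21),  # non-local means
    ]


def detect_adversarial(model, img, vote_threshold=2):
    """Flag `img` as adversarial when enough denoised copies make the
    classifier disagree with its prediction on the original example."""
    def predict(x):
        batch = x.astype(np.float32)[None] / 255.0
        return int(np.argmax(model.predict(batch, verbose=0)))

    original_label = predict(img)
    disagreements = sum(predict(d) != original_label for d in denoisers(img))
    # Clean examples tend to keep their label under mild denoising, while
    # adversarial perturbations are fragile to it; vote on the flips.
    return disagreements >= vote_threshold
```

Because the check only runs extra forward passes on denoised copies, it leaves the protected model's structure and parameters untouched, which matches the low-overhead property the abstract claims.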

Original language: English
Pages (from-to): 3859-3876
Number of pages: 18
Journal: Computers, Materials and Continua
Volume: 76
Issue number: 3
Publication status: Published - 8 Oct 2023

Bibliographical note

Funding Information:
This work was supported in part by the Natural Science Foundation of Hunan Province under Grant Nos. 2023JJ30316 and 2022JJ2029, in part by a project supported by the Scientific Research Fund of the Hunan Provincial Education Department under Grant No. 22A0686, and in part by the National Natural Science Foundation of China under Grant No. 62172058. This work was also funded by the Researchers Supporting Project (No. RSP2023R102), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

The data underlying this article will be shared on reasonable request to the corresponding author.

Keywords

  • adversarial attack
  • adversarial example
  • adversarial example detection
  • deep neural networks
  • image denoising
  • machine learning

