Using traditional image kernels and image processing techniques to harden convolutional neural networks against adversarial attacks

Kiggins, Andrew and Edu, Jide; Manulis, Mark, ed. (2025) Using traditional image kernels and image processing techniques to harden convolutional neural networks against adversarial attacks. In: Applied Cryptography and Network Security Workshops. Lecture Notes in Computer Science. Springer, pp. 185-203. ISBN 9783032017994 (https://doi.org/10.1007/978-3-032-01799-4_11)

Text: Kiggins-Edu-Springer-ACNS-2025-Using-traditional-image-kernels-and-image-processing-techniques-to-harden-convolutional-neural-networks.pdf
Accepted Author Manuscript
License: Creative Commons Attribution 4.0
Abstract

Convolutional Neural Networks (CNNs) are the primary image classification method, especially in safety-critical physical environments such as autonomous driving and industrial automation. However, CNNs are vulnerable to adversarial attacks, in which noise is added to an image to deceive the classifier. This can lead a CNN to make incorrect predictions with high confidence, posing significant threats to physical systems. Although various defense mechanisms against adversarial attacks exist, many are unsuitable for safety-critical applications due to possible image degradation or high computational cost. In this research, we investigate the use of traditional image denoising techniques as a defense against adversarial attacks in environments with limited computational resources. We evaluated three denoising methods: median filtering, Gaussian filtering, and the Markov chain Monte Carlo (MCMC) method, under "real-world conditions". The results demonstrate that these three methods not only reduce the impact of adversarial attacks but also surpass the state-of-the-art defense technique, APE-GAN, in speed while preserving prediction accuracy. Our findings show that traditional denoising techniques can provide a practical and efficient defense against adversarial attacks in low-power, safety-critical systems.
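For illustration, below is a minimal sketch of the kind of denoising preprocessing the abstract describes, assuming SciPy for the filters and a Keras-style classifier exposing a predict method; the filter size and sigma values are illustrative placeholders, not the settings evaluated in the paper, and defended_predict is a hypothetical helper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def denoise_median(image, size=3):
        # Median filter over the spatial axes only (H, W, C image assumed);
        # suited to removing impulsive, salt-and-pepper-like perturbations.
        return median_filter(image, size=(size, size, 1))

    def denoise_gaussian(image, sigma=1.0):
        # Gaussian smoothing over the spatial axes; sigma=0 on the channel
        # axis leaves the colour channels independent.
        return gaussian_filter(image, sigma=(sigma, sigma, 0))

    def defended_predict(model, image):
        # Hypothetical pipeline: denoise the input before classification.
        cleaned = denoise_median(image)  # or denoise_gaussian(image)
        return model.predict(cleaned[np.newaxis, ...])  # add batch dimension

Both filters are fixed, local operations requiring a single cheap pass over the image, which is the property the abstract's speed comparison against APE-GAN relies on.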

ORCID iDs

Kiggins, Andrew; Edu, Jide (ORCID: https://orcid.org/0000-0003-1325-8740); Manulis, Mark