2019 12th International Conference on Intelligent Computation Technology and Automation (ICICTA)

Abstract

As is known to all, deep neural networks have shown superior performance in many image classification tasks, but they are vulnerable to adversarial examples: original examples with tiny perturbations added to fool the network. Many previous defenses focused on modifying the model structure to enhance the robustness of the neural network. In this paper, we propose an image squeezing method that helps classify images correctly: before images are input to the deep neural network, WebP squeezing can destroy the perturbation of adversarial examples. First, we generate adversarial examples that completely fool the classifier; then we apply WebP squeezing to those examples and feed them back into the classifier to recover the correct classification results. Our experiments on the CIFAR-10 dataset demonstrate that WebP image squeezing helps neural networks classify adversarial examples correctly. The method proposed in this paper effectively resists the FGSM, One-pixel, and C&W attacks, and its recognition accuracy is more than 5% higher than that of the HGD method.
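The core preprocessing step described above can be sketched as a simple WebP round trip: encode the (possibly adversarial) image to lossy WebP and decode it back before classification, so that low-amplitude perturbations are absorbed by the compression. The sketch below is a minimal illustration using Pillow and NumPy; the function name `webp_squeeze` and the quality setting of 80 are our own assumptions, not values taken from the paper.

```python
import io

import numpy as np
from PIL import Image


def webp_squeeze(image: np.ndarray, quality: int = 80) -> np.ndarray:
    """Round-trip an image through lossy WebP compression.

    The lossy encode/decode cycle tends to remove small adversarial
    perturbations while preserving the visible content. `quality` is a
    hypothetical default; the paper does not specify the setting used.
    """
    # image: uint8 array of shape (H, W, 3), e.g. a 32x32 CIFAR-10 image
    buf = io.BytesIO()
    Image.fromarray(image).save(buf, format="WEBP", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf).convert("RGB"))


# Usage: squeeze a random CIFAR-10-sized image before classification
x_adv = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
x_squeezed = webp_squeeze(x_adv)
print(x_squeezed.shape, x_squeezed.dtype)
```

In the pipeline the paper describes, `x_squeezed` (rather than `x_adv`) would then be passed to the unmodified classifier, so no retraining or change to the model structure is required.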
