Evaluating the Robustness of Defense Mechanisms based on AutoEncoder Reconstructions against Carlini-Wagner Adversarial Attacks


In: Proceedings of the Northern Lights Deep Learning Workshop, Tromsø, Norway, 2020.

Abstract

Adversarial examples represent a serious problem affecting the security of machine learning systems. In this paper we focus on a defense mechanism that reconstructs images with an autoencoder before classification. We experiment with several types of autoencoders and evaluate the impact of strategies such as injecting noise into the input during training and into the latent space at inference time. We test the models on adversarial examples generated with the Carlini-Wagner attack, in a white-box scenario against the stacked system composed of the autoencoder and the classifier.
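The following is a minimal sketch of the kind of stacked defense the abstract describes: an autoencoder that reconstructs inputs before they reach the classifier, with noise injected into the input during training and into the latent space at inference time. PyTorch, the convolutional architecture, and all hyperparameters here are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """A small convolutional autoencoder for 1-channel 28x28 images
    (architecture is an assumption, not the paper's actual model)."""

    def __init__(self, latent_noise_std=0.0):
        super().__init__()
        # Std of Gaussian noise injected in the latent space at inference time.
        self.latent_noise_std = latent_noise_std
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),                                             # 7x7 -> 14x14
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),                                          # 14x14 -> 28x28
        )

    def forward(self, x):
        z = self.encoder(x)
        if not self.training and self.latent_noise_std > 0:
            # Latent-space noise at inference time, one of the strategies evaluated.
            z = z + torch.randn_like(z) * self.latent_noise_std
        return self.decoder(z)

def train_step(ae, optimizer, x, input_noise_std=0.1):
    """Denoising-style training: noise is injected into the input,
    but the reconstruction target is the clean image."""
    noisy = (x + torch.randn_like(x) * input_noise_std).clamp(0.0, 1.0)
    loss = nn.functional.mse_loss(ae(noisy), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def defended_predict(ae, classifier, x):
    """The stacked system: reconstruct first, then classify."""
    ae.eval()
    classifier.eval()
    with torch.no_grad():
        return classifier(ae(x)).argmax(dim=1)

Note that in the white-box setting the abstract refers to, a Carlini-Wagner attacker differentiates through the composition classifier(ae(x)) as a whole; the no_grad wrapper above applies only to benign inference, not to attack generation.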


