
Black-box Adversarial Example Generation with Normalizing Flows

Published Jul 6, 2020 · Version 1 · arXiv:2007.02734

Authors

Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Categories

cs.LG, cs.CR, cs.CV, stat.ML

Abstract

Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, imperceptible changes to the input data can alter the classifier's decision. The study of powerful adversarial attacks can therefore help shed light on the sources of this vulnerability. In this paper, we propose a novel black-box adversarial attack using normalizing flows. We show how an adversarial example can be found by searching over the base distribution of a pre-trained flow-based model. Because the perturbations follow the shape of the data, the resulting adversaries closely resemble the original inputs. We then demonstrate that the proposed approach performs competitively against well-known black-box adversarial attack methods.
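The core idea is to search in the base (latent) space of a pre-trained normalizing flow rather than directly in pixel space, querying the target classifier only as a black box. Below is a minimal, self-contained sketch of that idea, not the authors' exact method: the "flow" is a toy invertible affine map standing in for a real pre-trained flow, the classifier is a hypothetical linear model queried only through its logits, and the latent search uses plain random search (the paper's actual optimizer may differ).

```python
# Sketch: black-box adversarial search in the base space of an invertible map.
# Everything below is illustrative; the flow and classifier are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# --- Toy stand-in for a pre-trained flow: x = f(z), z = f^{-1}(x). ---
A = np.eye(DIM) + 0.3 * rng.normal(size=(DIM, DIM))  # invertible in practice
A_inv = np.linalg.inv(A)
b = 0.1 * rng.normal(size=DIM)

def flow(z):       # f: base space -> data space
    return A @ z + b

def flow_inv(x):   # f^{-1}: data space -> base space
    return A_inv @ (x - b)

# --- Black-box classifier: we may query logits but never gradients. ---
W = rng.normal(size=(3, DIM))
def classifier_logits(x):
    return W @ x

def margin_loss(x, true_label):
    """Positive while the classifier still predicts true_label."""
    logits = classifier_logits(x)
    others = np.delete(logits, true_label)
    return logits[true_label] - others.max()

def latent_attack(x0, true_label, sigma=0.1, steps=2000):
    """Random search over the flow's base space for an adversary."""
    best_z = flow_inv(x0)                  # start from the clean latent code
    best_loss = margin_loss(flow(best_z), true_label)
    for _ in range(steps):
        cand = best_z + sigma * rng.normal(size=DIM)  # perturb in base space
        loss = margin_loss(flow(cand), true_label)    # one black-box query
        if loss < best_loss:
            best_z, best_loss = cand, loss
        if best_loss < 0:                  # misclassified: adversary found
            break
    return flow(best_z), best_loss

x0 = rng.normal(size=DIM)
y0 = int(np.argmax(classifier_logits(x0)))
x_adv, final_margin = latent_attack(x0, y0)
print("clean label:", y0, "adv label:", int(np.argmax(classifier_logits(x_adv))))
print("margin:", final_margin, "L2 distortion:", np.linalg.norm(x_adv - x0))
```

Because candidates are decoded through the flow, perturbations are expressed in the model's learned data coordinates rather than as raw pixel noise, which is what lets the adversaries stay close to the data manifold.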
