Computer Science > Computer Vision and Pattern Recognition

Title: Unrestricted Black-box Adversarial Attack Using GAN with Limited Queries

Abstract: Adversarial examples are inputs intentionally crafted to fool a deep neural network. Recent studies have proposed unrestricted adversarial attacks that are not norm-constrained. However, previous unrestricted attack methods still struggle to fool real-world applications in a black-box setting. In this paper, we present a novel method for generating unrestricted adversarial examples using a GAN, where the attacker can access only the top-1 final decision of the classification model. Our method, Latent-HSJA, efficiently leverages the advantages of a decision-based attack in the latent space and successfully manipulates latent vectors to fool the classification model.
Through extensive experiments, we demonstrate that the proposed method efficiently evaluates the robustness of classification models with limited queries in a black-box setting. First, we show that our targeted attack produces unrestricted adversarial examples for a facial identity recognition model containing 307 identities in a query-efficient manner. Then, we show that the proposed method can also successfully attack a real-world celebrity recognition service.
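
The core idea described in the abstract, running a decision-based attack over GAN latent vectors while querying only the classifier's top-1 label, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the generate and query_top1 callables, the binary-search routine, and the Monte-Carlo direction estimate are hypothetical placeholders used for exposition.

import numpy as np

# Hypothetical black-box interfaces (assumptions, not the paper's API):
#   generate(z)   -> image synthesized by a pretrained GAN generator from latent z
#   query_top1(x) -> top-1 label returned by the target classifier for image x

def is_adversarial(z, target_label, generate, query_top1):
    """True if the image generated from latent z is classified as target_label."""
    return query_top1(generate(z)) == target_label

def latent_binary_search(z_src, z_adv, target_label, generate, query_top1, tol=1e-3):
    """Binary search along the segment between a benign latent z_src and an
    adversarial latent z_adv, returning the adversarial point closest to z_src.
    This mirrors the boundary-search step of decision-based attacks such as
    HopSkipJump, but performed in the GAN latent space instead of pixel space."""
    lo, hi = 0.0, 1.0  # interpolation coefficient toward z_adv
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        z_mid = (1.0 - mid) * z_src + mid * z_adv
        if is_adversarial(z_mid, target_label, generate, query_top1):
            hi = mid   # still adversarial: move closer to the source latent
        else:
            lo = mid   # no longer adversarial: back off toward z_adv
    return (1.0 - hi) * z_src + hi * z_adv

def estimate_boundary_direction(z, target_label, generate, query_top1,
                                n_samples=50, sigma=0.1, rng=None):
    """Monte-Carlo direction estimate at a boundary point using only top-1
    decisions: random latent perturbations are averaged with a +1/-1 weight
    depending on whether they keep the adversarial (target) label."""
    rng = rng or np.random.default_rng(0)
    direction = np.zeros_like(z)
    for _ in range(n_samples):
        u = rng.standard_normal(z.shape)
        sign = 1.0 if is_adversarial(z + sigma * u, target_label,
                                     generate, query_top1) else -1.0
        direction += sign * u
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction

Because every query and update operates on latent vectors rather than pixels, the resulting examples stay on the generator's image manifold (hence "unrestricted"), and each step spends queries only on top-1 decisions, which is the setting the paper targets.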
Comments: Accepted to the ECCV 2022 Workshop on Adversarial Robustness in the Real World
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2208.11613 [cs.CV]
  (or arXiv:2208.11613v1 [cs.CV] for this version)

Submission history

From: Dongbin Na [view email]
[v1] Wed, 24 Aug 2022 15:28:46 GMT (1304kb,D)
