Computer Science > Machine Learning

Title: Do Counterfactual Examples Complicate Adversarial Training?

Abstract: We leverage diffusion models to study the robustness-performance tradeoff of robust classifiers. Our approach introduces a simple, pretrained diffusion method to generate low-norm counterfactual examples (CEs): semantically altered data that result in different true class membership. We find that the confidence and accuracy of robust models on their clean training data are associated with the proximity of that data to its CEs. Moreover, robust models perform very poorly when evaluated on the CEs directly, as they become increasingly invariant to the low-norm, semantic changes the CEs introduce. The results indicate a significant overlap between non-robust and semantic features, countering the common assumption that non-robust features are not interpretable.
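The paper's CEs come from a pretrained diffusion model; as a conceptual stand-in only, the idea of a low-norm counterfactual (the smallest semantic-direction perturbation that flips the true/predicted class) can be sketched with plain gradient search against a toy linear classifier. Everything below — the weights, the classifier, the penalty coefficient — is illustrative and not from the paper:

```python
import numpy as np

# Toy 2-class linear classifier; weights are illustrative, not from the paper.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))  # logits for 2 classes over 4-dim inputs
b = np.zeros(2)

def logits(x):
    return W @ x + b

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def counterfactual(x, target, steps=200, lr=0.1, lam=0.05):
    """Gradient search for a low-norm perturbation that flips the predicted
    class to `target`. The L2 penalty (weight `lam`) keeps the result close
    to the original input, mimicking the "low-norm" property of CEs.
    This is a conceptual sketch, not the paper's diffusion-based method."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = softmax(logits(x + delta))
        # gradient of cross-entropy wrt the input: W^T (p - onehot(target))
        grad_ce = W.T @ (p - np.eye(2)[target])
        delta -= lr * (grad_ce + lam * delta)
    return x + delta

x = np.array([1.0, -0.5, 0.3, 0.8])
orig = int(np.argmax(logits(x)))
ce = counterfactual(x, target=1 - orig)
assert int(np.argmax(logits(ce))) == 1 - orig  # the class has flipped
```

The norm penalty is what distinguishes a counterfactual search from an unconstrained attack: the optimizer settles where the classification pressure and the proximity penalty balance, yielding a nearby input of a different class.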
Comments: Accepted as a short paper to the GCV Workshop at CVPR'24
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2404.10588 [cs.LG]
  (or arXiv:2404.10588v2 [cs.LG] for this version)

Submission history

From: Eric Yeats [view email]
[v1] Tue, 16 Apr 2024 14:13:44 GMT (4555kb,D)
[v2] Wed, 17 Apr 2024 12:09:17 GMT (4555kb,D)
