Computer Science > Machine Learning
Title: Do Counterfactual Examples Complicate Adversarial Training?
(Submitted on 16 Apr 2024 (v1), last revised 17 Apr 2024 (this version, v2))
Abstract: We leverage diffusion models to study the robustness-performance tradeoff of robust classifiers. Our approach introduces a simple, pretrained diffusion method to generate low-norm counterfactual examples (CEs): semantically altered data that result in different true class membership. We report that the confidence and accuracy of robust models on their clean training data are associated with the proximity of the data to their CEs. Moreover, robust models perform very poorly when evaluated on the CEs directly, as they become increasingly invariant to the low-norm, semantic changes that CEs introduce. These results indicate a significant overlap between non-robust and semantic features, countering the common assumption that non-robust features are not interpretable.
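The abstract's central notion of a low-norm input change that alters true class membership can be illustrated with a toy sketch. The snippet below is not the paper's diffusion-based method; it only shows, for a plain linear classifier in NumPy, the closed-form minimum-L2 perturbation that flips the predicted label. All names (`min_norm_flip`, the example weights `w`, `b`, and point `x`) are hypothetical.

```python
import numpy as np

def min_norm_flip(x, w, b):
    """Closed-form minimum-L2 perturbation of x that crosses the
    decision boundary w.x + b = 0 of a linear classifier.
    (Toy illustration only; the paper uses a pretrained diffusion
    model to generate semantic counterfactual examples.)"""
    margin = w @ x + b
    delta = -(margin / (w @ w)) * w   # project x onto the hyperplane
    return x + 1.001 * delta          # tiny overshoot to cross it

# Hypothetical 2-D classifier and a positively classified point.
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])              # w @ x + b = 1.5 > 0

x_cf = min_norm_flip(x, w, b)
print(np.sign(w @ x + b), np.sign(w @ x_cf + b))  # label before vs. after
print(np.linalg.norm(x_cf - x))                   # small perturbation norm
```

The contrast the paper draws is that such low-norm changes can be semantically meaningful (true counterfactuals) rather than mere adversarial noise, which is why robust models that ignore them misclassify the CEs.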
Submission history
From: Eric Yeats
[v1] Tue, 16 Apr 2024 14:13:44 GMT (4555kb,D)
[v2] Wed, 17 Apr 2024 12:09:17 GMT (4555kb,D)