Computer Science > Computer Vision and Pattern Recognition
Title: MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models
(Submitted on 28 Mar 2024 (v1), last revised 2 Apr 2024 (this version, v3))
Abstract: Different from a unimodal model, whose input comes from a single modality, the input of a multi-modal model (called a multi-modal input) comes from multiple modalities such as images, 3D points, audio, and text. Similar to unimodal models, many existing studies show that multi-modal models are also vulnerable to adversarial perturbations, where an attacker adds small perturbations to all modalities of a multi-modal input such that the model makes incorrect predictions. Existing certified defenses are mostly designed for unimodal models and achieve sub-optimal certified robustness guarantees when extended to multi-modal models, as shown in our experimental results. In this work, we propose MMCert, the first certified defense against adversarial attacks on multi-modal models. We derive a lower bound on the performance of MMCert under arbitrary adversarial attacks with bounded perturbations to both modalities (e.g., in the context of autonomous driving, we bound the number of changed pixels in both the RGB image and the depth image). We evaluate MMCert on two benchmark datasets: one for multi-modal road segmentation and the other for multi-modal emotion recognition. Moreover, we compare MMCert with a state-of-the-art certified defense extended from unimodal models. Our experimental results show that MMCert outperforms this baseline.
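To make the threat model in the abstract concrete, below is a minimal sketch (not the paper's algorithm) of the per-modality perturbation bound it describes: the attacker may change at most a bounded number of pixels in each modality. The function names and the budget parameters `r_rgb` and `r_depth` are hypothetical, introduced here only for illustration.

```python
# Sketch of the bounded-perturbation threat model from the abstract:
# an attack is admissible only if the number of changed pixels in each
# modality (RGB image and depth image) stays within a per-modality budget.
# `r_rgb` and `r_depth` are hypothetical illustration parameters.
import numpy as np


def changed_pixels(clean: np.ndarray, adv: np.ndarray) -> int:
    """Count pixel positions that differ between clean and perturbed inputs."""
    diff = clean != adv
    # Collapse the channel axis so a pixel counts once even if several
    # of its channels changed (H x W x C arrays); 2-D arrays count elements.
    if diff.ndim == 3:
        diff = diff.any(axis=-1)
    return int(diff.sum())


def within_budget(clean_rgb, adv_rgb, clean_depth, adv_depth,
                  r_rgb: int, r_depth: int) -> bool:
    """Check whether a multi-modal perturbation respects both per-modality bounds."""
    return (changed_pixels(clean_rgb, adv_rgb) <= r_rgb
            and changed_pixels(clean_depth, adv_depth) <= r_depth)
```

Under this assumption, a certified lower bound of the kind the abstract claims would have to hold for every perturbed input passing `within_budget`, regardless of which pixels the attacker changes.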
Submission history
From: Yanting Wang
[v1] Thu, 28 Mar 2024 01:05:06 GMT (582kb,D)
[v2] Fri, 29 Mar 2024 02:31:10 GMT (578kb,D)
[v3] Tue, 2 Apr 2024 02:05:46 GMT (578kb,D)