Computer Science > Machine Learning
Title: Image Hijacking: Adversarial Images can Control Generative Models at Runtime
(Submitted on 1 Sep 2023 (v1, this version); latest version 22 Apr 2024 (v3))
Abstract: Are foundation models secure from malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control generative models at runtime. We introduce Behavior Matching, a general method for creating image hijacks, and we use it to explore three types of attacks. Specific string attacks generate arbitrary output of the adversary's choosing. Leak context attacks leak information from the context window into the output. Jailbreak attacks circumvent a model's safety training. We study these attacks against LLaVA-2, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all our attack types have above a 90% success rate. Moreover, our attacks are automated and require only small image perturbations. These findings raise serious concerns about the security of foundation models. If image hijacks are as difficult to defend against as adversarial examples in CIFAR-10, then it might be many years before a solution is found -- if it even exists.
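The abstract does not spell out the optimisation, but Behavior Matching amounts to tuning an input image so that a frozen VLM produces a chosen target behaviour. A minimal sketch of a specific-string attack along these lines, assuming a PyTorch VLM with a teacher-forced logits interface and a Hugging Face-style tokenizer (the `vlm` signature, hyperparameters, and helper names below are illustrative assumptions, not the authors' implementation):

```python
# Hypothetical sketch of a Behavior Matching-style "specific string" attack:
# optimise a small image perturbation so a frozen VLM emits a chosen target
# string. The `vlm` call signature and all hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def behavior_matching(vlm, tokenizer, image, prompt, target_text,
                      eps=8 / 255, lr=1e-2, steps=500):
    """Return an adversarial image within an L-inf ball of radius eps."""
    target_ids = tokenizer(target_text, return_tensors="pt").input_ids
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv = (image + delta).clamp(0, 1)
        # Assumed interface: logits over the target tokens, teacher-forced
        # on (adversarial image, text prompt, target continuation).
        logits = vlm(adv, prompt, target_ids)        # (1, T, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), target_ids.reshape(-1)
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project the perturbation back into the allowed L-inf budget.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (image + delta).clamp(0, 1).detach()
```

The L-inf projection keeps the perturbation visually small, matching the abstract's claim that the attacks "require only small image perturbations"; the same loop would target a leaked context or a jailbreak response simply by changing `target_text`.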
Submission history
From: Luke Bailey
[v1] Fri, 1 Sep 2023 03:53:40 GMT (1978kb,D)
[v2] Mon, 18 Sep 2023 17:59:23 GMT (1375kb,D)
[v3] Mon, 22 Apr 2024 20:18:47 GMT (777kb,D)