
Title: Image Hijacking: Adversarial Images can Control Generative Models at Runtime

Abstract: Are foundation models secure from malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control generative models at runtime. We introduce Behavior Matching, a general method for creating image hijacks, and we use it to explore three types of attacks. Specific string attacks generate arbitrary output of the adversary's choosing. Leak context attacks leak information from the context window into the output. Jailbreak attacks circumvent a model's safety training. We study these attacks against LLaVA-2, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all our attack types have above a 90% success rate. Moreover, our attacks are automated and require only small image perturbations. These findings raise serious concerns about the security of foundation models. If image hijacks are as difficult to defend against as adversarial examples in CIFAR-10, then it might be many years before a solution is found -- if it even exists.
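
The abstract does not spell out how Behavior Matching is implemented, but the "specific string" attack it describes is, in spirit, a gradient-based optimization of a bounded image perturbation so that the VLM's next-token predictions match a target sequence. The sketch below illustrates that idea only; the tiny stand-in model, the function and variable names (ToyVLM, image_hijack, eps, steps), and the Adam-plus-projection loop are illustrative assumptions, not the authors' code (their actual implementation is linked from the Comments field).

```python
# Minimal sketch of a "specific string" image hijack via behavior matching.
# Assumption: a tiny stand-in model replaces the real CLIP+LLaMA-2 VLM so the
# example stays self-contained and runnable; it is not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 100, 32

class ToyVLM(nn.Module):
    """Stand-in vision-language model: maps an image to per-token logits."""
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, DIM), nn.ReLU())
        self.lm_head = nn.Linear(DIM, VOCAB)

    def forward(self, image, seq_len):
        feat = self.vision(image)                       # (B, DIM)
        logits = self.lm_head(feat).unsqueeze(1)        # (B, 1, VOCAB)
        return logits.expand(-1, seq_len, -1)           # (B, T, VOCAB)

def image_hijack(model, image, target_ids, eps=8 / 255, steps=200, lr=1e-2):
    """Optimize an L-inf bounded perturbation so the model emits `target_ids`."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((image + delta).clamp(0, 1), seq_len=target_ids.shape[1])
        loss = F.cross_entropy(logits.reshape(-1, VOCAB), target_ids.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                           # project back into the eps-ball
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyVLM()
    clean = torch.rand(1, 3, 8, 8)                      # benign input image
    target = torch.randint(0, VOCAB, (1, 5))            # tokens the adversary wants emitted
    hijacked = image_hijack(model, clean, target)
    print("max perturbation:", (hijacked - clean).abs().max().item())
```

The "leak context" and "jailbreak" attacks mentioned in the abstract would reuse the same loop with a different target behavior in the loss; only the tokens being matched change.
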
Comments: Code is available at this https URL
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL); Cryptography and Security (cs.CR)
Cite as: arXiv:2309.00236 [cs.LG]
  (or arXiv:2309.00236v1 [cs.LG] for this version)

Submission history

From: Luke Bailey [view email]
[v1] Fri, 1 Sep 2023 03:53:40 GMT (1978kb,D)
[v2] Mon, 18 Sep 2023 17:59:23 GMT (1375kb,D)
[v3] Mon, 22 Apr 2024 20:18:47 GMT (777kb,D)
