Image Hijacks: Adversarial Images can Control Generative Models at Runtime

By Luke Bailey and 3 other authors

Abstract: Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. ‘the Eiffel Tower is now located in Rome’) using a generic, off-the-shelf dataset unrelated to our choice of prompt. We use Behaviour Matching to craft hijacks for four types of attack, forcing VLMs to generate outputs of the adversary’s choice, leak information from their context window, override their safety training, and believe false statements. We study these attacks against LLaVA, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types achieve a success rate of over 80%. Moreover, our attacks are automated and require only small image perturbations.
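The abstract describes training image hijacks by optimising an image so that the VLM's behaviour matches an adversary-chosen target. As a rough, hedged illustration of that idea (not the authors' implementation), the sketch below runs a projected-gradient optimisation of a bounded image perturbation against a hypothetical `vlm(pixel_values, input_ids)` interface and `tokenizer`; the model handle, the set of prompts, and all hyperparameters are assumptions made for illustration only.

```python
# Minimal sketch of the behaviour-matching idea from the abstract:
# optimise a small perturbation of the input image so that the VLM
# assigns high probability to a fixed target string across prompts.
# `vlm`, `tokenizer`, and the prompt set are assumed, not the paper's code.

import torch
import torch.nn.functional as F

def train_image_hijack(vlm, tokenizer, image, prompts, target_text,
                       epsilon=8 / 255, step_size=1 / 255, num_steps=500):
    """Projected-gradient optimisation of an image perturbation.

    Assumes `vlm(pixel_values, input_ids)` returns per-token logits of shape
    (batch, seq_len, vocab), that pixel values live in [0, 1], and that the
    target tokens occupy the final positions of the sequence (a teacher-forcing
    simplification).
    """
    target_ids = tokenizer(target_text, return_tensors="pt").input_ids
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(num_steps):
        total_loss = 0.0
        for prompt in prompts:
            prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
            input_ids = torch.cat([prompt_ids, target_ids], dim=1)

            logits = vlm((image + delta).clamp(0, 1), input_ids)
            # Cross-entropy on the target positions: the hijack succeeds when
            # the model reproduces the adversary's text regardless of prompt.
            tgt_len = target_ids.shape[1]
            pred = logits[:, -tgt_len - 1:-1, :]  # logits predicting target tokens
            loss = F.cross_entropy(pred.reshape(-1, pred.shape[-1]),
                                   target_ids.reshape(-1))
            total_loss = total_loss + loss

        total_loss.backward()
        with torch.no_grad():
            # Signed-gradient step, then project back into the L-infinity ball
            # so the perturbation stays small.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad = None

    return (image + delta).clamp(0, 1).detach()
```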

Submission history

From: Luke Bailey
[v1] Fri, 1 Sep 2023 03:53:40 UTC (1,978 KB)
[v2] Mon, 18 Sep 2023 17:59:23 UTC (1,375 KB)
[v3] Mon, 22 Apr 2024 20:18:47 UTC (777 KB)
[v4] Tue, 17 Sep 2024 19:56:09 UTC (858 KB)


