Jailbreak Large Vision-Language Models Through Multi-Modal Linkage

Authors: Yu Wang and 4 other authors

Abstract: With the significant advancement of Large Vision-Language Models (VLMs), concerns about their potential misuse and abuse have grown rapidly. Previous studies have highlighted VLMs' vulnerability to jailbreak attacks, where carefully crafted inputs can lead the model to produce content that violates ethical and legal standards. However, existing methods struggle against state-of-the-art VLMs like GPT-4o, due to the over-exposure of harmful content and the lack of stealthy malicious guidance. In this work, we propose a novel jailbreak attack framework: Multi-Modal Linkage (MML) Attack. Drawing inspiration from cryptography, MML utilizes an encryption-decryption process across text and image modalities to mitigate over-exposure of malicious information. To align the model's output with malicious intent covertly, MML employs a technique called "evil alignment", framing the attack within a video game production scenario. Comprehensive experiments demonstrate MML's effectiveness. Specifically, MML jailbreaks GPT-4o with attack success rates of 97.80% on SafeBench, 98.81% on MM-SafeBench, and 99.07% on HADES-Dataset. Our code is available at this https URL

Submission history

From: Yu Wang
[v1] Sat, 30 Nov 2024 13:21:15 UTC (2,424 KB)
[v2] Tue, 3 Dec 2024 07:13:51 UTC (2,424 KB)
[v3] Sat, 7 Dec 2024 08:21:57 UTC (2,424 KB)
[v4] Tue, 17 Dec 2024 06:09:08 UTC (2,423 KB)
