Unleashing the Unseen: Harnessing Benign Datasets for Jailbreaking Large Language Models

Every’s Master Plan


View a PDF of the paper titled Unleashing the Unseen: Harnessing Benign Datasets for Jailbreaking Large Language Models, by Wei Zhao and 3 other authors

View PDF
HTML (experimental)

Abstract:Despite significant ongoing efforts in safety alignment, large language models (LLMs) such as GPT-4 and LLaMA 3 remain vulnerable to jailbreak attacks that can induce harmful behaviors, including through the use of adversarial suffixes. Building on prior research, we hypothesize that these adversarial suffixes are not mere bugs but may represent features that can dominate the LLM’s behavior. To evaluate this hypothesis, we conduct several experiments. First, we demonstrate that benign features can be effectively made to function as adversarial suffixes, i.e., we develop a feature extraction method to extract sample-agnostic features from benign dataset in the form of suffixes and show that these suffixes may effectively compromise safety alignment. Second, we show that adversarial suffixes generated from jailbreak attacks may contain meaningful features, i.e., appending the same suffix to different prompts results in responses exhibiting specific characteristics. Third, we show that such benign-yet-safety-compromising features can be easily introduced through fine-tuning using only benign datasets. As a result, we are able to completely eliminate GPT’s safety alignment in a blackbox setting through finetuning with only benign data. Our code and data is available at url{this https URL}.

Submission history

From: Wei Zhao [view email]
[v1]
Tue, 1 Oct 2024 07:11:55 UTC (21,075 KB)
[v2]
Sat, 5 Oct 2024 17:14:09 UTC (21,074 KB)
[v3]
Thu, 19 Dec 2024 05:32:59 UTC (39,732 KB)



Source link
lol

By stp2y

Leave a Reply

Your email address will not be published. Required fields are marked *

No widgets found. Go to Widget page and add the widget in Offcanvas Sidebar Widget Area.