Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling



Yiran Zhao and 6 other authors

Abstract: The safety of Large Language Models (LLMs) has become a critical issue given their rapid progress. Greedy Coordinate Gradient (GCG) is effective at constructing adversarial prompts that break aligned LLMs, but its optimization is time-consuming. To reduce the time cost of GCG and enable more comprehensive studies of LLM safety, in this work we study a new algorithm called $\texttt{Probe Sampling}$. At the core of the algorithm is a mechanism that dynamically determines how similar a smaller draft model's predictions are to the target model's predictions for prompt candidates. When the draft model's predictions are similar to the target model's, we rely heavily on the draft model to filter out a large number of potential prompt candidates. Probe sampling achieves up to a $5.6\times$ speedup using Llama2-7b-chat and leads to an equal or improved attack success rate (ASR) on AdvBench. Furthermore, probe sampling can also accelerate other prompt optimization techniques and adversarial methods, yielding speedups of $1.8\times$ for AutoPrompt, $2.4\times$ for APE, and $2.4\times$ for AutoDAN.
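The abstract compresses the mechanism into two sentences; the sketch below unpacks the idea of filtering candidates with a cheap draft model and calibrating trust in it via a small probe set. This is a minimal illustration under stated assumptions, not the authors' implementation: the probe-set size, the use of Spearman rank correlation as the agreement score, the rule mapping agreement to the number of retained candidates, and all names (`probe_sampling_step`, `draft_loss`, `target_loss`, `probe_frac`, `max_keep_frac`) are introduced here for exposition only.

```python
import random
from scipy.stats import spearmanr


def probe_sampling_step(candidates, draft_loss, target_loss,
                        probe_frac=0.1, max_keep_frac=0.5):
    """One candidate-filtering step in the spirit of probe sampling.

    candidates:  list of candidate adversarial prompts (len >= 2)
    draft_loss:  fn(prompt) -> loss under the small draft model
    target_loss: fn(prompt) -> loss under the large target model
    probe_frac / max_keep_frac are illustrative hyperparameters,
    not values taken from the paper.
    """
    # 1. Score every candidate with the cheap draft model.
    d_losses = [draft_loss(c) for c in candidates]

    # 2. Score only a small random probe set with the expensive target model.
    probe_size = max(2, int(len(candidates) * probe_frac))
    probe_idx = random.sample(range(len(candidates)), probe_size)
    t_probe = [target_loss(candidates[i]) for i in probe_idx]
    d_probe = [d_losses[i] for i in probe_idx]

    # 3. Probe agreement: rank correlation between the two models on the
    #    probe set, mapped from [-1, 1] to [0, 1]. (The paper's actual
    #    agreement measure may differ; this is one plausible choice.)
    rho, _ = spearmanr(d_probe, t_probe)
    agreement = max(0.0, (rho + 1.0) / 2.0)

    # 4. The more the models agree, the more we trust the draft model and
    #    the fewer candidates we forward to the target model.
    keep = max(1, int(len(candidates) * max_keep_frac * (1.0 - agreement)) + 1)
    ranked = sorted(range(len(candidates)), key=lambda i: d_losses[i])
    filtered = [candidates[i] for i in ranked[:keep]]

    # 5. Final selection: evaluate only the filtered candidates with the
    #    target model and return the best one.
    return min(filtered, key=target_loss)
```

With full agreement the target model scores only a handful of survivors instead of the whole candidate batch, which is where the reported speedup over vanilla GCG would come from; with low agreement the step degrades gracefully toward evaluating most candidates on the target model.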

Submission history

From: Yiran Zhao
[v1]
Sat, 2 Mar 2024 16:23:44 UTC (3,827 KB)
[v2]
Mon, 27 May 2024 07:02:28 UTC (4,022 KB)
[v3]
Fri, 8 Nov 2024 06:07:51 UTC (4,048 KB)


