Who’s asking? User personas and the mechanics of latent misalignment

[Submitted on 17 Jun 2024]

Authors: Asma Ghandeharioun, Ann Yuan, Marius Guerard, Emily Reif, Michael A. Lepori, Lucas Dixon


Abstract: Despite investments in improving model safety, studies show that misaligned capabilities remain latent in safety-tuned models. In this work, we shed light on the mechanics of this phenomenon. First, we show that even when model generations are safe, harmful content can persist in hidden representations and can be extracted by decoding from earlier layers. Then, we show that whether the model divulges such content depends significantly on its perception of who it is talking to, which we refer to as user persona. In fact, we find manipulating user persona to be even more effective for eliciting harmful content than direct attempts to control model refusal. We study both natural language prompting and activation steering as control methods and show that activation steering is significantly more effective at bypassing safety filters. We investigate why certain personas break model safeguards and find that they enable the model to form more charitable interpretations of otherwise dangerous queries. Finally, we show we can predict a persona's effect on refusal given only the geometry of its steering vector.
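The abstract mentions two techniques: activation steering (adding a persona-derived vector to a model's intermediate activations) and predicting a persona's effect on refusal from the geometry of its steering vector alone. A minimal NumPy sketch of both ideas, with entirely hypothetical vectors and dimensions (this is not the paper's implementation, which operates on real transformer hidden states):

```python
import numpy as np

def steer(hidden, steering_vec, alpha=1.0):
    """Activation steering: shift a hidden-state activation by a scaled
    steering vector, nudging the representation toward a persona."""
    return hidden + alpha * steering_vec

def cosine(a, b):
    """Cosine similarity, the basic geometric quantity one can compute
    on a steering vector without running the model."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy illustration with random vectors (dimension and directions are
# made up; a real refusal direction would be estimated from model data).
rng = np.random.default_rng(0)
d = 16
refusal_dir = rng.normal(size=d)
refusal_dir /= np.linalg.norm(refusal_dir)

persona_vec = rng.normal(size=d)   # hypothetical persona steering vector
hidden = rng.normal(size=d)        # hypothetical layer activation

steered = steer(hidden, persona_vec, alpha=2.0)
# Geometry-only signal: how aligned the persona vector is with the
# refusal direction, computable before any generation.
similarity = cosine(persona_vec, refusal_dir)
```

The point of the sketch is that `similarity` depends only on the steering vector itself, which is the sense in which the paper predicts a persona's effect on refusal from geometry alone.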

Submission history

From: Asma Ghandeharioun [view email]
[v1]
Mon, 17 Jun 2024 21:15:12 UTC (4,434 KB)
