SCAR: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs



View a PDF of the paper titled SCAR: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs, by Ruben Härle and 5 other authors


Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like text, but their output may not be aligned with user intent and can even contain harmful content. This paper presents a novel approach to detect and steer concepts such as toxicity before generation. We introduce the Sparse Conditioned Autoencoder (SCAR), a single trained module that extends the otherwise untouched LLM. SCAR ensures full steerability, towards and away from concepts (e.g., toxic content), without compromising the quality of the model's text generation on standard evaluation benchmarks. We demonstrate the effective application of our approach through a variety of concepts, including toxicity, safety, and writing style alignment. As such, this work establishes a robust framework for controlling LLM generations, ensuring their ethical and safe deployment in real-world applications.
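The abstract describes a sparse autoencoder module attached to an otherwise frozen LLM, with a latent feature conditioned on a concept so the model can both detect the concept and steer generation towards or away from it. The following is a minimal, hypothetical sketch of that idea using toy numpy weights; the variable names, dimensions, and the choice of latent index 0 as the concept feature are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_latent = 16, 64  # toy sizes; the paper's dimensions are not given here

# Hypothetical SCAR-style module: encoder/decoder weights wrapped
# around a hidden activation of the (frozen) LLM.
W_enc = rng.normal(0, 0.1, (d_model, d_latent))
W_dec = rng.normal(0, 0.1, (d_latent, d_model))

def encode(h):
    # ReLU produces a sparse latent code; by assumption, latent 0 is the
    # feature conditioned on the target concept (e.g., toxicity).
    return np.maximum(h @ W_enc, 0.0)

def decode(z):
    return z @ W_dec

def steer(h, concept_value):
    """Detect-and-steer sketch: read the concept latent, clamp it to a
    chosen value, and add the resulting change back into the activation."""
    z = encode(h)
    detected = z[0]               # detection: read the concept feature
    z_steered = z.copy()
    z_steered[0] = concept_value  # steering: overwrite it (0.0 = suppress)
    h_new = h + decode(z_steered) - decode(z)
    return detected, h_new

h = rng.normal(size=d_model)      # stand-in for one hidden activation
score, h_steered = steer(h, 0.0)  # steer away from the concept
```

Because only the clamped latent changes, the edit to the activation is a rank-one update along that feature's decoder direction, which is one plausible reading of how a single trained module can steer without degrading the base model.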

Submission history

From: Ruben Härle [view email]
[v1]
Mon, 11 Nov 2024 16:51:39 UTC (861 KB)
[v2]
Thu, 5 Dec 2024 10:45:02 UTC (861 KB)


