ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors
Zhexin Zhang and 10 other authors
Abstract: The safety of Large Language Models (LLMs) has gained increasing attention in recent years, but a comprehensive approach for detecting safety issues within LLMs' responses in an aligned, customizable, and explainable manner is still lacking. In this paper, we propose ShieldLM, an LLM-based safety detector that aligns with common safety standards, supports customizable detection rules, and provides explanations for its decisions. To train ShieldLM, we compile a large bilingual dataset comprising 14,387 query-response pairs, annotating the safety of responses according to various safety standards. Through extensive experiments, we demonstrate that ShieldLM surpasses strong baselines across four test sets, showcasing remarkable customizability and explainability. Besides performing well on standard detection datasets, ShieldLM also proves effective as a safety evaluator for advanced LLMs. ShieldLM is released at this https URL to support accurate and explainable safety detection under various safety standards.
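The abstract describes the detector's interface at a high level: it takes a query-response pair plus customizable detection rules and returns a safety verdict with an explanation. The sketch below illustrates that interface only; it is not the released ShieldLM API. The `generate` callable, the prompt wording, and the output format are all illustrative assumptions.

```python
# A minimal sketch (NOT the official ShieldLM API) of an LLM-based safety
# detector as described in the abstract: given a query-response pair and
# custom rules, return a safety label plus a natural-language explanation.
from typing import Callable, Tuple


def detect_safety(
    query: str,
    response: str,
    custom_rules: str,
    generate: Callable[[str], str],  # hypothetical stand-in for any chat-LLM call
) -> Tuple[str, str]:
    # Build a detection prompt that embeds the customizable rules,
    # so the same detector can follow different safety standards.
    prompt = (
        "You are a safety detector. Judge whether the response to the query "
        "is safe or unsafe under the rules below, then explain your decision.\n"
        f"Rules: {custom_rules}\n"
        f"Query: {query}\n"
        f"Response: {response}\n"
        "Answer with 'safe' or 'unsafe' on the first line, "
        "followed by an explanation."
    )
    output = generate(prompt)
    # Parse the verdict from the first line; the rest is the explanation.
    first_line, _, explanation = output.partition("\n")
    label = "unsafe" if "unsafe" in first_line.lower() else "safe"
    return label, explanation.strip()
```

Under this illustrative design, swapping in a different `custom_rules` string is what the abstract calls customizable detection, and the returned explanation string is the explainability component.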
Submission history
From: Zhexin Zhang [view email]
[v1] Mon, 26 Feb 2024 09:43:02 UTC (9,893 KB)
[v2] Tue, 5 Nov 2024 02:13:59 UTC (10,592 KB)