PostMark: A Robust Blackbox Watermark for Large Language Models

By Yapei Chang and 4 other authors

Abstract: The most effective techniques to detect LLM-generated text rely on inserting a detectable signature — or watermark — during the model’s decoding process. Most existing watermarking methods require access to the underlying LLM’s logits, which LLM API providers are loath to share due to fears of model distillation. As such, these watermarks must be implemented independently by each LLM provider. In this paper, we develop PostMark, a modular post-hoc watermarking procedure in which an input-dependent set of words (determined via a semantic embedding) is inserted into the text after the decoding process has completed. Critically, PostMark does not require logit access, which means it can be implemented by a third party. We also show that PostMark is more robust to paraphrasing attacks than existing watermarking methods: our experiments cover eight baseline algorithms, five base LLMs, and three datasets. Finally, we evaluate the impact of PostMark on text quality using both automated and human assessments, highlighting the trade-off between quality and robustness to paraphrasing. We release our code, outputs, and annotations at this https URL.
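As a rough illustration of the core idea described in the abstract (and not the authors' actual implementation), the toy sketch below derives an input-dependent word set from a semantic embedding and then detects a watermark by recomputing that set from a candidate text. Everything specific here is an assumption made for illustration: embed() is a deterministic hash-seeded stand-in for a real embedding model, the cosine-similarity selection rule, the vocabulary, and the 0.6 detection threshold are all invented, and the insertion step itself (rewriting the text so the selected words appear fluently, e.g. via an LLM) is omitted.

import hashlib

import numpy as np


def embed(text: str) -> np.ndarray:
    # Stand-in for a real semantic embedding model: a deterministic,
    # hash-seeded pseudo-embedding used purely for illustration.
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)  # unit vector, so dot product = cosine


def select_watermark_words(text: str, vocab: list[str], k: int = 5) -> list[str]:
    # Input-dependent word set: the k vocabulary words whose embeddings
    # have the highest cosine similarity to the text's embedding.
    t = embed(text)
    return sorted(vocab, key=lambda w: float(embed(w) @ t), reverse=True)[:k]


def detect(text: str, vocab: list[str], k: int = 5, threshold: float = 0.6) -> bool:
    # Recompute the expected word set from the candidate text and flag it
    # as watermarked if enough of those words actually appear in it.
    expected = select_watermark_words(text, vocab, k)
    hits = sum(w in text.lower().split() for w in expected)
    return hits / k >= threshold

Note that detection in this sketch needs only the candidate text plus the shared embedding model and vocabulary, which is what would let a third party run the whole pipeline without access to the generating LLM's logits.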

Submission history

From: Yapei Chang
[v1] Thu, 20 Jun 2024 17:27:14 UTC (2,667 KB)
[v2] Fri, 11 Oct 2024 16:19:55 UTC (2,670 KB)


