Large Language Models as Carriers of Hidden Messages

AmazUtah_NLP at SemEval-2024 Task 9: A MultiChoice Question Answering System for Commonsense Defying Reasoning


View a PDF of the paper titled Large Language Models as Carriers of Hidden Messages, by Jakub Hoscilowicz and 4 other authors

View PDF
HTML (experimental)

Abstract:Simple fine-tuning can embed hidden text into large language models (LLMs), which is revealed only when triggered by a specific query. Applications include LLM fingerprinting, where a unique identifier is embedded to verify licensing compliance, and steganography, where the LLM carries hidden messages disclosed through a trigger query.

Our work demonstrates that embedding hidden text via fine-tuning, although seemingly secure due to the vast number of potential triggers, is vulnerable to extraction through analysis of the LLM’s output decoding process. We introduce an extraction attack called Unconditional Token Forcing (UTF), which iteratively feeds tokens from the LLM’s vocabulary to reveal sequences with high token probabilities, indicating hidden text candidates. We also present Unconditional Token Forcing Confusion (UTFC), a defense paradigm that makes hidden text resistant to all known extraction attacks without degrading the general performance of LLMs compared to standard fine-tuning. UTFC has both benign (improving LLM fingerprinting) and malign applications (using LLMs to create covert communication channels).

Submission history

From: Jakub Hościłowicz [view email]
[v1]
Tue, 4 Jun 2024 16:49:06 UTC (532 KB)
[v2]
Mon, 29 Jul 2024 16:30:17 UTC (534 KB)
[v3]
Sun, 25 Aug 2024 14:21:29 UTC (536 KB)
[v4]
Tue, 24 Sep 2024 12:00:29 UTC (557 KB)



Source link
lol

By stp2y

Leave a Reply

Your email address will not be published. Required fields are marked *

No widgets found. Go to Widget page and add the widget in Offcanvas Sidebar Widget Area.