AudioMarkBench: Benchmarking Robustness of Audio Watermarking, by Hongbin Liu and 4 other authors
Abstract: The increasing realism of synthetic speech, driven by advances in text-to-speech models, raises ethical concerns about impersonation and disinformation. Audio watermarking offers a promising solution by embedding human-imperceptible watermarks into AI-generated audio. However, the robustness of audio watermarking against common and adversarial perturbations remains understudied. We present AudioMarkBench, the first systematic benchmark for evaluating the robustness of audio watermarking against watermark removal and watermark forgery. AudioMarkBench includes a new dataset created from Common Voice spanning languages, biological sexes, and ages; 3 state-of-the-art watermarking methods; and 15 types of perturbations. We benchmark the robustness of these methods against the perturbations in no-box, black-box, and white-box settings. Our findings highlight the vulnerabilities of current watermarking techniques and emphasize the need for more robust and fair audio watermarking solutions. Our dataset and code are publicly available at this https URL.
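To make the evaluation setting concrete, below is a minimal sketch of a no-box watermark-removal check: a watermarked waveform is perturbed (here with Gaussian noise at a target SNR, one of the common perturbation types), and the fraction of clips in which the watermark is still detected is reported. The function names and the `detect` callback are illustrative placeholders, not the paper's actual API.

```python
import numpy as np

def add_gaussian_noise(audio: np.ndarray, snr_db: float) -> np.ndarray:
    """Perturb a waveform with white Gaussian noise at a target SNR (dB)."""
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

def removal_robustness(watermarked_clips, detect, snr_db: float = 20.0) -> float:
    """Fraction of perturbed clips in which the watermark is still detected.

    `detect` stands in for a watermarking method's detector: any function
    that takes a waveform and returns True if the watermark is found.
    """
    hits = sum(bool(detect(add_gaussian_noise(x, snr_db))) for x in watermarked_clips)
    return hits / len(watermarked_clips)
```

A black-box or white-box evaluation would differ only in how the perturbation is chosen: instead of a fixed noise level, an attacker adaptively queries or directly optimizes against the detector.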
Submission history
From: Hongbin Liu
[v1] Tue, 11 Jun 2024 06:18:29 UTC (633 KB)
[v2] Wed, 13 Nov 2024 08:41:50 UTC (874 KB)