Learning Disentangled Speech Representations

View a PDF of the paper titled Learning Disentangled Speech Representations, by Yusuf Brima and 2 other authors


Abstract: Disentangled representation learning in speech processing has lagged behind other domains, largely due to the lack of datasets with annotated generative factors for robust evaluation. To address this, we propose SynSpeech, a novel large-scale synthetic speech dataset specifically designed to enable research on disentangled speech representations. SynSpeech includes controlled variations in speaker identity, spoken text, and speaking style, with three dataset versions to support experimentation at different levels of complexity.

In this study, we present a comprehensive framework to evaluate disentangled representation learning techniques, applying both linear probing and established supervised disentanglement metrics to assess the modularity, compactness, and informativeness of the representations learned by a state-of-the-art model. Using the RAVE model as a test case, we find that SynSpeech facilitates benchmarking across a range of factors, achieving promising disentanglement of simpler features like gender and speaking style, while highlighting challenges in isolating complex attributes like speaker identity. This benchmark dataset and evaluation framework fills a critical gap, supporting the development of more robust and interpretable speech representation learning methods.
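The linear-probing evaluation mentioned above can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration, not the paper's actual setup: synthetic representations stand in for a trained model's latents, a single binary factor stands in for an attribute like gender, and a closed-form ridge-regression probe stands in for whatever probe the authors use. The idea is the same: if a simple linear classifier trained on the representations recovers the factor on held-out data, the factor is linearly decodable from those representations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 16

# Hypothetical data: a binary generative factor (e.g. "gender") that is
# linearly encoded in one dimension of a d-dimensional representation.
factor = rng.integers(0, 2, size=n)
z = rng.normal(size=(n, d))
z[:, 0] += 2.0 * factor  # the factor shifts the first latent dimension

def linear_probe_accuracy(z, y, l2=1e-3):
    """Fit a ridge-regression probe on half the data; report held-out accuracy."""
    half = len(y) // 2
    X_tr, X_te = z[:half], z[half:]
    y_tr, y_te = y[:half], y[half:]
    # Closed-form ridge regression on {-1, +1} targets.
    t = 2.0 * y_tr - 1.0
    w = np.linalg.solve(X_tr.T @ X_tr + l2 * np.eye(z.shape[1]), X_tr.T @ t)
    pred = (X_te @ w > 0).astype(int)
    return (pred == y_te).mean()

acc = linear_probe_accuracy(z, factor)
print(f"probe accuracy: {acc:.2f}")
```

Because the factor here is linearly embedded by construction, the probe scores well above chance; on a real model, the same accuracy gap (probe vs. chance) per factor is what distinguishes easily decodable attributes like gender from harder ones like speaker identity.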

Submission history

From: Yusuf Brima
[v1]
Sat, 4 Nov 2023 04:54:17 UTC (508 KB)
[v2]
Sat, 9 Nov 2024 06:59:47 UTC (1,988 KB)
[v3]
Thu, 9 Jan 2025 06:11:32 UTC (2,337 KB)


