Speech Retrieval-Augmented Generation without Automatic Speech Recognition, by Do June Min and 5 other authors
Abstract: A common approach to question answering over speech data is to first transcribe the speech with automatic speech recognition (ASR) and then apply text-based retrieval-augmented generation (RAG) to the transcriptions. While this cascaded pipeline has proven effective in many practical settings, ASR errors can propagate to the retrieval and generation steps. To overcome this limitation, we introduce SpeechRAG, a novel framework for open question answering over spoken data. Our approach fine-tunes a pre-trained speech encoder into a speech adapter that feeds a frozen large language model (LLM)-based retrieval model. By aligning the embedding spaces of text and speech, our speech retriever retrieves audio passages directly from text queries, leveraging the retrieval capacity of the frozen text retriever. Our retrieval experiments on spoken question answering datasets show that direct speech retrieval does not degrade relative to the text-based baseline and outperforms cascaded systems that use ASR. For generation, we use a speech language model (SLM) as the generator, conditioned on audio passages rather than transcripts. Without fine-tuning the SLM, this approach outperforms cascaded text-based models when the transcripts have a high word error rate (WER).
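The core retrieval idea in the abstract can be sketched as follows: a learned adapter projects pooled speech-encoder features into the embedding space of a frozen text retriever, so a text query can score audio passages directly by dot product. This is a minimal, hypothetical sketch; the dimensions, the mean-pooling step, the single linear projection standing in for the adapter, and the random vectors standing in for real encoders are all assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: raw speech-encoder features vs. the frozen
# text retriever's embedding space (values chosen for illustration).
SPEECH_DIM, TEXT_DIM = 512, 256

# The speech adapter is sketched here as a single learned projection
# that maps pooled speech features into the text embedding space.
W_adapter = rng.normal(size=(SPEECH_DIM, TEXT_DIM)) / np.sqrt(SPEECH_DIM)

def embed_speech_passage(speech_feats: np.ndarray) -> np.ndarray:
    """Mean-pool frame-level features, project with the adapter, L2-normalize."""
    pooled = speech_feats.mean(axis=0)          # (SPEECH_DIM,)
    v = pooled @ W_adapter                      # into text space
    return v / np.linalg.norm(v)

def retrieve(query_emb: np.ndarray, passage_embs: np.ndarray, k: int = 2):
    """Dot-product retrieval (cosine, since all vectors are unit-norm)."""
    scores = passage_embs @ query_emb
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Toy corpus: 5 audio passages, each 100 frames of speech features.
passages = np.stack([
    embed_speech_passage(rng.normal(size=(100, SPEECH_DIM)))
    for _ in range(5)
])

# A text query as embedded by the frozen text retriever (stand-in vector).
q = rng.normal(size=TEXT_DIM)
q /= np.linalg.norm(q)

idx, scores = retrieve(q, passages)
print("top passages:", idx, "scores:", scores)
```

Because the adapter places speech passages in the same space the frozen retriever already uses for text, the retriever itself needs no retraining; only the adapter is learned.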
Submission history
From: Do June Min [view email]
[v1] Sat, 21 Dec 2024 06:16:04 UTC (1,873 KB)
[v2] Thu, 2 Jan 2025 07:29:01 UTC (1,873 KB)
[v3] Fri, 3 Jan 2025 07:18:30 UTC (1,873 KB)