Harnessing the Zero-Shot Power of Instruction-Tuned Large Language Model in End-to-End Speech Recognition
by Yosuke Higuchi and 2 other authors
Abstract: We propose to utilize an instruction-tuned large language model (LLM) for guiding the text generation process in automatic speech recognition (ASR). Modern LLMs are adept at performing various text generation tasks through zero-shot learning, prompted with instructions designed for specific objectives. This paper explores the potential of LLMs to derive linguistic information that can facilitate text generation in end-to-end ASR models. Specifically, we instruct an LLM to correct grammatical errors in an ASR hypothesis and use the LLM-derived representations to further refine the output. The proposed model is built on the joint CTC and attention architecture, with the LLM serving as a front-end feature extractor for the decoder. The ASR hypothesis, subject to correction, is obtained from the encoder via CTC decoding and fed into the LLM along with a specific instruction. The decoder then takes the LLM output as input to perform token predictions, combining acoustic information from the encoder with the powerful linguistic information provided by the LLM. Experimental results show that the proposed LLM-guided model achieves a relative gain of approximately 13% in word error rate across major benchmarks.
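To make the described pipeline concrete, below is a minimal PyTorch-style sketch of the inference flow: encode speech, produce a first-pass hypothesis via greedy CTC decoding, prompt the LLM to correct it, and feed the LLM's hidden states to the attention decoder alongside the acoustic features. All class and variable names (`LLMGuidedASR`, `ctc_head`, the prompt wording) are illustrative assumptions, not the authors' implementation, and the sketch assumes a Hugging-Face-style LLM/tokenizer whose vocabulary is shared with the CTC head.

```python
# Hypothetical sketch of the LLM-guided ASR pipeline described in the abstract.
# Module names and the prompt are assumptions, not the paper's actual code.
import torch
import torch.nn as nn

class LLMGuidedASR(nn.Module):
    def __init__(self, encoder, ctc_head, llm, llm_tokenizer, decoder):
        super().__init__()
        self.encoder = encoder      # acoustic encoder (e.g., a Conformer)
        self.ctc_head = ctc_head    # linear projection to the vocabulary for CTC
        self.llm = llm              # frozen instruction-tuned LLM
        self.tokenizer = llm_tokenizer
        self.decoder = decoder      # attention decoder over encoder + LLM features

    @torch.no_grad()
    def _ctc_greedy_decode(self, enc_out, blank_id=0):
        # Greedy CTC decoding: per-frame argmax, collapse repeats, drop blanks.
        ids = self.ctc_head(enc_out).argmax(dim=-1)  # (T,)
        hyp, prev = [], blank_id
        for t in ids.tolist():
            if t != blank_id and t != prev:
                hyp.append(t)
            prev = t
        return hyp

    def forward(self, speech):
        enc_out = self.encoder(speech)                 # (T, D) acoustic features
        hyp_ids = self._ctc_greedy_decode(enc_out)     # first-pass ASR hypothesis
        # Assumes the CTC vocabulary matches the LLM tokenizer for simplicity.
        hyp_text = self.tokenizer.decode(hyp_ids)

        # Prompt the LLM to correct the hypothesis; its hidden states, rather
        # than its generated text, serve as features for the decoder.
        prompt = f"Correct grammatical errors in this transcript: {hyp_text}"
        llm_in = self.tokenizer(prompt, return_tensors="pt")
        llm_out = self.llm(**llm_in, output_hidden_states=True)
        llm_states = llm_out.hidden_states[-1].squeeze(0)  # (L, H)

        # The decoder attends jointly over acoustic and LLM-derived features.
        return self.decoder(enc_out, llm_states)
```

Note the design choice this sketch reflects: the LLM acts as a feature extractor, so its linguistic knowledge enters the decoder as continuous representations that are fused with acoustic evidence, rather than as hard text edits that could override what was actually spoken.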
Submission history
From: Yosuke Higuchi
[v1] Tue, 19 Sep 2023 11:10:50 UTC (152 KB)
[v2] Mon, 30 Sep 2024 06:22:12 UTC (289 KB)
[v3] Tue, 7 Jan 2025 05:15:54 UTC (289 KB)