Spelling Correction through Rewriting of Non-Autoregressive ASR Lattices

[Submitted on 24 Sep 2024]

By Leonid Velikovich and 6 other authors

Abstract: For end-to-end Automatic Speech Recognition (ASR) models, recognizing personal or rare phrases can be hard. A promising way to improve accuracy is through spelling correction (or rewriting) of the ASR lattice, where potentially misrecognized phrases are replaced with acoustically similar and contextually relevant alternatives. However, rewriting is challenging for ASR models trained with connectionist temporal classification (CTC) due to noisy hypotheses produced by a non-autoregressive, context-independent beam search.

We present a finite-state transducer (FST) technique for rewriting wordpiece lattices generated by Transformer-based CTC models. Our algorithm performs grapheme-to-phoneme (G2P) conversion directly from wordpieces into phonemes, avoiding explicit word representations and exploiting the richness of the CTC lattice. Our approach requires no retraining or modification of the ASR model. We achieved up to a 15.2% relative reduction in sentence error rate (SER) on a test set with contextually relevant entities.
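The core idea, replacing a misrecognized span with a contextual entity whose phoneme sequence is acoustically close, can be illustrated with a deliberately simplified sketch. This is not the paper's FST algorithm: it operates on a single flat hypothesis rather than a wordpiece lattice, and the tiny G2P table, entity list, and similarity threshold are all illustrative assumptions.

```python
# Toy sketch of phoneme-based spelling correction (NOT the paper's FST method).
# The G2P table and similarity threshold below are illustrative assumptions;
# real systems use a learned G2P model and operate on the full CTC lattice.
from difflib import SequenceMatcher

# Tiny hand-written grapheme-to-phoneme table (assumption for illustration).
G2P = {
    "kaitlyn": "K EY T L IH N",
    "caitlin": "K EY T L IH N",
}

def phonemes(word):
    """Look up a toy phoneme sequence; fall back to the spelling itself."""
    return G2P.get(word.lower(), word.lower())

def rewrite(hypothesis, entities, threshold=0.85):
    """Replace each word whose phoneme string closely matches a contextual entity."""
    out = []
    for word in hypothesis.split():
        best, best_score = word, 0.0
        for ent in entities:
            score = SequenceMatcher(None, phonemes(word), phonemes(ent)).ratio()
            if score > best_score:
                best, best_score = ent, score
        out.append(best if best_score >= threshold else word)
    return " ".join(out)

print(rewrite("call caitlin now", ["Kaitlyn"]))  # → call Kaitlyn now
```

In the paper's setting the same matching is done by composing the wordpiece lattice with a wordpiece-to-phoneme FST and a contextual phrase transducer, so every hypothesis in the lattice is considered at once rather than just the top one.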

Submission history

From: Leonid Velikovich
[v1]
Tue, 24 Sep 2024 21:42:25 UTC (1,875 KB)


