Beyond Silent Letters: Amplifying LLMs in Emotion Recognition with Vocal Nuances
Zehui Wu and 4 other authors
Abstract: Emotion recognition in speech is a challenging multimodal task that requires understanding both verbal content and vocal nuances. This paper introduces a novel approach to emotion detection using Large Language Models (LLMs), which have demonstrated exceptional capabilities in natural language understanding. To overcome the inherent limitation of LLMs in processing audio inputs, we propose SpeechCueLLM, a method that translates speech characteristics into natural language descriptions, allowing LLMs to perform multimodal emotion analysis via text prompts without any architectural changes. Our method is minimal yet impactful, outperforming baseline models that require structural modifications. We evaluate SpeechCueLLM on two datasets: IEMOCAP and MELD, showing significant improvements in emotion recognition accuracy, particularly for high-quality audio data. We also explore the effectiveness of various feature representations and fine-tuning strategies for different LLMs. Our experiments demonstrate that incorporating speech descriptions yields a more than 2% increase in the average weighted F1 score on IEMOCAP (from 70.111% to 72.596%).
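The core idea is to verbalize acoustic cues so a text-only LLM can use them alongside the transcript. Below is a minimal sketch of that pipeline, not the authors' implementation: the feature set (pitch and loudness via librosa), the thresholds, and the prompt wording are all illustrative assumptions rather than the paper's exact recipe.

```python
# Minimal sketch (not the authors' code): turn basic acoustic statistics into a
# natural-language description that can be prepended to an LLM prompt.
# Feature choices, thresholds, and wording are illustrative assumptions.
import numpy as np
import librosa


def describe_speech_cues(wav_path: str) -> str:
    y, sr = librosa.load(wav_path, sr=None)

    # Fundamental frequency (pitch) via probabilistic YIN; NaN for unvoiced frames.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    pitch_mean = float(np.nanmean(f0)) if np.any(~np.isnan(f0)) else 0.0

    # Loudness proxy: frame-level root-mean-square energy.
    rms = librosa.feature.rms(y=y)[0]
    loudness = float(np.mean(rms))

    # Very rough categorical mapping; a real system would calibrate per corpus.
    pitch_level = "high" if pitch_mean > 200 else "moderate" if pitch_mean > 120 else "low"
    volume_level = "loud" if loudness > 0.1 else "moderate" if loudness > 0.03 else "quiet"

    return f"The speaker talks with {pitch_level} pitch and a {volume_level} volume."


def build_prompt(transcript: str, wav_path: str) -> str:
    # Combine the verbal content with the speech-cue description in one text prompt,
    # so an unmodified text-only LLM can reason over both modalities.
    cues = describe_speech_cues(wav_path)
    return (
        f'Utterance: "{transcript}"\n'
        f"Speech cues: {cues}\n"
        f"Classify the speaker's emotion (e.g., angry, happy, sad, neutral)."
    )
```

The resulting prompt string can be sent to any instruction-tuned LLM without modifying its architecture, which is what makes this kind of approach attractive relative to baselines that add audio encoders or fusion layers.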
Submission history
From: Ziwei Gong
[v1] Wed, 31 Jul 2024 03:53:14 UTC (9,752 KB)
[v2] Thu, 1 Aug 2024 01:17:34 UTC (9,752 KB)
[v3] Wed, 16 Oct 2024 00:26:45 UTC (9,237 KB)
[v4] Mon, 23 Dec 2024 12:35:12 UTC (9,240 KB)