GCM-Net: Graph-enhanced Cross-Modal Infusion with a Metaheuristic-Driven Network for Video Sentiment and Emotion Analysis

[Submitted on 2 Oct 2024]

Authors: Prasad Chaudhari and 4 other authors

Abstract: Sentiment analysis and emotion recognition in videos are challenging tasks, given the diversity and complexity of the information conveyed across different modalities. Developing a highly competent framework that effectively addresses the distinct characteristics of each modality is a primary concern in this domain. Previous studies on joint multimodal sentiment and emotion analysis often overlooked effective fusion for modality integration, intermodal contextual congruity, and optimization of the concatenated feature space, leading to suboptimal architectures. This paper presents a novel framework that leverages multimodal contextual information from utterances and applies metaheuristic algorithms to learn the contributing features for utterance-level sentiment and emotion prediction. Our Graph-enhanced Cross-Modal Infusion with a Metaheuristic-Driven Network (GCM-Net) integrates graph sampling and aggregation to recalibrate the modality features for video sentiment and emotion prediction. GCM-Net includes a cross-modal attention module that determines intermodal interactions and utterance relevance. A harmonic optimization module employing a metaheuristic algorithm combines the attended features, allowing both single- and multi-utterance inputs to be handled. To show the effectiveness of our approach, we conducted extensive evaluations on three prominent multimodal benchmark datasets: CMU MOSI, CMU MOSEI, and IEMOCAP. The experimental results demonstrate the efficacy of the proposed approach, with accuracies of 91.56% and 86.95% for sentiment analysis on the MOSI and MOSEI datasets, respectively. For emotion analysis on the IEMOCAP dataset, our method achieves an accuracy of 85.66%, a substantial improvement over existing methods.
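For readers who want a concrete picture of the two fusion ideas named in the abstract (cross-modal attention over utterance features, followed by a metaheuristic search for fusion weights), here is a minimal PyTorch sketch. It is an illustration only: the module names, dimensions, two-modality setup, toy scoring function, and the random-search stand-in for the paper's harmonic optimization module are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch, NOT the GCM-Net implementation:
# (1) cross-modal attention between two modality feature streams, and
# (2) a simple random-search metaheuristic that picks fusion weights
#     for the attended features before prediction.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Attend one modality's utterance features over another's."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, query_mod, context_mod):
        # query_mod, context_mod: (batch, utterances, dim)
        attended, _ = self.attn(query_mod, context_mod, context_mod)
        return attended

def metaheuristic_weights(score_fn, n_modalities: int, iters: int = 50):
    """Random-search stand-in for a metaheuristic optimizer:
    sample candidate fusion weights, keep the best-scoring ones."""
    best_w, best_score = None, float("-inf")
    for _ in range(iters):
        w = torch.rand(n_modalities)
        w = w / w.sum()          # convex combination of modalities
        score = score_fn(w)
        if score > best_score:
            best_w, best_score = w, score
    return best_w

if __name__ == "__main__":
    batch, utt, dim = 2, 8, 64
    text = torch.randn(batch, utt, dim)    # stand-in text features
    audio = torch.randn(batch, utt, dim)   # stand-in audio features
    xattn = CrossModalAttention(dim)
    t2a = xattn(text, audio)               # text attends over audio
    a2t = xattn(audio, text)               # audio attends over text
    # Toy score; in practice a validation metric would be optimized.
    score_fn = lambda w: -(w[0] * t2a + w[1] * a2t).var().item()
    w = metaheuristic_weights(score_fn, n_modalities=2)
    fused = w[0] * t2a + w[1] * a2t        # weighted fused representation
    print(fused.shape, w)
```

In a full pipeline, the fused representation would feed a classifier head for utterance-level sentiment or emotion labels, and the score function would evaluate held-out accuracy rather than the toy statistic used here.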

Submission history

From: Mohammad Zia Ur Rehman
[v1]
Wed, 2 Oct 2024 10:07:48 UTC (10,966 KB)


