SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering



View a PDF of the paper titled SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering, by Tianyu Yang and 5 other authors


Abstract: Audio-Visual Question Answering (AVQA) is a challenging task that involves answering questions based on both auditory and visual information in videos. A significant challenge is interpreting complex multi-modal scenes, which include both visual objects and sound sources, and connecting them to the given question. In this paper, we introduce the Source-aware Semantic Representation Network (SaSR-Net), a novel model designed for AVQA. SaSR-Net utilizes source-wise learnable tokens to efficiently capture and align audio-visual elements with the corresponding question. It streamlines the fusion of audio and visual information using spatial and temporal attention mechanisms to identify answers in multi-modal scenes. Extensive experiments on the Music-AVQA and AVQA-Yang datasets show that SaSR-Net outperforms state-of-the-art AVQA methods.
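The core idea named in the abstract, source-wise learnable tokens that gather evidence from audio and visual features via attention, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the dimensions, the additive fusion, and all variable names are hypothetical stand-ins for the paper's learned modules:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    # scaled dot-product attention: each query token pools the values
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
d = 16          # feature dimension (hypothetical)
n_sources = 4   # number of source-wise tokens (hypothetical)
T = 10          # number of video frames (hypothetical)

# In SaSR-Net these tokens would be learned; here they are random placeholders.
source_tokens = rng.normal(size=(n_sources, d))
audio_feats = rng.normal(size=(T, d))    # per-frame audio features
visual_feats = rng.normal(size=(T, d))   # per-frame visual features

# Each source token attends over the temporal audio and visual streams,
# collecting the evidence associated with that sound source.
audio_ctx = attend(source_tokens, audio_feats, audio_feats)
visual_ctx = attend(source_tokens, visual_feats, visual_feats)

# Simple additive fusion as a stand-in for the paper's spatial-temporal
# attention fusion; the result is one fused representation per source.
fused = audio_ctx + visual_ctx
print(fused.shape)  # (4, 16)
```

In the actual model, such per-source representations would then be matched against the question embedding to locate the answer; the sketch only shows the token-to-modality attention pattern.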

Submission history

From: Tianyu Yang
[v1] Thu, 7 Nov 2024 18:12:49 UTC (6,406 KB)
[v2] Fri, 8 Nov 2024 04:56:53 UTC (6,406 KB)


