MSCoTDet: Language-driven Multi-modal Fusion for Improved Multispectral Pedestrian Detection

Authors: Taeheon Kim and 5 other authors


Abstract: Multispectral pedestrian detection is attractive for around-the-clock applications due to the complementary information between RGB and thermal modalities. However, current models often fail to detect pedestrians in certain cases (e.g., thermal-obscured pedestrians), particularly due to the modality bias learned from statistically biased datasets. In this paper, we investigate how to mitigate modality bias in multispectral pedestrian detection using Large Language Models (LLMs). Accordingly, we design a Multispectral Chain-of-Thought (MSCoT) prompting strategy, which prompts the LLM to perform multispectral pedestrian detection. Moreover, we propose a novel Multispectral Chain-of-Thought Detection (MSCoTDet) framework that integrates MSCoT prompting into multispectral pedestrian detection. To this end, we design a Language-driven Multi-modal Fusion (LMF) strategy that fuses the outputs of MSCoT prompting with the detection results of vision-based multispectral pedestrian detection models. Extensive experiments validate that MSCoTDet effectively mitigates modality biases and improves multispectral pedestrian detection.
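The abstract only names the components, so the following is a minimal, hypothetical sketch of what language-driven fusion could look like: a chain-of-thought style prompt over RGB and thermal scene descriptions, a score parsed from the LLM's answer, and a simple weighted average with the vision detector's confidence. The function names, prompt wording, score parsing, and the averaging rule are all assumptions for illustration, not the paper's actual MSCoT or LMF design.

```python
# Hypothetical sketch of language-driven multi-modal fusion (not the authors' code).
# Assumes per-modality captions are already available, `llm` is any callable that
# answers a text prompt with text, and the vision detector yields a confidence score.

from typing import Callable


def mscot_prompt(rgb_caption: str, thermal_caption: str) -> str:
    """Build a chain-of-thought style prompt over both modality descriptions."""
    return (
        "You are analyzing a scene captured by an RGB and a thermal camera.\n"
        f"RGB view: {rgb_caption}\n"
        f"Thermal view: {thermal_caption}\n"
        "Step 1: Describe evidence for a pedestrian in the RGB view.\n"
        "Step 2: Describe evidence for a pedestrian in the thermal view.\n"
        "Step 3: Combine both steps and give a confidence in [0, 1] that a "
        "pedestrian is present. End with 'Score: <value>'."
    )


def parse_score(llm_answer: str) -> float:
    """Extract the trailing 'Score: x' value; fall back to 0.5 if it is missing."""
    for line in reversed(llm_answer.strip().splitlines()):
        if line.lower().startswith("score:"):
            try:
                return max(0.0, min(1.0, float(line.split(":", 1)[1])))
            except ValueError:
                break
    return 0.5


def language_driven_fusion(
    vision_score: float,
    rgb_caption: str,
    thermal_caption: str,
    llm: Callable[[str], str],
    alpha: float = 0.5,  # hypothetical mixing weight, not taken from the paper
) -> float:
    """Fuse the vision detector's confidence with the LLM-derived confidence."""
    llm_score = parse_score(llm(mscot_prompt(rgb_caption, thermal_caption)))
    return alpha * vision_score + (1.0 - alpha) * llm_score


if __name__ == "__main__":
    # Dummy LLM stand-in so the sketch runs without external dependencies.
    dummy_llm = lambda prompt: "Step 1: ...\nStep 2: ...\nStep 3: ...\nScore: 0.8"
    fused = language_driven_fusion(
        vision_score=0.3,
        rgb_caption="dim figure near a crosswalk",
        thermal_caption="bright human-shaped heat signature",
        llm=dummy_llm,
    )
    print(f"fused pedestrian confidence: {fused:.2f}")
```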

Submission history

From: Taeheon Kim
[v1] Fri, 22 Mar 2024 13:50:27 UTC (4,765 KB)
[v2] Wed, 29 May 2024 12:53:17 UTC (5,129 KB)
[v3] Wed, 8 Jan 2025 09:29:10 UTC (7,470 KB)


