SensorQA: A Question Answering Benchmark for Daily-Life Monitoring


[Submitted on 9 Jan 2025]

Authors: Benjamin Reichman and 7 other authors

Abstract: With the rapid growth in sensor data, effectively interpreting these data and interfacing with them in a human-understandable way has become crucial. While existing research primarily focuses on learning classification models, few studies have explored how end users can actively extract useful insights from sensor data, an effort often hindered by the lack of a proper dataset. To address this gap, we introduce SensorQA, the first human-created question-answering (QA) dataset for long-term time-series sensor data for daily-life monitoring. SensorQA is created by human workers and includes 5.6K diverse and practical queries that reflect genuine human interests, paired with accurate answers derived from sensor data. We further establish benchmarks for state-of-the-art AI models on this dataset and evaluate their performance on typical edge devices. Our results reveal a gap between current models and optimal QA performance and efficiency, highlighting the need for new contributions. The dataset and code are available at this https URL.

Submission history

From: Benjamin Reichman
[v1]
Thu, 9 Jan 2025 05:06:44 UTC (12,269 KB)

