Evaluating the Performance of Large Language Models in Scientific Claim Detection and Classification

arXiv:2412.16486v1
Abstract: The pervasive influence of social media during the COVID-19 pandemic has been a double-edged sword, enhancing communication while simultaneously propagating misinformation. This "digital infodemic" has highlighted the urgent need for automated tools capable of discerning and disseminating factual content. This study evaluates the efficacy of Large Language Models (LLMs) as innovative solutions for mitigating misinformation on platforms like Twitter. LLMs, such as OpenAI's GPT and Meta's LLaMA, offer a pre-trained, adaptable approach that bypasses the extensive training and overfitting issues associated with traditional machine learning models. We assess the performance of LLMs in detecting and classifying COVID-19-related scientific claims, thus facilitating informed decision-making. Our findings indicate that LLMs have significant potential as automated fact-checking tools, though research in this domain is nascent and further exploration is required. We present a comparative analysis of LLMs' performance using a specialized dataset and propose a framework for their application in public health communication.
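For readers curious how such an evaluation might look in practice, below is a minimal sketch of zero-shot claim classification with a hosted LLM. It is not the authors' pipeline: the model name (gpt-4o-mini), the label set, and the prompt wording are all illustrative assumptions, shown only to make the task concrete.

# Minimal sketch (assumed setup, not the paper's exact method):
# zero-shot classification of tweet-length COVID-19 claims via the
# OpenAI chat completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical label set; the paper's dataset may use different categories.
LABELS = ["scientific claim", "non-scientific claim", "not a claim"]

def classify_claim(tweet_text: str) -> str:
    """Ask the model to assign exactly one label to a tweet-length post."""
    prompt = (
        "Classify the following COVID-19-related tweet into exactly one of "
        f"these categories: {', '.join(LABELS)}.\n\n"
        f"Tweet: {tweet_text}\n\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study compares several LLMs
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # deterministic output for repeatable evaluation
    )
    return response.choices[0].message.content.strip()

print(classify_claim("Vitamin C cures COVID-19 within 24 hours."))

A comparative evaluation like the one described would run such a loop over a labeled claim dataset for each model under test and report standard classification metrics (e.g., accuracy and F1) per model.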


