LLMs and Finetuning: Benchmarking cross-domain performance for hate speech detection


Abstract: In the evolving landscape of online communication, hate speech detection remains a formidable challenge, further compounded by the diversity of digital platforms. This study investigates the effectiveness and adaptability of pre-trained and fine-tuned Large Language Models (LLMs) in identifying hate speech, addressing three central questions: (1) To what extent does model performance depend on fine-tuning and training parameters? (2) To what extent do models generalize to cross-domain hate speech detection? (3) What specific features of the datasets or models influence generalization potential? Our experiments show that LLMs offer a large advantage over the prior state of the art even without pretraining. Ordinary least squares analyses suggest that the advantage of training with fine-grained hate speech labels is washed away as dataset size increases. We conclude with a vision for the future of hate speech detection, emphasizing cross-domain generalizability and appropriate benchmarking practices.
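To make the dataset-size finding concrete, the sketch below illustrates the kind of ordinary least squares analysis the abstract describes: regressing cross-domain performance on training-set size and label granularity, with an interaction term capturing whether the benefit of fine-grained labels shrinks as datasets grow. The column names and example values are illustrative placeholders, not the authors' data or code.

```python
# Minimal sketch of an OLS analysis relating cross-domain performance to
# training-set size and label granularity. Data and column names are
# hypothetical, used only to show the structure of such a regression.
import pandas as pd
import statsmodels.formula.api as smf

results = pd.DataFrame({
    "macro_f1":     [0.62, 0.71, 0.68, 0.75, 0.66, 0.73],   # cross-domain score
    "train_size":   [5_000, 20_000, 10_000, 40_000, 8_000, 30_000],
    "fine_grained": [1, 0, 1, 0, 1, 0],  # 1 = trained with fine-grained labels
})

# The interaction term train_size:fine_grained tests whether the advantage of
# fine-grained labels diminishes as the training set grows.
model = smf.ols("macro_f1 ~ train_size * fine_grained", data=results).fit()
print(model.summary())
```

A negative coefficient on the interaction term would be consistent with the paper's observation that larger datasets wash out the benefit of fine-grained labeling.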

Submission history

From: Kokil Jaidka
[v1]
Sun, 29 Oct 2023 10:07:32 UTC (521 KB)
[v2]
Sat, 30 Mar 2024 15:01:08 UTC (499 KB)
[v3]
Sat, 30 Nov 2024 02:56:48 UTC (668 KB)


