Investigating Annotator Bias in Large Language Models for Hate Speech Detection, by Amit Das and 9 other authors
Abstract: Data annotation, the practice of assigning descriptive labels to raw data, is pivotal in optimizing the performance of machine learning models. However, it is a resource-intensive process susceptible to biases introduced by annotators. The emergence of sophisticated Large Language Models (LLMs) like ChatGPT presents a unique opportunity to modernize and streamline this complex procedure. While existing research extensively evaluates the efficacy of LLMs as annotators, this paper examines the biases present in LLMs, specifically GPT-3.5 and GPT-4o, when annotating hate speech data. Our research contributes to understanding biases in four key categories: gender, race, religion, and disability. Specifically targeting highly vulnerable groups within these categories, we analyze annotator biases. Furthermore, we conduct a comprehensive examination of potential factors contributing to these biases by scrutinizing the annotated data. We introduce our custom hate speech detection dataset, HateSpeechCorpus, to conduct this research. Additionally, we perform the same experiments on the ETHOS dataset (Mollas et al., 2022) for comparative analysis. This paper serves as a crucial resource, guiding researchers and practitioners in harnessing the potential of LLMs for data annotation, thereby fostering advancements in this critical field. The HateSpeechCorpus dataset is available here: this https URL
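The abstract does not specify the authors' prompting protocol, so the following is only a minimal sketch of how one might elicit hate-speech labels from GPT-3.5 or GPT-4o through the OpenAI chat API. The prompt wording, label set, and temperature setting here are illustrative assumptions, not the paper's actual annotation setup.

```python
# Minimal sketch of LLM-as-annotator labeling (assumed setup, not the
# authors' protocol). Requires the `openai` package and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def annotate(text: str, model: str = "gpt-4o") -> str:
    """Ask the model to label a post as 'hate' or 'not hate'."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce sampling variance across annotation runs
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a data annotator. Reply with exactly one "
                    "label: 'hate' or 'not hate'."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    # Hypothetical example post; real inputs would come from the corpus.
    print(annotate("Example post to be labeled."))
```

Comparing the labels such a loop produces across posts targeting different groups (gender, race, religion, disability) is one way to surface the kind of annotator bias the paper studies.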
Submission history
From: Aman Chadha
[v1] Mon, 17 Jun 2024 00:18:31 UTC (118 KB)
[v2] Tue, 18 Jun 2024 06:21:16 UTC (118 KB)
[v3] Sat, 12 Oct 2024 21:46:04 UTC (118 KB)