Gender-Neutral Large Language Models for Medical Applications: Reducing Bias in PubMed Abstracts

arXiv:2501.06365v1 Announce Type: new
Abstract: This paper presents a pipeline for mitigating gender bias in large language models (LLMs) used in medical literature by neutralizing gendered occupational pronouns. A dataset of 379,000 PubMed abstracts from 1965-1980 was processed to identify and modify pronouns tied to professions. We developed a BERT-based model, “Modern Occupational Bias Elimination with Refined Training,” or “MOBERT,” trained on these neutralized abstracts, and compared its performance with “1965Bert,” trained on the original dataset. MOBERT achieved a 70% inclusive replacement rate, while 1965Bert reached only 4%. A further analysis of MOBERT revealed that pronoun replacement accuracy correlated with the frequency of occupational terms in the training data. We propose expanding the dataset and refining the pipeline to improve performance and ensure more equitable language modeling in medical applications.
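The abstract describes a pipeline step that neutralizes gendered pronouns tied to occupational terms. As a rough illustration of how such a rule-based neutralization pass might look, here is a minimal Python sketch; the occupation list, context window, and replacement table are hypothetical simplifications, not the paper's actual method (which involves BERT-based training on the neutralized corpus):

```python
import re

# Hypothetical occupation vocabulary; the paper's pipeline would use a
# much larger list derived from the PubMed corpus.
OCCUPATIONS = {"nurse", "surgeon", "physician", "technician", "pharmacist"}

# Simplified pronoun map. Note "her" is ambiguous (object "them" vs.
# possessive "their"); mapping it to "them" here is a deliberate
# simplification for illustration.
REPLACEMENTS = {
    "he": "they", "she": "they",
    "him": "them", "her": "them",
    "his": "their", "hers": "theirs",
}

def neutralize(text: str, window: int = 10) -> str:
    """Replace a gendered pronoun with a singular-'they' form when an
    occupational term appears within `window` tokens before it."""
    tokens = text.split()
    out = []
    for i, tok in enumerate(tokens):
        word = tok.lower().rstrip(".,;:")
        if word in REPLACEMENTS:
            context = {t.lower().rstrip(".,;:")
                       for t in tokens[max(0, i - window):i]}
            if context & OCCUPATIONS:
                repl = REPLACEMENTS[word]
                if tok[0].isupper():
                    repl = repl.capitalize()
                # keep any trailing punctuation from the original token
                suffix = tok[len(tok.rstrip(".,;:")):]
                out.append(repl + suffix)
                continue
        out.append(tok)
    return " ".join(out)
```

For example, `neutralize("The surgeon said he would operate.")` yields "The surgeon said they would operate.", while a pronoun with no occupational term in its preceding context is left unchanged.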
