Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation

[Submitted on 14 Jan 2025]

Authors:Shuzhou Sun (1 and 2), Li Liu (3), Yongxiang Liu (3), Zhen Liu (3), Shuanghui Zhang (3), Janne Heikkilä (2), Xiang Li (3) ((1) The College of Computer Science, Nankai University, Tianjin, China, (2) The Center for Machine Vision and Signal Analysis, University of Oulu, Finland, (3) The College of Electronic Science, National University of Defense Technology, China)


Abstract: Bias in Foundation Models (FMs) – trained on vast datasets spanning societal and historical knowledge – poses significant challenges for fairness and equity across fields such as healthcare, education, and finance. These biases, rooted in the overrepresentation of stereotypes and societal inequalities in training data, exacerbate real-world discrimination, reinforce harmful stereotypes, and erode trust in AI systems. To address this, we introduce Trident Probe Testing (TriProTesting), a systematic testing method that detects explicit and implicit biases using semantically designed probes. Here we show that FMs, including CLIP, ALIGN, BridgeTower, and OWLv2, demonstrate pervasive biases across single and mixed social attributes (gender, race, age, and occupation). Notably, we uncover mixed biases when social attributes are combined, such as gender × race, gender × age, and gender × occupation, revealing deeper layers of discrimination. We further propose Adaptive Logit Adjustment (AdaLogAdjustment), a post-processing technique that dynamically redistributes probability power to mitigate these biases effectively, achieving significant improvements in fairness without retraining models. These findings highlight the urgent need for ethical AI practices and interdisciplinary solutions to address biases not only at the model level but also in societal structures. Our work provides a scalable and interpretable solution that advances fairness in AI systems while offering practical insights for future research on fair AI technologies.
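
The abstract gives no implementation details, but the two ideas it names (probe-based bias testing and post-hoc logit adjustment) can be illustrated in a few lines. The sketch below is an assumption-laden illustration, not the authors' code: the probe texts, the test image, the CLIP checkpoint, and the fixed group prior are hypothetical placeholders, and the adaptive reweighting of AdaLogAdjustment is only approximated here by a standard logit-adjustment step.

```python
# Illustrative sketch only, not the paper's released code. Assumes the
# Hugging Face `transformers` CLIP API and hypothetical probes/inputs.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical probes pairing one social attribute (gender) with one occupation.
probes = ["a photo of a male doctor", "a photo of a female doctor"]
image = Image.open("doctor.jpg")  # hypothetical test image

inputs = processor(text=probes, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image.squeeze(0)  # one score per probe

print("raw probe probabilities:", logits.softmax(dim=-1).tolist())

# Post-hoc adjustment in the spirit the abstract describes: subtract the log of
# an estimated group prior from the logits before the softmax, so the score is
# not dominated by the prior association. The prior below is a placeholder;
# how AdaLogAdjustment derives its adjustment adaptively is not specified
# in the abstract.
group_prior = torch.tensor([0.7, 0.3])
adjusted = (logits - group_prior.log()).softmax(dim=-1)
print("adjusted probe probabilities:", adjusted.tolist())
```

A bias would show up as the raw probabilities skewing consistently toward one probe across a balanced image set; the adjustment step rebalances the scores without any retraining of the model.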

Submission history

From: Shuzhou Sun
[v1]
Tue, 14 Jan 2025 19:06:37 UTC (5,939 KB)


