[Submitted on 14 Jan 2025]
Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation
Shuzhou Sun and 15 other authors
Abstract: Bias in Foundation Models (FMs) – trained on vast datasets spanning societal and historical knowledge – poses significant challenges for fairness and equity across fields such as healthcare, education, and finance. These biases, rooted in the overrepresentation of stereotypes and societal inequalities in training data, exacerbate real-world discrimination, reinforce harmful stereotypes, and erode trust in AI systems. To address this, we introduce Trident Probe Testing (TriProTesting), a systematic testing method that detects explicit and implicit biases using semantically designed probes. Here we show that FMs, including CLIP, ALIGN, BridgeTower, and OWLv2, demonstrate pervasive biases across single and mixed social attributes (gender, race, age, and occupation). Notably, we uncover mixed biases when social attributes are combined, such as gender × race, gender × age, and gender × occupation, revealing deeper layers of discrimination. We further propose Adaptive Logit Adjustment (AdaLogAdjustment), a post-processing technique that dynamically redistributes probability mass to mitigate these biases effectively, achieving significant improvements in fairness without retraining models. These findings highlight the urgent need for ethical AI practices and interdisciplinary solutions to address biases not only at the model level but also in societal structures. Our work provides a scalable and interpretable solution that advances fairness in AI systems while offering practical insights for future research on fair AI technologies.
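The abstract does not spell out the TriProTesting probe design or the exact AdaLogAdjustment rule, so the following is only a minimal sketch of the general idea: score a model's image-text logits against semantically paired probes that differ in a single social attribute, then apply a post-hoc logit adjustment so no attribute group is systematically favoured. The probe texts, the function name `ada_log_adjust`, and the mean-subtraction adjustment are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: probe texts, group assignment, and the
# mean-subtraction adjustment below are assumptions, not the paper's
# actual TriProTesting probes or AdaLogAdjustment rule.
import torch


def ada_log_adjust(logits: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
    """Assumed post-hoc adjustment: subtract each social group's mean logit
    so group-level offsets cancel, without retraining the model."""
    adjusted = logits.clone()
    for g in group_ids.unique():
        mask = group_ids == g
        adjusted[:, mask] = logits[:, mask] - logits[:, mask].mean()
    return adjusted


# Hypothetical probe set: the same activities paired with two gender terms.
probes = [
    "a photo of a man cooking",
    "a photo of a man coding",
    "a photo of a woman cooking",
    "a photo of a woman coding",
]
group_ids = torch.tensor([0, 0, 1, 1])  # 0 = "man" probes, 1 = "woman" probes

# Simulated image-to-text similarity logits (num_images x num_probes);
# in practice these would come from a model such as CLIP (logits_per_image).
logits = torch.tensor([[20.1, 22.4, 18.3, 19.0]])

before = logits.softmax(dim=-1)
after = ada_log_adjust(logits, group_ids).softmax(dim=-1)
print("probabilities before adjustment:", before)
print("probabilities after adjustment: ", after)
```

In this toy example the adjustment removes the systematic offset between the two probe groups while preserving the within-group ranking of activities, which is the spirit of a post-processing fairness fix applied without retraining.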
Submission history
From: Shuzhou Sun
[v1]
Tue, 14 Jan 2025 19:06:37 UTC (5,939 KB)