View a PDF of the paper titled Accurately Classifying Out-Of-Distribution Data in Facial Recognition, by Gianluca Barone and Aashrit Cunchala and Rudy Nunez
Abstract: Standard classification theory assumes that the test and training sets are drawn from the same distribution. Unfortunately, real-life scenarios typically feature unseen data ("out-of-distribution" data) that differs from the training distribution ("in-distribution" data). This issue is most prevalent in social justice problems, where data from under-represented groups may appear in the test set without constituting an equal proportion of the training set. As a result, a model may return confidently wrong decisions and predictions. We are interested in the following question: can the performance of a neural network on facial images of out-of-distribution data improve when it is trained simultaneously on multiple datasets of in-distribution data? We approach this problem by incorporating the Outlier Exposure model and investigate how the model's performance changes when other datasets of facial images are incorporated. We observe that the model's accuracy and other metrics can be improved by applying Outlier Exposure, by introducing a trainable weight parameter to increase the machine's emphasis on outlier images, and by re-weighting the importance of different class labels. We also tested whether sorting images and determining outliers by image features, rather than by average pixel value, had a greater effect on the metrics, and found no conclusive results. Our goal was to make models not only more accurate but also more fair by scanning a broader range of images. Using Python and the PyTorch package, we found that models employing Outlier Exposure can yield fairer classification.
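The training objective described above can be sketched in PyTorch. This is a minimal illustration of the standard Outlier Exposure loss (cross-entropy on in-distribution labels plus a term pushing outlier predictions toward the uniform distribution), not the authors' exact implementation; the names `oe_loss` and `oe_weight` are assumptions, with `oe_weight` standing in for the abstract's trainable weight parameter and `class_weights` for its class-label re-weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed trainable scalar emphasizing the outlier term (would be
# registered with the optimizer alongside the model parameters).
oe_weight = nn.Parameter(torch.tensor(0.5))

def oe_loss(model, x_in, y_in, x_out, class_weights=None):
    """Outlier Exposure objective: supervised loss on in-distribution
    data plus a uniformity penalty on outlier data."""
    # Standard cross-entropy on in-distribution images; `class_weights`
    # optionally re-weights the importance of different class labels.
    logits_in = model(x_in)
    loss_in = F.cross_entropy(logits_in, y_in, weight=class_weights)
    # Outlier term: cross-entropy between the model's softmax output on
    # outlier images and the uniform distribution over classes, which
    # (up to a constant) equals the negative mean log-softmax.
    logits_out = model(x_out)
    loss_out = -logits_out.log_softmax(dim=1).mean()
    return loss_in + oe_weight * loss_out
```

In training, both `model.parameters()` and `oe_weight` would be passed to the optimizer so that the emphasis on outlier images is learned jointly with the network.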
Submission history
From: Gianluca Barone [view email]
[v1] Fri, 5 Apr 2024 03:51:19 UTC (1,302 KB)
[v2] Mon, 24 Jun 2024 03:19:39 UTC (1,302 KB)
[v3] Tue, 25 Jun 2024 02:20:06 UTC (1,302 KB)
[v4] Sat, 14 Sep 2024 15:37:34 UTC (1,302 KB)
[v5] Fri, 11 Oct 2024 15:48:53 UTC (1,271 KB)