Bridging the Projection Gap: Overcoming Projection Bias Through Parameterized Distance Learning, by Chong Zhang and 5 other authors
Abstract: Generalized zero-shot learning (GZSL) aims to recognize samples from both seen and unseen classes while training only on seen class samples. However, GZSL methods are prone to bias towards seen classes during inference because the projection function is learned only from seen classes. Most methods focus on learning a more accurate projection, but some bias in the projection is inevitable. We address this projection bias by proposing to learn a parameterized Mahalanobis distance metric for robust inference. Our key insight is that the distance computation during inference is critical, even with a biased projection. We make two main contributions: (1) We extend the VAEGAN (Variational Autoencoder & Generative Adversarial Network) architecture with two branches that separately output the projections of seen-class and unseen-class samples, enabling more robust distance learning. (2) We introduce a novel loss function to optimize the Mahalanobis distance representation and reduce projection bias. Extensive experiments on four datasets show that our approach outperforms state-of-the-art GZSL techniques, with improvements of up to 3.5% on the harmonic mean metric.
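To illustrate the core idea of a parameterized Mahalanobis distance, the following is a minimal PyTorch-style sketch, not the authors' implementation: it only shows how a metric M = L Lᵀ can be kept positive semi-definite by construction and learned end-to-end, while the two-branch VAEGAN and the bias-reducing loss from the paper are omitted. All names (LearnedMahalanobis, embed_dim) and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LearnedMahalanobis(nn.Module):
    """Squared Mahalanobis distance with a learnable factor L, so M = L L^T is PSD."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Initialize L to the identity, i.e. start from plain Euclidean distance.
        self.L = nn.Parameter(torch.eye(embed_dim))

    def forward(self, x: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
        # x: (batch, d) projected samples; prototypes: (num_classes, d) class embeddings.
        diff = x.unsqueeze(1) - prototypes.unsqueeze(0)  # (batch, num_classes, d)
        proj = diff @ self.L                             # apply the learned factor
        return (proj ** 2).sum(dim=-1)                   # squared Mahalanobis distances


# Usage: classify by nearest class prototype under the learned metric.
metric = LearnedMahalanobis(embed_dim=64)
samples = torch.randn(8, 64)
protos = torch.randn(10, 64)
preds = metric(samples, protos).argmin(dim=1)  # (8,) predicted class indices
```

Factorizing the metric as M = L Lᵀ is a standard way to guarantee positive semi-definiteness without explicit constraints, which lets L be trained jointly with the projection network by ordinary gradient descent.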
Submission history
From: Chong Zhang
[v1] Mon, 4 Sep 2023 06:41:29 UTC (710 KB)
[v2] Tue, 2 Apr 2024 05:20:01 UTC (4,865 KB)
[v3] Fri, 20 Sep 2024 11:50:58 UTC (4,867 KB)