PViT: Prior-augmented Vision Transformer for Out-of-distribution Detection

View a PDF of the paper titled PViT: Prior-augmented Vision Transformer for Out-of-distribution Detection, by Tianhao Zhang and 2 other authors


Abstract: Vision Transformers (ViTs) have achieved remarkable success across a variety of vision tasks, yet their robustness to data distribution shifts and their inherent inductive biases remain underexplored. To improve the robustness of ViT models for image Out-of-Distribution (OOD) detection, we introduce a novel and generic framework named Prior-augmented Vision Transformer (PViT). Taking the prior class logits from a pretrained model as input, we train PViT to predict the class logits. During inference, PViT identifies OOD samples by quantifying the divergence between the predicted class logits and the prior logits obtained from the pretrained model. Unlike existing state-of-the-art (SOTA) OOD detection methods, PViT shapes the decision boundary between ID and OOD data using the proposed prior-guided confidence, without requiring additional data modeling, generation methods, or structural modifications. Extensive experiments on the large-scale ImageNet benchmark, evaluated against over seven OOD datasets, demonstrate that PViT significantly outperforms existing SOTA OOD detection methods in terms of FPR95 and AUROC. The codebase is publicly available at this https URL.
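The abstract does not give the exact form of the prior-guided confidence, but the core idea of scoring a sample by the divergence between predicted and prior logits can be sketched. The following is a minimal illustration, assuming a KL divergence between the softmax distributions of the two logit vectors as the OOD score (the function names and score choice here are illustrative, not the paper's exact formulation):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ood_score(prior_logits, predicted_logits):
    """KL divergence between the prior and predicted class distributions.

    A larger divergence means the model's prediction disagrees with the
    pretrained prior, suggesting the sample is out-of-distribution.
    """
    p = softmax(np.asarray(prior_logits, dtype=float))
    q = softmax(np.asarray(predicted_logits, dtype=float))
    eps = 1e-12  # avoid log(0)
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# ID-like sample: predicted logits closely track the prior (low score)
prior = np.array([4.0, 0.5, -1.0])
pred_id = np.array([3.8, 0.6, -0.9])

# OOD-like sample: predicted logits diverge from the prior (high score)
pred_ood = np.array([-1.0, 2.5, 3.0])

print(ood_score(prior, pred_id) < ood_score(prior, pred_ood))  # True
```

In practice, a threshold on such a score (e.g., calibrated so that 95% of ID samples fall below it, matching the FPR95 metric) separates ID from OOD inputs at inference time.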

Submission history

From: Tianhao Zhang
[v1] Sun, 27 Oct 2024 23:29:46 UTC (40,783 KB)
[v2] Mon, 13 Jan 2025 23:45:51 UTC (45,674 KB)


