SOWA: Adapting Hierarchical Frozen Window Self-Attention to Visual-Language Models for Better Anomaly Detection



Authors: Zongxiang Hu and 2 other authors

Abstract: Visual anomaly detection is essential in industrial manufacturing, yet traditional methods often rely on extensive normal datasets and task-specific models, limiting their scalability. Recent large-scale vision-language models have significantly advanced zero- and few-shot anomaly detection, but these approaches may not fully exploit hierarchical features and can overlook the fine-grained details crucial for accurate detection. To address this, we introduce a novel window self-attention mechanism built on the CLIP model, augmented with learnable prompts to process multi-level features within a Soldier-Officer Window Self-Attention (SOWA) framework. Our method has been rigorously evaluated on five benchmark datasets, where it leads on 18 out of 20 metrics, setting a new standard against existing state-of-the-art techniques.
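To make the idea concrete, the sketch below illustrates one plausible reading of the abstract: frozen hierarchical patch features from a vision-language backbone such as CLIP are refined by a lightweight window self-attention adapter that carries learnable prompt tokens, with the refined levels then fused. All class names, shapes, window sizes, and the fusion step are illustrative assumptions, not the authors' released implementation.

```python
# Minimal PyTorch sketch of window self-attention with learnable prompts
# over frozen multi-level features. Hypothetical names and shapes.
import torch
import torch.nn as nn


class WindowPromptAdapter(nn.Module):
    """Window self-attention over frozen patch features plus learnable prompts."""

    def __init__(self, dim: int = 768, window: int = 4, n_prompts: int = 4, heads: int = 8):
        super().__init__()
        self.window = window
        self.prompts = nn.Parameter(torch.zeros(n_prompts, dim))  # learnable prompt tokens
        nn.init.normal_(self.prompts, std=0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, H, W, C) patch features from one frozen backbone level
        B, H, W, C = feats.shape
        w = self.window
        # Partition the feature map into non-overlapping w x w windows.
        x = feats.reshape(B, H // w, w, W // w, w, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)  # (B*nW, w*w, C)
        # Prepend the shared learnable prompt tokens to every window.
        p = self.prompts.unsqueeze(0).expand(x.size(0), -1, -1)
        x = torch.cat([p, x], dim=1)
        # Self-attention within each window; keep only the patch tokens.
        out, _ = self.attn(x, x, x)
        out = self.norm(out[:, p.size(1):] + x[:, p.size(1):])
        # Reverse the window partition back to (B, H, W, C).
        out = out.reshape(B, H // w, W // w, w, w, C)
        return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)


if __name__ == "__main__":
    # Stand-ins for frozen multi-level CLIP patch features (e.g. four levels).
    levels = [torch.randn(2, 16, 16, 768) for _ in range(4)]
    adapters = nn.ModuleList(WindowPromptAdapter() for _ in levels)
    refined = [a(f) for a, f in zip(adapters, levels)]
    # Simple averaging stands in for whatever fusion the paper actually uses.
    fused = torch.stack(refined).mean(0)
    print(fused.shape)  # torch.Size([2, 16, 16, 768])
```

In this reading, only the adapters and prompts are trainable while the backbone stays frozen, which is one way a hierarchical, prompt-augmented window-attention design could stay lightweight for few-shot anomaly detection.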

Submission history

From: Zongxiang Hu
[v1] Thu, 4 Jul 2024 04:54:03 UTC (11,943 KB)
[v2] Tue, 30 Jul 2024 11:02:50 UTC (2,215 KB)
[v3] Fri, 15 Nov 2024 02:40:34 UTC (2,389 KB)


