Semantically-Prompted Language Models Improve Visual Descriptions

By Michael Ogezi and 2 other authors


Abstract: Language-vision models like CLIP have made significant strides in vision tasks, such as zero-shot image classification (ZSIC). However, generating specific and expressive visual descriptions remains challenging; descriptions produced by current methods are often ambiguous and lacking in granularity. To tackle these issues, we propose V-GLOSS: Visual Glosses, a novel method built upon two key ideas. The first is Semantic Prompting, which conditions a language model on structured semantic knowledge. The second is a new contrastive algorithm that elicits fine-grained distinctions between similar concepts. With both ideas, we demonstrate that V-GLOSS improves visual descriptions and achieves strong results in the zero-shot setting on general and fine-grained image-classification datasets, including ImageNet, STL-10, FGVC Aircraft, and Flowers 102. Moreover, these descriptive capabilities also improve image-generation performance. Finally, we introduce a quality-tested silver dataset with descriptions generated by V-GLOSS for all ImageNet classes.
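The abstract sketches the pipeline at a high level: a language model, conditioned on structured semantic knowledge, writes a visual description (a "gloss") for each class, and those descriptions replace bare class names on the text side of CLIP's zero-shot classifier. Below is a minimal sketch of that final scoring step, assuming Hugging Face's transformers CLIP API; the description strings and image path are hypothetical stand-ins for V-GLOSS outputs, not the authors' released code.

```python
# Minimal sketch: zero-shot image classification with class *descriptions*
# instead of bare class names. The glosses below are hypothetical examples
# of the kind of output V-GLOSS produces.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical visual glosses: more specific than "tiger" / "lion" alone.
class_descriptions = {
    "tiger": "a large orange cat with black stripes and a white underbelly",
    "lion": "a large tawny cat, the male with a thick mane around its head",
}

image = Image.open("big_cat.jpg")  # placeholder image path

# Encode the image once against all class descriptions.
inputs = processor(
    text=list(class_descriptions.values()),
    images=image,
    return_tensors="pt",
    padding=True,
)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (1, num_classes); softmax gives class probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
predicted = list(class_descriptions)[probs.argmax().item()]
print(predicted, probs.tolist())
```

Swapping the gloss strings for plain templates such as "a photo of a tiger" recovers the standard class-name baseline, which is what richer, more discriminative descriptions are meant to improve on.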

Submission history

From: Michael Ogezi
[v1] Mon, 5 Jun 2023 17:22:54 UTC (8,717 KB)
[v2] Fri, 23 Jun 2023 16:29:51 UTC (1 KB) (withdrawn)
[v3] Tue, 2 Apr 2024 16:19:22 UTC (4,634 KB)
[v4] Fri, 22 Nov 2024 15:58:28 UTC (4,634 KB)


