[Submitted on 15 Jan 2025]
VCRScore: Image captioning metric based on V&L Transformers, CLIP, and precision-recall, by Guillermo Ruiz and 2 other authors
Abstract: Image captioning has become an essential Vision & Language research task: predicting the most accurate caption for a given image or video. The research community has achieved impressive results by continuously proposing new models and approaches to improve overall performance. Nevertheless, despite the growing number of proposals, the performance metrics used to measure these advances have remained practically untouched over the years. As proof of this, classical metrics like BLEU, METEOR, CIDEr, and ROUGE are still widely used today, alongside more sophisticated metrics such as BERTScore and CLIPScore.
Hence, it is essential to adjust how we measure the advances, limitations, and scope of new image captioning proposals, as well as to adapt evaluation metrics to these more advanced image captioning approaches.
This work proposes a new evaluation metric for the image captioning problem. To that end, we first generated a human-labeled dataset to assess the degree to which captions correlate with an image's content. Taking these human scores as ground truth, we propose a new metric and compare it with several well-known metrics, from classical ones to more recent ones. The proposed metric outperformed the others, and interesting insights are presented and discussed.
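The abstract names the metric's ingredients (CLIP embeddings and a precision-recall framing) without giving the formulation. Below is a minimal, hypothetical Python sketch of how such a score could combine image-text similarity with precision/recall over reference captions; the function name clip_pr_score, the checkpoint openai/clip-vit-base-patch32, and the specific precision/recall definitions are illustrative assumptions, not the paper's actual VCRScore.

```python
# Hypothetical sketch: a CLIP-based precision/recall caption score.
# NOT the paper's VCRScore; it only illustrates the general recipe of
# scoring a candidate caption against both the image and the references.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_pr_score(image_path: str, candidate: str, references: list[str]) -> dict:
    image = Image.open(image_path).convert("RGB")
    texts = [candidate] + references
    inputs = processor(text=texts, images=image, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # L2-normalize the projected embeddings (one image vector, one per text).
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    cand, refs = txt[0], txt[1:]
    # "Precision": CLIPScore-style image-candidate similarity,
    # i.e. does the candidate describe what is actually in the image?
    precision = (cand @ img[0]).clamp(min=0).item()
    # "Recall": best similarity to any human reference,
    # i.e. does the candidate cover what the references describe?
    recall = (refs @ cand).clamp(min=0).max().item()
    f = 2 * precision * recall / (precision + recall + 1e-8)
    return {"precision": precision, "recall": recall, "f": f}
```

As a usage example, clip_pr_score("dog.jpg", "a dog catching a frisbee", ["a dog leaps for a frisbee in a park"]) would return the two components and their harmonic mean; a learned regressor fit to human judgments, as the paper's human-labeled dataset suggests, would be the natural next step beyond this fixed combination.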
Submission history
From: Daniela Moctezuma
[v1] Wed, 15 Jan 2025 21:14:36 UTC (2,458 KB)