CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
by Shuai Zhao and 3 other authors
Abstract: Pre-trained vision-language models (VLMs) are the de-facto foundation models for various downstream tasks. However, scene text recognition (STR) methods still prefer backbones pre-trained on a single modality, namely, the visual modality, despite the potential of VLMs to serve as powerful scene text readers. For example, CLIP can robustly identify regular (horizontal) and irregular (rotated, curved, blurred, or occluded) text in images. With such merits, we transform CLIP into a scene text reader and introduce CLIP4STR, a simple yet effective STR method built upon the image and text encoders of CLIP. It has two encoder-decoder branches: a visual branch and a cross-modal branch. The visual branch provides an initial prediction based on the visual feature, and the cross-modal branch refines this prediction by addressing the discrepancy between the visual feature and text semantics. To fully leverage the capabilities of both branches, we design a dual predict-and-refine decoding scheme for inference. We scale CLIP4STR in terms of model size, pre-training data, and training data, achieving state-of-the-art performance on 13 STR benchmarks. Additionally, a comprehensive empirical study is provided to enhance the understanding of the adaptation of CLIP to STR. Our method establishes a simple yet strong baseline for future STR research with VLMs.
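As a rough illustration of the dual predict-and-refine decoding the abstract describes, the sketch below shows the flow: the visual branch drafts a character sequence from the image feature, and the cross-modal branch re-scores that draft against its text semantics. This is not the authors' implementation; all names (VisualBranch, CrossModalBranch, predict_and_refine), the linear/embedding layers standing in for CLIP's encoders, and the shapes are hypothetical placeholders.

    # Minimal sketch of CLIP4STR-style predict-and-refine inference.
    # Placeholder modules only; the real model uses CLIP's ViT image
    # encoder and transformer text encoder with attention-based decoders.
    import torch
    import torch.nn as nn

    class VisualBranch(nn.Module):
        """Stand-in for the CLIP image encoder plus a character decoder."""
        def __init__(self, feat_dim=512, vocab_size=97, max_len=25):
            super().__init__()
            self.encoder = nn.Linear(3 * 32 * 128, feat_dim)   # placeholder for CLIP ViT
            self.decoder = nn.Linear(feat_dim, max_len * vocab_size)
            self.max_len, self.vocab_size = max_len, vocab_size

        def forward(self, images):
            feat = self.encoder(images.flatten(1))             # visual feature
            logits = self.decoder(feat)                        # initial prediction
            return feat, logits.view(-1, self.max_len, self.vocab_size)

    class CrossModalBranch(nn.Module):
        """Stand-in for the CLIP text encoder plus a refinement decoder."""
        def __init__(self, feat_dim=512, vocab_size=97):
            super().__init__()
            self.text_encoder = nn.Embedding(vocab_size, feat_dim)  # placeholder for CLIP text encoder
            self.refiner = nn.Linear(2 * feat_dim, vocab_size)

        def forward(self, visual_feat, char_ids):
            text_feat = self.text_encoder(char_ids)            # semantics of the draft text
            vis = visual_feat.unsqueeze(1).expand_as(text_feat)
            # refine by jointly scoring visual feature and text semantics
            return self.refiner(torch.cat([vis, text_feat], dim=-1))

    @torch.no_grad()
    def predict_and_refine(visual, cross_modal, images):
        """Dual decoding: visual branch drafts, cross-modal branch refines."""
        feat, draft_logits = visual(images)
        draft = draft_logits.argmax(-1)                        # initial character prediction
        refined_logits = cross_modal(feat, draft)              # refined prediction
        return refined_logits.argmax(-1)

    images = torch.randn(2, 3, 32, 128)                       # dummy batch of text crops
    print(predict_and_refine(VisualBranch(), CrossModalBranch(), images).shape)

The key design point the abstract emphasizes is that refinement conditions on both the visual feature and the text semantics of the draft, so the cross-modal branch can correct characters the visual branch misread.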
Submission history
From: Shuai Zhao
[v1] Tue, 23 May 2023 12:51:20 UTC (7,208 KB)
[v2] Tue, 17 Oct 2023 05:39:43 UTC (7,388 KB)
[v3] Thu, 2 May 2024 12:10:16 UTC (7,398 KB)
[v4] Tue, 24 Dec 2024 04:27:37 UTC (4,213 KB)