Diving into the Depths of Spotting Text in Multi-Domain Noisy Scenes

Document Type

Conference Article

Publication Title

Proceedings - IEEE International Conference on Robotics and Automation

Abstract

The capacity to generalize across multiple domains is essential for any autonomous scene text spotting system deployed in real-world noisy environments. However, existing state-of-the-art methods employ pretraining and fine-tuning strategies on natural scene datasets, which do not exploit feature interactions across other, more complex domains. In this work, we investigate the problem of domain-agnostic scene text spotting, i.e., training a model on multi-domain source data such that it generalizes directly to target domains rather than being specialized for a specific domain or scenario. To this end, we present to the community a text spotting validation benchmark for noisy underwater scenes, called Under-Water Text (UWT), to establish an important case study. Moreover, we design an efficient super-resolution-based end-to-end transformer baseline, DA-TextSpotter, which achieves performance comparable or superior to existing text spotting architectures on both regular and arbitrary-shaped scene text spotting benchmarks, in terms of both accuracy and model efficiency. The dataset, code, and pre-trained models have been released on our GitHub.

First Page

410

Last Page

417

DOI

10.1109/ICRA57147.2024.10611120

Publication Date

2024

Comments

Open Access; Green Open Access
