TWD: A New Deep E2E Model for Text Watermark/Caption and Scene Text Detection in Video
Document Type
Conference Article
Publication Title
Proceedings - International Conference on Pattern Recognition
Abstract
Text watermark detection in video images is challenging because the characteristics of text watermarks differ from those of caption and scene text in video images. Developing a successful model for detecting text watermarks, captions, and scene text is an open challenge. This study aims to develop a new deep end-to-end model for Text Watermark Detection (TWD), as well as caption and scene text detection, in video images. To standardize non-uniform contrast, quality, and resolution, we explore the U-Net3+ model for enhancing poor-quality text without affecting high-quality text. Similarly, to address the challenges of arbitrary orientations, text shapes, and complex backgrounds, we explore a Stacked Hourglass Encoded Fourier Contour Embedding Network (SFCENet), which takes the output of the U-Net3+ model as input. Furthermore, the proposed work integrates the enhancement and detection models into an end-to-end model for detecting multi-type text in video images. To validate the proposed model, we create our own dataset (named TW-866), which provides video images containing text watermarks, captions (subtitles), and scene text. The proposed model is also evaluated on standard natural scene text detection datasets, namely ICDAR 2019 MLT, CTW1500, Total-Text, and DAST1500. The results show that the proposed method outperforms existing methods. To the best of our knowledge, this is the first work on text watermark detection in video images.
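The abstract describes a two-stage pipeline in which the U-Net3+ enhancement output is fed directly into the SFCENet detector and both are trained end-to-end. The sketch below is only an illustration of that composition under stated assumptions: the class names, layer widths, residual enhancement, and Fourier-coefficient layout are placeholders and do not reproduce the paper's actual U-Net3+ or SFCENet implementations.

```python
# Minimal sketch of the end-to-end "enhance then detect" composition.
# Assumptions: module internals, channel sizes, and the 4k+2 Fourier regression
# layout are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class EnhancementNet(nn.Module):
    """Stand-in for the U-Net3+ stage: maps a degraded video frame to an
    enhanced frame of the same resolution."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        # Residual enhancement so already high-quality regions pass through
        # largely unchanged.
        return x + self.body(x)

class DetectionHead(nn.Module):
    """Stand-in for the SFCENet stage: predicts a per-pixel text score plus
    Fourier contour coefficients (k harmonics -> 4k + 2 values)."""
    def __init__(self, channels=3, fourier_degree=5):
        super().__init__()
        out_ch = 1 + (4 * fourier_degree + 2)  # text score + contour regression
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 1),
        )

    def forward(self, x):
        return self.body(x)

class TWD(nn.Module):
    """End-to-end model: the enhanced frame from the first stage is fed
    directly to the detector so both stages can be trained jointly."""
    def __init__(self):
        super().__init__()
        self.enhance = EnhancementNet()
        self.detect = DetectionHead()

    def forward(self, frame):
        enhanced = self.enhance(frame)
        return enhanced, self.detect(enhanced)

if __name__ == "__main__":
    model = TWD()
    frame = torch.rand(1, 3, 256, 256)      # one video frame
    enhanced, pred = model(frame)
    print(enhanced.shape, pred.shape)        # (1, 3, 256, 256), (1, 23, 256, 256)
```

In a joint training setup of this kind, an enhancement loss on the first output and a detection loss on the second can be summed so gradients from the detector also shape the enhancement stage.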
First Page
1492
Last Page
1498
DOI
10.1109/ICPR56361.2022.9956279
Publication Date
1-1-2022
Recommended Citation
Banerjee, Ayan; Shivakumara, Palaiahnakote; Acharya, Parikshit; Pal, Umapada; and Canet, Josep Llados, "TWD: A New Deep E2E Model for Text Watermark/Caption and Scene Text Detection in Video" (2022). Conference Articles. 440.
https://digitalcommons.isical.ac.in/conf-articles/440