An Efficient Preprocessing Module for Incidental Scene Text Recognition

Date of Submission

December 2020

Date of Award

Winter 12-12-2021

Institute Name (Publisher)

Indian Statistical Institute

Document Type

Master's Dissertation

Degree Name

Master of Technology

Subject Name

Computer Science


Department / Unit

Computer Vision and Pattern Recognition Unit (CVPR-Kolkata)


Supervisor

Bhattacharya, Ujjwal (CVPR-Kolkata; ISI)

Abstract (Summary of the Work)

State-of-the-art scene text recognition systems perform satisfactorily on samples from benchmark datasets as long as the quality of the text in an image is not significantly affected by distortions such as blurring. However, their performance may drop sharply whenever the input text lies well outside the focus of the image capturing device or suffers from motion blur. In this study, we consider incidental scene texts, which usually exhibit much greater diversity, variability and complexity, on top of the common challenges of scene text recognition, than their counterparts captured by properly positioning the camera and adjusting various image capturing parameters. In this work, we introduce a trainable deep network that applies a super-resolution technique as a preprocessing module on low-quality scene images to boost the text recognition accuracy of existing models. Various image super-resolution techniques are available in the literature; they mainly focus on reconstructing the detailed texture of the image but fail to improve the quality of the text appearing in it, so the corresponding recognition results do not improve. Here, we propose a novel text-content-aware super-resolution network that improves the quality of text appearing in natural scene images, leading to their more accurate recognition by automatic methods. Simulation results of the proposed model on the ICDAR 2015 Incidental Scene Text dataset demonstrate its effectiveness as an efficient preprocessing model. Code developed as a part of this dissertation is available at:
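The abstract describes a super-resolution network trained to be aware of text content rather than only pixel-level texture. One common way to realize such an objective is to combine a standard reconstruction loss with a term that compares a recognizer's responses on the super-resolved and ground-truth images. The sketch below illustrates that idea only; all function names, the feature-comparison term, and the weighting parameter `lam` are hypothetical and are not taken from the dissertation itself.

```python
import numpy as np

def pixel_loss(sr, hr):
    # Standard reconstruction term: mean squared error between the
    # super-resolved (sr) and high-resolution ground-truth (hr) images.
    return float(np.mean((sr - hr) ** 2))

def text_content_loss(sr_feats, hr_feats):
    # Hypothetical text-aware term: penalize divergence between a text
    # recognizer's feature responses on the SR and HR images, so the SR
    # network is pushed to preserve text legibility, not just texture.
    return float(np.mean((sr_feats - hr_feats) ** 2))

def combined_loss(sr, hr, sr_feats, hr_feats, lam=0.1):
    # Total objective: reconstruction plus a weighted text-content term.
    # The weight `lam` is an illustrative choice, not the thesis's value.
    return pixel_loss(sr, hr) + lam * text_content_loss(sr_feats, hr_feats)
```

In such a setup the recognizer supplying `sr_feats` and `hr_feats` is typically frozen during super-resolution training, so its gradients steer the SR network toward outputs that the recognizer reads the same way as the original high-resolution image.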



Creative Commons License

Creative Commons Attribution 4.0 International License

