Curved text detection in blurred/non-blurred video/scene images
Multimedia Tools and Applications
Text detection in video and scene images is challenging due to the presence of multiple blur types caused by defocus and camera motion. In this paper, we present a new method for detecting text in blurred and non-blurred images. Unlike existing methods that rely on deblurring or classifiers, the proposed method estimates the degree of blur at each pixel based on contrast variations among neighboring pixels and a low-pass filter, which yields candidate pixels. We use the gradient magnitude of each pixel as a weight for its degree of blur. The proposed method then applies K-means clustering to the weighted values of the candidate pixels to obtain text candidates irrespective of blur type. Next, the Bhattacharyya distance is used to capture the symmetry property of text and remove false text candidates, which provides text components. Finally, the proposed method fits a bounding box to each text component based on a nearest-neighbor criterion and the direction of the component. Experimental results on defocus-blurred, motion-blurred, and non-blurred images, as well as on standard curved-text datasets, show that the proposed method outperforms existing methods.
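The pipeline sketched in the abstract (low-pass contrast-based blur estimation, gradient weighting, two-class K-means, and a Bhattacharyya-distance check) can be illustrated roughly as follows. This is a minimal sketch under stated assumptions, not the paper's exact formulation: the box filter, the contrast-loss blur measure, and the histogram-based symmetry comparison are illustrative stand-ins for the authors' actual operators.

```python
import numpy as np

def box_filter(img, k=3):
    """Simple low-pass (box) filter via edge padding and window summation."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def blur_weighted_values(gray):
    """Per-pixel degree-of-blur estimate weighted by gradient magnitude.

    Assumption: the degree of blur is approximated as the contrast a pixel
    loses under the low-pass filter; the gradient magnitude serves as the
    weight, as described in the abstract.
    """
    gray = gray.astype(float)
    degree = np.abs(gray - box_filter(gray))  # contrast variation vs. smoothed image
    gy, gx = np.gradient(gray)
    weight = np.hypot(gx, gy)                 # gradient magnitude as weight
    return degree * weight

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D K-means; returns a boolean mask of the high cluster
    (text candidates)."""
    v = values.ravel()
    centers = np.array([v.min(), v.max()], dtype=float)
    assign = np.zeros(v.size, dtype=int)
    for _ in range(iters):
        assign = np.abs(v[:, None] - centers[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(assign == j):
                centers[j] = v[assign == j].mean()
    return (assign == centers.argmax()).reshape(values.shape)

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalized histograms; near zero
    for symmetric (similar) distributions, large for dissimilar ones."""
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))
    return -np.log(max(bc, 1e-12))
```

For example, comparing intensity histograms of the two halves of a candidate component with `bhattacharyya_distance` would return a value near zero for a roughly symmetric (text-like) component, and a larger value for an asymmetric false positive that can then be rejected.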
Xue, Minglong; Shivakumara, Palaiahnakote; Zhang, Chao; Lu, Tong; and Pal, Umapada, "Curved text detection in blurred/non-blurred video/scene images" (2019). Journal Articles. 694.