Rough-fuzzy based scene categorization for text detection and recognition in video
Article Type
Research Article
Publication Title
Pattern Recognition
Abstract
Scene image or video understanding is a challenging task, especially when the number of video types increases drastically with high variations in background and foreground. This paper proposes a new method for categorizing scene videos into different classes, namely, Animation, Outlet, Sports, e-Learning, Medical, Weather, Defense, Economics, Animal Planet and Technology, to improve the performance of text detection and recognition, which is an effective approach to scene image or video understanding. For this purpose, we first present a new combination of rough set and fuzzy set concepts to study the irregular shapes of edge components in input scene videos, which helps classify the edge components into several groups. Next, the proposed method explores the gradient direction information of each pixel in each edge-component group to extract stroke-based features by dividing each group into several intra- and inter-planes. We further extract correlation and covariance features to encode semantic features located inside planes or between planes. The features of the intra- and inter-planes of the groups are then concatenated to obtain a feature matrix. Finally, the feature matrix is verified with temporal frames and fed to a neural network for categorization. Experimental results show that the proposed method outperforms existing state-of-the-art methods; at the same time, the performance of text detection and recognition methods also improves significantly due to the categorization.
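To make the feature pipeline described above concrete, the following is a minimal sketch of one plausible reading of it: quantizing gradient directions into planes and then encoding intra-/inter-plane statistics with covariance and correlation, concatenated into a feature vector. It is not the authors' implementation; all names and parameters (e.g., num_planes, the angular binning) are illustrative assumptions, and the rough-fuzzy grouping of edge components and the temporal verification step are omitted.

```python
import numpy as np

def direction_planes(gray, num_planes=4):
    """Split pixels into planes by quantized gradient direction.

    Illustrative assumption: the paper divides each edge-component group
    into intra-/inter-planes; here we simply bin the whole frame's
    gradient directions into `num_planes` angular ranges.
    """
    gy, gx = np.gradient(gray.astype(np.float64))
    theta = np.arctan2(gy, gx)                      # direction in [-pi, pi]
    bins = ((theta + np.pi) / (2 * np.pi) * num_planes).astype(int)
    bins = np.clip(bins, 0, num_planes - 1)
    mag = np.hypot(gx, gy)
    # Each plane keeps the gradient magnitude only where the direction
    # falls inside that plane's angular bin.
    return [np.where(bins == k, mag, 0.0) for k in range(num_planes)]

def correlation_covariance_features(planes):
    """Encode per-plane and pairwise plane statistics.

    A stand-in for the correlation/covariance features in the abstract:
    flatten each plane, compute covariance and correlation matrices, and
    keep their upper triangles as one feature vector.
    """
    X = np.stack([p.ravel() for p in planes])       # (num_planes, num_pixels)
    cov = np.cov(X)
    corr = np.corrcoef(X)
    iu = np.triu_indices(len(planes))
    return np.concatenate([cov[iu], corr[iu]])

if __name__ == "__main__":
    frame = np.random.rand(64, 64)                  # placeholder for a video frame
    feats = correlation_covariance_features(direction_planes(frame))
    print(feats.shape)                              # e.g. (20,) for 4 planes
```

In the method as described, such features would be computed per edge-component group, verified across temporal frames, and the concatenated feature matrix fed to a neural network for classification; those stages are not shown here.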
First Page
64
Last Page
82
DOI
10.1016/j.patcog.2018.02.014
Publication Date
8-1-2018
Recommended Citation
Roy, Sangheeta; Shivakumara, Palaiahnakote; Jain, Namita; Khare, Vijeta; Dutta, Anjan; Pal, Umapada; and Lu, Tong, "Rough-fuzzy based scene categorization for text detection and recognition in video" (2018). Journal Articles. 1302.
https://digitalcommons.isical.ac.in/journal-articles/1302
Comments
All Open Access, Green