Word Level Font-to-Font Image Translation using Convolutional Recurrent Generative Adversarial Networks
Proceedings - International Conference on Pattern Recognition
Conversion of one font to another is very useful in real-life applications. In this paper, we propose a Convolutional Recurrent Generative model to solve the word-level font transfer problem. Our network converts the font style of any printed text image from its current font to a required target font. The network is trained end-to-end on complete word images, which eliminates pre-processing steps such as character segmentation. We extend our model to a conditional setting that helps it learn a one-to-many mapping function. We employ a novel convolutional recurrent architecture in the Generator that efficiently handles word images of arbitrary width and helps maintain the consistency of the final image after the generated patches of the target font are concatenated. Besides the Generator and the Discriminator networks, we employ a Classification network that classifies the generated word images of the converted font style into their respective font categories. Most earlier work on image translation operates on square images; our proposed architecture is the first of its kind that can handle images of varying widths, which matters because word images generally vary in width with the number of characters they contain. We test our model on a synthetically generated font dataset and compare our method with several state-of-the-art image translation methods. The superior performance of our network on the same dataset demonstrates its ability to learn the font distributions.
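The abstract's key architectural idea, running a recurrent layer over the width axis of CNN feature maps so that one generator serves word images of any width and neighbouring output patches stay consistent, can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: the layer sizes, the font-embedding conditioning, and the class name `ConvRecurrentGenerator` are all assumptions for demonstration.

```python
# Hypothetical sketch (assumed architecture, not the paper's code) of a
# convolutional recurrent generator for variable-width word images,
# conditioned on a target-font label.
import torch
import torch.nn as nn

class ConvRecurrentGenerator(nn.Module):
    def __init__(self, n_fonts=10, feat=32, hidden=64):
        super().__init__()
        # CNN encoder: two stride-2 convs shrink a 32-pixel-tall image
        # to feature maps of height 8 and width w/4.
        self.enc = nn.Sequential(
            nn.Conv2d(1, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Target-font embedding initialises the RNN state (the
        # "conditional setting" that allows one-to-many mappings).
        self.font_emb = nn.Embedding(n_fonts, hidden)
        # Bidirectional GRU over the width axis ties neighbouring
        # columns/patches together, keeping concatenated output consistent.
        self.rnn = nn.GRU(feat * 8, hidden, batch_first=True,
                          bidirectional=True)
        self.proj = nn.Linear(2 * hidden, feat * 8)
        # Decoder upsamples back to the input resolution.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, font_id):
        b, _, h, w = x.shape                  # h fixed (e.g. 32), w varies
        f = self.enc(x)                       # (b, feat, h/4, w/4)
        _, c, fh, fw = f.shape
        # Treat the width dimension as the time axis of the RNN.
        seq = f.permute(0, 3, 1, 2).reshape(b, fw, c * fh)
        h0 = self.font_emb(font_id).unsqueeze(0).repeat(2, 1, 1)  # 2 dirs
        out, _ = self.rnn(seq, h0.contiguous())
        f2 = self.proj(out).reshape(b, fw, c, fh).permute(0, 2, 3, 1)
        return self.dec(f2)                   # (b, 1, h, w)

# The same weights handle any width that is a multiple of 4:
g = ConvRecurrentGenerator()
for w in (64, 96, 160):
    y = g(torch.randn(2, 1, 32, w), torch.tensor([3, 7]))
    assert y.shape == (2, 1, 32, w)
```

In a full GAN setup this generator would be trained jointly with a discriminator and, as the abstract describes, an auxiliary classifier that predicts the font category of each generated word image.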
Bhunia, Ankan Kumar; Bhunia, Ayan Kumar; Banerjee, Prithaj; Konwer, Aishik; Bhowmick, Abir; Roy, Partha Pratim; and Pal, Umapada, "Word Level Font-to-Font Image Translation using Convolutional Recurrent Generative Adversarial Networks" (2018). Conference Articles. 43.