AGA-GAN: Attribute Guided Attention Generative Adversarial Network with U-Net for face hallucination

Article Type

Research Article

Publication Title

Image and Vision Computing

Abstract

The performance of facial super-resolution methods relies on their ability to recover facial structures and salient features effectively. Although convolutional neural network and generative adversarial network-based methods deliver impressive performance on face hallucination tasks, their ability to exploit the attributes associated with low-resolution images remains limited. In this paper, we propose an Attribute Guided Attention Generative Adversarial Network, which employs novel attribute guided attention (AGA) modules to identify various facial features in the image and focus the generation process on them. Stacking multiple AGA modules enables the recovery of both high- and low-level facial structures. We design the discriminator to learn discriminative features by exploiting the relationship between high-resolution images and their corresponding facial attribute annotations. We then explore the use of a U-Net based architecture to refine existing predictions and synthesize further facial details. Extensive experiments across several metrics show that our AGA-GAN and AGA-GAN + U-Net frameworks outperform several state-of-the-art face hallucination methods. We also demonstrate the viability of our method even when not every attribute descriptor is known, thus establishing its applicability in real-world scenarios. Our code is available at https://github.com/NoviceMAn-prog/AGA-GAN.

DOI

10.1016/j.imavis.2022.104534

Publication Date

10-1-2022

Comments

Open Access, Green
