LGVTON: a landmark guided approach for model to person virtual try-on

Article Type

Research Article

Publication Title

Multimedia Tools and Applications

Abstract

In this paper, we propose a Landmark Guided Virtual Try-On (LGVTON) method for clothes, which aims to solve the problem of clothing trials on e-commerce websites. Given images of two people, a person and a model, it generates a rendition of the person wearing the model's clothes. This is useful because most e-commerce websites do not provide standalone images of the clothes. We follow a three-stage approach to achieve our objective. In the first stage, LGVTON warps the clothes of the model using a Thin-Plate Spline (TPS) based transformation to fit the person. Unlike previous TPS-based methods, we compute the TPS transformation from the landmarks of the human and the clothes. This makes the warping independent of complex patterns, such as stripes, florals, and textures, present on the clothes. However, the computed warp may not always be precise, so we refine it in the subsequent stages with a mask generator (Stage 2) and an image synthesizer (Stage 3). The mask generator improves the fit of the warped clothes, and the image synthesizer ensures a realistic output. To tackle the lack of paired training data, we resort to a self-supervised training strategy; here, paired data refers to an image pair of a model and a person wearing the same clothes. We compare LGVTON with four existing methods on two popular fashion datasets, MPV and DeepFashion, using two performance measures: FID (Fréchet Inception Distance) and SSIM (Structural Similarity Index). In most cases, the proposed method outperforms the state-of-the-art methods.
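The paper gives the full formulation of the landmark-driven warp; as a rough illustration of the Stage-1 idea only, the sketch below fits a standard thin-plate spline from landmark correspondences and uses it to warp a cloth image by backward mapping. All names here (fit_tps, warp_cloth, the landmark arrays) are our own illustrative choices, not from the paper, and LGVTON's actual warp estimation differs in detail.

```python
"""Minimal TPS-from-landmarks sketch (assumed setup, not the authors' code)."""
import numpy as np

def _tps_kernel(r2):
    # Radial basis U(r) = r^2 log(r^2), with U(0) = 0 by convention.
    return np.where(r2 == 0, 0.0, r2 * np.log(np.maximum(r2, 1e-12)))

def fit_tps(src_pts, dst_pts):
    """Solve for TPS parameters mapping src_pts -> dst_pts.
    src_pts, dst_pts: (n, 2) arrays of corresponding (x, y) landmarks."""
    n = src_pts.shape[0]
    d2 = np.sum((src_pts[:, None, :] - src_pts[None, :, :]) ** 2, axis=-1)
    K = _tps_kernel(d2)                           # (n, n) radial terms
    P = np.hstack([np.ones((n, 1)), src_pts])     # (n, 3) affine terms
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    Y = np.zeros((n + 3, 2))
    Y[:n] = dst_pts
    return np.linalg.solve(L, Y)                  # (n+3, 2) weights

def apply_tps(params, src_pts, query):
    """Evaluate the fitted TPS at query points (m, 2)."""
    d2 = np.sum((query[:, None, :] - src_pts[None, :, :]) ** 2, axis=-1)
    U = _tps_kernel(d2)                           # (m, n)
    P = np.hstack([np.ones((query.shape[0], 1)), query])
    return U @ params[:-3] + P @ params[-3:]

def warp_cloth(cloth, cloth_lms, person_lms, out_hw):
    """Warp the cloth image into the person's frame by backward mapping:
    fit a TPS from person landmarks to cloth landmarks, then sample the
    cloth at the mapped source coordinates (nearest-neighbour here)."""
    h, w = out_hw
    params = fit_tps(person_lms, cloth_lms)
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = apply_tps(params, person_lms, grid)
    sx = np.clip(np.rint(src[:, 0]).astype(int), 0, cloth.shape[1] - 1)
    sy = np.clip(np.rint(src[:, 1]).astype(int), 0, cloth.shape[0] - 1)
    return cloth[sy, sx].reshape(h, w, -1)
```

Because the warp is driven purely by landmark geometry, it behaves the same regardless of the texture or pattern printed on the cloth, which is the property the abstract highlights.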

First Page

5051

Last Page

5087

DOI

10.1007/s11042-021-11647-9

Publication Date

2-1-2022
