Latent space mapping for generation of object elements with corresponding data annotation

Shabab Bazrafkan, Hossein Javidnia, Peter Corcoran

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Deep neural generative models such as Variational Auto-Encoders (VAE) and Generative Adversarial Networks (GAN) give promising results in estimating the data distribution across a range of machine learning applications. Recent results have been especially impressive in image synthesis, where learning the spatial appearance information is a key goal; this enables the generation of intermediate spatial data that corresponds to the original dataset. In the training stage, these models learn to reduce the distance between their output distribution and the actual data distribution, and in the test phase they map a latent space to the data space. Since such a model has already learned its latent space mapping, a natural question is whether, for a given generator, there is also a function mapping the latent space to other aspects of the database, such as annotations. In this work, it is shown that this mapping is relatively straightforward to learn using small neural network models trained by minimizing the mean square error. As a demonstration of this technique, two example use cases have been implemented: firstly, the generation of facial images with corresponding landmark data; and secondly, the generation of low-quality iris images (as would be captured by a smartphone user-facing camera) with a corresponding ground-truth segmentation contour.
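The core idea of the abstract, learning a small regression network from a generator's latent space to an annotation space by minimizing mean squared error, can be sketched in a few lines. The following is a hypothetical illustration, not the authors' code: the latent vectors, the annotation targets, the network sizes, and the linear toy relationship are all stand-in assumptions.

```python
import numpy as np

# Hypothetical sketch: learn a mapping f(z) -> y from a generator's
# latent space z (here 64-D) to annotation vectors y (here 10-D, e.g.
# flattened landmark coordinates) with a small one-hidden-layer
# network trained by minimizing mean squared error.
rng = np.random.default_rng(0)

# Toy stand-in data: latent samples and annotations related to z by an
# unknown linear map plus a little noise.
Z = rng.normal(size=(512, 64))
W_true = rng.normal(size=(64, 10)) / 8.0
Y = Z @ W_true + 0.01 * rng.normal(size=(512, 10))

# Small network: 64 -> 32 -> 10 with tanh hidden units.
W1 = 0.1 * rng.normal(size=(64, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.normal(size=(32, 10)); b2 = np.zeros(10)

lr = 0.05
for step in range(2000):
    H = np.tanh(Z @ W1 + b1)            # hidden activations
    P = H @ W2 + b2                     # predicted annotations
    err = P - Y                         # d(MSE)/dP up to a constant
    gW2 = H.T @ err / len(Z); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)    # backprop through tanh
    gW1 = Z.T @ dH / len(Z); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2      # gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1

mse = ((np.tanh(Z @ W1 + b1) @ W2 + b2 - Y) ** 2).mean()
print(f"final training MSE: {mse:.4f}")
```

In the paper's setting, `Z` would be latent vectors fed to a pre-trained generator and `Y` the annotations (landmarks or segmentation contours) of the corresponding generated images; the small network then lets each newly sampled latent vector arrive with its annotation for free.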

Original language: English
Pages (from-to): 179-186
Number of pages: 8
Journal: Pattern Recognition Letters
Volume: 116
Publication status: Published - 1 Dec 2018

Keywords

  • Deep neural networks
  • Generative models
  • Latent space mapping
