Accurate 2D Facial Depth Models Derived from a 3D Synthetic Dataset

Faisal Khan, Shubhajit Basak, Peter Corcoran

Research output: Chapter in Book/Conference Proceeding › Conference Publication › peer-review

3 Citations (Scopus)

Abstract

As consumer technologies (CT) seek to engage and interact more closely with the end-user, it becomes important to observe and analyze a user's interaction with CT devices and associated services. One of the most useful modes for monitoring a user is to analyze a real-time video stream of their face. Facial expressions, movements, and biometrics all provide important information, but obtaining a calibrated input with 3D accuracy from a single camera requires accurate knowledge of facial depth and the distance of different facial features from the camera. In this paper, a method is proposed to generate high-accuracy synthetic human facial depth data from synthetic 3D face models. The generated synthetic facial dataset is then used to train convolutional neural networks (CNNs) for monocular facial depth estimation, and the results of the experiments are presented.
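The core data-generation step the abstract describes — producing a ground-truth depth map from a 3D face model — can be illustrated with a minimal sketch. This is not the authors' pipeline; the point-based z-buffer rasterizer and the camera parameters below are illustrative assumptions, standing in for whatever renderer the paper actually uses.

```python
import numpy as np

def render_depth_map(vertices, fx, fy, cx, cy, height, width):
    """Rasterize 3D model points (camera coordinates, z pointing away
    from the camera) into a per-pixel depth map with a simple z-buffer.
    Pixels that receive no geometry stay at 0 (i.e. 'no depth')."""
    depth = np.zeros((height, width), dtype=np.float32)
    z = vertices[:, 2]
    valid = z > 0  # points behind the camera cannot be projected
    # Pinhole projection: u = fx * x / z + cx,  v = fy * y / z + cy
    u = np.round(fx * vertices[valid, 0] / z[valid] + cx).astype(int)
    v = np.round(fy * vertices[valid, 1] / z[valid] + cy).astype(int)
    zv = z[valid]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], zv[inside]):
        # z-buffer test: keep the nearest surface point per pixel
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth

# Toy example: three points, two of which project to the same pixel
pts = np.array([[0.0, 0.0, 1.0],   # image centre, depth 1.0
                [0.0, 0.0, 2.0],   # same pixel, farther -> occluded
                [0.1, 0.0, 1.0]])  # offset pixel
dmap = render_depth_map(pts, fx=100, fy=100, cx=32, cy=32,
                        height=64, width=64)
print(dmap[32, 32])  # → 1.0 (the nearer of the two candidates wins)
```

A depth map rendered this way per synthetic face, paired with the corresponding RGB render, gives the (image, depth) training pairs a monocular depth-estimation CNN needs.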

Original language: English
Title of host publication: 2021 IEEE International Conference on Consumer Electronics, ICCE 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728197661
DOIs
Publication status: Published - 10 Jan 2021
Event: 2021 IEEE International Conference on Consumer Electronics, ICCE 2021 - Las Vegas, United States
Duration: 10 Jan 2021 - 12 Jan 2021

Publication series

Name: Digest of Technical Papers - IEEE International Conference on Consumer Electronics
Volume: 2021-January
ISSN (Print): 0747-668X

Conference

Conference: 2021 IEEE International Conference on Consumer Electronics, ICCE 2021
Country/Territory: United States
City: Las Vegas
Period: 10/01/21 - 12/01/21

Keywords

  • 3D Facial models
  • CNNs
  • Facial Depth models

