Pose-Aware Speech Driven Facial Landmark Animation Pipeline for Automated Dubbing

  • Dan Bigioi
  • Hugh Jordan
  • Rishabh Jain
  • Rachel McDonnell
  • Peter Corcoran

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

A novel neural pipeline is proposed for the task of automatic dubbing, generating pose-aware 3D animated facial landmarks synchronized to a target speech signal. The goal is to automatically synchronize a target actor's lips and facial motion to an unseen speech sequence while preserving the quality of the original performance. Given a 3D facial keypoint sequence extracted from any reference video and a target audio clip, the pipeline learns to generate head-pose-aware, identity-aware landmarks and outputs accurate 3D lip motion directly at inference time. These generated landmarks can then be used to render a photo-realistic video via an additional image-to-image translation stage. A novel data augmentation technique is also introduced that increases the size of the training dataset from N audio/visual pairs to up to N×N unique pairs for the task of automatic dubbing. The trained inference pipeline employs an LSTM-based network that takes Mel-coefficients extracted from an unseen speech sequence as input, combined with head pose and identity parameters extracted from a reference video, and generates a new set of pose-aware 3D landmarks synchronized with the unseen speech.
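The N→N×N augmentation described in the abstract can be illustrated with a minimal sketch: because lip motion is driven by audio while pose and identity come from the reference video, every audio clip can in principle be cross-paired with every reference landmark sequence. The function and variable names below are hypothetical illustrations, and the paper's actual alignment and retiming details are not reproduced here.

```python
from itertools import product

def augment_pairs(audio_clips, landmark_seqs):
    """Cross-pair every audio clip with every reference landmark
    sequence, growing N aligned audio/visual pairs into up to N*N
    training pairs (hypothetical helper sketching the idea)."""
    return [(a, v) for a, v in product(audio_clips, landmark_seqs)]

# Three original audio/visual pairs yield nine unique combinations.
pairs = augment_pairs(["audio_0", "audio_1", "audio_2"],
                      ["lmk_0", "lmk_1", "lmk_2"])
print(len(pairs))  # 3 clips -> 9 unique audio/visual pairs
```

In practice each cross-pairing would still require the landmark sequence to be retimed or regenerated to match the new audio, which is precisely what the speech-driven landmark network learns to do.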

Original language: English
Pages (from-to): 133357-133369
Number of pages: 13
Journal: IEEE Access
Volume: 10
DOIs
Publication status: Published - 2022

Keywords

  • Machine learning
  • artificial intelligence
  • audio driven deep fakes
  • automatic dubbing
  • computer vision
  • lip synchronization
  • talking head generation
