IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2012

A Data-driven Approach for Facial Expression Synthesis in Video

Kai Li1, 2   Feng Xu1   Jue Wang3   Qionghai Dai1   Yebin Liu1

1Department of Automation, Tsinghua University     2Graduate School at Shenzhen, Tsinghua University     3Adobe Systems


Download the PDF

Download the video



  This paper presents a method to synthesize a realistic facial animation of a target person, driven by a facial performance video of another person. Unlike traditional facial animation approaches, our system takes advantage of an existing facial performance database of the target person, and generates the final video by retrieving database frames whose expressions are similar to those in the input. To achieve this, we develop an expression similarity metric that accurately measures the expression difference between two video frames. To enforce temporal coherence, our system employs a shortest path algorithm to choose the optimal image for each frame from a set of candidate frames selected by the similarity metric. Finally, our system adopts an expression mapping method to further reduce the expression difference between the input and retrieved frames. Experimental results show that our system can generate high quality facial animation using this data-driven approach.
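The temporal-coherence step described above can be sketched as a Viterbi-style dynamic program: each input frame has a set of candidate database frames scored by the expression similarity metric, and a shortest path through the candidates balances per-frame match cost against a transition cost between consecutive picks. The sketch below is illustrative only; the cost arrays, their shapes, and the function name are assumptions, not the paper's actual formulation.

```python
import numpy as np

def shortest_path_selection(match_cost, trans_cost):
    """Pick one candidate per frame via a shortest path (Viterbi DP).

    match_cost: (T, K) array -- expression dissimilarity between input
                frame t and its k-th candidate database frame
                (hypothetical stand-in for the paper's similarity metric).
    trans_cost: (T-1, K, K) array -- cost of transitioning from the
                previous frame's candidate to the current frame's.
    Returns an array of T chosen candidate indices.
    """
    T, K = match_cost.shape
    dist = match_cost[0].copy()          # best path cost ending at each candidate
    back = np.zeros((T, K), dtype=int)   # back-pointers for path recovery
    for t in range(1, T):
        # total[p, c] = best cost of arriving at candidate c via previous p
        total = dist[:, None] + trans_cost[t - 1]
        back[t] = np.argmin(total, axis=0)
        dist = total[back[t], np.arange(K)] + match_cost[t]
    # Trace the optimal path backwards from the cheapest final candidate.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmin(dist))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

With a large switching penalty in `trans_cost`, the path prefers staying on one database clip even when another candidate matches a single frame slightly better, which is the intended smoothing effect.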



@inproceedings{li2012expression,
  author = {Li, Kai and Xu, Feng and Wang, Jue and Dai, Qionghai and Liu, Yebin},
  title = {A Data-driven Approach for Facial Expression Synthesis in Video},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012},
  pages = {57--64},
}



Last Update: Oct. 25, 2012