## 3D Dynamic Reconstruction of Face

### Introduction

Traditional works on this topic can already reconstruct large-scale facial expressions in real time and transfer them to virtual avatars (e.g. Animoji). However, large-scale facial motions alone are still far from expressing human facial expressions vividly and completely. To overcome this limitation, we focus on reconstructing detailed facial motions. First, we incorporate the audio signal to better reconstruct lip motions while users are talking or reading. Audio has a strong correlation with mouth shapes, and it compensates well for the visual signal when the head is turned away from the camera or occluded by hands or books. Besides the lips, the eye regions are also very important for understanding a facial expression, as the eyes are said to be the windows of the mind. Based on this observation, we also focus on the eye regions and propose two techniques to reconstruct eyeball and eyelid motions in real time.

### Highlights

#### Video-Audio Driven Real-Time Facial Animation

- Uses both video and audio input, requires no user-specific training, and is robust to occlusions, large rotations, and background noise
- Combines a multi-linear face model with a speaker-independent DNN acoustic model to extract user-independent features from the input streams in real time (see the first sketch below)
- A data-driven lip motion regressor models the correlation between speech data and mouth shapes

> Liu, Y., Xu, F., Chai, J., Tong, X., Wang, L., & Huo, Q. (2015). [Video-audio driven real-time facial animation](https://dl.acm.org/citation.cfm?id=2818122). ACM Transactions on Graphics (TOG), 34(6), 182.

#### Controllable High-Fidelity Facial Performance Transfer

- A novel facial expression transfer and editing technique for high-fidelity facial performance data
- An efficient multi-scale representation that decomposes a high-fidelity facial expression into high-level facial feature deformation, large-scale mesh deformation, and fine-scale facial details
- Allows users to quickly modify and control the retargeted facial sequences in the spatio-temporal domain

> Xu, F., Chai, J., Liu, Y., & Tong, X. (2014). [Controllable high-fidelity facial performance transfer](https://dl.acm.org/citation.cfm?id=2601210). ACM Transactions on Graphics (TOG), 33(4), 42.

#### Real-Time 3D Eye Performance Reconstruction for RGBD Cameras

- Combines real-time facial tracking with eye performance reconstruction
- A simplified geometry and appearance eyeball model, together with a 3D Taylor expansion-based optimization scheme, achieves efficient and robust eye performance reconstruction
- A bidirectional regression model predicts eye motions when occlusions or tracking failures occur

> Wen, Q., Xu, F., & Yong, J. H. (2017). [Real-time 3D eye performance reconstruction for RGBD cameras](https://ieeexplore.ieee.org/abstract/document/7790904). IEEE Transactions on Visualization and Computer Graphics, 23(12), 2586-2598.

#### Real-Time 3D Eyelids Tracking from Semantic Edges

- Real-time reconstruction, tracking, and animation of realistic human eyelid shape and motion
- A linear parametric eyelid model for human eyelid shape and motion (see the second sketch below)
- A real-time semantic edge-based eyelid fitting solution

> Wen, Q., Xu, F., Lu, M., & Yong, J. H. (2017). [Real-time 3D eyelids tracking from semantic edges](https://dl.acm.org/citation.cfm?id=3130837). ACM Transactions on Graphics (TOG), 36(6), 193.
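To make the multi-linear face model concrete, here is a minimal NumPy sketch of how such a model turns identity and expression weights into a 3D mesh. The dimensions, the random core tensor, and the `reconstruct_face` helper are illustrative assumptions, not the actual model or data used in the paper.

```python
import numpy as np

# Hypothetical dimensions for a multi-linear (bilinear) face model:
# 3 * n_vertices mesh coordinates, n_id identity modes, n_exp expression modes.
n_vertices, n_id, n_exp = 5000, 50, 25

# Core tensor of the multi-linear model, normally learned offline from a face
# database (e.g. via higher-order SVD). Random here purely as a placeholder.
core = np.random.randn(3 * n_vertices, n_id, n_exp)

def reconstruct_face(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weights
    to obtain the 3D mesh vertices for one frame."""
    # (3V, n_id, n_exp) contracted with (n_id,) -> (3V, n_exp)
    id_contracted = np.tensordot(core, w_id, axes=([1], [0]))
    # (3V, n_exp) @ (n_exp,) -> (3V,)
    vertices = id_contracted @ w_exp
    return vertices.reshape(-1, 3)

# Identity weights are estimated once per user; expression weights are solved
# per frame from the tracked input. Random values here for demonstration only.
w_id = np.random.randn(n_id)
w_exp = np.random.randn(n_exp)
mesh = reconstruct_face(core, w_id, w_exp)
print(mesh.shape)  # (5000, 3)
```

In a tracking setting, the per-frame expression weights would be obtained by minimizing a fitting energy against the input frames rather than sampled at random, and in this work the mouth region is further constrained by the audio-driven lip motion regressor.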
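Similarly, a minimal sketch of a linear parametric eyelid model and a least-squares fit, assuming a mean eyelid curve plus a learned deformation basis. All sizes, the random basis, and the `fit_eyelid` helper are hypothetical; the actual method fits the model to semantic eyelid edges detected in the image, whereas this sketch fits directly to 3D points for simplicity.

```python
import numpy as np

# Hypothetical linear eyelid model: an eyelid curve is the mean shape plus a
# linear combination of basis shapes (sizes are placeholders, not the paper's).
n_points, n_params = 20, 6
mean_shape = np.zeros(3 * n_points)              # mean eyelid curve (flattened 3D points)
basis = np.random.randn(3 * n_points, n_params)  # learned deformation basis (placeholder)

def eyelid_shape(params):
    """Evaluate the linear eyelid model for a given parameter vector."""
    return (mean_shape + basis @ params).reshape(-1, 3)

def fit_eyelid(observed_points, reg=1e-2):
    """Fit model parameters to observed points sampled along detected eyelid
    edges, via regularized linear least squares."""
    b = observed_points.reshape(-1) - mean_shape
    A = basis
    # Solve (A^T A + reg * I) p = A^T b
    return np.linalg.solve(A.T @ A + reg * np.eye(n_params), A.T @ b)

# Fake "detected" edge points for demonstration only.
target = eyelid_shape(np.random.randn(n_params) * 0.1)
params = fit_eyelid(target)
print(np.abs(eyelid_shape(params) - target).max())  # small residual
```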
### Research Output

This line of work has produced more than 10 papers at SIGGRAPH, SIGGRAPH Asia, ICCV, CVPR, TOG, and TVCG, which are top venues in computer vision and graphics. Beyond publications, these results have also been successfully transferred to industry: for example, the work on high-quality eye region reconstruction has been transferred to Huawei and SenseTime for a total of 200 million Chinese yuan.