Feng Xu 徐枫
Ph.D. Student
TNList, Department of Automation, Tsinghua University, Beijing 100084, China

Telephone: +86-10-6278-8613 ext. 816 (office)
Mobile: +86-1381-0352-297
E-Mail: xufeng2003@gmail.com


Education

·         Ph.D., Sep. 2007 ~ present, Department of Automation, Tsinghua University, Beijing, China.

·         B.S., Sep. 2003 ~ Jul. 2007, Department of Physics, Tsinghua University, Beijing, China.


Research Interests


Research Projects

[Image: bart.png]

Video-based Characters - Creating New Human Performances from a Multi-view Video Database
We present a method to synthesize realistic video sequences of humans according to arbitrary user-defined body motions and viewpoints. We first capture a small database of multi-view video sequences of an actor performing various basic motions. This database needs to be captured only once and serves as the input to our synthesis algorithm. We then apply a marker-less model-based performance capture approach to the entire database, to obtain pose and geometry of the actor in each database frame. To create novel video sequences of the actor in the database, a user animates the 3D human skeleton with arbitrary motion and viewpoints. Our technique then synthesizes a realistic video sequence of the actor performing the specified motion based only on the initial database. The first key component of our approach is a new efficient retrieval strategy to find appropriate spatio-temporally coherent database frames from which to synthesize the target video frames. The second key component is a warping-based texture synthesis approach that uses the retrieved most similar database frames to synthesize spatio-temporally coherent target video frames. This enables us, for instance, to easily create video sequences of actors performing dangerous stunts, without the actors ever having to actually perform them. We show through a variety of result videos and through a user study that we can synthesize highly realistic videos of people, even if the target motions and camera views are starkly different from the database content.
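The frame-retrieval idea can be sketched in a few lines (a toy simplification with hypothetical names and scoring; the actual method matches full skeletal poses and viewpoints, not flat feature vectors): each target frame picks the database frame that minimizes pose distance plus a temporal-coherence penalty favoring the successor of the previously chosen frame.

```python
import numpy as np

def retrieve_frames(target_poses, db_poses, coherence_weight=0.5):
    """Greedy spatio-temporally coherent retrieval (toy version).

    For each target pose, choose the database frame minimizing pose
    distance plus a penalty on jumping away from the successor of the
    previously selected frame.
    """
    chosen, prev = [], None
    for t in target_poses:
        cost = np.linalg.norm(db_poses - t, axis=1)
        if prev is not None:
            # prefer staying on the same database sub-sequence
            cost = cost + coherence_weight * np.abs(
                np.arange(len(db_poses)) - (prev + 1))
        prev = int(np.argmin(cost))
        chosen.append(prev)
    return chosen
```

With a database of ten 1-D "poses" 0..9 and targets [2, 3, 4], the coherence term breaks ties toward successor frames, so the retrieved indices are the consecutive frames 2, 3, 4.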

[Image: seg.png]

Occlusion-Aware Motion Layer Extraction under Large Inter-Frame Motions
Extracting motion layers from videos is an important task for video representation, analysis and compression. For videos with large inter-frame motions, motion layer extraction is challenging in two respects: the estimation of large disparity motions, and the awareness of large occluded regions. In this paper, we propose an effective method for motion layer extraction under large disparity motions. To robustly estimate large displacement motions, we have developed an efficient voting-based method which estimates planar homographies from sparse feature matches. To handle occlusions, we first integrate color and motion consistency into a Markov random field framework to achieve per-pixel assignment with occlusion detection. Then, we perform motion-color segmentation and an earth mover’s distance-based comparison to determine motion labels for occluded pixels. Experimental results show that our proposed method achieves good performance in automatically extracting multiple moving objects under large disparity motions while maintaining a low computational cost.
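The voting idea can be illustrated with a deliberately simplified sketch (pure translations stand in for the planar homographies of the actual method, and the bin size is a made-up parameter): each sparse feature match votes for its displacement, and the most-voted displacement bins give the dominant layer motions.

```python
def dominant_motions(matches, bin_size=2.0, top_k=2):
    """Voting sketch: each sparse feature match casts a vote for its
    displacement; the displacement-space bins with the most votes give
    the dominant layer motions (translations instead of homographies)."""
    votes = {}
    for (x0, y0), (x1, y1) in matches:
        key = (round((x1 - x0) / bin_size), round((y1 - y0) / bin_size))
        votes[key] = votes.get(key, 0) + 1
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])[:top_k]
    return [(dx * bin_size, dy * bin_size) for (dx, dy), _ in ranked]
```

With five matches displaced by (10, 0), three by (0, -6), and one outlier, the two dominant motions are recovered despite the outlier vote.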

[Image: seg&3D.png]

Video-Object Segmentation and 3D-Trajectory Estimation for Monocular Video Sequences
In this paper, we describe a video-object segmentation and 3D-trajectory estimation method for the analysis of dynamic scenes from a monocular uncalibrated view. Based on the color and motion information among video frames, our proposed method segments the scene, calibrates the camera, and calculates the 3D trajectories of moving objects. It can be employed for video-object segmentation, 2D-to-3D video conversion, video-object retrieval, etc. In our method, reliable 2D feature motions are established by comparing SIFT descriptors among successive frames, and image over-segmentation is achieved using a graph-based method. Then, the 2D motions and the segmentation result iteratively refine each other in a hierarchically structured framework to achieve video-object segmentation. Finally, the 3D trajectories of the segmented moving objects are estimated based on a local constant-velocity constraint, and are refined by a Hidden Markov Model (HMM)-based algorithm. Experiments show that the proposed framework achieves good performance in both object segmentation and 3D-trajectory estimation.
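The local constant-velocity constraint can be sketched as follows (a simplified stand-in: a sliding-window least-squares line fit replaces the HMM-based refinement, and the window size is a hypothetical parameter):

```python
import numpy as np

def refine_trajectory(points, window=5):
    """Enforce a local constant-velocity constraint: replace each 3D
    point with its value on a least-squares line fit over a sliding
    temporal window (a simplified stand-in for the HMM refinement)."""
    points = np.asarray(points, dtype=float)
    n, half = len(points), window // 2
    refined = np.empty_like(points)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        t = np.arange(lo, hi)
        # fit x(t), y(t), z(t) each as a line and evaluate at frame i
        for d in range(points.shape[1]):
            a, b = np.polyfit(t, points[lo:hi, d], 1)
            refined[i, d] = a * i + b
    return refined
```

A trajectory that already moves at constant velocity is left unchanged, while independent per-frame noise is averaged out by the window fit.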

2D-to-3D Conversion Based on Motion and Color Mergence
In this work, we present an efficient scheme to synthesize stereoscopic video from monoscopic video. We use an improved optical-flow method to extract pixel-level motion for each frame. By considering the magnitude of the estimated motion, we can classify the moving objects. Then, to achieve more accurate classification, we combine color information in the frame using a method derived from the minimum discrimination information (MDI) principle. Finally, a constraint-based flood-fill method is developed to segment the frame and assign depth values to the different segmented regions. Experimental results show that our scheme achieves good performance in both segmentation and depth determination.
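The final segmentation-and-depth stage can be sketched roughly (a toy version: a plain magnitude threshold and 4-connected flood fill stand in for the constraint-based method, and the depth values are arbitrary):

```python
from collections import deque

def segment_and_assign_depth(flow_mag, threshold=1.0, near=0, far=255):
    """Sketch of the final stage: threshold motion magnitude to find
    moving pixels, flood-fill them into connected regions, and assign
    each region its own depth (moving regions near, background far)."""
    h, w = len(flow_mag), len(flow_mag[0])
    depth = [[far] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    region_depth = near
    for sy in range(h):
        for sx in range(w):
            if flow_mag[sy][sx] <= threshold or seen[sy][sx]:
                continue
            # 4-connected flood fill of one moving region
            queue = deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                depth[y][x] = region_depth
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and flow_mag[ny][nx] > threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            region_depth += 32  # hypothetical per-region depth step
    return depth
```

Each connected moving region receives a distinct depth value, while pixels whose motion magnitude stays below the threshold keep the background depth.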


Publications

·         Feng Xu, Yebin Liu, Carsten Stoll, James Tompkin, Gaurav Bharaj, Qionghai Dai, Hans-Peter Seidel, Jan Kautz and Christian Theobalt, "Video-based Characters - Creating New Human Performances from a Multi-view Video Database", conditionally accepted by SIGGRAPH 2011.

·         Feng Xu and Qionghai Dai, "Occlusion-Aware Motion Layer Extraction under Large Inter-Frame Motions", accepted by IEEE Transactions on Image Processing, 2011.

·         Feng Xu, Kin-Man Lam and Qionghai Dai, "Video-object segmentation and 3D-trajectory estimation for monocular video sequences", Image and Vision Computing, Volume 29 Issue 2-3, February, 2011.

·         Feng Xu, Guihua Er, Xudong Xie and Qionghai Dai, "2D-to-3D Conversion Based on Motion and Color Mergence", 3DTV Conference: The True Vision Capture, Transmission and Display of 3D Video, 2008 (Oral).

·         Youwei Yan, Feng Xu, Qionghai Dai, Xiaodong Liu, "A novel method for automatic 2D-to-3D video conversion", 3DTV Conference: The True Vision Capture, Transmission and Display of 3D Video, 2010.

·         Xudong Xie, Jie Gong, Qionghai Dai, Feng Xu, "Rotation and scaling invariant texture classification based on Gabor wavelets", 5th International Conference on Visual Information Engineering, 2008.


Honors and Awards

·         Oct. 2004: Zheng Zongsheng Scholarship

·         Oct. 2005: First-Grade Scholarship for excellence in academic performance

·         Jul. 2007: Excellent Thesis Award, Tsinghua University

·         Jan. 2008: Second Prize for laboratory construction, Tsinghua University (as a team member)


Links

Broadband Network and Digital Media Lab
Tsinghua University

My Advisor:

Prof. Qionghai Dai

 

My Friends:

Yebin Liu

Xun Cao

Yigang Peng

Chenglei Wu
