SIGGRAPH 2016: The 43rd International Conference and Exhibition on Computer Graphics and Interactive Techniques
24-28 July
Anaheim, California
A novel image-based representation for dynamic 3D avatars that effectively handles a variety of hairstyles and headwear, and generates expressive facial animations with fine-scale details in real time.
Chen Cao
Zhejiang University
Hongzhi Wu
Zhejiang University
Yanlin Weng
Zhejiang University
Tianjia Shao
Zhejiang University
Kun Zhou
Zhejiang University
This general model for automatic lip-sync, driven only by a vocal performance and its transcript, requires just seconds to lip-sync minutes of audio, and the quality of the result rivals data-driven and performance-capture methods.
Pif Edwards
University of Toronto
Chris Landreth
University of Toronto
Eugene Fiume
University of Toronto
Karan Singh
University of Toronto
A method for modifying the apparent head pose or the distance between camera and subject in a portrait photo. The approach fits a full-perspective camera and a parametric 3D head model, then builds a 2D warp to approximate the effect of the desired change in 3D.
Ohad Fried
Princeton University
Eli Shechtman
Adobe Research
Dan Goldman
Adobe Research
Adam Finkelstein
Princeton University
This technique for transferring the painterly style of one head portrait onto another is based on recent advances in deep learning. Results show that the method handles a wide variety of styles while maintaining the integrity of the facial structures.
Ahmed Selim
Trinity College Dublin
Mohamed Elgharib
Qatar Computing Research Institute
Linda Doyle
Trinity College Dublin