SIGGRAPH Asia tech papers preview: 3D avatars from a single photo


Over the past few years, I’ve had the pleasure of reading about and watching the results of digital human and face research led by Hao Li. He is, simultaneously, CEO & Co-Founder of Pinscreen, Inc., Assistant Professor of Computer Science at the University of Southern California, and Director of the Vision and Graphics Lab at the USC Institute for Creative Technologies.

Most recently, Li and his collaborators have been demonstrating tools to produce 3D human avatars directly from facial scans and even just single photos. Their latest research, titled ‘Avatar Digitization From a Single Image For Real-Time Rendering’, is being published as a technical paper at SIGGRAPH Asia in Bangkok (27 – 30 November). I had a quick chat with Li about what the research is all about, and where his startup Pinscreen is up to. Read on to also find out how to get a discount to SIGGRAPH Asia.

vfxblog: Could you explain the general idea behind this paper, and what being able to realize a 3D avatar from a single photo involves?

Hao Li: This paper proposes the first system to build a complete, game-character-like digital avatar, including hair and a full rig, from a single input photograph. With such flexibility, one of the biggest advantages is that people will be able to start creating CG characters of celebrities, their friends, or themselves without further constraints. This will open up new possibilities for all kinds of applications, especially AR, VR, and gaming, as well as a democratized form of digital content creation in general.

To create such an avatar, we needed to engineer a comprehensive framework that not only combines single-view face modeling and hair digitization, but also generates models that can be rendered efficiently in game engines. State-of-the-art avatars for real-time rendering often use polystrips (or polycards) as the hair representation, and generating those typically requires skilled artistry. The idea behind this paper is to develop an algorithm that can customize existing polystrip hair models to individual subjects, while also ensuring that they look good. We combine several state-of-the-art components, such as our high-quality facial texture inference technique based on deep convolutional neural networks, and also ensure a robust treatment of hair using a deep learning approach.
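
To make the shape of that pipeline concrete, here is a minimal Python sketch of how the stages Li describes might compose. Every function below is a hypothetical placeholder, not Pinscreen's API or the paper's published code; each stage is stubbed so the skeleton runs and only the structure is illustrated.

```python
# Hypothetical sketch of the single-photo avatar pipeline described
# above. All function names are illustrative assumptions; each stage
# returns a placeholder so the composition is runnable.

from dataclasses import dataclass
from typing import Any


def fit_face_model(photo: Any) -> Any:
    """Single-view face modeling: fit 3D face geometry to the photo."""
    return "face_mesh"  # placeholder


def infer_texture_cnn(photo: Any, face_mesh: Any) -> Any:
    """Deep-CNN texture inference, completing regions unseen in the photo."""
    return "face_texture"  # placeholder


def fit_polystrips(photo: Any, face_mesh: Any) -> Any:
    """Retrieve a polystrip hairstyle and deform it to match the subject."""
    return "hair_polystrips"  # placeholder


def attach_rig(face_mesh: Any) -> Any:
    """Attach a facial rig so the avatar is game-engine ready."""
    return "rig"  # placeholder


@dataclass
class Avatar:
    face_mesh: Any
    face_texture: Any
    hair_polystrips: Any
    rig: Any


def build_avatar(photo: Any) -> Avatar:
    # Face geometry first; texture, hair, and rig all build on it.
    mesh = fit_face_model(photo)
    return Avatar(
        face_mesh=mesh,
        face_texture=infer_texture_cnn(photo, mesh),
        hair_polystrips=fit_polystrips(photo, mesh),
        rig=attach_rig(mesh),
    )


if __name__ == "__main__":
    print(build_avatar("photo.jpg"))
```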

This paper demonstrates a prototype system that we developed half a year ago, and the computation is quite involved. Our latest prototype at Pinscreen takes only several seconds from input photo to 3D avatar, using more efficient algorithms and parallelized computing.

vfxblog: Hair is an important part of this – what does it take to make the hair feel lifelike but also manageable in terms of real-time rendering?

Hao Li: As opposed to the face, hair is an incredibly complex geometric component of the human body. It is volumetric, the dimensionality of hairstyle variations and deformations is huge, and it is challenging to predict a plausible shape in unseen regions. Furthermore, rendering realistic hair is very difficult to achieve due to the complex interaction between strands and their surroundings. Photorealistic, natural-looking digital hair as seen in VFX is known to involve a complex modeling process and intensive rendering computations. In interactive settings such as games, graphics engineers and digital artists have found ways to develop efficient representations to render high-quality hair in real time, but the assets are still very difficult to produce and require highly skilled and experienced artists. In order to automate this creation, we need to develop machine learning techniques that can mimic the creation process of artists as much as possible, while covering as many types of hairstyles as possible.
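
For readers unfamiliar with the polystrip representation Li mentions: each ribbon is just a strip of textured quads swept along a guide curve, which is why a whole clump of hair renders so cheaply compared with individual strands. The toy NumPy sketch below (not the paper's code) builds one such ribbon; the frame computation and all parameters are simplified, illustrative assumptions.

```python
# Toy illustration of a polystrip (hair card): one flat ribbon of
# quads swept along a 3D guide curve stands in for a clump of strands.
import numpy as np


def polystrip(guide: np.ndarray, width: float):
    """Sweep a flat ribbon along a 3D guide curve.

    guide: (n, 3) points along the clump's center line.
    Returns (vertices, quads): 2n vertices and n-1 quad faces.
    """
    # Tangent along the curve, via finite differences.
    tangent = np.gradient(guide, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)

    # A side vector perpendicular to the tangent. Simplified frame:
    # this degenerates when the tangent is parallel to `up`; a real
    # implementation would use a more robust frame (e.g. parallel
    # transport) and orient ribbons toward the scalp or camera.
    up = np.array([0.0, 1.0, 0.0])
    side = np.cross(tangent, up)
    side /= np.linalg.norm(side, axis=1, keepdims=True)

    # Two vertices per guide point: left and right edge of the ribbon.
    left = guide - 0.5 * width * side
    right = guide + 0.5 * width * side
    vertices = np.stack([left, right], axis=1).reshape(-1, 3)

    # One quad per curve segment (indices into the vertex array).
    n = len(guide)
    quads = [(2 * i, 2 * i + 1, 2 * i + 3, 2 * i + 2) for i in range(n - 1)]
    return vertices, quads


# Example: a gently curving clump, 10 samples -> 20 vertices, 9 quads.
t = np.linspace(0.0, 3.0, 10)
guide = np.stack([np.linspace(0.0, 0.1, 10),    # drift in x
                  np.linspace(0.0, -0.3, 10),   # fall in y
                  0.05 * np.sin(t)], axis=1)    # wave in z
verts, quads = polystrip(guide, width=0.02)
print(verts.shape, len(quads))  # (20, 3) 9
```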

Download the paper here. The citation is: Liwen Hu, Shunsuke Saito, Lingyu Wei, Koki Nagano, Jaewoo Seo, Jens Fursund, Iman Sadeghi, Carrie Sun, Yen-Chun Chen, and Hao Li, ‘Avatar Digitization From a Single Image For Real-Time Rendering’, ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2017), November 2017.

vfxblog: Where do you see this particular research having direct application?

Hao Li: The obvious applications are personalized gaming and VR, where users become digital avatars and interact with each other in a virtual environment; this is important for realistic simulation and training purposes as well as immersive communication and telepresence in general. Beyond that, I believe that once these technologies are available to consumers, new applications will emerge, as well as new forms of augmented reality-driven experiences. In the long term, I believe that humans will also interact with virtual avatars and communicate through parametric models, as they enable us to take any possible form.

vfxblog: Anything you can say about Pinscreen being available for wider use right now?

Hao Li: We are about to launch an app that will feature this technology and will be available to everyone :-).

Remember, vfxblog readers can sign up to SIGGRAPH Asia in Bangkok with a 10% discount. Head to http://bit.ly/sa17reg and use the code EP107010MS71.
