[VGS-IT] Reconstructing the Real World in Motion

FIT VUT v Brně, March 23, 2016

The next speaker in VGS-IT series will be Christian Theobalt.

The talk will be given on Wednesday, March 23 at 1pm in room G202.

Title: Reconstructing the Real World in Motion

Abstract: Even though many challenges remain unsolved, in recent years computer graphics algorithms to render photo-realistic imagery have seen tremendous progress. An important prerequisite for high-quality renderings is the availability of good models of the scenes to be rendered, namely models of shape, motion and appearance. Unfortunately, the technology to create such models has not kept pace with the technology to render the imagery. In fact, we observe a content creation bottleneck, as it often takes person-months of tedious manual work by animation artists to craft models of moving virtual scenes.

To overcome this limitation, the graphics and vision communities have been developing techniques to capture dense 4D (3D+time) models of dynamic scenes from real-world examples, for instance from footage of real-world scenes recorded with cameras or other sensors. One example is performance capture methods that measure detailed dynamic surface models, for example of actors or an actor's face, from multi-view video and without markers in the scene. Even though such 4D capture methods have made big strides, they are still at an early stage. Their application is limited to scenes of moderate complexity in controlled environments, reconstructed detail is limited, and captured content cannot be easily modified, to name only a few restrictions. Recently, the need for efficient dynamic scene reconstruction methods has further increased due to developments in other thriving research domains, such as virtual and augmented reality, 3D video, and robotics.

In this talk, I will elaborate on some ideas on how to go beyond the current limits of 4D reconstruction and show some results from our recent work. For instance, I will show how we can take steps to capture dynamic models of humans and general scenes in unconstrained environments with few sensors. I will also show how we can capture higher shape detail as well as material parameters of scenes outside of the lab. The talk will also show how one can effectively reconstruct very challenging scenes at a smaller scale, such as hand motion. Further on, I will discuss how we can capitalize on more sophisticated light transport models to enable high-quality reconstruction in much more uncontrolled scenes, eventually also outdoors, with only a few cameras, or even just a single one. Ideas on how to perform deformable scene reconstruction in real time will also be presented, time permitting.

Christian Theobalt, from the Max Planck Institute for Informatics in Saarbrücken, Germany, works on dynamic 3D scene reconstruction, marker-less motion capture, machine learning, and many other interesting problems on the boundary between computer vision and computer graphics.

Christian consistently publishes his work in top venues (2015: PAMI, 2× CGF, SIGCHI, 4× CVPR, 4× SIGGRAPH, 2× ICCV). He has received several awards, including an ERC Grant, has started his own research group at MPII, and has founded a spin-off company.

All are cordially invited.
