
Predictive Rendering - The Other Type of Realistic Computer Graphics
Alex Wilkie will kindly share his deep knowledge of predictive rendering with us. Alex is a well-known, rigorous researcher and a very good speaker. Do not miss the chance to learn about his recent advances in realistic rendering, presented at SIGGRAPH, EUROGRAPHICS, and EGSR. His talk takes place on Monday, May 19 at 2pm in room E105.

This talk has two parts: in the first, we discuss the basic differences between mainstream computer graphics and genuinely predictive image synthesis. In the second, we give a brief overview of the application domains predictive rendering is useful for, the technological state of the art in this field, and the main research directions that are currently being investigated. This includes the specific topics that our group in Prague is working on now, and the directions that will probably become research areas in the near-term future.
Tags: lecture, research, rendering
Created: 18.5.2014 10:18:14 Updated: 19.5.2014 12:00:49

Perceptual Display: Towards Reducing Gaps Between Real World and Displayed Scenes
Karol Myszkowski will give an invited talk within the VGS-IT series on Thursday, October 9, at 10:00 in room E104.

Title: Perceptual Display: Towards Reducing Gaps Between Real World and Displayed Scenes

Abstract: The human visual system (HVS) has its own limitations (e.g., the quality of eye optics, the luminance range that can be simultaneously perceived, and so on), which to a certain extent reduce the requirements imposed on display devices. Still, a significant deficit of reproducible contrast, brightness, spatial pixel resolution, and depth range can be observed, falling short of the HVS capabilities. Moreover, unfortunate interactions between technological and biological aspects create new problems that are unknown under real-world observation conditions.

In this talk, we aim at the exploitation of perceptual effects to enhance apparent image quality. First, we show how perceived image contrast and brightness can be improved by exploiting the Cornsweet and glare illusions. Then, we present techniques for reducing hold-type blur, which is inherent to LCD displays. We also investigate apparent resolution enhancements, which enable showing image details beyond the physical pixel resolution of the display device. Finally, we discuss the problem of perceived depth enhancement in stereovision, as well as comfortable handling of specular effects, film grain, and video cuts.


Karol is a senior researcher in the Computer Graphics Group of the Max-Planck-Institut für Informatik, Saarbrücken. In the past, he served as an Associate Professor at the University of Aizu, Japan. He also worked as a Research Associate and then Assistant Professor at Szczecin University of Technology. His research interests include perception issues in graphics, high-dynamic-range imaging, global illumination, rendering, and animation. He co-authored several textbooks and more than 100 high-profile publications, including more than 10 SIGGRAPH papers, and has often served as an IPC member of ACM SIGGRAPH.

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 6.10.2014 09:21:04 Updated: 4.11.2014 10:05:04

Adding Depth to Hand-drawn Images

The next speaker in the VGS-IT series will be Daniel Sýkora, CTU in Prague, CZ. Do not miss this one; we will see plenty of eye candy! The talk will be given on November 19 at 2pm in room E104.

Title: Adding Depth to Hand-drawn Images


Abstract: Recovering depth from a single image remains an open problem after decades of active research. In this talk we focus on a specific variant of the problem where the input image is a hand-crafted line drawing. As opposed to previous attempts, which provide complete 3D reconstruction either by imposing various geometric constraints or by using sketch-based interfaces to produce a full 3D model incrementally, we seek a specific kind of bas-relief approximation which is less complex to create while still sufficient for many important tasks that arise in 2D pipelines. It enables maintaining correct visibility and connectivity of individual parts during interactive shape manipulation, deformable image registration, and fragment composition. In the context of image enhancement, it helps to improve the perception of depth, generate 3D-like shading or even global illumination effects, and allows producing stereoscopic imagery as well as a source for 3D printing.

Daniel Sýkora is an Assistant Professor at the Czech Technical University in Prague. His main research interest is strongly coupled with his long-standing passion for hand-drawn animation. He developed numerous techniques that eliminate repetitive and time-consuming tasks while preserving the full creative freedom of manual work. To turn these research ideas into practical products, Daniel cooperates intensively with the studio Anifilm in Prague as well as renowned industrial partners such as Disney, Adobe, and TVPaint Development.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 6.11.2014 14:30:15
Light Transport Simulation in the ArchViz and Visual Effect industries
The next speaker in the VGS-IT series will be Jaroslav Křivánek, Charles University, Prague, CZ. He presented 3 SIGGRAPH papers this year besides several other high-quality publications. Furthermore, he has recently been selected for the New Europe 100 list ("a list of outstanding challengers who are leading world-class innovation from Central and Eastern Europe") for taking computer graphics to the next level.

The talk will be given on Monday, December 8 at 1pm in room D0206.

Title: Light Transport Simulation in the ArchViz and Visual Effect industries

Abstract: Research and practice of computer graphics are witnessing a renewed interest in realistic rendering based on robust and efficient light transport simulation using Monte Carlo and other statistical methods. This research effort is propelled by the desire to accurately render general environments with complex geometry, materials, and light sources, which is often difficult with the industry-standard ad hoc solutions. For this reason, the movie and archviz industries are shifting away from approximate rendering solutions towards physically based rendering methods, which poses new challenges in terms of strict requirements on high image quality and algorithm robustness.

In this talk, I will summarize some of my contributions in the area of realistic rendering using physically based light transport simulation. I will start by reviewing the path integral formulation of light transport, which is at the basis of the vast majority of recent advances in this area. I will then review our Vertex Connection and Merging algorithm, along with its recent extension to rendering participating media, which aims at robust handling of light transport in scenes with complex, specular materials. This algorithm has gained a very favourable reception from the research community as well as the industry; within two years of its publication, it has been adopted by numerous major companies in the field, such as Weta, PIXAR, or Chaos Group. In the next part of my talk, I will review our recent and ongoing work on light transport simulation in scenes with complex visibility, which remains an open challenge both for architectural visualization and for the movie industry.
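The path integral formulation referred to above (Veach's framework) writes the value of pixel j as an integral over complete light transport paths:

```latex
I_j \;=\; \int_{\Omega} f_j(\bar{x}) \, d\mu(\bar{x}),
\qquad
\bar{x} = \mathbf{x}_0 \mathbf{x}_1 \cdots \mathbf{x}_k ,
```

where Omega is the space of paths of all lengths k, f_j is the measurement contribution function for pixel j, and mu is the area-product measure. Algorithms such as path tracing, bidirectional path tracing, and Vertex Connection and Merging differ only in how they sample paths and combine (weight) the resulting estimators.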

Jaroslav Křivánek is a researcher, developer, and associate professor of Computer Science at the Faculty of Mathematics and Physics of Charles University in Prague. Prior to this appointment, he was a Marie Curie research fellow at Cornell University, and a junior researcher and assistant professor at the Czech Technical University in Prague. Jaroslav received his Ph.D. from IRISA/INRIA Rennes and the Czech Technical University (joint degree) in 2005. His primary research interests are global illumination, radiative transport (including light transport), Monte Carlo methods, and visual perception, with the goal of developing novel practical ways of producing realistic, predictive renderings of virtual models. The technologies he has co-developed are used, among others, by Weta Digital, PIXAR Animation Studios, and Sony Pictures Imageworks. He is currently working on new software, Corona Renderer, with the goal of challenging the status quo in rendering technology used for visualizations in architecture and industrial design.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 25.11.2014 10:34:46 Updated: 2.3.2017 14:17:28
Visual Retrieval with Geometric Constraint
The next speaker in the VGS-IT series will be Ondřej Chum.

The talk will be given on January 28, E104 at 3pm.

Title: Visual Retrieval with Geometric Constraint.

Abstract: In the talk, I will address the topic of image retrieval. In particular, I will focus on retrieval methods based on the bag-of-words image representation that exploit geometric constraints. Novel formulations of the image retrieval problem will be discussed, showing that the classical ranking of images based on similarity addresses only one of many possible user requirements. Retrieval methods that efficiently solve the new formulations by exploiting geometric constraints will be used in different scenarios. These include online browsing of image collections, image analysis based on large collections of photographs, and model construction.

For online browsing, I will show queries that try to answer questions such as: "What is this?" (zoom in on a detail), "Where is that?" (zoom out to a larger visual context), or "What is to the left / right of this?". For image analysis, two novel problems straddling the boundary between image retrieval and data mining are formulated: for every pixel in the query image, (i) find the database image with the maximum resolution depicting the pixel and (ii) find the frequency with which it is photographed in detail.
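The bag-of-words representation the talk builds on can be sketched in a few lines. The following toy example (hypothetical; integers stand in for quantized local descriptors) ranks database images by tf-idf weighted cosine similarity; the geometric verification that is the subject of the talk is deliberately omitted:

```python
from collections import Counter
import math

def bow_scores(query_words, database):
    """Rank database images against a query by tf-idf weighted
    bag-of-(visual-)words cosine similarity. A minimal sketch of the
    base representation only -- no geometric constraints."""
    n = len(database)
    # document frequency of each visual word across the database
    df = Counter(w for words in database for w in set(words))
    idf = {w: math.log(n / df[w]) for w in df}

    def vec(words):
        tf = Counter(words)
        return {w: tf[w] * idf.get(w, 0.0) for w in tf}

    def cosine(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(query_words)
    return sorted(range(n), key=lambda i: -cosine(q, vec(database[i])))

# Toy vocabulary of quantized descriptors; image 0 matches the query best.
db = [[1, 2, 3, 3], [4, 5, 6], [1, 2, 7]]
ranking = bow_scores([1, 2, 3], db)
```

In a real system the ranked shortlist would then be re-ranked by verifying a geometric transformation between matched features, which is where the methods of the talk come in.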

Ondřej Chum received the MSc degree in computer science from Charles University, Prague, in 2001 and the PhD degree from the Czech Technical University in Prague, in 2005. From 2005 to 2006, he was a research fellow at the Centre for Machine Perception, Czech Technical University. From 2006 to 2007 he was a post-doc at the Visual Geometry Group, University of Oxford, UK. He is now an associate professor back at the Centre for Machine Perception. His research interests include object recognition, large-scale image and particular-object retrieval, invariant feature detection, and RANSAC-type optimization. He co-organized the "25 years of RANSAC" Workshop in conjunction with CVPR 2006, the Computer Vision Winter Workshop 2006, and the Vision and Sports Summer School (VS3) in Prague in 2012 and 2014. He received the runner-up award for the "2012 Outstanding Young Researcher in Image & Vision Computing" from the Journal of Image and Vision Computing, for researchers within seven years of their PhD, and the Best Paper Prize at the British Machine Vision Conference in 2002. In 2013, he was awarded an ERC-CZ grant.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 14.1.2015 15:29:42
Advances in Image Restoration: from Theory to Practice
The next speaker in the VGS-IT series will be Filip Šroubek.

The talk will be given on Tuesday, February 24 at 11am in room E105.

Title: Advances in Image Restoration: from Theory to Practice

Abstract: We rely on images with ever-growing emphasis. Our perception of the world is, however, limited by imperfect measuring conditions and the devices used to acquire images. By image restoration, we mean mathematical procedures that remove degradation from images. Two prominent topics of image restoration that have evolved considerably in the last 10 years are blind deconvolution and superresolution. Deconvolution by itself is an ill-posed inverse problem and one of the fundamental topics of image processing. The blind case, when the blur kernel is also unknown, is even more challenging and requires special optimization approaches to converge to the correct solution. Superresolution extends blind deconvolution by recovering the lost spatial resolution of images. In this talk we will cover recent advances in both topics that pave the way from theory to practice. Various real acquisition scenarios will be discussed, together with proposed solutions for both blind deconvolution and superresolution and efficient numerical optimization methods, which allow fast implementation. Examples with real data will illustrate the performance of the proposed solutions.
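The non-blind core of such methods can be made concrete with a small example. The following sketch (illustrative, not from the talk) performs Wiener deconvolution of a 1D signal in the Fourier domain; in alternating blind-deconvolution schemes, a step of this kind recovers the sharp signal while the current kernel estimate is held fixed:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_reg=1e-3):
    """Non-blind Wiener deconvolution in the Fourier domain.
    The regularizer noise_reg stabilizes frequencies where the
    kernel spectrum is close to zero (the ill-posedness mentioned
    in the abstract)."""
    n = len(blurred)
    K = np.fft.fft(kernel, n)            # zero-padded kernel spectrum
    B = np.fft.fft(blurred)
    # Wiener filter: conj(K) / (|K|^2 + regularization)
    X = np.conj(K) * B / (np.abs(K) ** 2 + noise_reg)
    return np.real(np.fft.ifft(X))

# Toy example: circularly blur a spike train with a box kernel, then invert.
x = np.zeros(64)
x[[10, 30, 45]] = 1.0
k = np.ones(5) / 5.0
b = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, 64)))
x_hat = wiener_deconvolve(b, k, noise_reg=1e-6)
```

With noisy data the regularizer must be larger, trading sharpness for stability; blind methods additionally alternate this step with a kernel-estimation step.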


Filip Šroubek received the M.Sc. degree in computer science from the Czech Technical University, Prague, Czech Republic, in 1998 and the Ph.D. degree in computer science from Charles University, Prague, Czech Republic, in 2003. From 2004 to 2006, he held a postdoctoral position at the Instituto de Optica, CSIC, Madrid, Spain. In 2010/2011 he received a Fulbright Visiting Scholarship at the University of California, Santa Cruz. Currently he is with the Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 17.2.2015 13:42:09
From high dynamic range to perceptual realism
The next speaker in the VGS-IT series will be Rafał Mantiuk.

The talk will be given on Friday, March 27 at 1pm in room E104.

Title: From high dynamic range to perceptual realism

Abstract: Today's computer graphics techniques make it possible to create imagery that is hardly distinguishable from photographs. However, a photograph is clearly no match for an actual real-world scene. I argue that the next big challenge in graphics is to achieve perceptual realism by creating artificial imagery that would be hard to distinguish from reality. This requires profound changes in the entire imaging pipeline, from acquisition and rendering to display, with a strong focus on visual perception.

In this talk I will give a brief overview of several projects related to high dynamic range imaging and the applications of visual perception. Then I will discuss in more detail a project in which we explored the "dark side" of the dynamic range in order to model how people perceive images at low luminance. We use such a model to simulate the appearance of night scenes on regular displays, or to generate compensated images that reverse the changes in vision due to low luminance levels. The method can be used in games, driving simulators, or as a compensation for displays used under varying ambient light levels.

Rafał Mantiuk is a senior lecturer (associate professor) at Bangor University (UK) and a member of the Research Institute of Visual Computing. Before coming to Bangor he received his PhD from the Max-Planck-Institute for Computer Science (2006, Germany) and was a postdoctoral researcher at the University of British Columbia (Canada). He has published numerous journal and conference papers presented at ACM SIGGRAPH, Eurographics, CVPR and SPIE HVEI conferences, applied for several patents, and was recognized by the Heinz Billing Award (2006). Rafał Mantiuk investigates how knowledge of the human visual system and perception can be incorporated within computer graphics and imaging algorithms. His recent interests focus on designing imaging algorithms that adapt to human visual performance and viewing conditions in order to deliver the best images given limited resources, such as computation time or display contrast.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 18.3.2015 08:57:33
Calibrating Surveillance Camera Networks
The next speaker in the VGS-IT series will be Branislav Mičušík.

The talk will be given on Wednesday, May 27 at 11am in room E104.

Title: Calibrating Surveillance Camera Networks

Abstract: Camera systems have witnessed a huge increase in the number of installed cameras, generating a massive amount of video data. Current computer vision technologies are not fully able to exploit the visual information available in such large camera networks, partially due to the lack of information about exact camera locations. Manual calibration with special calibration targets, especially in ad hoc large camera networks, does not scale well with the number of cameras and is too time-consuming, hence impractical. Therefore, a fully or semi-automatic method requiring minimal user effort, relying solely on visual information, is the natural objective.

I present three approaches to tackle the calibration and localization problem of self-calibrating camera networks, relying purely on available visual data. First, I present an approach for the calibration of cameras building on the latest achievements of the Structure from Motion community. This amounts to localizing a camera in an a priori built 3D model consisting of either points or line segments. Second and third, I review our approaches to calibrating a single camera and multiple surveillance cameras, respectively, from detecting and tracking people. I show how multiple-view geometry between overlapping and non-overlapping camera views with static and dynamic point correspondences gives a strong cue towards calibrating the cameras, yielding practically appealing solutions.

Branislav Mičušík is a senior scientist at the Austrian Institute of Technology. Prior to that, in 2007-2009, he was a visiting research scholar at Stanford University, USA. In 2004-2007 he was a postdoctoral researcher at the Vienna University of Technology, Austria. He received his Ph.D. in 2004 from the Czech Technical University in Prague, at the Center of Machine Perception. His research interests are driven by the wish to teach computers and machines to understand what they see in order to infer their own location. He holds the Microsoft Visual Computing Award 2011, given to the best young scientist in Visual Computing in Austria, and the Best Scientific Paper Prize at the British Machine Vision Conference in 2007.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 21.5.2015 18:42:00
Recent Advances in Bounding Volume Hierarchies for Ray Tracing
The next speaker in the VGS-IT series will be Jiří Bittner.

The talk will be given on Wednesday, June 10 at 1pm in room E104.

Title: Recent Advances in Bounding Volume Hierarchies for Ray Tracing

Abstract: In my talk I will briefly survey the use of bounding volume hierarchies (BVH) for ray tracing acceleration. I will present a technique that optimizes bounding volume hierarchies using an insertion-based global optimization procedure, leading to hierarchies of higher quality compared to previous state-of-the-art methods. I will also discuss a modification of this technique for the incremental construction of BVHs and outline the use of incremental construction for real-time ray tracing of complex models streamed over a network. I will further present a method for constructing a single BVH optimized for all frames of a given animation sequence. I will conclude my talk by presenting a new ray tracing acceleration technique combining BVHs and ray space hierarchies that allows real-time ray tracing of complex scenes that do not fit into the memory of the GPU.
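As background, the hierarchy quality that such optimization procedures typically measure is the surface area heuristic (SAH) cost. A minimal sketch, assuming axis-aligned boxes; the traversal/intersection constants are illustrative, not values from the talk:

```python
def surface_area(lo, hi):
    """Surface area of an axis-aligned box given min/max corners."""
    dx, dy, dz = hi[0] - lo[0], hi[1] - lo[1], hi[2] - lo[2]
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_split_cost(parent, left, right, n_left, n_right,
                   c_traverse=1.0, c_intersect=1.0):
    """SAH cost of splitting `parent` into `left`/`right` child boxes:
    one traversal step plus intersection costs weighted by the
    probability (area ratio) that a random ray hits each child."""
    sa_p = surface_area(*parent)
    return (c_traverse
            + surface_area(*left) / sa_p * n_left * c_intersect
            + surface_area(*right) / sa_p * n_right * c_intersect)

# Unit cube split in half along x, four primitives per side.
parent = ((0, 0, 0), (1, 1, 1))
left = ((0, 0, 0), (0.5, 1, 1))
right = ((0.5, 0, 0), (1, 1, 1))
cost = sah_split_cost(parent, left, right, n_left=4, n_right=4)
```

Insertion-based optimization repeatedly removes a subtree and re-inserts it where the total cost of this form decreases the most.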

Jiří Bittner is an associate professor at the Faculty of Electrical Engineering of the Czech Technical University in Prague. He received his Ph.D. in 2003 at the same institute. His research interests include visibility computations, real-time rendering, spatial data structures, and global illumination. He participated in a number of national and international research projects and also several commercial projects dealing with real-time rendering of complex scenes.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 8.6.2015 09:59:05
Seminar talk: C. Mauduit - Finite automata and number theory
Dear colleagues,
let us invite you to an extraordinary seminar of the Formal Model Research Group, with a talk by prof. Christian Mauduit on Monday, June 15, 2015, 13:00-14:00, room G108, FIT VUT, Brno, Božetěchova 2.

Speaker: Christian MAUDUIT (Institut de Mathématiques de Luminy, Aix-Marseille University, France)
Title: Finite automata and number theory
Abstract: The difficulty of the transition from the representation of an integer in a number system (e.g. n = 19605131) to its multiplicative representation (e.g. n = 7 × 13 × 17 × 19 × 23 × 29) is at the origin of many important open problems in mathematics and in computer science.
The aim of this talk is to give a survey of recent results concerning the combinatorial, arithmetical and statistical properties of sequences of symbols and sequences of integers generated by finite automata, showing deep connections between number theory, combinatorics, computer science and dynamical systems.
We will illustrate the talk with some classical examples, including the Thue-Morse sequence, the Rudin-Shapiro sequence and the Cantor sequence.
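The Thue-Morse sequence mentioned above is a textbook automatic sequence: its n-th term is the parity of the number of 1 bits in the binary expansion of n, i.e. the output of a two-state automaton reading the binary digits of n. The Rudin-Shapiro sequence instead counts (overlapping) occurrences of the factor 11. A short sketch:

```python
def thue_morse(n_terms):
    """n-th term (0-indexed) = parity of the number of 1 bits in n."""
    return [bin(n).count("1") % 2 for n in range(n_terms)]

def count_11(n):
    """Overlapping occurrences of '11' in the binary expansion of n."""
    c = 0
    while n:
        if n & 3 == 3:      # two adjacent 1 bits
            c += 1
        n >>= 1
    return c

def rudin_shapiro(n_terms):
    """n-th term = (-1) ** (overlapping '11' factors in binary n)."""
    return [(-1) ** count_11(n) for n in range(n_terms)]

tm = thue_morse(8)     # begins 0 1 1 0 1 0 0 1
rs = rudin_shapiro(8)
```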

Yours sincerely,
Zbyněk Křivka and Alexander Meduna
Tags: lecture, research, formal models
Created: 11.6.2015 16:17:44
Computer Graphics Meets Computational Design
The next speaker in the VGS-IT series will be Michael Wimmer.

The talk will be given on Tuesday, October 20 at 1pm in room A112.

Title: Computer Graphics Meets Computational Design

Abstract: In this talk, I will report on recent advancements in Computer Graphics, which will be of great interest for next-generation computational design tools. I will present methods for modeling from images, modeling by examples and multiple examples, but also procedural modeling, modeling of physical behavior and light transport, all recently developed in our group. The common rationale behind our research is that we exploit real-time processing power and computer graphics algorithms to enable interactive computational design tools that allow short feedback loops in design processes.

Michael Wimmer is currently an Associate Professor at the Institute of Computer Graphics and Algorithms of the Vienna University of Technology, where he heads the Rendering Group. His academic career started with his M.Sc. in 1997 at the Vienna University of Technology, where he obtained his Ph.D. in 2001. His research interests are real-time rendering, computer games, real-time visualization of urban environments, point-based rendering, procedural modeling and shape modeling. He has coauthored over 100 papers in these fields. He also coauthored the book Real-Time Shadows. He served on many program committees, including ACM SIGGRAPH and SIGGRAPH Asia, Eurographics, the Eurographics Symposium on Rendering, ACM I3D, etc. He is currently an associate editor of Computers & Graphics and TVCG. He was papers co-chair of EGSR 2008, Pacific Graphics 2012, and Eurographics 2015.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 2.10.2015 20:02:00
Probabilistic approach to high order assignment problems
The next speaker in the VGS-IT series will be Yosi Keller.

The talk will be given on Thursday, November 26 at 1pm in room E104.

Title: Probabilistic approach to high order assignment problems

Abstract: A gamut of computer vision and engineering problems can be cast as high-order matching problems, where one considers the affinity/probability of two or more assignments simultaneously. The spectral matching approach of Leordeanu and Hebert (2005) was shown to provide an approximate solution of this NP-hard problem. In this talk we present recent results on the probabilistic interpretation of spectral matching. We extend the results of Zass and Shashua (2008) and provide a probabilistic interpretation of the spectral matching and graduated assignment (1996) algorithms. We then derive a new probabilistic matching scheme and show that it can be extended to a high-order matching scheme via a dual marginalization-decomposition scheme. We will also present a novel Integer Least Squares algorithm and apply it to the decoding of MIMO and OFDM channels, in the uncoded and coded cases, respectively. This is joint work with Amir Egozi, Michael Chertok, Avi Septimus, Ayelet Haimovitch, Shimrit Haber and Dr. Itzik Bergel.
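In the spectral matching approach of Leordeanu and Hebert cited above, the indicator vector of pairwise assignments is relaxed to the principal eigenvector of a non-negative affinity matrix. A toy sketch (hypothetical affinities) using power iteration and a greedy pick:

```python
import numpy as np

def principal_eigenvector(M, iters=200):
    """Power iteration for the leading eigenvector of a symmetric
    non-negative affinity matrix M -- the relaxed assignment vector
    in spectral matching."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

# Affinity over 4 candidate assignments: the first two are mutually
# consistent (high pairwise affinity); the others are weakly supported.
M = np.array([[0.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 0.2],
              [0.1, 0.1, 0.2, 0.0]])
v = principal_eigenvector(M)
best = np.argsort(-np.abs(v))[:2]   # greedily keep the strongest assignments
```

The full method additionally enforces one-to-one assignment constraints during discretization; the probabilistic reinterpretation discussed in the talk replaces this eigenvector relaxation with a probabilistic one.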

Yosi Keller received the BSc degree in Electrical Engineering in 1994 from the Technion-Israel Institute of Technology, Haifa. He received the MSc and PhD degrees in electrical engineering from Tel-Aviv University, Tel-Aviv, in 1998 and 2003, respectively. From 2003 to 2006 he was a Gibbs assistant professor with the Department of Mathematics, Yale University. He is an Associate Professor at the Faculty of Engineering in Bar Ilan University, Israel. His research relates to the applications of graph theory and machine learning to signal processing, computer vision and 3D modelling.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 12.11.2015 10:27:26
Data processing of Astronomical Images
The next speaker in the VGS-IT series will be Petr Kubánek.

The talk will be given on Thursday, December 8 at 2pm in room E104.

Title: Data processing of Astronomical Images

Abstract: Astronomy and astrophysics is one of the science fields that most rapidly leverages technological progress. From the simple lens Galileo used to study the stars and planets to today's huge, marvellous telescopes with state-of-the-art control systems and detectors, technological progress has been tightly coupled with progress in astronomy and astrophysics. In this talk, I will review the principles of data acquisition and processing as performed by astronomers around the planet. I will start with the basic processing done with film cameras and photography, progressing towards advanced processing and interpretation of the multi-terabyte digital data acquired by the most productive astronomical instruments.
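The basic processing of digital astronomical frames usually begins with dark subtraction and flat-fielding. A minimal sketch of this standard practice (simplified, and not taken from the talk: the dark frame is assumed to match the science exposure):

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Standard CCD frame calibration: subtract the dark frame, then
    divide by the normalized flat field to remove pixel-to-pixel
    sensitivity variations."""
    science = raw - dark
    flat_norm = flat / np.mean(flat)
    return science / flat_norm

# Simulated readout: true sky signal modulated by pixel sensitivity
# plus a constant dark level.
true = np.array([[10.0, 20.0], [30.0, 40.0]])
flat = np.array([[1.0, 0.5], [1.5, 1.0]])
dark = np.full((2, 2), 5.0)
raw = true * flat + dark
recovered = calibrate(raw, dark, flat)
```

Real pipelines additionally handle bias frames, exposure scaling of the dark current, cosmic-ray rejection, and astrometric/photometric calibration.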

Petr Kubánek received his master's degree in software engineering from the Faculty of Mathematics and Physics of Charles University in Prague, and a master's degree in fuzzy logic from the University of Granada in Spain. Currently he is a research fellow at the Institute of Physics of the Czech Academy of Sciences in Prague. He is developing RTS2 (Remote Telescope System, 2nd Version), a package for fully autonomous astronomical observatory control and scheduling. RTS2 is being used at multiple observatories around the planet, on all continents (one of the RTS2 collaborators is currently wintering over at Dome C in Antarctica). Petr's interests and expertise span from distributed device control through databases to image processing and data mining. During his career, he collaborated with top world institutions (Harvard/CfA on the FLWO 48 telescope, UC Berkeley on the RATIR 1.5m telescope, NASA/IfA on the ATLAS project, ESA/ISDEFE on the TBT project, SLAC and BNL on Large Synoptic Survey Telescope (LSST) CCD testing) and enjoyed travel to restricted areas (scheduled for an observing run at the US Naval Observatory in Arizona). He is currently on a kind of parental leave, enjoying his new family, and slowly returning to the vivid astronomical world.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 30.11.2015 12:31:31
Classifier Adaptation at Prediction Time
The next speaker in the VGS-IT series will be Christoph Lampert.

The talk will be given on Thursday, January 12 at 1pm in room E104.

Title: Classifier Adaptation at Prediction Time

Abstract: In the era of "big data" and a large commercial interest in computer vision, it is only a matter of time until we will buy commercial object recognition systems in pre-trained form instead of training them ourselves. This, however, poses a problem of domain adaptation: the data distribution in which a customer plans to use the system will almost certainly differ from the data distribution that the vendor used during training. Two relevant effects are a change of the class ratios and the fact that the image sequences that need to be classified in real applications are typically not i.i.d. In my talk I will introduce a simple probabilistic technique that can adapt the object recognition system to the test-time distribution without having to change the underlying pre-trained classifiers. I will also introduce a framework for creating realistically distributed image sequences that offer a way to benchmark such adaptive recognition systems. Our results show that the above "problem" of domain adaptation can actually be a blessing in disguise: with proper adaptation the error rates on realistic image sequences are typically lower than on standard i.i.d. test sets.
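One simple instance of such test-time adaptation, given here only to illustrate the general idea rather than as the speaker's exact method, is class-prior correction: the frozen classifier's posteriors are rescaled by the ratio of deployment-time to training-time class priors and renormalized, with no retraining:

```python
import numpy as np

def adapt_posteriors(p_train, train_priors, test_priors):
    """Adjust a frozen classifier's posteriors to a new class
    distribution via Bayes' rule: p_test(y|x) ∝ p_train(y|x) * q(y)/p(y).
    The underlying pre-trained classifier is left untouched."""
    w = np.asarray(test_priors, dtype=float) / np.asarray(train_priors, dtype=float)
    adapted = np.asarray(p_train, dtype=float) * w
    return adapted / adapted.sum(axis=-1, keepdims=True)

# A model trained with balanced classes is 60/40 unsure,
# but at deployment time class 1 is known to be rare.
p = np.array([[0.6, 0.4]])
adapted = adapt_posteriors(p, train_priors=[0.5, 0.5], test_priors=[0.9, 0.1])
```

When the test-time priors are unknown they can themselves be estimated from the stream of predictions, which is closer to the adaptive setting the talk addresses.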

Christoph Lampert received the PhD degree in mathematics from the University of Bonn in 2003. In 2010 he joined the Institute of Science and Technology Austria (IST Austria), first as an Assistant Professor and since 2015 as a Professor. His research on computer vision and machine learning has won several international and national awards, including the best paper prize of CVPR 2008. In 2012 he was awarded an ERC Starting Grant by the European Research Council. He is an Editor of the International Journal of Computer Vision (IJCV), Action Editor of the Journal of Machine Learning Research (JMLR), and Associate Editor in Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 6.1.2016 17:15:02 Updated: 14.1.2016 15:32:21
Reconstructing the Real World in Motion
The next speaker in the VGS-IT series will be Christian Theobalt.

The talk will be given on Wednesday, March 23 at 1pm in room G202.

Title: Reconstructing the Real World in Motion

Abstract: Even though many challenges remain unsolved, in recent years computer graphics algorithms to render photo-realistic imagery have seen tremendous progress. An important prerequisite for high-quality renderings is the availability of good models of the scenes to be rendered, namely models of shape, motion and appearance. Unfortunately, the technology to create such models has not kept pace with the technology to render the imagery. In fact, we observe a content creation bottleneck, as it often takes man months of tedious manual work by animation artists to craft models of moving virtual scenes.

To overcome this limitation, the graphics and vision communities have been developing techniques to capture dense 4D (3D+time) models of dynamic scenes from real-world examples, for instance from footage of real-world scenes recorded with cameras or other sensors. One example is performance capture methods that measure detailed dynamic surface models, for example of actors or an actor's face, from multi-view video and without markers in the scene. Even though such 4D capture methods have made big strides, they are still at an early stage. Their application is limited to scenes of moderate complexity in controlled environments, the reconstructed detail is limited, and captured content cannot be easily modified, to name only a few restrictions. Recently, the need for efficient dynamic scene reconstruction methods has further increased through developments in other thriving research domains, such as virtual and augmented reality, 3D video, and robotics.

In this talk, I will elaborate on some ideas on how to go beyond the current limits of 4D reconstruction, and show some results from our recent work. For instance, I will show how we can take steps to capture dynamic models of humans and general scenes in unconstrained environments with few sensors. I will also show how we can capture higher shape detail as well as material parameters of scenes outside of the lab. The talk will also show how one can effectively reconstruct very challenging scenes at a smaller scale, such as hand motion. Further on, I will discuss how we can capitalize on more sophisticated light transport models to enable high-quality reconstruction in much more uncontrolled scenes, eventually also outdoors, with only a few cameras, or even just a single one. Ideas on how to perform deformable scene reconstruction in real time will also be presented, if time allows.

Christian Theobalt from Max-Planck-Institute for Informatics, Saarbrücken, Germany is concerned with dynamic 3D scene reconstruction, marker-less motion capture, machine learning, and many other interesting issues on the boundary between the fields of Computer Vision and Computer Graphics.

Christian consistently publishes his work in the best venues (2015: PAMI, 2×CGF, SIGCHI, 4×CVPR, 4×SIGGRAPH, 2×ICCV). He has received several awards, was awarded an ERC Grant, started his own research group at MPII, and founded a spin-off company.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 11.3.2016 11:15:29
Embedded Graphics: Rendering and Compute using an i.MX SoC

Tags: lecture, invited talk
Created: 31.3.2016 13:14:09
Linear Programming Relaxation Approach to Discrete Energy Minimization
The next speaker in VGS-IT series will be Tomáš Werner.

The talk will be given on Tuesday, April 12 at 2pm in room A113.

Title: Linear Programming Relaxation Approach to Discrete Energy Minimization

Abstract: Discrete energy minimization consists of minimizing a function of many discrete variables that is a sum of functions, each depending on a small subset of the variables. This is also known as MAP inference in graphical models (Markov random fields) or weighted constraint satisfaction. Many successful approaches to this useful but NP-complete problem are based on its natural LP relaxation. I will discuss this LP relaxation in detail, along with algorithms able to solve it for very large instances, which appear e.g. in computer vision. In particular, I will discuss in detail a convex message passing algorithm, generalized min-sum diffusion.
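To make the LP relaxation concrete, here is a toy sketch (an illustration added for this announcement, not material from the talk): a two-variable, two-label pairwise energy is relaxed to the standard local-polytope LP over unary and pairwise indicator variables and solved with scipy.optimize.linprog. The costs are made up for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Toy pairwise energy: two variables, two labels each.
# Variable order: mu1(0), mu1(1), mu2(0), mu2(1),
#                 mu12(0,0), mu12(0,1), mu12(1,0), mu12(1,1)
unary1 = [0.0, 1.0]           # label costs for variable 1
unary2 = [1.0, 0.0]           # label costs for variable 2
potts = [0.0, 2.0, 2.0, 0.0]  # Potts pairwise cost (penalizes disagreement)
c = np.array(unary1 + unary2 + potts)

# Local-polytope constraints: normalization and marginalization consistency.
A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],    # sum_x mu1(x) = 1
    [0, 0, 1, 1, 0, 0, 0, 0],    # sum_x mu2(x) = 1
    [-1, 0, 0, 0, 1, 1, 0, 0],   # sum_y mu12(0,y) = mu1(0)
    [0, -1, 0, 0, 0, 0, 1, 1],   # sum_y mu12(1,y) = mu1(1)
    [0, 0, -1, 0, 1, 0, 1, 0],   # sum_x mu12(x,0) = mu2(0)
    [0, 0, 0, -1, 0, 1, 0, 1],   # sum_x mu12(x,1) = mu2(1)
])
b_eq = np.array([1, 1, 0, 0, 0, 0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
lp_lower_bound = res.fun  # lower bound on the minimum energy
```

On this instance the relaxation is tight (the optimum 1.0 is attained by an integral labeling); on harder instances the LP optimum is only a lower bound, and message-passing algorithms such as min-sum diffusion tackle the same LP at a much larger scale than generic solvers can.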

Tomáš Werner works as a researcher at the Center for Machine Perception, Faculty of Electrical Engineering, Czech Technical University, where he also obtained his PhD degree. In 2001-2002 he worked as a post-doc at the Visual Geometry Group, Oxford University, U.K. In the past, his main interest was multiple view geometry and three-dimensional reconstruction in computer vision. Today, his interest is in machine learning and optimization, in particular graphical models. He is a (co-)author of more than 70 publications, with 350 citations in WoS.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 11.2.2016 16:29:45 Updated: 23.2.2016 14:55:22
Learning visual representations from Internet data
The next speaker in VGS-IT series will be Josef Sivic.

The talk will be given on Friday, April 22 at 10:30 in room E105.

Title: Learning visual representations from Internet data

Abstract: Unprecedented amount of visual data is now available on the Internet. Wouldn't it be great if a machine could automatically learn from this data? For example, imagine a machine that can learn how to change a flat tire of a car by watching instruction videos on Youtube, or that can learn how to navigate in a city by observing street-view imagery. Learning from Internet data is, however, a very challenging problem as the data is equipped only with weak supervisory signals such as human narration of the instruction video or noisy geotags for street-level imagery. In this talk, I will describe our recent progress on learning visual representations from such weakly annotated visual data.

In the first part of the talk, I will describe a new convolutional neural network architecture that is trainable in an end-to-end manner for the visual place recognition task. I will show that the network can be trained from weakly annotated Google Street View Time Machine imagery and significantly improves over current state-of-the-art in visual place recognition.

In the second part of the talk, I will describe a technique for automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The method solves two clustering problems, one in text and one in video, linked by joint constraints to obtain a single coherent sequence of steps in both modalities. I will show results on a newly collected dataset of instruction videos from Youtube that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 13.4.2016 12:16:57
Recognition of sign language
We are pleased to invite you to the upcoming seminar of Dr. Ing. Joanna Marnik, Rzeszow University of Technology.

Title: Recognition of sign language

The talk will be given on Tuesday, April 26 at 9:00 in room L220.

Abstract: The seminar presents how image processing and recognition techniques can be leveraged to build useful human-computer interfaces. Based on the speaker's experience and the results of her research team, recognition of sign language will be presented, along with its use in applications for severely disabled people, to help them communicate with others when they cannot speak. The presentation includes the following topics: recognition of hand gestures to support communication between deaf and hearing persons, hand shape recognition for human-computer interaction, and vision systems based on head gestures that support communication of non-verbal disabled people. Further, the following problems will be discussed: hearing-impaired persons in society, problems related to hand gesture recognition, etc.
Tags: lecture, talk
Created: 25.4.2016 12:28:31

Everything Counts - Rendering Highly-detailed Environments in Real-time
The next speaker in VGS-IT series will be Elmar Eisemann.

The talk will be given on Friday, May 20 at 2:00pm in room E105.

Title: Everything Counts - Rendering Highly-detailed Environments in Real-time

Elmar Eisemann is a professor at TU Delft, heading the Computer Graphics and Visualization Group. Before that, he was an associate professor at Telecom ParisTech (until 2012) and a senior scientist heading a research group in the Cluster of Excellence (Saarland University / MPI Informatik) (until 2009). He studied at the École Normale Supérieure in Paris (2001-2005) and received his PhD from the University of Grenoble at INRIA Rhône-Alpes (2005-2008). He made several research visits abroad: to the Massachusetts Institute of Technology (2003), the University of Illinois Urbana-Champaign (2006), and Adobe Systems Inc. (2007, 2008). His interests include real-time and perceptual rendering, alternative representations, shadow algorithms, global illumination, and GPU acceleration techniques. He coauthored the book "Real-Time Shadows" and participated in various committees and editorial boards. He was local organizer of EGSR 2010 and 2012 and HPG 2012, and was paper chair of HPG 2015. His work received several distinction awards and he was honored with the Eurographics Young Researcher Award 2011.

Abstract: A traditional challenge in computer graphics is the simulation of natural scenes, including complex geometric models and a realistic reproduction of physical phenomena, requiring novel theoretical insights, appropriate algorithms, and well-designed data structures. In particular, there is a need for efficient image-synthesis solutions, which is fueled by the development of modern display devices, which support 3D stereo, have high resolution and refresh rates, and deep color palettes.

In this talk, we will present methods for efficient image synthesis to address recent rendering challenges. In particular, we will focus on large-scale data sets and present novel techniques to encode highly detailed geometric information in a compact representation. Further, we will give an outlook on rendering techniques for modern display devices, as these often require very different solutions. In particular, human perception starts to play an increasing role and has high potential to be a key factor in future rendering solutions.

All are cordially invited.

Tags: lecture, research, vgs-it
Created: 3.5.2016 15:02:41
3D Reconstruction from Photographs and Algebraic Geometry
Tomáš Pajdla will give an invited talk within VGS-IT series on Wednesday, November 2nd, 1pm, in E105.

Title: 3D Reconstruction from Photographs and Algebraic Geometry

Abstract: We will show a connection between the state of the art 3D reconstruction from photographs and algebraic geometry. In particular, we will show how some modern tools from computational algebraic geometry can be used to solve some classical as well as recent problems in computing camera calibration and orientation in space. We will present applications in large scale reconstruction from photographs, robotics and camera calibration.


Tomáš Pajdla is a Distinguished Researcher at the CIIRC - Czech Institute of Informatics, Robotics and Cybernetics and an Assistant Professor at the Faculty of Electrical Engineering of the Czech Technical University in Prague. He works in geometry, algebra and optimization of computer vision and robotics, 3D reconstruction from images, and visual object recognition. He is known for his contributions to the geometry of cameras, image matching, 3D reconstruction, visual localization, camera and hand-eye calibration, and algebraic methods in computer vision. He coauthored works awarded the best paper prizes at OAGM 1998 and 2013, BMVC 2002, and ACCV 2014.

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 26.10.2016 16:50:05

Recent Advances in Vector Graphics Creation and Display
Stefan Jeschke will give an invited talk within VGS-IT series on Wednesday, November 8th, 1pm, in G202.

Title: Recent Advances in Vector Graphics Creation and Display

Abstract: This talk gives an overview of my recent work on vector graphics representations as semantically meaningful image descriptions, in contrast to pixel-based raster images. I will cover the problem of how to efficiently create vector graphics either from scratch or from given raster images. The goal is to support designers in producing complex, high-quality representations with only limited manual input. Furthermore, I will talk about various new developments that are mainly based on the so-called "diffusion curves". Here the goal is to improve the expressiveness of such representations, for example, by adding textures so that natural images appear more realistic without adding excessive amounts of geometry beyond what can be handled by a designer. Rendering such representations at interactive frame rates on modern GPUs is another aspect I will cover in this talk.

Stefan Jeschke is a scientist at IST Austria. He received an M.Sc. in 2001 and a Ph.D. in 2005, both in computer science from the University of Rostock, Germany. Afterwards, he spent several years as a postdoctoral researcher on several projects at Vienna University of Technology and Arizona State University. His research interests include modeling and display of vectorized image representations, applications and solvers for PDEs, and modeling and rendering of complex natural phenomena, preferably in real time.

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 3.11.2016 13:56:12

Data Parallelism in Computer Vision
Gernot Ziegler will give an invited talk within VGS-IT series on Wednesday, December 14th, 1pm, in E105.

Title: Data Parallelism in Computer Vision

Abstract: In algorithmic design, serial data dependencies which accelerate CPU processing for computer vision are often counterproductive for the data-parallel GPU. The talk presents data structures and algorithms that enable data parallelism for connected components, line detection, feature detection, marching cubes or octree generation. We will point out the important aspects of data parallel design that will allow you to design new algorithms for GPGPU-based computer vision and image processing yourself. As food for thought, I will sketch algorithmic ideas that could lead to new collaborative results in real-time computer vision.

Gernot Ziegler (Dr.Ing.) is an Austrian engineer with an MSc degree in Computer Science and Engineering from Linköping University, Sweden, and a PhD from the University of Saarbrücken, Germany. He pursued his PhD studies at the Max-Planck-Institute for Informatics in Saarbrücken, Germany, specializing in GPU algorithms for computer vision and data-parallel algorithms for spatial data structures. He then joined NVIDIA's DevTech team, where he consulted in high performance computing and automotive computer vision on graphics hardware. In 2016, Gernot founded his own consulting company to explore the applications of his computer vision expertise on graphics hardware in mobile consumer, industrial vision, and heritage digitization.

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 2.12.2016 15:59:09

Neural Networks for Natural Language Processing
Tomáš Mikolov will give an invited talk within VGS-IT series on Tuesday, January 3rd, 5pm, in E112.

Title: Neural Networks for Natural Language Processing

Abstract: Neural networks are currently very successful in various machine learning tasks that involve natural language. In this talk, I will describe how recurrent neural network language models have been developed, as well as their most frequent applications to speech recognition and machine translation. Next, I will talk about distributed word representations, their interesting properties, and efficient ways to compute them. Finally, I will describe our latest efforts to create a novel dataset that would allow researchers to develop new types of applications that involve communication with human users in natural language.

Tomáš Mikolov has been a research scientist at Facebook AI Research since 2014. Previously, he was a member of the Google Brain team, where he developed efficient algorithms for computing distributed representations of words (the word2vec project). He obtained his PhD from Brno University of Technology for work on recurrent neural network based language models (RNNLM). His long-term research goal is to develop intelligent machines capable of communicating with people using natural language.

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 15.12.2016 15:35:12

Efficient Deconvolution Techniques for Computational Photography
Manuel M. Oliveira will give an invited talk within VGS-IT series on Tuesday, January 31st, 1pm, in E105.

Title: Efficient Deconvolution Techniques for Computational Photography

Abstract: Deconvolution is a fundamental tool for many imaging applications ranging from microscopy to astronomy. In this talk, I will present efficient deconvolution techniques tailored for two important computational photography applications: estimating color and depth from a single photograph, and motion deblurring from camera shake. For the first, I will describe a coded-aperture method based on a family of masks obtained as the convolution of one "hole" with an arrangement of Dirac delta functions. We call this arrangement of delta functions the structural component of the mask, and use it to efficiently encode scene distance information. I will then show how one can design well-conditioned masks for which deconvolution can be efficiently performed by inverse filtering. I will demonstrate the effectiveness of this approach by constructing a mask for distance coding and using it to recover color and depth information from single photographs. This leads to significant speedup, extended range, and higher depth resolution compared to previous approaches. For the second application, I will present an efficient technique for high-quality non-blind deconvolution based on the use of sparse adaptive priors. Despite its ill-posed nature, I will show how to model the non-blind deconvolution problem as a linear system, which is solved in the frequency domain. This clean formulation leads to a simple and efficient implementation, which is faster and whose results tend to have a higher peak signal-to-noise ratio than previous methods.
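As a minimal illustration of the frequency-domain viewpoint (a sketch added for this announcement under simplifying assumptions, not the sparse adaptive priors of the talk): with a known blur kernel and circular boundary conditions, the deconvolution linear system diagonalizes under the FFT, so a Tikhonov-regularized inverse filter recovers the signal with a few element-wise operations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.random(n)                  # unknown sharp signal
k = np.zeros(n)
k[:5] = 1.0 / 5.0                  # known 5-tap box blur kernel

# Circular convolution = element-wise product in the Fourier domain.
K = np.fft.fft(k)
y = np.real(np.fft.ifft(np.fft.fft(x) * K))   # blurred observation

# Regularized inverse filter: argmin ||k*x - y||^2 + eps*||x||^2,
# solved per frequency bin (the normal equations are diagonal here).
eps = 1e-4
X_hat = np.fft.fft(y) * np.conj(K) / (np.abs(K) ** 2 + eps)
x_hat = np.real(np.fft.ifft(X_hat))

blur_err = np.linalg.norm(y - x)   # error before deconvolution
rec_err = np.linalg.norm(x_hat - x)  # error after deconvolution
```

Real camera-shake kernels have near-zeros in their spectrum, which is exactly why priors stronger than the plain eps term above matter in practice.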

Manuel M. Oliveira is an Associate Professor of Computer Science at the Federal University of Rio Grande do Sul (UFRGS), in Brazil. He received his PhD from the University of North Carolina at Chapel Hill, in 2000. Before joining UFRGS in 2002, he was an Assistant Professor of Computer Science at the State University of New York at Stony Brook (2000 to 2002). In the 2009-2010 academic year, he was a Visiting Associate Professor at the MIT Media Lab. His research interests cover most aspects of computer graphics, but especially the frontiers among graphics, image processing, and vision (both human and machine). In these areas, he has contributed a variety of techniques including relief texture mapping, real-time filtering in high-dimensional spaces, efficient algorithms for Hough transform, new physiologically-based models for color perception and pupil-light reflex, and novel interactive techniques for measuring visual acuity. Dr. Oliveira was program co-chair of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2010 (I3D 2010), and general co-chair of ACM I3D 2009. He is an Associate Editor of IEEE TVCG and IEEE CG&A, and a member of the CIE Technical Committee TC1-89 "Enhancement of Images for Colour Defective Observers". He received the ACM Recognition of Service Award in 2009 and in 2010.

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 24.1.2017 17:48:53

Perception and Personalization in Digital Content Reproduction
Piotr Didyk will give an invited talk within VGS-IT series on Tuesday, February 15th, 1pm, in A113.

Title: Perception and Personalization in Digital Content Reproduction

Abstract: There has been a tremendous increase in the quality and number of new output devices, such as stereo and automultiscopic screens, portable and wearable displays, and 3D printers. Unfortunately, the capabilities of these emerging technologies outpace those of the methods and tools for creating content. Also, the current level of understanding of how these new technologies influence user experience is insufficient to fully exploit their advantages. In this talk, I will present our recent efforts in the context of perception-driven techniques for digital content reproduction. I will demonstrate that careful combinations of new hardware, computation, and models of human perception can lead to solutions that provide a significant increase in perceived quality. More precisely, I will discuss two techniques for overcoming limitations of 3D displays. They exploit information about gaze direction as well as the motion-parallax cue. I will also demonstrate a new design of an automultiscopic screen for cinema and a prototype of a near-eye augmented reality display that supports focus cues. Next, I will show how careful rendering of frames enables continuous framerate manipulations, giving artists a new tool for video manipulation. The technique can, for example, reduce temporal artifacts without sacrificing the cinematic look of movie content. In the context of digital fabrication, I will present a perceptual model for compliance, with applications to 3D printing.

Piotr Didyk is an Independent Research Group Leader at the Cluster of Excellence on "Multimodal Computing and Interaction" at Saarland University (Germany), where he is heading a group on Perception, Display, and Fabrication. He is also appointed as a Senior Researcher at the Max Planck Institute for Informatics. Prior to this, he spent two years as a postdoctoral associate at the Massachusetts Institute of Technology. In 2012, he obtained his PhD from the Max Planck Institute for Informatics and Saarland University for his work on perceptual display. During his studies, he was also a visiting student at MIT. In 2008, he received his M.Sc. degree in Computer Science from the University of Wrocław (Poland). His research interests include human perception, new display technologies, image/video processing, and computational fabrication. His main focus is on techniques that account for properties of the human sensory system and human interaction to improve the perceived quality of the final images, videos, and 3D prints.

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 8.2.2017 17:59:44

Tracking with Discriminative Correlation Filters
Jiří Matas will give an invited talk within VGS-IT series on Thursday, March 2nd, 1pm, in E105.

Title: Tracking with Discriminative Correlation Filters

Abstract: Visual tracking is a core video processing problem with many applications, e.g. in surveillance, autonomous driving, sport analysis, augmented reality, film post-production and medical imaging.

In the talk, tracking methods based on Discriminative Correlation Filters (DCF) will be presented. DCF-based trackers are currently the top performers on most commonly used tracking benchmarks. Starting from the oldest and simplest versions of DCF trackers like MOSSE, we will progress to kernel-based and multi-channel variants including those exploiting CNN features. Finally, the Discriminative Correlation Filter with Channel and Spatial Reliability will be introduced.
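For intuition, here is a minimal single-frame MOSSE-style filter in the Fourier domain (a toy sketch added for this announcement, not code from the talk): the filter is fit in closed form so that correlation with a training patch yields a Gaussian response, and, by the shift theorem, correlating it with a circularly shifted patch moves the response peak by the same shift.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 32, 32
patch = rng.random((h, w))          # training "template" patch

# Desired correlation output: Gaussian peaked at the patch centre.
yy, xx = np.mgrid[0:h, 0:w]
cy, cx = h // 2, w // 2
g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * 2.0 ** 2))

# Closed-form MOSSE filter (single frame, regularization lam).
F, G = np.fft.fft2(patch), np.fft.fft2(g)
lam = 1e-4
H_conj = G * np.conj(F) / (np.abs(F) ** 2 + lam)

def respond(img):
    """Apply the filter to an image, returning the response map."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H_conj))

peak_train = np.unravel_index(np.argmax(respond(patch)), (h, w))
shifted = np.roll(patch, (3, 5), axis=(0, 1))   # simulate target motion
peak_shift = np.unravel_index(np.argmax(respond(shifted)), (h, w))
```

The displacement of the response peak is the estimated target motion; real DCF trackers like MOSSE additionally average the filter's numerator and denominator over frames and window the patches to suppress boundary effects.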

Time permitting, I will briefly introduce a problem that has been so far largely ignored by the computer vision community - tracking of blurred, fast moving objects.

Jiří Matas is a full professor at the Center for Machine Perception, Czech Technical University in Prague. He holds a PhD degree from the University of Surrey, UK (1995). He has published more than 200 papers in refereed journals and conferences. Google Scholar reports about 22 000 citations to his work and an h-index of 53. He received the best paper prize at the International Conference on Document Analysis and Recognition in 2015, the Scandinavian Conference on Image Analysis 2013, Image and Vision Computing New Zealand Conference 2013, the Asian Conference on Computer Vision 2007, and at British Machine Vision Conferences in 2002 and 2005. His students received a number of awards, e.g. Best Student Paper at ICDAR 2013, a Google Fellowship 2013, and various "Best Thesis" prizes. J. Matas is on the editorial board of IJCV and was the Associate Editor-in-Chief of IEEE T. PAMI. He is a member of the ERC Computer Science and Informatics panel. He has served in various roles at major international conferences, e.g. ICCV, CVPR, ICPR, NIPS, ECCV, co-chairing ECCV 2004 and CVPR 2007. He was a program co-chair of ECCV 2016. His research interests include object recognition, text localization and recognition, image retrieval, tracking, sequential pattern recognition, invariant feature detection, and Hough Transform and RANSAC-type optimization.

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 28.2.2017 09:02:55

Neural Network Supported Acoustic Beamforming for Speech Enhancement and Recognition
Reinhold Häb-Umbach will give an invited talk within VGS-IT series on Monday, April 24th, 1pm, in D0207.

Title: Neural Network Supported Acoustic Beamforming for Speech Enhancement and Recognition

Abstract: With multiple microphones, spatial information can be exploited to extract a target signal from a noisy environment. While the theory of statistically optimum beamforming is well established, the challenge lies in estimating the beamforming coefficients from the noisy input signal. Traditionally, these coefficients are derived from an estimate of the direction of arrival of the target signal, while more elaborate methods estimate the power spectral density (PSD) matrices of the desired and the interfering signals, thus avoiding the assumption of anechoic signal propagation. We have proposed to estimate these PSD matrices using spectral masks determined by a neural network. This combination of data-driven approaches with statistically optimum multi-channel filtering has delivered competitive results on the recent CHiME challenge. In this talk, we detail this approach and show that the concept is more general and can, for example, also be used for dereverberation. When used as a front-end for a speech recognition system, we further show how the neural network for spectral mask estimation can be optimized w.r.t. a word-error-rate-related criterion in an end-to-end setup.
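The PSD-matrix formulation can be sketched as follows (a minimal numpy illustration with synthetic matrices added for this announcement, not the CHiME system): the steering vector is taken as the principal eigenvector of the speech PSD matrix, as is common in mask-based beamforming, and the MVDR weights minimize output noise power subject to a distortionless constraint on the target direction.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4  # number of microphones (a single frequency bin is considered)

# Synthetic PSD matrices for one frequency bin.
d_true = np.exp(2j * np.pi * rng.random(M))      # true steering vector
phi_s = np.outer(d_true, d_true.conj())          # rank-1 speech PSD
v = np.exp(2j * np.pi * rng.random(M))           # directional interferer
phi_n = 0.2 * np.eye(M) + np.outer(v, v.conj())  # noise + interference PSD

# Steering-vector estimate: principal eigenvector of the speech PSD
# (in the talk's pipeline, phi_s and phi_n come from neural-network masks).
_, vecs = np.linalg.eigh(phi_s)
d_hat = vecs[:, -1]                              # unit-norm estimate

# MVDR weights: w = phi_n^{-1} d / (d^H phi_n^{-1} d)
a = np.linalg.solve(phi_n, d_hat)
w = a / (d_hat.conj() @ a)

distortion = w.conj() @ d_hat                    # should equal 1 exactly
noise_mvdr = np.real(w.conj() @ phi_n @ w)       # output noise power
noise_das = np.real(d_hat.conj() @ phi_n @ d_hat)  # delay-and-sum baseline
```

By construction the MVDR weights pass the target direction untouched while suppressing the noise-plus-interference at least as well as the delay-and-sum baseline; in practice this is computed independently for every frequency bin of the STFT.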

Reinhold Häb-Umbach is a professor of Communications Engineering at the University of Paderborn, Germany. His main research interests are in the fields of statistical signal processing and pattern recognition, with applications to speech enhancement, acoustic beamforming and source separation, as well as automatic speech recognition and unsupervised learning from speech and audio. He has more than 200 scientific publications, and recently co-authored the book Robust Automatic Speech Recognition - a Bridge to Practical Applications (Academic Press, 2015). He is a fellow of the International Speech Communication Association (ISCA).

All are cordially invited.
Tags: lecture, research, vgs-it
Created: 15.3.2017 16:52:18
