Yannick Francken
University of Hasselt
Publication
Featured research published by Yannick Francken.
canadian conference on computer and robot vision | 2007
Yannick Francken; Chris Hermans; Philippe Bekaert
Developments in the consumer market have indicated that the average user of a personal computer is likely to also own a webcam. With the emergence of this new user group will come a new set of applications, which will require a user-friendly way to calibrate the position of the camera with respect to the location of the screen. This paper presents a fully automatic method to calibrate a screen-camera setup, using a single moving spherical mirror. Unlike other methods, our algorithm needs no user intervention other than moving around a spherical mirror. In addition, if the user provides the algorithm with the exact radius of the sphere in millimeters, the scale of the computed solution is uniquely defined.
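The role of the known sphere radius can be illustrated with a basic pinhole relation (an illustrative sketch, not the paper's calibration pipeline): a sphere of known physical radius observed at a given image radius pins down its distance, which fixes the metric scale of the reconstruction. The helper name and parameters below are hypothetical.

```python
def sphere_distance_mm(focal_px, radius_mm, radius_px):
    """Approximate pinhole relation: the distance (in mm) at which a
    sphere of physical radius radius_mm projects to an image circle of
    radius_px pixels, given the camera focal length in pixels.
    Illustrative only; the paper's method estimates the full
    screen-camera geometry, not just this distance."""
    return focal_px * radius_mm / radius_px
```

For example, with a 1000 px focal length, a 25 mm sphere appearing 50 px wide in radius lies about 500 mm from the camera; without the known radius in millimeters, only the ratio (and hence the scene up to scale) is determined.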
computer vision and pattern recognition | 2009
Chris Hermans; Yannick Francken; Tom Cuypers; Philippe Bekaert
In this paper we present a novel method for 3D structure acquisition, based on structured light. Unlike classical structured light methods, in which a static projector illuminates a scene with time-varying illumination patterns, our technique makes use of a moving projector emitting a static striped illumination pattern. This projector is translated at a constant velocity, in the direction of the projector's horizontal axis. Illuminating the object in this manner allows us to perform a per-pixel analysis, in which we decompose the recorded illumination sequence into a corresponding set of frequency components. The dominant frequency in this set can be directly converted into a corresponding depth value. This per-pixel analysis allows us to preserve sharp edges in the depth image. Unlike classical structured light methods, the quality of our results is not limited by projector or camera resolution, but is solely dependent on the temporal sampling density of the captured image sequence. Additional benefits include significant robustness against common problems encountered with structured light methods, such as occlusions, specular reflections, subsurface scattering, interreflections, and, to a certain extent, projector defocus.
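The per-pixel frequency analysis described above can be sketched as follows: each pixel's recorded intensity sequence is transformed to the frequency domain and the dominant peak is picked out. This is a minimal sketch, assuming a uniformly sampled sequence; the actual mapping from frequency to depth depends on the projector/camera geometry and is not reproduced here.

```python
import numpy as np

def dominant_frequency(intensity_sequence, fps):
    """Return the dominant nonzero temporal frequency (in Hz) of one
    pixel's recorded intensity sequence, found as the peak of the FFT
    magnitude spectrum after removing the mean (DC) component.
    Illustrative sketch of the per-pixel analysis; converting this
    frequency to a depth value requires the setup's geometry."""
    intensity_sequence = np.asarray(intensity_sequence, dtype=float)
    spectrum = np.abs(np.fft.rfft(intensity_sequence - intensity_sequence.mean()))
    freqs = np.fft.rfftfreq(len(intensity_sequence), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]
```

Because the analysis is independent per pixel, depth discontinuities in the scene do not blur into neighboring pixels, which is why sharp edges are preserved in the depth image.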
international symposium on visual computing | 2009
Yannick Francken; Tom Cuypers; Tom Mertens; Philippe Bekaert
We propose a technique for gloss and normal map acquisition of fine-scale specular surface details, or mesostructure. Our main goal is to provide an efficient, easily applicable, but sufficiently accurate method to acquire mesostructures. We therefore employ a setup consisting of inexpensive and accessible components, including a regular computer screen and a digital still camera. We extend the Gray code based normal map acquisition approach of Francken et al. [1], which utilizes a similar setup. The quality of the original method is retained, and without requiring any extra input data we are able to extract per-pixel glossiness information. In the paper we show the theoretical background of the method as well as results on real-world specular mesostructures.
canadian conference on computer and robot vision | 2008
Yannick Francken; Chris Hermans; Tom Cuypers; Philippe Bekaert
We propose an efficient technique for normal map acquisition, using a cheap and easy-to-build setup. Our setup consists solely of off-the-shelf components, such as an LCD screen, a digital camera and a linear polarizing filter. The LCD screen is employed as a linearly polarized light source emitting gradient patterns, whereas the digital camera is used to capture the incident illumination reflected off the scanned object's surface. By exploiting the fact that light emitted by an LCD screen is linearly polarized, we use the filter to suppress any specular highlights. Based on the observed Lambertian reflection of only four different light patterns, we are able to obtain a detailed normal map of the scanned surface. Overall, our technique produces convincing results, even on weakly specular materials.
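The idea of recovering normals from a few gradient-illumination images can be sketched under strong simplifying assumptions: with a Lambertian surface, an ideal left-to-right gradient, a bottom-to-top gradient, and a uniform pattern, the ratio of each gradient image to the fully lit image approximately encodes one normal component. The relation n ≈ 2·I_gradient/I_full − 1 below is an illustrative simplification, not the paper's exact derivation.

```python
import numpy as np

def normals_from_gradient_ratios(img_x, img_y, img_full, eps=1e-6):
    """Estimate per-pixel surface normals from three images: one lit by
    a horizontal gradient (img_x), one by a vertical gradient (img_y),
    and one fully lit (img_full). Assumes Lambertian reflection and the
    idealized linear relation n_i = 2 * I_gradient / I_full - 1
    (an illustrative simplification)."""
    nx = 2.0 * img_x / (img_full + eps) - 1.0
    ny = 2.0 * img_y / (img_full + eps) - 1.0
    # The z component follows from the unit-length constraint.
    nz = np.sqrt(np.clip(1.0 - nx**2 - ny**2, 0.0, 1.0))
    n = np.stack([nx, ny, nz], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True).clip(eps)
```

A flat patch facing the camera reflects half of each gradient pattern's range, so both ratios sit near 0.5 and the recovered normal points straight at the viewer.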
Proceedings of the 5th ACM/IEEE International Workshop on Projector camera systems | 2008
Yannick Francken; Tom Cuypers; Philippe Bekaert
We present a method to efficiently acquire specular mesostructure normal maps, only making use of off-the-shelf components, such as a digital still camera, an LCD screen and a linear polarizing filter. Where current methods require a specialized setup, or a considerable number of input images, we only need a cheap setup to maintain a similar level of quality. We verify the presented theory on real world examples, and provide a ground truth evaluation on photorealistic synthetic data.
international symposium on visual computing | 2009
Chris Hermans; Yannick Francken; Tom Cuypers; Philippe Bekaert
We present a novel method for 3D shape acquisition, based on mobile structured light. Unlike classical structured light methods, in which a static projector illuminates the scene with dynamic illumination patterns, mobile structured light employs a moving projector translated at a constant velocity in the direction of the projector's horizontal axis, emitting static or dynamic illumination. For our approach, a time-multiplexed mix of two signals is used: (1) a wave pattern, enabling the recovery of point-projector distances for each point observed by the camera, and (2) a 2D De Bruijn pattern, used to uniquely encode a sparse subset of projector pixels. Based on this information, retrieved on a per (camera) pixel basis, we are able to estimate a sparse reconstruction of the scene. As this sparse set of 2D-3D camera-scene correspondences is sufficient to recover the camera location and orientation within the scene, we are able to convert the dense set of point-projector distances into a dense set of camera depths, effectively providing us with a dense reconstruction of the observed scene. We have verified our technique using both synthetic and real-world data. Our experiments display the same level of robustness as previous mobile structured light methods, combined with the ability to accurately estimate dense scene structure and accurate camera/projector motion without the need for prior calibration.
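The uniqueness property that makes De Bruijn patterns useful for encoding projector pixels can be shown with the standard (1D) construction: a De Bruijn sequence B(k, n) over an alphabet of size k contains every length-n word exactly once, so any observed window of n symbols identifies its position. This is the classic FKM construction, shown for illustration; the paper uses a 2D variant not reproduced here.

```python
def de_bruijn(k, n):
    """Generate a De Bruijn sequence B(k, n): a cyclic sequence over
    the alphabet {0, ..., k-1} in which every length-n word occurs
    exactly once (standard FKM / Lyndon-word construction)."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence
```

For a binary alphabet and window length 3, the sequence has length 2³ = 8, and each of the 8 possible 3-bit windows appears once, so observing any 3 consecutive symbols pins down a unique position in the pattern.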
computer vision and pattern recognition | 2009
Tom Cuypers; Yannick Francken; Johannes Taelman; Philippe Bekaert
In this work we propose a real-time implementation for efficient extraction of multi-viewpoint silhouettes using a single camera. The method is based on our previously presented proof-of-concept shadow multiplexing method. We replace the cameras of a typical multi-camera setup with colored light sources and capture the multiplexed shadows. Because we only use a single camera, our setup is much cheaper than a classical setup, no camera synchronization is required, and less data has to be captured and processed. In addition, silhouette extraction is simple, as we segment the shadows instead of the texture of objects and background. Demultiplexing runs at 40 fps on current graphics hardware, making this technique suitable for real-time applications such as collision detection. We evaluate our method on both a real and a virtual setup, and show that our technique works for a large variety of objects and materials.
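The demultiplexing step can be sketched as a per-channel comparison (a minimal CPU sketch, assuming one light per color channel and a shadow-free background frame; the real system also handles color crosstalk and runs on the GPU): the shadow cast by light i shows up as an intensity deficit in channel i.

```python
import numpy as np

def demultiplex_shadows(image, background, threshold=0.3):
    """Recover one binary silhouette mask per color channel from an
    image of multiplexed colored shadows. A shadow cast by light i
    darkens channel i relative to the shadow-free background frame,
    so thresholding the per-channel deficit separates the shadows.
    Illustrative sketch only."""
    image = np.asarray(image, dtype=np.float64)
    background = np.asarray(background, dtype=np.float64)
    deficit = background - image
    return deficit > threshold * background
```

Each channel of the returned mask is the silhouette seen from the corresponding light's viewpoint, which is how a single camera yields multiple viewpoints.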
international conference on computer graphics and interactive techniques | 2007
Yannick Francken; Tom Mertens; Jo Gielis; Philippe Bekaert
In this paper we propose a technique for efficient acquisition of fine-scale surface details, or mesostructure. Inspired by Chen et al. [2006], we wish to recover mesostructure normal maps using a single top-view camera and a point light source, based only on specularities. Many interesting materials such as plastic and metal can be analyzed this way. Moreover, certain translucent materials such as human skin and fruit are hard to analyze using traditional photometric stereo, due to excessive subsurface scattering. Specularities are not influenced by this, enabling us to capture detailed normal maps for such cases.
advances in computer entertainment technology | 2007
Yannick Francken; Johan Huysmans; Philippe Bekaert
We present a method for sharing visual information in 3D virtual environments, using a projective texture mapping based method. Avatars can share information with other avatars by projecting relevant information into the environment. In addition to standard projective texture mapping, an important depth cue is added: projected light is attenuated as a function of the light-travel distance. This is efficiently accomplished on a per-vertex basis by adaptively super-sampling under-sampled polygons. This way, the projection quality is maximized while keeping a fixed frame rate. Our technique is implemented in the Quake III engine, extending its shading language with GLSL fragment and vertex shaders.
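The distance-based attenuation providing the depth cue can be sketched with a common falloff model (an assumption for illustration; the abstract does not state the exact falloff used in the engine):

```python
def attenuated_intensity(base, distance, k=0.01):
    """Attenuate a projected intensity as a function of light-travel
    distance, using an inverse-quadratic falloff. The falloff model
    and the constant k are illustrative assumptions, not taken from
    the paper; in the actual system this is evaluated per vertex."""
    return base / (1.0 + k * distance * distance)
```

Evaluating such a function per vertex is cheap, but on large, coarsely tessellated polygons the linear interpolation between vertices misrepresents the falloff, which is why under-sampled polygons are adaptively super-sampled.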
international conference on computer graphics theory and applications | 2008
Tom Cuypers; Cedric Vanaken; Yannick Francken; Frank Van Reeth; Philippe Bekaert