Frederic Garcia
University of Luxembourg
Publication
Featured research published by Frederic Garcia.
international conference on image processing | 2010
Frederic Garcia; Bruno Mirbach; Björn E. Ottersten; Frédéric Grandidier; Ángel Cuesta
This paper introduces a new multi-lateral filter to fuse low-resolution depth maps with high-resolution images. The goal is to enhance the resolution of Time-of-Flight sensors and, at the same time, reduce the noise level in depth measurements. Our approach is based on joint bilateral upsampling, extended by a new factor that accounts for the low reliability of depth measurements along the edges of the low-resolution depth map. Our experimental results show better performance than alternative depth-enhancing data fusion techniques.
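The core operation, joint bilateral upsampling, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, parameter values, and the toy step-edge data are ours, and the reliability factor the paper adds is omitted for brevity.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, scale, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Sketch of joint bilateral upsampling: each high-res depth value is a
    weighted average of low-res depth samples, weighted by spatial distance
    on the low-res grid and by similarity in the high-res guidance image."""
    H, W = guide_hi.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale  # position in the low-res grid
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(round(yl)) + dy, int(round(xl)) + dx
                    if not (0 <= qy < depth_lo.shape[0] and 0 <= qx < depth_lo.shape[1]):
                        continue
                    # spatial weight on the low-res grid
                    ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2) / (2 * sigma_s ** 2))
                    # range weight from the high-res guidance image
                    gy, gx = min(int(qy * scale), H - 1), min(int(qx * scale), W - 1)
                    wr = np.exp(-(guide_hi[y, x] - guide_hi[gy, gx]) ** 2 / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lo[qy, qx]
                    wsum += ws * wr
            out[y, x] = acc / wsum
    return out

# Toy example: 4x4 depth map with a step edge and an 8x8 guidance image
# whose intensity edge is aligned with the depth edge.
depth_lo = np.where(np.arange(4)[None, :] < 2, 1.0, 3.0) * np.ones((4, 4))
guide_hi = np.where(np.arange(8)[None, :] < 4, 0.2, 0.8) * np.ones((8, 8))
depth_hi = joint_bilateral_upsample(depth_lo, guide_hi, scale=2)
```

The range weight keeps the upsampled depth edge crisp: samples on the wrong side of the guidance edge receive near-zero weight.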
advanced video and signal based surveillance | 2011
Frederic Garcia; Djamila Aouada; Bruno Mirbach; Thomas Solignac; Björn E. Ottersten
We present an adaptive multi-lateral filter for real-time low-resolution depth map enhancement. Despite the great advantages of Time-of-Flight cameras in 3-D sensing, two main drawbacks restrict their use in a wide range of applications; namely, their fairly low spatial resolution, compared to other 3-D sensing systems, and the high noise level within the depth measurements. We therefore propose a new data fusion method based upon a bilateral filter. The proposed filter is an extension of the pixel weighted average strategy for depth sensor data fusion. It includes a new factor that adaptively selects either 2-D data or 3-D data as guidance information. Consequently, unwanted artefacts such as texture copying are almost entirely eliminated, outperforming alternative depth enhancement filters. In addition, our algorithm can be effectively and efficiently implemented for real-time applications.
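One plausible form for such an adaptive guidance factor is sketched below. This is our own illustration, not the paper's formula: the idea is a per-pixel weight that is high near depth discontinuities (where the 2-D image should guide the filter) and low in flat depth regions (where trusting the depth itself suppresses texture copying). The function name and the threshold parameter are assumptions.

```python
import numpy as np

def blending_weight(depth_lo, tau=0.5):
    """Per-pixel factor in [0, 1]: close to 1 near depth discontinuities
    (trust the 2-D guidance there), close to 0 in flat depth regions
    (trust the depth data, which suppresses texture copying)."""
    gy, gx = np.gradient(depth_lo)
    mag = np.hypot(gy, gx)                 # depth gradient magnitude
    return 1.0 - np.exp(-(mag / tau) ** 2)  # smooth 0-to-1 transition

# Toy depth map with a vertical step edge between columns 2 and 3.
depth = np.ones((6, 6))
depth[:, 3:] = 3.0
w = blending_weight(depth)
```

In flat regions `w` vanishes, so a textured but geometrically flat surface would not pick up intensity texture from the 2-D image.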
Iet Computer Vision | 2013
Frederic Garcia; Djamila Aouada; Thomas Solignac; Bruno Mirbach; Björn E. Ottersten
This study presents a real-time refinement procedure for depth data acquired by RGB-D cameras. Data from RGB-D cameras suffer from undesired artefacts such as edge inaccuracies or holes owing to occ ...
computer vision and pattern recognition | 2011
Frederic Garcia; Djamila Aouada; Bruno Mirbach; Thomas Solignac; Björn E. Ottersten
We present a full real-time implementation of a multilateral filtering system for the fusion of depth sensor data with 2-D data. For such a system to perform in real-time, it is necessary to have a real-time implementation of the filter, but also a real-time alignment of the data to be fused. To achieve an automatic data mapping, we express disparity as a function of the distance between the scene and the cameras, and reduce the matching procedure to a simple indexing step. Our experiments show that this implementation ensures the fusion of 3-D and 2-D data in real-time and with high accuracy.
IEEE Journal of Selected Topics in Signal Processing | 2012
Frederic Garcia; Djamila Aouada; Bruno Mirbach; Björn E. Ottersten
We propose a real-time mapping procedure for data matching to deal with hybrid time-of-flight (ToF) multi-camera rig data fusion. Our approach takes advantage of the depth information provided by the ToF camera to calculate the distance-dependent disparity between the two cameras that constitute the system. As a consequence, the non-co-centric binocular system behaves as a co-centric system with collinear optical axes. The association between mapped and non-mapped image coordinates can be described by a set of look-up tables. This, in turn, reduces the complexity of the whole process to a simple indexing step, and thus performs in real-time. The experimental results show that in addition to being straightforward and easy to compute, our proposed data matching approach is highly accurate, which facilitates further fusion operations.
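The distance-dependent disparity and the look-up-table indexing can be sketched as follows. This is a minimal illustration under assumed values, not the paper's calibration: the focal length, baseline, working range, and quantization count are hypothetical, and a real rig would use per-pixel tables derived from full calibration.

```python
import numpy as np

F_PX = 500.0     # assumed focal length in pixels (hypothetical value)
BASELINE = 0.05  # assumed camera baseline in metres (hypothetical value)
Z_MIN, Z_MAX = 0.5, 5.0  # assumed working depth range in metres

# Precompute a look-up table: disparity in pixels for quantized depths,
# using the standard rectified-stereo relation d = f * B / Z.
depth_levels = np.linspace(Z_MIN, Z_MAX, 4096)
disparity_lut = F_PX * BASELINE / depth_levels

def disparity_for(z):
    """Map measured depths to disparities by indexing the LUT: no per-pixel
    division at run time, just an index computation and a table read."""
    idx = ((z - Z_MIN) / (Z_MAX - Z_MIN) * (len(depth_levels) - 1)).astype(int)
    idx = np.clip(idx, 0, len(depth_levels) - 1)
    return disparity_lut[idx]

z = np.array([1.0, 2.0, 2.5])  # ToF depth measurements in metres
d = disparity_for(z)
```

Because the table is precomputed once, the per-frame cost is a pure indexing step, which is what makes the mapping real-time.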
Image and Vision Computing | 2015
Frederic Garcia; Djamila Aouada; Bruno Mirbach; Thomas Solignac; Björn E. Ottersten
This paper proposes a unified multi-lateral filter to efficiently increase the spatial resolution of low-resolution and noisy depth maps in real-time. Time-of-Flight (ToF) cameras have become a very promising alternative to stereo-based range sensing systems as they provide depth measurements at a high frame rate. However, two main drawbacks restrict their use in a wide range of applications; namely, their fairly low spatial resolution and the amount of noise within the depth estimation. In order to address these drawbacks, we propose a new approach based on sensor fusion. That is, we couple a low-resolution ToF camera with a higher-resolution 2-D camera to which the low-resolution depth map is efficiently upsampled. In this paper, we first review the existing depth map enhancement approaches based on sensor fusion and discuss their limitations. We then propose a unified multi-lateral filter that accounts for the inaccurate position of depth edges due to the low-resolution ToF depth maps. By doing so, unwanted artefacts such as texture copying and edge blurring are almost entirely eliminated. Moreover, the proposed filter can be configured to behave as most of the alternative depth enhancement approaches. Using a convolution-based formulation together with data quantization and downsampling, the described filter has been effectively and efficiently implemented for dynamic scenes in real-time applications. The experimental results show a significant qualitative as well as quantitative improvement on raw depth maps, outperforming state-of-the-art multi-lateral filters.
New multi-lateral filter to efficiently increase the spatial resolution of low-resolution and noisy depth maps in real-time.
ToF camera coupled with a 2-D camera of higher resolution to which the low-resolution depth map is upsampled.
We account for the inaccurate position of depth edges due to the low-resolution ToF depth maps.
Unwanted artefacts such as texture copying and edge blurring are almost entirely eliminated.
The proposed filter is convolution-based and achieves real-time performance through data quantization and downsampling.
The proposed filter has been effectively and efficiently implemented for dynamic scenes in real-time applications.
The proposed filter can be easily adapted to depth sensing systems other than ToF cameras.
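The "convolution-based formulation with data quantization" can be sketched with the well-known constant-time bilateral trick: quantize the range into a few levels, blur a numerator and denominator image once per level, then read each pixel's result from its nearest level. This is our simplified illustration (box blur instead of a Gaussian, nearest-level lookup instead of interpolation, names and parameters assumed), not the paper's filter.

```python
import numpy as np

def box_blur(img, r=1):
    """Separable box filter standing in for the spatial kernel."""
    out = img.copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for d in range(-r, r + 1):
            acc += np.roll(out, d, axis=axis)
        out = acc / (2 * r + 1)
    return out

def fast_bilateral(img, levels=8, sigma_r=0.2):
    """Bilateral filter expressed as per-level convolutions: quantize the
    intensity range, blur numerator/denominator once per level, then read
    each pixel's output from its nearest quantization level."""
    qs = np.linspace(img.min(), img.max(), levels)
    results = []
    for q in qs:
        w = np.exp(-(img - q) ** 2 / (2 * sigma_r ** 2))  # range weights
        num = box_blur(w * img)                            # blurred weighted image
        den = box_blur(w)                                  # blurred weights
        results.append(num / np.maximum(den, 1e-12))
    results = np.stack(results)                            # (levels, H, W)
    idx = np.abs(img[None] - qs[:, None, None]).argmin(axis=0)
    return np.take_along_axis(results, idx[None], axis=0)[0]

# A step image: the filter smooths within regions but preserves the edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
out = fast_bilateral(img)
```

The cost is a fixed number of convolutions regardless of the range kernel, which is what makes a real-time implementation feasible.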
international conference on computer vision | 2012
Frederic Garcia; Djamila Aouada; Hashim Kemal Abdella; Thomas Solignac; Bruno Mirbach; Björn E. Ottersten
This paper presents a general refinement procedure that enhances any given depth map obtained by passive or active sensing. Given a depth map, either estimated by triangulation methods or directly provided by the sensing system, and its corresponding 2-D image, we correct the depth values by separately treating regions with undesired effects such as empty holes, texture copying, or edge blurring due to homogeneous regions, occlusions, and shadowing. In this work, we use recent depth enhancement filters intended for Time-of-Flight cameras, and adapt them to alternative depth sensing modalities, both active using an RGB-D camera and passive using a dense stereo camera. To that end, we propose specific masks to tackle areas in the scene that require special treatment. Our experimental results show that such areas are satisfactorily handled by replacing erroneous depth measurements with accurate ones.
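A mask-driven refinement for one artefact class, empty holes, can be sketched as follows. This is our own minimal illustration (names, the invalid-value convention, and the median fill rule are assumptions), not the paper's masks or filters.

```python
import numpy as np

def fill_holes(depth, invalid=0.0, r=1):
    """Refinement sketch for one artefact class: a mask marks invalid pixels
    (holes), and each hole is replaced by the median of the valid depth
    values in its (2r+1)x(2r+1) neighbourhood."""
    out = depth.copy()
    mask = depth == invalid          # mask of pixels needing special treatment
    for y, x in zip(*np.nonzero(mask)):
        patch = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        valid = patch[patch != invalid]
        if valid.size:               # leave the hole if no valid neighbour exists
            out[y, x] = np.median(valid)
    return out

# Toy depth map: a flat 2 m surface with one hole at the centre.
d = np.full((5, 5), 2.0)
d[2, 2] = 0.0
filled = fill_holes(d)
```

Other artefact classes (occlusions, shadowing, edge blurring) would each get their own mask and a treatment suited to that artefact.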
international conference on image processing | 2012
Frederic Garcia; Djamila Aouada; Bruno Mirbach; Björn E. Ottersten
We propose an extension of our previous work on spatial-domain Time-of-Flight (ToF) data enhancement to the temporal domain. Our goal is to generate enhanced depth maps at the same frame rate as the 2-D camera that, coupled with a ToF camera, constitutes a hybrid ToF multi-camera rig. To that end, we first estimate the motion between consecutive 2-D frames, and then use it to predict their corresponding depth maps. The enhanced depth maps result from the fusion of the recorded 2-D frames with the predicted depth maps, using our previous contribution on ToF data enhancement. The experimental results show that the proposed approach overcomes the ToF camera drawbacks; namely, low spatial and temporal resolution and a high level of noise within the depth measurements, providing enhanced depth maps at video frame rate.
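The prediction step, estimate motion from the 2-D frames and apply it to the previous depth map, can be sketched as follows. This is a deliberately simplified illustration: a global integer translation found by exhaustive SAD search stands in for real motion estimation, and all names and data are ours.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Estimate a global integer 2-D translation between consecutive 2-D
    frames by exhaustive SAD search (a stand-in for real motion estimation)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.abs(np.roll(prev, (dy, dx), axis=(0, 1)) - curr).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def predict_depth(depth_prev, shift):
    """Apply the 2-D motion to the previous depth map to predict depth at
    the new 2-D frame's time stamp."""
    return np.roll(depth_prev, shift, axis=(0, 1))

# Toy scene: a bright object moves 2 pixels to the right between frames.
prev = np.zeros((8, 8)); prev[2:4, 2:4] = 1.0
curr = np.roll(prev, (0, 2), axis=(0, 1))
depth_prev = np.where(prev > 0, 1.5, 4.0)   # object at 1.5 m, background at 4 m

shift = estimate_shift(prev, curr)
depth_pred = predict_depth(depth_prev, shift)
```

The predicted depth map is then fused with the recorded 2-D frame by the spatial enhancement filter, yielding depth at the 2-D camera's frame rate.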
Journal of Electronic Imaging | 2015
Frederic Garcia; Cedric Schockaert; Bruno Mirbach
An image detail enhancement method to effectively visualize low-contrast targets in high-dynamic-range (HDR) infrared (IR) images is presented, regardless of the width of the dynamic range. In general, the high temperature dynamics of real-world scenes are encoded in a 12- or 14-bit IR image. However, the limitations of human visual perception, which can distinguish no more than about 128 shades of gray, and the 8-bit working range of common display devices make an effective mapping of the 12/14-bit HDR data into an 8-bit representation necessary. To do so, we propose to independently treat the base and detail image components that result from splitting the IR image using two dedicated guided filters. We also introduce a plausibility mask that accurately delineates the regions prone to noise, so that they can be explicitly treated to avoid noise amplification. The final 8-bit representation results from combining the processed detail and base image components and mapping them to the 8-bit domain using an adaptive histogram-based projection approach. The limits of the histogram are adapted over time in order to avoid global brightness fluctuations between frames. The experimental evaluation shows that the proposed noise-aware approach preserves low-contrast details while enhancing the overall contrast of the image. A comparison with widely used HDR mapping approaches and a runtime analysis are also provided. Furthermore, the proposed mathematical formulation enables real-time adjustment of the global contrast and brightness, letting the operator adapt the visualization to the display device without undesirable artifacts.
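The base/detail pipeline can be sketched as follows. This is our own simplified illustration, not the paper's method: a box blur stands in for the guided filter, a percentile-clipped linear projection stands in for the adaptive histogram-based projection, and the plausibility mask and temporal filtering are omitted; all names and parameter values are assumptions.

```python
import numpy as np

def box_blur(img, r=2):
    """Simple smoothing stand-in for the guided filter of the paper."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for d in range(-r, r + 1):
            acc += np.roll(out, d, axis=axis)
        out = acc / (2 * r + 1)
    return out

def tone_map_ir(img14, detail_gain=2.0):
    """Base/detail HDR mapping sketch: split the 14-bit frame into a smooth
    base and a detail residual, compress the base into 8 bits with a
    percentile-clipped linear projection, boost the detail, and recombine."""
    base = box_blur(img14)
    detail = img14 - base
    lo, hi = np.percentile(base, [1, 99])        # robust working range
    scale = 255.0 / max(hi - lo, 1e-9)
    base8 = np.clip((base - lo) * scale, 0, 255)
    out = np.clip(base8 + detail_gain * detail * scale, 0, 255)
    return out.astype(np.uint8)

# Toy 14-bit frame (random, for illustration only).
rng = np.random.default_rng(0)
frame = rng.integers(0, 2 ** 14, size=(16, 16)).astype(float)
mapped = tone_map_ir(frame)
```

Because the detail residual is compressed separately from the base, low-contrast detail survives the 14-to-8-bit compression that would otherwise wash it out.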
Twelfth International Conference on Quality Control by Artificial Vision 2015 | 2015
Frederic Garcia; Cedric Schockaert; Bruno Mirbach
This paper presents a noise removal and image detail enhancement method that accounts for the limitations of human perception to effectively visualize high-dynamic-range (HDR) infrared (IR) images. In order to represent real-world scenes, IR images usually have a high dynamic range that exceeds the working range of common display devices (8 bits). Therefore, an effective HDR mapping that does not lose the perceptibility of details is needed. To do so, we introduce the use of two guided filters (GF) to generate accurate base and detail image components. A plausibility mask is also generated from the combination of the linear coefficients that result from each GF; it is an indicator of spatial detail that identifies the regions of the detail image component that are prone to noise. Finally, we filter the working range of the HDR over time to avoid global brightness fluctuations in the final 8-bit representation, which results from combining the detail and base image components using a local adaptive gamma correction (LAGC) designed according to the characteristics of human vision. The experimental evaluation shows that the proposed approach significantly enhances image details in addition to improving the contrast of the entire image. Finally, the high performance of the proposed approach makes it suitable for real-world applications.