

Publications


Featured research published by Bruno Mirbach.


International Conference on Image Processing | 2010

Pixel weighted average strategy for depth sensor data fusion

Frederic Garcia; Bruno Mirbach; Björn E. Ottersten; Frédéric Grandidier; Ángel Cuesta

This paper introduces a new multi-lateral filter to fuse low-resolution depth maps with high-resolution images. The goal is to enhance the resolution of Time-of-Flight sensors and, at the same time, reduce the noise level in depth measurements. Our approach is based on joint bilateral upsampling, extended by a new factor that accounts for the low reliability of depth measurements along edges in the low-resolution depth map. Our experimental results show better performance than alternative depth-enhancing data fusion techniques.
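The weighting idea can be sketched in a few lines of Python. This is a minimal, illustrative version only: the function name, window size, and parameter values are assumptions, not the paper's. Bilateral spatial and guidance-range weights are multiplied by a credibility term that down-weights depth samples near discontinuities in the low-resolution depth map.

```python
import numpy as np

def pwas_filter(depth_lr, guidance, sigma_s=2.0, sigma_r=0.1, sigma_c=0.05):
    """Toy pixel-weighted-average fusion: bilateral weights from an
    aligned high-resolution guidance image, times a credibility term
    that down-weights samples near depth discontinuities."""
    h, w = depth_lr.shape
    # Credibility: low where the local depth gradient is large (edges).
    gy, gx = np.gradient(depth_lr)
    cred = np.exp(-(gx**2 + gy**2) / (2 * sigma_c**2))
    out = np.zeros_like(depth_lr)
    r = 2  # window radius (illustrative choice)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ws = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_s**2))
            wr = np.exp(-(guidance[y0:y1, x0:x1] - guidance[y, x])**2
                        / (2 * sigma_r**2))
            wgt = ws * wr * cred[y0:y1, x0:x1]
            out[y, x] = np.sum(wgt * depth_lr[y0:y1, x0:x1]) / np.sum(wgt)
    return out

# Smoke test on a synthetic step-edge depth map with aligned guidance.
depth = np.ones((16, 16)); depth[:, 8:] = 2.0
depth_noisy = depth + 0.05 * np.random.default_rng(0).standard_normal(depth.shape)
guide = depth.copy()  # assume a perfectly aligned intensity image
refined = pwas_filter(depth_noisy, guide)
```

On this toy input the filtered map is less noisy than the raw one while the step edge is preserved by the guidance weights.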


Advanced Video and Signal Based Surveillance | 2011

A new multi-lateral filter for real-time depth enhancement

Frederic Garcia; Djamila Aouada; Bruno Mirbach; Thomas Solignac; Björn E. Ottersten

We present an adaptive multi-lateral filter for real-time low-resolution depth map enhancement. Despite the great advantages of Time-of-Flight cameras in 3-D sensing, two main drawbacks restrict their use in a wide range of applications: their fairly low spatial resolution compared to other 3-D sensing systems, and the high noise level in the depth measurements. We therefore propose a new data fusion method based upon a bilateral filter. The proposed filter is an extension of the pixel weighted average strategy for depth sensor data fusion. It includes a new factor that adaptively considers 2-D data or 3-D data as guidance information. Consequently, unwanted artefacts such as texture copying are almost entirely eliminated, outperforming alternative depth enhancement filters. In addition, our algorithm can be effectively and efficiently implemented for real-time applications.
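One simple way to realise such an adaptive guidance choice is sketched below. The formula and names are illustrative assumptions, not the paper's: a per-pixel blending weight derived from the local depth gradient tends to 1 near depth discontinuities (trust the 2-D image) and to 0 in flat depth regions (trust the depth map itself), which suppresses texture copying.

```python
import numpy as np

def guidance_blend_weight(depth, sigma_e=0.1):
    """Per-pixel blending factor between 2-D and 3-D guidance.
    ~1 near depth discontinuities (large depth gradient), so the 2-D
    image guides the filter there; ~0 in flat depth regions, so the
    depth map guides itself and 2-D texture is not copied.
    Illustrative formula, not the one from the paper."""
    gy, gx = np.gradient(depth)
    edge_strength = np.sqrt(gx**2 + gy**2)
    return 1.0 - np.exp(-edge_strength**2 / (2 * sigma_e**2))

depth = np.ones((8, 8))
depth[:, 4:] = 2.0                      # a single vertical depth edge
beta = guidance_blend_weight(depth)
# beta is close to 1 around the edge columns and close to 0 elsewhere
```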


IET Computer Vision | 2013

Real-time depth enhancement by fusion for RGB-D cameras

Frederic Garcia; Djamila Aouada; Thomas Solignac; Bruno Mirbach; Björn E. Ottersten

This study presents a real-time refinement procedure for depth data acquired by RGB-D cameras. Data from RGB-D cameras suffer from undesired artefacts such as edge inaccuracies or holes owing to occ ...


IEEE Transactions on Vehicular Technology | 2009

3-D-Skeleton-Based Head Detection and Tracking Using Range Images

Pandu Ranga Rao Devarakota; Marta Castillo-Franco; Romuald Ginhoux; Bruno Mirbach; Serge Kater; Björn E. Ottersten

Vision-based 3-D head detection and tracking systems have been studied in several applications like video surveillance, face-detection systems, and occupant posture analysis. In this paper, we present the development of a topology-based framework using a 3-D skeletal model for the robust detection and tracking of a vehicle occupant's head position from low-resolution range image data for a passive safety system. Unlike previous approaches to head detection, the proposed approach explores the topology information of a scene to detect the position of the head. Among the different available topology representations, the Reeb graph technique is chosen and is adapted to low-resolution 3-D range images. Invariance of the graph under rotations is achieved by using a Morse radial distance function. To cope with particular challenges such as noise and large variations in the density of the data, a voxel neighborhood connectivity notion is proposed. A multiple-hypothesis tracker (MHT) with nearest-neighbor data association and Kalman filter prediction is applied on the endpoints of the Reeb graph to select and filter the correct head candidate out of the Reeb graph endpoints. A systematic evaluation of the head detection framework is carried out on full-scale experimental 3-D range images and compared with the ground truth. It is shown that the Reeb graph topology algorithm developed herein allows the correct detection of the head of the occupant with only two head candidates as input to the MHT. Results of the experiments demonstrate that the proposed framework is robust under large variations of the scene. The processing requirements of the proposed approach are discussed. It is shown that the number of operations is rather low and that real-time processing requirements can be met with the proposed method.
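The association step can be illustrated with a small sketch (names and coordinates are invented for illustration): among the Reeb-graph endpoint candidates, keep the one nearest the tracker's predicted head position. The paper uses a full multiple-hypothesis tracker with Kalman prediction; this shows only the nearest-neighbor selection.

```python
import numpy as np

def select_head(candidates, predicted):
    """Toy nearest-neighbour data association: pick the Reeb-graph
    endpoint closest to the predicted head position."""
    d = np.linalg.norm(candidates - predicted, axis=1)
    return candidates[np.argmin(d)]

cands = np.array([[0.0, 0.9, 1.2],    # endpoint near the head
                  [0.3, -0.2, 0.4]])  # endpoint from e.g. a hand
pred = np.array([0.05, 0.85, 1.15])   # predicted head position (metres)
head = select_head(cands, pred)
```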


Computer Vision and Pattern Recognition | 2011

Real-time hybrid ToF multi-camera rig fusion system for depth map enhancement

Frederic Garcia; Djamila Aouada; Bruno Mirbach; Thomas Solignac; Björn E. Ottersten

We present a full real-time implementation of a multilateral filtering system for depth sensor data fusion with 2-D data. For such a system to perform in real-time, it is necessary to have a real-time implementation of the filter, but also a real-time alignment of the data to be fused. To achieve an automatic data mapping, we express disparity as a function of the distance between the scene and the cameras, and simplify the matching procedure to a simple indexation procedure. Our experiments show that this implementation ensures the fusion of 3-D data and 2-D data in real-time and with high accuracy.
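The distance-dependent indexation idea can be sketched as follows. The rig parameters and names here are assumptions for illustration, not values from the paper: disparity between the two cameras is precomputed over a discretised depth range (d = f·B/Z for a rectified pair), so the per-pixel mapping at run time reduces to a table lookup.

```python
import numpy as np

# Hypothetical rig parameters (illustrative, not from the paper):
FOCAL_PX = 580.0    # focal length in pixels
BASELINE_M = 0.05   # baseline between ToF and 2-D camera in metres

# Precompute a disparity look-up table over discretised depths, so the
# per-pixel mapping at run time is a single indexing step.
depth_axis = np.linspace(0.5, 5.0, 4096)            # working range in metres
disparity_lut = FOCAL_PX * BASELINE_M / depth_axis  # d = f * B / Z

def map_pixel(x_tof, depth_m):
    """Map a ToF pixel column to the 2-D camera using the LUT."""
    idx = np.clip(np.searchsorted(depth_axis, depth_m), 0, len(depth_axis) - 1)
    return x_tof + disparity_lut[idx]

x_mapped = map_pixel(100, 1.0)   # a point 1 m away
# closer points shift more than farther ones, as expected from d = f*B/Z
```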


Computer Vision and Pattern Recognition | 2015

Real-time non-rigid multi-frame depth video super-resolution

Kassem Al Ismaeil; Djamila Aouada; Thomas Solignac; Bruno Mirbach; Björn E. Ottersten

This paper proposes to enhance low-resolution dynamic depth videos containing freely, non-rigidly moving objects with a new dynamic multi-frame super-resolution algorithm. Existing methods are either limited to rigid objects, or restricted to global lateral motions, discarding radial displacements. We address these shortcomings by accounting for non-rigid displacements in 3D. In addition to 2D optical flow, we estimate the depth displacement, and simultaneously correct the depth measurement by Kalman filtering. This concept is incorporated efficiently in a multi-frame super-resolution framework. It is formulated in a recursive manner that ensures an efficient deployment in real-time. Results show the overall improved performance of the proposed method compared to alternative approaches, specifically in handling relatively large 3D motions. Test examples range from a full moving human body to a highly dynamic facial video with varying expressions.
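The per-pixel measurement-correction step can be illustrated with a minimal scalar Kalman filter, a toy stand-in for the idea described above (a static depth model with assumed noise variances), not the paper's full recursive super-resolution pipeline.

```python
import numpy as np

def kalman_depth(measurements, meas_var=1e-2, proc_var=1e-5):
    """Minimal per-pixel scalar Kalman filter: each new noisy depth
    measurement refines the running estimate and its variance."""
    x, p = measurements[0], meas_var   # initial state and variance
    for z in measurements[1:]:
        p = p + proc_var               # predict (static depth model)
        k = p / (p + meas_var)         # Kalman gain
        x = x + k * (z - x)            # correct with the measurement
        p = (1.0 - k) * p
    return x, p

rng = np.random.default_rng(2)
true_depth = 1.5
zs = true_depth + 0.1 * rng.standard_normal(50)  # 50 noisy frames
est, var = kalman_depth(zs, meas_var=0.01)
# the estimate converges toward the true depth, and its variance shrinks
```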


International Conference on Pattern Recognition | 2014

RGB-D Multi-view System Calibration for Full 3D Scene Reconstruction

Hassan Afzal; Djamila Aouada; David Font; Bruno Mirbach; Björn E. Ottersten

One of the most crucial requirements for building a multi-view system is the estimation of relative poses of all cameras. An approach tailored for an RGB-D camera based multi-view system is missing. We propose BAICP+, which combines Bundle Adjustment (BA) and Iterative Closest Point (ICP) algorithms to take into account both 2D visual and 3D shape information in one minimization formulation to estimate the relative pose parameters of each camera. BAICP+ is generic enough to take different types of visual features into account and can be easily adapted to varying quality of 2D and 3D data. We perform experiments on real and simulated data. Results show that, with the right weighting factor, BAICP+ performs better than BA and ICP used independently or sequentially.
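The spirit of the combined objective can be sketched as a weighted sum of a BA-like 2-D reprojection residual and an ICP-like 3-D point-to-point residual, evaluated for one relative pose (R, t). The weighting, variable names, and intrinsics below are assumptions for this sketch, not the paper's formulation.

```python
import numpy as np

def baicp_cost(R, t, pts3d_a, pts3d_b, feat_a, feat_b, K, w=0.5):
    """Illustrative BAICP+-style objective: w * (mean 2-D reprojection
    error over matched feature points) + (1-w) * (mean 3-D point-to-point
    distance), for a candidate relative pose (R, t)."""
    # ICP-like term: distance between transformed and target 3-D points.
    icp = np.linalg.norm((pts3d_a @ R.T + t) - pts3d_b, axis=1).mean()
    # BA-like term: project transformed feature points into camera b.
    proj = (feat_a @ R.T + t) @ K.T
    proj2d = proj[:, :2] / proj[:, 2:3]
    target = feat_b @ K.T
    target2d = target[:, :2] / target[:, 2:3]
    ba = np.linalg.norm(proj2d - target2d, axis=1).mean()
    return w * ba + (1.0 - w) * icp

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # toy intrinsics
pts = np.random.default_rng(3).random((20, 3)) + [0, 0, 2]   # points ~2 m away
cost_identity = baicp_cost(np.eye(3), np.zeros(3), pts, pts, pts, pts, K)
# the cost vanishes at the true (identity) pose and grows when perturbed
```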


International Conference on Image Processing | 2013

Dynamic super resolution of depth sequences with non-rigid motions

Kassem Al Ismaeil; Djamila Aouada; Bruno Mirbach; Björn E. Ottersten

We enhance the resolution of depth videos acquired with low-resolution time-of-flight cameras. To that end, we propose a new dedicated dynamic super-resolution method that can accurately super-resolve a depth sequence containing one or multiple moving objects without strong constraints on their shape or motion. It thereby clearly outperforms existing super-resolution techniques, which perform poorly on depth data and are either restricted to global motions or imprecise because of an implicit estimation of motion. The proposed approach is based on a new data model that leads to a robust registration of all depth frames after a dense upsampling. The textureless nature of depth images allows sequences with multiple moving objects to be handled robustly, as confirmed by our experiments.


IEEE Journal of Selected Topics in Signal Processing | 2012

Real-Time Distance-Dependent Mapping for a Hybrid ToF Multi-Camera Rig

Frederic Garcia; Djamila Aouada; Bruno Mirbach; Björn E. Ottersten

We propose a real-time mapping procedure for data matching to deal with hybrid time-of-flight (ToF) multi-camera rig data fusion. Our approach takes advantage of the depth information provided by the ToF camera to calculate the distance-dependent disparity between the two cameras that constitute the system. As a consequence, the non-co-centric binocular system behaves as a co-centric system with collinear optical axes between the sensors. The association between mapped and non-mapped image coordinates can be described by a set of look-up tables. This, in turn, reduces the complexity of the whole process to a simple indexing step, and thus performs in real-time. The experimental results show that in addition to being straightforward and easy to compute, our proposed data matching approach is highly accurate, which facilitates further fusion operations.


Computer Analysis of Images and Patterns | 2013

Depth Super-Resolution by Enhanced Shift and Add

Kassem Al Ismaeil; Djamila Aouada; Bruno Mirbach; Björn E. Ottersten

We use multi-frame super-resolution, specifically Shift & Add, to increase the resolution of depth data. In order to deploy such a framework in practice without requiring a very high number of observed low-resolution frames, we improve the initial estimation of the high-resolution frame. To that end, we propose a new data model that leads to a median estimation from densely upsampled low-resolution frames. We show that this new formulation solves the problem of undefined pixels and further improves the performance of pyramidal motion estimation in the context of super-resolution without additional computational cost. As a consequence, it increases the motion diversity within a small number of observed frames, making the enhancement of depth data more practical. Quantitative experiments run on the Middlebury dataset show that our method outperforms state-of-the-art techniques in terms of accuracy and robustness to the number of frames and to the noise level.
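The median-from-densely-upsampled-frames idea can be sketched under strong simplifying assumptions (known global integer shifts, nearest-neighbour upsampling); in the paper the registration is estimated, not given, and the motion need not be global.

```python
import numpy as np

def upsample_nn(img, s):
    """Dense nearest-neighbour upsampling by integer factor s."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def median_init(frames_lr, shifts, s=2):
    """Toy Shift & Add initialisation: densely upsample each
    low-resolution frame, undo its known shift, and take the per-pixel
    median. Dense upsampling leaves no undefined pixels, so the median
    is well defined everywhere."""
    stack = []
    for f, (dy, dx) in zip(frames_lr, shifts):
        up = upsample_nn(f, s)
        stack.append(np.roll(up, (-dy, -dx), axis=(0, 1)))
    return np.median(np.stack(stack), axis=0)

# Synthetic high-resolution scene observed through 4 shifted LR frames.
truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1.0
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [np.roll(truth, (dy, dx), axis=(0, 1))[::2, ::2] for dy, dx in shifts]
hr = median_init(frames, shifts, s=2)
```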

Collaboration


Top co-authors in Bruno Mirbach's collaboration network.

Top Co-Authors
Djamila Aouada

University of Luxembourg


Hassan Afzal

University of Luxembourg
