Maarten Slembrouck
Ghent University
Publications
Featured research published by Maarten Slembrouck.
Sensors | 2014
Nyan Bo Bo; Francis Deboeverie; Mohamed Y. Eldib; Junzhi Guan; Xingzhe Xie; Jorge Niño; Dirk Van Haerenborgh; Maarten Slembrouck; Samuel Van de Velde; Heidi Steendam; Peter Veelaert; Richard P. Kleihorst; Hamid K. Aghajan; Wilfried Philips
This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements and power consumption. The core of our proposed system is a robust people tracker that uses low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, the mobility statistics of tracks such as total distance traveled and average speed derived from trajectories are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics.
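As a rough illustration of how lightweight the per-node processing can be at this resolution, the sketch below runs a running-average background model and blob-centroid step on a single 30 × 30 frame. The class name, thresholds, and update rule are illustrative assumptions, not the deployed firmware.

```python
# Toy sketch (not the authors' pipeline): per-sensor foreground detection and
# blob centroid extraction on a 30x30 grayscale frame, illustrating how little
# data each very-low-resolution node has to process.
import numpy as np

class LowResDetector:
    def __init__(self, alpha=0.05, thresh=25):
        self.alpha = alpha          # background adaptation rate
        self.thresh = thresh        # foreground intensity threshold
        self.background = None      # running-average background model

    def step(self, frame):
        """frame: (30, 30) uint8 image. Returns the foreground centroid or None."""
        frame = frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
            return None
        mask = np.abs(frame - self.background) > self.thresh
        # Update the background model only where no motion was detected.
        self.background[~mask] += self.alpha * (frame - self.background)[~mask]
        if mask.sum() < 5:          # too few pixels to be a person
            return None
        ys, xs = np.nonzero(mask)
        return xs.mean(), ys.mean()
```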
Sensors | 2015
Junzhi Guan; Francis Deboeverie; Maarten Slembrouck; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
In this paper, we propose a novel extrinsic calibration method for camera networks using a sphere as the calibration object. First of all, we propose an easy and accurate method to estimate the 3D positions of the sphere center w.r.t. the local camera coordinate system. Then, we propose to use orthogonal Procrustes analysis to pairwise estimate the initial camera relative extrinsic parameters based on the aforementioned estimation of 3D positions. Finally, an optimization routine is applied to jointly refine the extrinsic parameters for all cameras. Compared to existing sphere-based 3D position estimators which need to trace and analyse the outline of the sphere projection in the image, the proposed method requires only very simple image processing: estimating the area and the center of mass of the sphere projection. Our results demonstrate that we can get a more accurate estimate of the extrinsic parameters compared to other sphere-based methods. While existing state-of-the-art calibration methods use point-like features and epipolar geometry, the proposed method uses the sphere-based 3D position estimate. This results in simpler computations and a more flexible and accurate calibration method. Experimental results show that the proposed approach is accurate, robust, flexible and easy to use.
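The pairwise alignment step can be illustrated with a short orthogonal Procrustes (Kabsch) fit. The sketch below assumes the sphere-center positions have already been estimated in each camera's local coordinate system; the function name and synthetic test data are ours, not the authors'.

```python
# Minimal sketch (not the authors' code): pairwise extrinsic estimation via
# orthogonal Procrustes, assuming sphere centers have already been estimated
# in each camera's local coordinate system.
import numpy as np

def procrustes_extrinsics(points_a, points_b):
    """Estimate R, t such that points_b ~ R @ points_a + t (rows are points).

    points_a, points_b: (N, 3) arrays of matching 3D sphere-center estimates
    expressed in the local frames of camera A and camera B.
    """
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    A, B = points_a - ca, points_b - cb                  # centre both point sets
    U, _, Vt = np.linalg.svd(B.T @ A)                    # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt
    t = cb - R @ ca
    return R, t

# Hypothetical usage with synthetic data (a 30-degree rotation about z):
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts_a = np.random.default_rng(0).uniform(-1, 1, size=(20, 3))
pts_b = pts_a @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = procrustes_extrinsics(pts_a, pts_b)       # recovers R_true, t
```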
Sensors | 2016
Junzhi Guan; Francis Deboeverie; Maarten Slembrouck; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods exploit epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life.
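The RANSAC-based Procrustes step can be sketched as follows, assuming matched head/foot 3D estimates for a camera pair are already available. The helper names, thresholds, and iteration count are illustrative choices rather than the published configuration.

```python
# Illustrative RANSAC wrapper (not the authors' code) around a rigid Procrustes
# fit, assuming matched 3D head/foot estimates per camera pair are available.
import numpy as np

def rigid_fit(A, B):
    """Least-squares rotation/translation with B ~ R @ A + t (rows = points)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((B - cb).T @ (A - ca))
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    return R, cb - R @ ca

def ransac_rigid(A, B, iters=200, thresh=0.05, seed=1):
    """Robustly estimate (R, t) despite bad detections (outliers)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(A), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(A), size=3, replace=False)   # minimal sample
        R, t = rigid_fit(A[sample], B[sample])
        err = np.linalg.norm(A @ R.T + t - B, axis=1)
        inliers = err < thresh                               # e.g. 5 cm
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_fit(A[best_inliers], B[best_inliers])       # refit on all inliers
```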
IEEE Transactions on Image Processing | 2016
Jorge Oswaldo Niño-Castañeda; Andrés Frías-Velázquez; Nyan Bo Bo; Maarten Slembrouck; Junzhi Guan; Glen Debard; Bart Vanrumste; Tinne Tuytelaars; Wilfried Philips
This paper proposes a generic methodology for the semi-automatic generation of reliable position annotations for evaluating multi-camera people-trackers on large video data sets. Most of the annotation data are automatically computed, by estimating a consensus tracking result from multiple existing trackers and people detectors and classifying it as either reliable or not. A small subset of the data, composed of tracks with insufficient reliability, is verified by a human using a simple binary decision task, a process faster than marking the correct person position. The proposed framework is generic and can handle additional trackers. We present results on a data set of ~6 h captured by 4 cameras, featuring a person in a holiday flat, performing activities such as walking, cooking, eating, cleaning, and watching TV. When aiming for a tracking accuracy of 60 cm, 80% of all video frames are automatically annotated. The annotations for the remaining 20% of the frames were added after human verification of an automatically selected subset of data. This involved ~2.4 h of manual labor. According to a subsequent comprehensive visual inspection to judge the annotation procedure, we found 99% of the automatically annotated frames to be correct. We provide guidelines on how to apply the proposed methodology to new data sets. We also provide an exploratory study for the multi-target case, applied on the existing and new benchmark video sequences.
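A minimal sketch of the consensus-and-reliability idea is given below, under the assumption that each tracker outputs a ground-plane position per frame. The agreement radius and the use of the median are our simplifications, not the exact rule from the paper.

```python
# Illustrative sketch (not the paper's implementation): flag frames where
# multiple trackers agree as "reliable", leaving the rest for human verification.
import numpy as np

def consensus_annotations(tracks, agree_radius=0.6):
    """tracks: (num_trackers, num_frames, 2) ground-plane positions in metres.

    Returns (consensus, reliable), where consensus is the per-frame median
    position and reliable marks frames whose trackers all lie within
    agree_radius of that consensus.
    """
    consensus = np.median(tracks, axis=0)                    # (num_frames, 2)
    spread = np.linalg.norm(tracks - consensus, axis=2)      # (num_trackers, num_frames)
    reliable = (spread < agree_radius).all(axis=0)
    return consensus, reliable
```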
international conference on computer vision theory and applications | 2015
Maarten Slembrouck; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
Occlusion and poor foreground/background segmentation still pose a major problem for 3D reconstruction from a set of images in a multi-camera system, because they can destroy the reconstruction if one or more of the cameras do not see the object properly. We propose a method to obtain a 3D reconstruction which takes into account the possibility of occlusion by combining the information of all cameras in the multi-camera setup. The proposed algorithm tries to find a consensus of geometrical predicates that most cameras can agree on. The results show a performance with an average error lower than 2 cm on the centroid of a person in case of perfect input silhouettes. We also show that tracking results are significantly improved in a room with a lot of occlusion.
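One way to picture a consensus over per-camera silhouette predicates is a voting variant of visual-hull carving, where a voxel survives if enough cameras agree that it projects inside the silhouette, so one occluded or badly segmented view cannot destroy the hull. The sketch below is our loose interpretation, not the proposed algorithm.

```python
# Rough sketch (assumptions throughout): consensus-style voxel voting in which
# a voxel is kept when at least min_votes cameras see it inside the silhouette.
import numpy as np

def consensus_hull(voxels, projections, silhouettes, min_votes):
    """voxels: (V, 3) world points; projections: list of 3x4 camera matrices;
    silhouettes: list of binary (H, W) masks; min_votes: cameras that must agree."""
    votes = np.zeros(len(voxels), dtype=int)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])      # homogeneous coords
    for P, sil in zip(projections, silhouettes):
        uvw = hom @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)       # project to pixels
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)      # within image bounds
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        votes += hit
    return voxels[votes >= min_votes]                          # consensus voxels
```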
Journal of Electronic Imaging | 2016
Juan Li; Maarten Slembrouck; Francis Deboeverie; Ana M. Bernardos; Juan A. Besada; Peter Veelaert; Hamid K. Aghajan; José R. Casar; Wilfried Philips
Tracking of a handheld device’s three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust against illumination changes. Three data fusion methods have been proposed, including the triangulation-based stereo-vision system, constraint-based stereo-vision system with occlusion handling, and triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparing with the data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves an accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
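The triangulation-based configurations rely on standard two-view triangulation of each LED marker. The linear (DLT) sketch below is an assumed helper, with the projection matrices and pixel coordinates taken as given; it is not the paper's implementation.

```python
# Minimal linear triangulation sketch (assumed helper, not the paper's code):
# recover the 3D position of an LED marker from its pixel coordinates in two
# calibrated cameras, as in a triangulation-based stereo configuration.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) pixel coordinates.

    Returns the 3D point minimising the algebraic (DLT) error.
    """
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenise to metric coordinates
```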
international conference on distributed smart cameras | 2015
Juan Li; Maarten Slembrouck; Francis Deboeverie; Ana M. Bernardos; Juan A. Besada; Peter Veelaert; Hamid K. Aghajan; Wilfried Philips; José R. Casar
With the rapid advances in mobile computing, handheld Augmented Reality draws increasing attention. Pose tracking of handheld devices is of fundamental importance to register virtual information with the real world and is still a crucial challenge. In this paper, we present a low-cost, accurate and robust approach combining fiducial tracking and inertial sensors for handheld pose tracking. Two LEDs are used as fiducial markers to indicate the position of the handheld device. They are detected by an adaptive thresholding method which is robust to illumination changes, and then tracked by a Kalman filter. By combining inclination information provided by the on-device accelerometer, the 6-degree-of-freedom (DoF) pose is estimated. Handheld devices are freed from computer vision processing, leaving most computing power available for applications. When one LED is occluded, the system is still able to recover the 6-DoF pose. Performance evaluation of the proposed tracking approach is carried out by comparing with the ground truth data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system has achieved an accuracy of 1.77 cm in position estimation and 4.15 degrees in orientation estimation.
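The Kalman filtering step can be illustrated with a constant-velocity model on an LED's pixel position. The state layout and noise parameters below are assumptions made for the sketch, not the values used in the paper.

```python
# Sketch of a constant-velocity Kalman filter (an assumed model, not the
# paper's exact filter) for smoothing the detected LED pixel position.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=1/30, process_var=50.0, meas_var=4.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)      # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)      # we only measure (x, y)
        self.Q = process_var * np.eye(4)                     # process noise
        self.R = meas_var * np.eye(2)                        # measurement noise
        self.x = np.zeros(4)                                 # state [x, y, vx, vy]
        self.P = np.eye(4) * 1e3                             # state covariance

    def update(self, z):
        """z: measured (x, y) LED position in pixels; returns the filtered position."""
        self.x = self.F @ self.x                             # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = np.asarray(z, dtype=float) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ y                               # correct
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```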
international conference on informatics in control automation and robotics | 2014
Karel Bauters; Hendrik Van Landeghem; Maarten Slembrouck; Dimitri Van Cauwelaert; Dirk Van Haerenborgh
The trend towards mass customization has led to a significant increase in the complexity of manufacturing systems. Models to evaluate this complexity have been developed, but the complexity analysis of work stations is still done manually. This paper describes an automated analysis tool that makes use of multi-camera video images to support the complexity analysis of assembly line work stations.
international conference on distributed smart cameras | 2015
Maarten Slembrouck; Jorge Oswaldo Niño-Castañeda; Gianni Allebosch; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
Reliable indoor tracking of objects and persons is still a major challenge in computer vision. As GPS is unavailable indoors, other methods have to be used. Multi-camera systems using colour cameras are one approach to tackle this problem. In this paper we present a method based on shapes-from-silhouettes, where the foreground/background segmentation videos are produced with state-of-the-art methods. We show that our tracker outperforms all the other trackers we evaluated and obtains an accuracy of 97.89% within 50 cm from the ground truth position on the proposed dataset.
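The reported 97.89% figure corresponds to a hit-rate style metric; a small sketch of such a metric is shown below, with the exact formula being our assumption rather than the paper's definition.

```python
# Small sketch (our assumption of the metric): fraction of frames whose
# estimated position lies within a given distance of the ground-truth position.
import numpy as np

def hit_rate(estimated, ground_truth, radius=0.5):
    """estimated, ground_truth: (num_frames, 2) ground-plane positions in metres."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return 100.0 * np.mean(errors <= radius)     # e.g. 97.89 (%) within 0.5 m
```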
Archive | 2018
Gianni Allebosch; Maarten Slembrouck; Sanne Roegiers; Hiep Luong; Peter Veelaert; Wilfried Philips
In this paper, a robust approach for detecting foreground objects moving in front of a video screen is presented. The proposed method constructs a background model for every image shown on the screen, assuming these images are known up to an appearance transformation. This transformation is guided by a color mapping function, constructed at the beginning of the sequence. The foreground object is then segmented at runtime by comparing the input from the camera with a color-mapped representation of the background image, by analysing both direct color and edge feature differences. The method is tested on challenging sequences, where the background screen displays photo-realistic videos. It is shown that the proposed method is able to produce accurate foreground masks, with obtained F1 scores ranging from 85.61% to 90.74% on our dataset.
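A simplified sketch of the colour-mapping idea follows: learn a per-channel lookup table from the known screen content to its camera appearance, then flag pixels where the mapping no longer explains the observation. The LUT construction and threshold are our assumptions, and the edge-feature comparison described in the paper is omitted here.

```python
# Simplified sketch (assumptions throughout, not the published method): learn a
# per-channel colour mapping from the known screen content to its camera
# appearance, then segment the foreground by thresholding the mapped difference.
import numpy as np

def fit_colour_lut(screen_img, camera_img):
    """Build a 256-entry lookup table per channel mapping screen colours to
    their observed camera colours (assumes an empty scene during this step)."""
    lut = np.zeros((3, 256), dtype=np.float32)
    for c in range(3):
        for v in range(256):
            sel = screen_img[..., c] == v
            lut[c, v] = camera_img[..., c][sel].mean() if sel.any() else v
    return lut

def foreground_mask(screen_img, camera_img, lut, thresh=30):
    """Compare the camera frame against the colour-mapped screen content."""
    predicted = np.stack([lut[c][screen_img[..., c]] for c in range(3)], axis=-1)
    diff = np.abs(camera_img.astype(np.float32) - predicted)
    return diff.max(axis=-1) > thresh            # foreground where mapping fails
```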