
Publication


Featured research published by Dimitri Van Cauwelaert.


Proceedings of SPIE | 2012

Decentralized tracking of humans using a camera network

Sebastian Gruenwedel; Vedran Jelaca; Jorge Oswaldo Niño-Castañeda; Peter Van Hese; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

Real-time tracking of people has many applications in computer vision, for instance in surveillance, domotics, elderly care and video conferencing, and typically requires multiple cameras. However, the problem is very challenging because of the need to deal with frequent occlusions and environmental changes. Another challenge is to develop solutions which scale well with the size of the camera network; such solutions need to carefully restrict overall communication in the network and often involve distributed processing. In this paper we present a distributed person tracker addressing the aforementioned issues. Real-time processing is achieved by distributing tasks between the cameras and a fusion node, which fuses only high-level data based on low-bandwidth input streams from the cameras: tracking is first performed on the image plane of each camera, and only metadata are sent to a local fusion node. We designed the proposed system for a low communication load and for robustness. We evaluate the performance of the tracker in meeting scenarios, where persons are often occluded by other persons and/or furniture, and present experimental results showing that our tracking approach remains accurate even under severe occlusions in some of the views.


ACM Transactions on Sensor Networks | 2014

Low-complexity scalable distributed multicamera tracking of humans

Sebastian Gruenwedel; Vedran Jelaca; Jorge Oswaldo Niño-Castañeda; Peter Van Hese; Dimitri Van Cauwelaert; Dirk Van Haerenborgh; Peter Veelaert; Wilfried Philips

Real-time tracking of people has many applications in computer vision, especially in the domain of surveillance. Typically, a network of cameras is used to solve this task. However, real-time tracking remains challenging due to frequent occlusions and environmental changes, and multicamera applications often require a trade-off between accuracy and communication load within the camera network. In this article, we present a real-time distributed multicamera tracking system for the analysis of people in a meeting room. One contribution of the article is that we provide a scalable solution using smart cameras: the system requires very little communication bandwidth and only lightweight processing on a "fusion center" which produces the final tracking results. The fusion center can thus be cheap and can be duplicated to increase reliability. In the proposed decentralized system, all low-level video processing is performed on the smart cameras, which transmit a compact high-level description of moving people to the fusion center; the fusion center fuses these data using a Bayesian approach. A second contribution is that the camera-based processing takes into account feedback from the fusion center about the most recent locations and motion states of tracked people. Based on this feedback and on background subtraction results, the smart cameras generate a best hypothesis for each person. We evaluate the performance (in terms of precision and accuracy) of the tracker in indoor and meeting scenarios where individuals are often occluded by other people and/or furniture. Experimental results are presented for the tracking of up to 4 people in a 9 m by 5 m meeting room using 6 cameras. In about two hours of data, our method has only 0.3 track losses per minute and typically measures position with an accuracy of 21 cm. We compare our approach to state-of-the-art methods and show that our system performs at least as well as the alternatives, while running in real time and therefore producing instantaneous results.
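The Bayesian fusion step is only described at a high level above. As a minimal illustrative sketch (an assumption about one simple realization, not the authors' actual implementation), each smart camera's ground-plane estimate can be modeled as an independent Gaussian measurement, which the fusion center combines by precision weighting:

```python
import numpy as np

def fuse_gaussian(estimates):
    """Fuse independent Gaussian position estimates from several cameras
    into one estimate by precision weighting (product of Gaussians).

    estimates: list of (mean, var) pairs, where mean is a (2,) ground-plane
    position and var a scalar measurement variance for that camera.
    """
    w = np.array([1.0 / var for _, var in estimates])      # precisions
    means = np.array([m for m, _ in estimates], float)
    fused_var = 1.0 / w.sum()                              # combined variance
    fused_mean = fused_var * (w[:, None] * means).sum(axis=0)
    return fused_mean, fused_var
```

Two equally confident cameras reporting (0, 0) and (2, 0) would fuse to (1, 0) with halved variance; a less reliable camera (larger variance) pulls the fused estimate less.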


Sensors | 2015

Extrinsic Calibration of Camera Networks Using a Sphere

Junzhi Guan; Francis Deboeverie; Maarten Slembrouck; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

In this paper, we propose a novel extrinsic calibration method for camera networks using a sphere as the calibration object. First, we propose an easy and accurate method to estimate the 3D position of the sphere center w.r.t. the local camera coordinate system. Then, we use orthogonal Procrustes analysis to estimate the initial relative extrinsic parameters of camera pairs from these 3D position estimates. Finally, an optimization routine is applied to jointly refine the extrinsic parameters of all cameras. Compared to existing sphere-based 3D position estimators, which need to trace and analyse the outline of the sphere projection in the image, the proposed method requires only very simple image processing: estimating the area and the center of mass of the sphere projection. Our results demonstrate that we obtain a more accurate estimate of the extrinsic parameters than other sphere-based methods. While existing state-of-the-art calibration methods use point-like features and epipolar geometry, the proposed method uses the sphere-based 3D position estimate, which results in simpler computations and a more flexible and accurate calibration method. Experimental results show that the proposed approach is accurate, robust, flexible and easy to use.
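The pairwise orthogonal Procrustes step can be sketched in a few lines: given matched 3D sphere-center positions expressed in two camera coordinate systems, an SVD of the cross-covariance yields the relative rotation and translation. This is the standard Kabsch/Procrustes solution; the function name is illustrative, not from the paper:

```python
import numpy as np

def procrustes_rt(pts_a, pts_b):
    """Estimate rotation R and translation t such that R @ a + t ~= b.

    pts_a, pts_b: (N, 3) arrays of matched 3D positions (e.g. sphere
    centers expressed in two camera coordinate systems).
    """
    a = np.asarray(pts_a, float)
    b = np.asarray(pts_b, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)                 # centroids
    H = (a - ca).T @ (b - cb)                               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])      # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

With exact correspondences the true transform is recovered up to numerical precision; with noisy sphere-center estimates it is the least-squares rigid fit, which is why a joint refinement over all cameras still pays off afterwards.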


Sensors | 2016

Extrinsic Calibration of Camera Networks Based on Pedestrians

Junzhi Guan; Francis Deboeverie; Maarten Slembrouck; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life.
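The RANSAC-based Procrustes idea can be sketched as follows, assuming matched head/feet 3D positions in two camera frames: minimal 3-point subsets are fitted rigidly, and the model with the most inliers wins. The threshold, iteration count and function names are illustrative, not the paper's:

```python
import numpy as np

def fit_rt(a, b):
    """Least-squares rigid fit R, t with b ~= R @ a + t (Kabsch/Procrustes)."""
    ca, cb = a.mean(0), b.mean(0)
    U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def ransac_rt(a, b, iters=200, tol=0.05, seed=0):
    """RANSAC wrapper: fit minimal 3-point subsets, keep the hypothesis
    with the most inliers, then refit on all of its inliers."""
    rng = np.random.default_rng(seed)
    best_inl = None
    for _ in range(iters):
        idx = rng.choice(len(a), 3, replace=False)          # minimal sample
        R, t = fit_rt(a[idx], b[idx])
        err = np.linalg.norm(a @ R.T + t - b, axis=1)       # residual per pair
        inl = err < tol
        if best_inl is None or inl.sum() > best_inl.sum():
            best_inl = inl
    return fit_rt(a[best_inl], b[best_inl])                 # refit on inliers
```

The RANSAC loop makes the fit robust to mismatched head/feet detections, which are inevitable when positions come from pedestrian tracking rather than a dedicated calibration object.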


international conference on computer vision theory and applications | 2015

Shape-from-silhouettes algorithm with built-in occlusion detection and removal

Maarten Slembrouck; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

Occlusion and inferior foreground/background segmentation still pose a big problem for 3D reconstruction from a set of images in a multicamera system, because they can destroy the reconstruction when one or more of the cameras do not see the object properly. We propose a method to obtain a 3D reconstruction which takes the possibility of occlusion into account by combining the information of all cameras in the multicamera setup. The proposed algorithm tries to find a consensus of geometrical predicates that most cameras can agree on. The results show an average error of less than 2 cm on the centroid of a person in the case of perfect input silhouettes. We also show that tracking results are significantly improved in a room with a lot of occlusion.
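The consensus idea can be illustrated with a simplified voxel-voting sketch (an assumption about one way to realize it, not the paper's exact predicate-based formulation): a voxel survives if enough cameras, rather than all of them, classify it as foreground.

```python
import numpy as np

def consensus_hull(silhouette_votes, min_agree):
    """Occlusion-tolerant visual hull by consensus voting.

    silhouette_votes: (n_cameras, n_voxels) boolean array; entry [c, v]
    is True if voxel v projects inside camera c's foreground silhouette.
    The classic visual hull requires agreement of ALL cameras, so one
    occluded or badly segmented view carves away real volume; requiring
    only `min_agree` cameras tolerates a few faulty views.
    """
    votes = np.asarray(silhouette_votes, bool)
    return votes.sum(axis=0) >= min_agree
```

Setting `min_agree = n_cameras` recovers the strict intersection; lowering it by one or two is what prevents a single occluded camera from destroying the reconstruction.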


international conference on distributed smart cameras | 2011

Demo: Real-time indoors people tracking in scalable camera networks

Vedran Jelaca; Sebastian Gruenwedel; Jorge Oswaldo Niño-Castañeda; Peter Van Hese; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

In this demo we present a people tracker for indoor environments. The tracker executes in a network of smart cameras with overlapping views. Special attention is given to real-time processing through the distribution of tasks between the cameras and the fusion server. Each camera processes its images and tracks people in the image plane. Instead of camera images, only metadata (a bounding box per person) are sent from each camera to the fusion server, where they are used to estimate the position of each person in real-world coordinates. Although the tracker is designed to suit any indoor environment, in this demo the tracker's performance is presented in a meeting scenario, where occlusions of people by other people and/or furniture are significant and occur frequently. Multiple cameras ensure views from multiple angles, which keeps tracking accurate even in cases of severe occlusions in some of the views.


international conference on informatics in control automation and robotics | 2014

An automated work cycle classification and disturbance detection tool for assembly line work stations

Karel Bauters; Hendrik Van Landeghem; Maarten Slembrouck; Dimitri Van Cauwelaert; Dirk Van Haerenborgh

The trend towards mass customization has led to a significant increase in the complexity of manufacturing systems. Models to evaluate this complexity have been developed, but the complexity analysis of work stations is still done manually. This paper describes an automated analysis tool that makes use of multi-camera video images to support the complexity analysis of assembly line work stations.


international conference on distributed smart cameras | 2014

Average Track Estimation of Moving Objects Using RANSAC and DTW

Xingzhe Xie; Jonas De Vylder; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips; Hamid K. Aghajan

This paper proposes a method for clustering and averaging the tracks of people obtained in a multi-camera network using Dynamic Time Warping (DTW) and Random Sample Consensus (RANSAC). The method allows analyzing trajectories of factory workers in order to estimate average work cycles, variances on the work cycle and outlier trajectories. The main application is to provide information on problematic parts of work cycles and on how to optimize them. The main novelty of the method is track clustering based on a combination of DTW and RANSAC, with time alignment of the tracks as a byproduct. The experimental results show that our algorithm outperforms other track-averaging methods; in particular, the spatial structure is preserved even when parts of the tracks differ from each other. The time alignment also allows a deeper statistical analysis, e.g., of the variability of the arrival time at a specific location.
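The DTW component is standard dynamic programming. A minimal sketch of the DTW distance between two ground-plane tracks (illustrative, not the paper's implementation):

```python
import numpy as np

def dtw(track_a, track_b):
    """Dynamic Time Warping distance between two tracks given as (N, 2)
    and (M, 2) arrays of ground-plane positions. Returns the accumulated
    Euclidean cost of the optimal monotone alignment, which tolerates
    the tracks being traversed at different speeds."""
    a, b = np.asarray(track_a, float), np.asarray(track_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # D[i, j]: cost of aligning a[:i], b[:j]
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Backtracking through the `D` table yields the point-to-point alignment, which is the "time alignment of the tracks as a byproduct" used for the arrival-time analysis; the RANSAC part then clusters tracks whose pairwise DTW distance is small.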


Proceedings of SPIE | 2014

Template Matching based People Tracking Using a Smart Camera Network

Junzhi Guan; Peter Van Hese; Jorge Oswaldo Niño-Castañeda; Nyan Bo Bo; Sebastian Gruenwedel; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

In this paper, we propose a people tracking system composed of multiple calibrated smart cameras and one fusion server which fuses the information from all cameras. Each smart camera estimates the ground-plane positions of people based on the current frame and feedback from the server at the previous time step. Correlation-coefficient-based template matching, which is invariant to illumination changes, is proposed to estimate the position of people in each smart camera. Only the estimated position and the corresponding correlation coefficient are sent to the server; this minimal information exchange makes the system highly scalable in the number of cameras. The paper focuses on creating and updating a good template for the tracked person using feedback from the server. Additionally, a static background image of the empty room is used to improve the results of template matching. We evaluated the performance of the tracker in scenarios where persons are often occluded by other persons or furniture, and where illumination changes occur frequently, e.g., due to switching the light on or off. For two sequences with frequent illumination changes (one minute each, one with a table in the room and one without), the proposed tracker never loses track of the persons. We compare our tracking system to a state-of-the-art tracker and show that our approach outperforms it in terms of tracking accuracy and people losses.
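Correlation-coefficient template matching can be sketched as follows: subtracting the means before correlating makes the score invariant to gain and offset changes in illumination. This is a brute-force illustrative version, not the paper's optimized implementation:

```python
import numpy as np

def corr_coef(patch, template):
    """Pearson correlation coefficient between an image patch and a
    template of the same shape; invariant to brightness gain/offset."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match(image, template):
    """Slide the template over every position in the image and return the
    top-left position with the highest correlation coefficient."""
    H, W = image.shape
    h, w = template.shape
    best, pos = -2.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            c = corr_coef(image[y:y + h, x:x + w], template)
            if c > best:
                best, pos = c, (y, x)
    return pos, best
```

Because the score depends only on the shape of the intensity pattern, a person lit twice as brightly after switching on the light still matches the stored template with a coefficient near 1.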


robotics and biomimetics | 2011

Detection of a hand-raising gesture by locating the arm

Nyan Bo Bo; Peter Van Hese; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

This paper proposes a novel method for detecting hand-raising gestures in meeting room and classroom environments. The proposed method first detects faces in each frame of the video sequence in order to define a region of interest (ROI). The system then locates arms in the ROI by analyzing the geometric structure of edges on the arm instead of directly detecting the hand. The location and orientation of a detected arm with respect to the location of the face are used to decide whether or not a person is raising a hand. Finally, the frequency with which a raised hand was detected in previous frames is used to eliminate false positives and robustly detect persons who are raising a hand. Unlike most visual gesture recognition systems, our method does not rely on skin color or complex tracking algorithms, while achieving 92% sensitivity and 92% selectivity.
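The final temporal-filtering step can be illustrated with a simple sliding-window vote. This is a stand-in for the paper's frequency-based rule, with illustrative window and threshold values:

```python
from collections import deque

class TemporalFilter:
    """Suppress spurious single-frame detections: report a raised hand
    only if it was detected in at least `min_hits` of the last `window`
    frames (a majority-style vote over recent per-frame decisions)."""

    def __init__(self, window=10, min_hits=6):
        self.history = deque(maxlen=window)  # recent per-frame booleans
        self.min_hits = min_hits

    def update(self, detected: bool) -> bool:
        self.history.append(detected)
        return sum(self.history) >= self.min_hits
```

A single false positive from the edge-based arm detector never reaches the threshold, while a genuinely raised hand, detected in most recent frames, does.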
