Junzhi Guan
Ghent University
Publications
Featured research published by Junzhi Guan.
Sensors | 2014
Nyan Bo Bo; Francis Deboeverie; Mohamed Y. Eldib; Junzhi Guan; Xingzhe Xie; Jorge Niño; Dirk Van Haerenborgh; Maarten Slembrouck; Samuel Van de Velde; Heidi Steendam; Peter Veelaert; Richard P. Kleihorst; Hamid K. Aghajan; Wilfried Philips
This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics of the tracks, such as total distance traveled and average speed, are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics.
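The abstract gives no implementation details, but the mobility statistics it mentions (total distance traveled and average speed) follow directly from a ground-plane trajectory. A minimal sketch, assuming 2D positions in metres; the function name and sampling rate are hypothetical, not from the paper:

```python
import numpy as np

def mobility_statistics(trajectory, fps):
    """Total distance traveled (m) and average speed (m/s) from an (N, 2)
    array of ground-plane positions in metres, sampled at `fps` frames/s."""
    trajectory = np.asarray(trajectory, dtype=float)
    steps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    total_distance = steps.sum()
    duration = (len(trajectory) - 1) / fps
    average_speed = total_distance / duration if duration > 0 else 0.0
    return total_distance, average_speed

# Example: a short straight walk sampled at a hypothetical 12.5 fps.
dist, speed = mobility_statistics([(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0)], fps=12.5)
print(f"distance = {dist:.2f} m, speed = {speed:.2f} m/s")
```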
international conference on image processing | 2014
Mohamed Y. Eldib; Nyan Bo Bo; Francis Deboeverie; Jorge Niño; Junzhi Guan; Samuel Van de Velde; Heidi Steendam; Hamid K. Aghajan; Wilfried Philips
Existing multi-camera systems have not addressed the problem of person tracking under low resolution constraints. In this paper, we propose a low resolution sensor network for person tracking. The network is composed of cameras with a resolution of 30×30 pixels. The multi-camera system is used to evaluate probabilistic occupancy mapping and maximum likelihood trackers against ground truth collected by an ultra-wideband (UWB) testbed. Performance evaluation is performed on two video sequences of 30 minutes each. The experimental results show that the maximum likelihood estimation based tracker outperforms the state of the art on low resolution cameras.
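As a rough illustration of the kind of evaluation described, a per-frame comparison against UWB ground truth could look like the sketch below. It assumes the estimates and ground truth are already time-aligned and in the same coordinate system; the function name and choice of mean Euclidean error are ours, not the paper's:

```python
import numpy as np

def mean_tracking_error(estimates, ground_truth):
    """Mean Euclidean distance (m) between per-frame tracker estimates and
    time-aligned UWB ground-truth positions, both given as (N, 2) arrays."""
    estimates = np.asarray(estimates, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(estimates - ground_truth, axis=1)
    return errors.mean()
```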
Sensors | 2015
Junzhi Guan; Francis Deboeverie; Maarten Slembrouck; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
In this paper, we propose a novel extrinsic calibration method for camera networks using a sphere as the calibration object. First, we propose an easy and accurate method to estimate the 3D position of the sphere center w.r.t. the local camera coordinate system. Then, we propose to use orthogonal Procrustes analysis to estimate the initial relative extrinsic parameters between pairs of cameras, based on the aforementioned 3D position estimates. Finally, an optimization routine is applied to jointly refine the extrinsic parameters for all cameras. Compared to existing sphere-based 3D position estimators, which need to trace and analyse the outline of the sphere projection in the image, the proposed method requires only very simple image processing: estimating the area and the center of mass of the sphere projection. Our results demonstrate that we obtain a more accurate estimate of the extrinsic parameters than other sphere-based methods. While existing state-of-the-art calibration methods use point-like features and epipolar geometry, the proposed method uses the sphere-based 3D position estimates. This results in simpler computations and a more flexible and accurate calibration method. Experimental results show that the proposed approach is accurate, robust, flexible and easy to use.
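A minimal sketch of the pairwise orthogonal Procrustes step described above: given corresponding 3D sphere-center positions expressed in two cameras' local coordinate systems, recover the relative rotation and translation via the Kabsch algorithm. This is a generic illustration of the technique, not the paper's exact implementation:

```python
import numpy as np

def procrustes_rigid_transform(P, Q):
    """Estimate rotation R and translation t such that R @ P[i] + t ~= Q[i],
    for two (N, 3) arrays of corresponding 3D points (Kabsch algorithm)."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t
```

Here `P` would hold sphere-center estimates in camera A's frame and `Q` the same centers in camera B's frame; `(R, t)` then maps coordinates from A to B.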
Sensors | 2016
Junzhi Guan; Francis Deboeverie; Maarten Slembrouck; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
In this paper, we propose a novel extrinsic calibration method for camera networks based on analyzing tracks of pedestrians. First, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute the relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods exploit epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle persons walking along straight lines, a case most existing state-of-the-art calibration methods cannot handle because all head and feet positions are then co-planar. This situation often occurs in practice.
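The RANSAC-based orthogonal Procrustes idea can be sketched as below, repeating the Kabsch fit from the previous sketch inside a hypothesize-and-verify loop over head/feet correspondences. The iteration count and inlier threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rotation/translation mapping points P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_procrustes(P, Q, iters=500, inlier_thresh=0.05, seed=None):
    """Robust rigid fit between (N, 3) correspondence sets P and Q: fit on
    minimal samples of 3 points, keep the hypothesis with most inliers,
    then refit on all inliers."""
    rng = np.random.default_rng(seed)
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = fit_rigid(P[idx], Q[idx])
        residuals = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = residuals < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = fit_rigid(P[best_inliers], Q[best_inliers])
    return R, t, best_inliers
```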
IEEE Transactions on Image Processing | 2016
Jorge Oswaldo Niño-Castañeda; Andrés Frías-Velázquez; Nyan Bo Bo; Maarten Slembrouck; Junzhi Guan; Glen Debard; Bart Vanrumste; Tinne Tuytelaars; Wilfried Philips
This paper proposes a generic methodology for the semi-automatic generation of reliable position annotations for evaluating multi-camera people-trackers on large video data sets. Most of the annotation data are computed automatically, by estimating a consensus tracking result from multiple existing trackers and people detectors and classifying it as either reliable or not. A small subset of the data, composed of tracks with insufficient reliability, is verified by a human using a simple binary decision task, a process faster than marking the correct person position. The proposed framework is generic and can handle additional trackers. We present results on a data set of ~6 h captured by 4 cameras, featuring a person in a holiday flat, performing activities such as walking, cooking, eating, cleaning, and watching TV. When aiming for a tracking accuracy of 60 cm, 80% of all video frames are automatically annotated. The annotations for the remaining 20% of the frames were added after human verification of an automatically selected subset of data. This involved ~2.4 h of manual labor. According to a subsequent comprehensive visual inspection to judge the annotation procedure, we found 99% of the automatically annotated frames to be correct. We provide guidelines on how to apply the proposed methodology to new data sets. We also provide an exploratory study for the multi-target case, applied to existing and new benchmark video sequences.
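The paper's reliability classification is more involved than this, but the core consensus idea can be illustrated with a hypothetical per-frame rule: take the median of the trackers' estimates and flag the frame as reliable only when all trackers agree within a threshold. The 0.6 m default below mirrors the 60 cm accuracy target and is our choice, not the paper's:

```python
import numpy as np

def consensus_annotation(per_tracker_positions, agree_thresh=0.6):
    """Per-frame consensus from several trackers' (x, y) estimates, given as
    a (num_trackers, 2) array in metres. Returns the coordinate-wise median
    as the annotation, plus a reliability flag; unreliable frames would go
    to human verification."""
    positions = np.asarray(per_tracker_positions, dtype=float)
    consensus = np.median(positions, axis=0)
    spread = np.linalg.norm(positions - consensus, axis=1)
    reliable = bool((spread < agree_thresh).all())
    return consensus, reliable
```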
Proceedings of SPIE | 2014
Junzhi Guan; Peter Van Hese; Jorge Oswaldo Niño-Castañeda; Nyan Bo Bo; Sebastian Gruenwedel; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
In this paper, we propose a people tracking system composed of multiple calibrated smart cameras and one fusion server, which fuses the information from all cameras. Each smart camera estimates the ground plane positions of people based on the current frame and feedback from the server from the previous time step. Correlation-coefficient-based template matching, which is invariant to illumination changes, is proposed to estimate the position of people in each smart camera. Only the estimated position and the corresponding correlation coefficient are sent to the server. This minimal amount of information exchange makes the system highly scalable with the number of cameras. The paper focuses on creating and updating a good template for the tracked person using feedback from the server. Additionally, a static background image of the empty room is used to improve the results of template matching. We evaluated the performance of the tracker in scenarios where persons are often occluded by other persons or furniture, and where illumination changes occur frequently, e.g., due to switching the light on or off. For two sequences with frequent illumination changes (one minute each; one with a table in the room, one without), the proposed tracker never loses track of the persons. We compare the performance of our tracking system to a state-of-the-art tracking system, and our approach outperforms it in terms of tracking accuracy and track loss.
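The matching core described above corresponds to the normalised correlation coefficient, available in OpenCV as TM_CCOEFF_NORMED. A minimal sketch of that step only, leaving out the paper's template creation and update logic:

```python
import cv2

def match_person_template(frame_gray, template_gray):
    """Locate a person template in a grayscale frame using the normalised
    correlation coefficient. Because template and image patch are both
    mean-subtracted and variance-normalised, the score is largely invariant
    to global illumination changes."""
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc, max_val  # top-left corner of the best match and its score
```

A smart camera would send only `max_loc` (mapped to a ground-plane position) and `max_val` to the fusion server, which is what keeps the communication load low.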
Proceedings of SPIE | 2014
Nyan Bo Bo; Peter Van Hese; Junzhi Guan; Sebastian Gruenwedel; Jorge Oswaldo Niño-Castañeda; Dimitri Van Cauwelaert; Dirk Van Haerenborgh; Peter Veelaert; Wilfried Philips
Many computer vision based applications require reliable tracking of multiple people under unpredictable lighting conditions. Many existing trackers do not handle illumination changes well, especially sudden ones. This paper presents a system to track multiple people reliably even under rapid illumination changes, using a network of calibrated smart cameras with overlapping views. Each smart camera extracts foreground features by detecting texture changes between the current image and a static background image. The foreground features belonging to each person are tracked locally on each camera, but these local estimates are sent to a fusion center which combines them to generate more accurate estimates. The final estimates are fed back to all smart cameras, which use them as prior information for tracking in the next frame. The texture based approach makes our method very robust to illumination changes. We tested the performance of our system on six video sequences, some containing sudden illumination changes and up to four walking persons. The results show that our tracker can track multiple people accurately, with an average tracking error as low as 8 cm, even when the illumination varies rapidly. Performance comparison to a state-of-the-art tracking system shows that our method outperforms it.
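The abstract does not spell out its texture features; one simple, hypothetical proxy for "texture change between the current image and a static background image" is a gradient-magnitude difference, sketched below (the threshold is an arbitrary assumption):

```python
import cv2
import numpy as np

def texture_foreground(frame_gray, background_gray, thresh=25.0):
    """Foreground mask from texture changes rather than raw intensity:
    compare gradient magnitudes of the current frame and a static
    background image, which is far less sensitive to global illumination
    changes than direct intensity differencing."""
    def grad_mag(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        return cv2.magnitude(gx, gy)

    diff = np.abs(grad_mag(frame_gray) - grad_mag(background_gray))
    return (diff > thresh).astype(np.uint8) * 255
```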
Archive | 2017
Junzhi Guan
FallRisk and Little Sister Slotsymposium, Abstracts | 2015
Mohamed Y. Eldib; Francis Deboeverie; Nyan Bo Bo; Junzhi Guan; Xingzhe Xie; Dirk Van Haerenborgh; Hamid K. Aghajan; Wilfried Philips
2012 Sixth International Conference on Distributed Smart Cameras (ICDSC) | 2013
Junzhi Guan; Peter Van Hese; Jorge Oswaldo Niño-Castañeda; Sebastian Gruenwedel; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips