Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nyan Bo Bo is active.

Publication


Featured research published by Nyan Bo Bo.


Sensors | 2014

Human Mobility Monitoring in Very Low Resolution Visual Sensor Network

Nyan Bo Bo; Francis Deboeverie; Mohamed Y. Eldib; Junzhi Guan; Xingzhe Xie; Jorge Niño; Dirk Van Haerenborgh; Maarten Slembrouck; Samuel Van de Velde; Heidi Steendam; Peter Veelaert; Richard P. Kleihorst; Hamid K. Aghajan; Wilfried Philips

This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics, such as total distance traveled and average speed, derived from the trajectories are compared with those derived from ground truth provided by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics.
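The mobility statistics mentioned above reduce to simple geometry on the recovered ground-plane trajectories. As a hedged illustration (not code from the paper; the 20 Hz sampling rate and all names are assumptions), total distance and average speed can be derived like this:

```python
# Hypothetical sketch: mobility statistics from a ground-plane track.
# The 20 Hz frame rate and function names are assumptions.
import math

def mobility_statistics(trajectory, fps=20.0):
    """trajectory: list of (x, y) positions in meters, one per frame."""
    total_distance = sum(
        math.hypot(x1 - x0, y1 - y0)
        for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:])
    )
    duration = (len(trajectory) - 1) / fps  # seconds
    average_speed = total_distance / duration if duration > 0 else 0.0
    return total_distance, average_speed

# Example: three samples of a straight walk at 20 Hz
dist, speed = mobility_statistics([(0.0, 0.0), (0.05, 0.0), (0.10, 0.0)])
print(f"distance={dist:.2f} m, speed={speed:.2f} m/s")  # 0.10 m, 1.00 m/s
```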


International Conference on Image Processing | 2014

A low resolution multi-camera system for person tracking

Mohamed Y. Eldib; Nyan Bo Bo; Francis Deboeverie; Jorge Niño; Junzhi Guan; Samuel Van de Velde; Heidi Steendam; Hamid K. Aghajan; Wilfried Philips

Existing multi-camera systems have not addressed the problem of person tracking under low-resolution constraints. In this paper, we propose a low-resolution sensor network for person tracking. The network is composed of cameras with a resolution of 30×30 pixels. The multi-camera system is used to evaluate probabilistic occupancy mapping and maximum likelihood trackers against ground truth collected by an ultra-wideband (UWB) testbed. Performance evaluation is carried out on two 30-minute video sequences. The experimental results show that the maximum-likelihood-based tracker outperforms the state of the art on low-resolution cameras.


IEEE Transactions on Image Processing | 2016

Scalable Semi-Automatic Annotation for Multi-Camera Person Tracking

Jorge Oswaldo Niño-Castañeda; Andrés Frías-Velázquez; Nyan Bo Bo; Maarten Slembrouck; Junzhi Guan; Glen Debard; Bart Vanrumste; Tinne Tuytelaars; Wilfried Philips

This paper proposes a generic methodology for the semi-automatic generation of reliable position annotations for evaluating multi-camera people-trackers on large video data sets. Most of the annotation data are automatically computed, by estimating a consensus tracking result from multiple existing trackers and people detectors and classifying it as either reliable or not. A small subset of the data, composed of tracks with insufficient reliability, is verified by a human using a simple binary decision task, a process faster than marking the correct person position. The proposed framework is generic and can handle additional trackers. We present results on a data set of ~6 h captured by 4 cameras, featuring a person in a holiday flat, performing activities such as walking, cooking, eating, cleaning, and watching TV. When aiming for a tracking accuracy of 60 cm, 80% of all video frames are automatically annotated. The annotations for the remaining 20% of the frames were added after human verification of an automatically selected subset of data. This involved ~2.4 h of manual labor. According to a subsequent comprehensive visual inspection to judge the annotation procedure, we found 99% of the automatically annotated frames to be correct. We provide guidelines on how to apply the proposed methodology to new data sets. We also provide an exploratory study for the multi-target case, applied on existing and new benchmark video sequences.
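The consensus-and-verify idea can be sketched as follows; the median-based consensus and the agreement threshold are illustrative assumptions, not the paper's exact reliability classifier:

```python
# Hypothetical sketch of consensus-based reliability classification.
# The median consensus and 0.6 m agreement threshold are assumptions.
import math
import statistics

def classify_frame(estimates, agreement_threshold=0.6):
    """estimates: (x, y) positions in meters from several trackers
    for one frame. Returns (consensus, is_reliable)."""
    cx = statistics.median(p[0] for p in estimates)
    cy = statistics.median(p[1] for p in estimates)
    # Worst disagreement of any tracker with the consensus position
    spread = max(math.hypot(x - cx, y - cy) for x, y in estimates)
    return (cx, cy), spread <= agreement_threshold

# Trackers agree -> annotate automatically; disagree -> ask a human
print(classify_frame([(1.0, 2.0), (1.1, 2.1), (0.9, 1.9)]))  # reliable
print(classify_frame([(1.0, 2.0), (3.5, 2.0), (1.1, 2.1)]))  # not reliable
```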


Proceedings of SPIE | 2014

Template Matching based People Tracking Using a Smart Camera Network

Junzhi Guan; Peter Van Hese; Jorge Oswaldo Niño-Castañeda; Nyan Bo Bo; Sebastian Gruenwedel; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

In this paper, we propose a people tracking system composed of multiple calibrated smart cameras and one fusion server, which fuses the information from all cameras. Each smart camera estimates the ground-plane positions of people based on the current frame and feedback from the server from the previous time step. Correlation-coefficient-based template matching, which is invariant to illumination changes, is proposed to estimate the position of people in each smart camera. Only the estimated position and the corresponding correlation coefficient are sent to the server. This minimal amount of information exchange makes the system highly scalable with the number of cameras. The paper focuses on creating and updating a good template for the tracked person using feedback from the server. Additionally, a static background image of the empty room is used to improve the results of template matching. We evaluated the performance of the tracker in scenarios where persons are often occluded by other persons or furniture and where illumination changes occur frequently, e.g., due to switching the light on or off. On two one-minute sequences with frequent illumination changes (one with a table in the room, one without), the proposed tracker never loses track of the persons. We compare the performance of our tracking system to a state-of-the-art tracking system; our approach outperforms it in terms of tracking accuracy and lost tracks.
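The correlation coefficient used here is the zero-mean normalized cross-correlation, which is invariant to affine brightness changes because each patch is centered and scaled before comparison. A minimal sketch (exhaustive search; illustrative, not the paper's implementation):

```python
# Minimal sketch of correlation-coefficient template matching.
import numpy as np

def correlation_coefficient(patch, template):
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_template(frame, template):
    """Return the top-left corner of the best-matching window."""
    th, tw = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            s = correlation_coefficient(frame[y:y + th, x:x + tw], template)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score

# On a 30x30 frame, the template cut from (10, 12) is found exactly
rng = np.random.default_rng(0)
frame = rng.random((30, 30))
print(match_template(frame, frame[10:20, 12:18].copy()))  # ((10, 12), 1.0)
```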


Robotics and Biomimetics | 2011

Detection of a hand-raising gesture by locating the arm

Nyan Bo Bo; Peter Van Hese; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips

This paper proposes a novel method for detecting hand-raising gestures in meeting room and classroom environments. The proposed method first detects faces in each frame of the video sequence in order to define a region of interest (ROI). The system then locates arms in the region of interest by analyzing the geometric structure of edges on the arm instead of directly detecting the hand. The location and orientation of a detected arm with respect to the location of the face are used to decide whether or not a person is raising a hand. Finally, the frequency with which a raised hand was detected in previous frames is used to eliminate false positives and robustly detect persons who are raising a hand. Unlike most visual gesture recognition systems, our method does not rely on skin color or complex tracking algorithms, while achieving 92% sensitivity and 92% selectivity.
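The final temporal-filtering step can be sketched as a sliding-window vote; the window length and hit threshold below are assumptions, not the paper's tuned values:

```python
# Hypothetical sketch of the temporal filter: a detection counts as a
# hand-raising gesture only if it recurs in enough recent frames.
from collections import deque

class RaisedHandFilter:
    def __init__(self, window=15, min_hits=10):
        self.history = deque(maxlen=window)
        self.min_hits = min_hits

    def update(self, detected_this_frame):
        """Feed one per-frame detection; returns the filtered decision."""
        self.history.append(bool(detected_this_frame))
        return sum(self.history) >= self.min_hits

# A single spurious detection is suppressed; a sustained one passes
f = RaisedHandFilter(window=5, min_hits=3)
print([f.update(d) for d in [True, False, True, True, True]])
# -> [False, False, False, True, True]
```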


International Conference on Distributed Smart Cameras | 2015

Real-time multi-people tracking by greedy likelihood maximization

Nyan Bo Bo; Francis Deboeverie; Peter Veelaert; Wilfried Philips

Unlike tracking rigid targets, tracking multiple people is very challenging because the appearance and shape of a person vary depending on the target's location and orientation. This paper presents a new approach to track multiple people with high accuracy using a calibrated monocular camera. Our approach recursively updates the positions of all persons based on the observed foreground image and the previously known location of each person. This is done by maximizing the likelihood of observing the foreground image given the positions of all persons. Since the computational complexity of our approach is low, it can run in real time on smart cameras. When a network of multiple smart cameras overseeing the scene is available, local position estimates from the smart cameras can be fused to produce more accurate joint position estimates. A performance evaluation of our approach on very challenging video sequences from public datasets shows that our tracker achieves high accuracy and outperforms other state-of-the-art tracking systems in terms of Multiple Object Tracking Accuracy (MOTA).
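As a toy illustration of greedy likelihood maximization (the rectangular silhouette model, the overlap-based score and the local search grid are simplifying assumptions, not the paper's person model):

```python
# Toy sketch: refine each person's position so that the union of
# silhouettes best explains the observed foreground mask.
import numpy as np

def silhouette_mask(shape, pos, size=(40, 20)):
    """Crude person model: a filled rectangle centered at pos=(y, x)."""
    mask = np.zeros(shape, dtype=bool)
    (y, x), (h, w) = pos, size
    mask[max(0, y - h // 2):y + h // 2, max(0, x - w // 2):x + w // 2] = True
    return mask

def score(foreground, positions):
    """Reward explained foreground pixels, penalize hallucinated ones."""
    model = np.zeros(foreground.shape, dtype=bool)
    for p in positions:
        model |= silhouette_mask(foreground.shape, p)
    return int((foreground & model).sum()) - int((~foreground & model).sum())

def greedy_update(foreground, positions, radius=5):
    """One pass: refine each person in turn, holding the others fixed."""
    positions = list(positions)
    for i, (py, px) in enumerate(positions):
        best, best_pos = -np.inf, (py, px)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cand = positions[:i] + [(py + dy, px + dx)] + positions[i + 1:]
                s = score(foreground, cand)
                if s > best:
                    best, best_pos = s, (py + dy, px + dx)
        positions[i] = best_pos
    return positions

# A synthetic foreground blob pulls a slightly-off estimate onto it
fg = silhouette_mask((120, 160), (60, 80))
print(greedy_update(fg, [(57, 76)]))  # -> [(60, 80)]
```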


International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | 2017

Occlusion robust symbol level fusion for multiple people tracking

Nyan Bo Bo; Peter Veelaert; Wilfried Philips

In single-view visual target tracking, occlusion is one of the most challenging problems, since a target's features are partially or fully covered by other targets when occlusion occurs. Instead of relying on a limited single view, a target can be observed from multiple viewpoints using a network of cameras to mitigate the occlusion problem. However, information coming from different views must be fused by relying less on views with heavy occlusion and more on views with little or no occlusion. To address this need, we propose a new fusion method which fuses the positions of a person estimated locally by smart cameras observing from different viewpoints, while taking into account the occlusion in each view. The genericity and scalability of the proposed fusion method are high, since it needs only the position estimates from the smart cameras. The uncertainty of each local estimate is computed in a fusion center from a simulated occlusion assessment based on the camera's projective geometry. These uncertainties, together with the local estimates, are used to model the probability distributions required for the Bayesian fusion of the local estimates. A performance evaluation on three challenging video sequences shows that our method achieves higher accuracy than the local estimates, as well as than tracking results obtained using a classical triangulation method. Our method outperforms two state-of-the-art trackers on a publicly available multi-camera video sequence.
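Under a common simplifying assumption that each local estimate is an isotropic Gaussian whose standard deviation grows with the simulated occlusion, Bayesian fusion reduces to inverse-variance weighting. A minimal sketch (not the paper's exact distributions):

```python
# Minimal sketch of occlusion-aware fusion: each camera's estimate is
# weighted by the inverse of its occlusion-derived variance.
def fuse_estimates(estimates):
    """estimates: list of ((x, y), sigma); sigma grows with the
    simulated occlusion in that camera's view."""
    wx = wy = wsum = 0.0
    for (x, y), sigma in estimates:
        w = 1.0 / (sigma * sigma)  # heavily occluded views weigh little
        wx, wy, wsum = wx + w * x, wy + w * y, wsum + w
    return wx / wsum, wy / wsum

# A nearly unoccluded camera (sigma=0.1) dominates an occluded one
print(fuse_estimates([((1.0, 2.0), 0.1), ((1.6, 2.4), 0.5)]))
```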


Journal of Electronic Imaging | 2017

Occlusion handling framework for tracking in smart camera networks by per-target assistance task assignment

Nyan Bo Bo; Francis Deboeverie; Peter Veelaert; Wilfried Philips

Occlusion is one of the most difficult challenges in visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view within a multi-camera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target, taking into account both person–person and object–person occlusion. Using this energy function, a smart camera assesses the quality of its observations of all targets being tracked. When it cannot adequately observe a target, the smart camera estimates the quality of observation of that target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for that target is carried out with the assistance of that camera. In our framework, only the positions of persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. A performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison with state-of-the-art trackers, which our method outperforms.
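The per-target assignment can be sketched as scoring each camera's view with the energy function and handing the target to the lowest-energy camera; the linear two-term energy and its weights below are illustrative placeholders for the paper's extensible formulation:

```python
# Hypothetical sketch of per-target assistance assignment.
def observation_energy(pp_occlusion, op_occlusion, w_pp=1.0, w_op=1.0):
    """Occlusion fractions in [0, 1]: lower energy = better view."""
    return w_pp * pp_occlusion + w_op * op_occlusion

def assign_assisting_camera(energies):
    """energies: dict camera_id -> energy for one target."""
    return min(energies, key=energies.get)

# Camera 2 has the least-occluded view of this target, so it assists
energies = {1: observation_energy(0.8, 0.1),
            2: observation_energy(0.0, 0.1),
            3: observation_energy(0.3, 0.2)}
print(assign_assisting_camera(energies))  # -> 2
```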


International Conference on Computer Vision Theory and Applications | 2016

Multiple people tracking in smart camera networks by greedy joint-likelihood maximization

Nyan Bo Bo; Francis Deboeverie; Peter Veelaert; Wilfried Philips

This paper presents a new method to track multiple people reliably using a network of calibrated smart cameras. Tracking multiple persons is very difficult due to the non-rigid nature of the human body, occlusions and environmental changes. Our proposed method recursively updates the positions of all persons based on the observed foreground images from all smart cameras and the previously known location of each person. The performance of our method is evaluated on indoor video sequences containing person–person and object–person occlusions as well as sudden illumination changes. The results show that our method performs well, with Multiple Object Tracking Accuracy as high as 100% and Multiple Object Tracking Precision as high as 86%. A performance comparison with a state-of-the-art tracking system shows that our method outperforms it.
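For reference, the reported CLEAR-MOT metrics follow their standard definitions; a minimal sketch (standard formulas, not code from the paper; MOTP is shown in its raw per-match form, while the 86% above corresponds to a normalized variant):

```python
# Standard CLEAR-MOT metric definitions.
def mota(false_negatives, false_positives, id_switches, gt_objects):
    """Multiple Object Tracking Accuracy over a whole sequence."""
    return 1.0 - (false_negatives + false_positives + id_switches) / gt_objects

def motp(total_match_error, num_matches):
    """Multiple Object Tracking Precision: mean error over matches."""
    return total_match_error / num_matches

# Example: 5 misses, 3 false positives, 1 identity switch, 900 GT boxes
print(f"MOTA = {mota(5, 3, 1, 900):.3f}")  # -> 0.990
```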


Proceedings of SPIE | 2014

Illumination-Robust People Tracking Using a Smart Camera Network

Nyan Bo Bo; Peter Van Hese; Junzhi Guan; Sebastian Gruenwedel; Jorge Oswaldo Niño-Castañeda; Dimitri Van Cauwelaert; Dirk Van Haerenborgh; Peter Veelaert; Wilfried Philips

Many computer-vision-based applications require reliable tracking of multiple people under unpredictable lighting conditions. Many existing trackers do not handle illumination changes well, especially sudden changes in illumination. This paper presents a system to track multiple people reliably, even under rapid illumination changes, using a network of calibrated smart cameras with overlapping views. Each smart camera extracts foreground features by detecting texture changes between the current image and a static background image. The foreground features belonging to each person are tracked locally on each camera, and these local estimates are sent to a fusion center which combines them to generate more accurate estimates. The final estimates are fed back to all smart cameras, which use them as prior information for tracking in the next frame. The texture-based approach makes our method very robust to illumination changes. We tested the performance of our system on six video sequences, some containing sudden illumination changes and up to four walking persons. The results show that our tracker can track multiple people accurately, with an average tracking error as low as 8 cm, even when the illumination varies rapidly. A performance comparison to a state-of-the-art tracking system shows that our method outperforms it.
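Texture-change detection can be sketched by comparing local gradient structure, which is largely unaffected by global brightness shifts, against the static background; the gradient-difference criterion and threshold below are assumptions, not the paper's exact feature:

```python
# Hypothetical sketch of texture-change foreground extraction.
import numpy as np

def texture_foreground(frame, background, threshold=10.0):
    fy, fx = np.gradient(frame.astype(float))
    by, bx = np.gradient(background.astype(float))
    # Change in gradient structure rather than in absolute intensity
    diff = np.hypot(fx - bx, fy - by)
    return diff > threshold

# A global brightness shift alone triggers (almost) no foreground
bg = np.tile(np.arange(30.0), (30, 1))
print(texture_foreground(bg + 40.0, bg).sum())  # -> 0
```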

Collaboration


Dive into Nyan Bo Bo's collaborations.
