
Publication


Featured research published by Bingwei He.


IEEE Transactions on Industrial Informatics | 2014

GM-PHD-Based Multi-Target Visual Tracking Using Entropy Distribution and Game Theory

Xiaolong Zhou; Youfu Li; Bingwei He; Tianxiang Bai

Tracking multiple moving targets in a video is a challenge because of several factors, including noisy video data, varying number of targets, and mutual occlusion problems. The Gaussian mixture probability hypothesis density (GM-PHD) filter, which aims to recursively propagate the intensity associated with the multi-target posterior density, can overcome the difficulty caused by the data association. This paper develops a multi-target visual tracking system that combines the GM-PHD filter with object detection. First, a new birth intensity estimation algorithm based on entropy distribution and coverage rate is proposed to automatically and accurately track the newborn targets in a noisy video. Then, a robust game-theoretical mutual occlusion handling algorithm with an improved spatial color appearance model is proposed to effectively track the targets in mutual occlusion. The spatial color appearance model is improved by incorporating interferences of other targets within the occlusion region. Finally, the experiments conducted on publicly available videos demonstrate the good performance of the proposed visual tracking system.
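
As an illustration of the GM-PHD machinery the paper builds on, here is a minimal sketch of one prediction/update cycle for a linear-Gaussian model, assuming NumPy and SciPy. The function name, parameter values, and simple weight-threshold pruning are illustrative; the paper's entropy-based birth estimation and game-theoretic occlusion handling sit on top of a step like this.

    import numpy as np
    from scipy.stats import multivariate_normal

    def gmphd_step(components, measurements, F, Q, H, R, birth=(),
                   p_survive=0.99, p_detect=0.9, clutter=1e-4, prune=1e-5):
        # components: list of (weight, mean, cov) Gaussian terms of the PHD intensity
        predicted = [(p_survive * w, F @ m, F @ P @ F.T + Q) for w, m, P in components]
        predicted += list(birth)                                 # birth components, e.g. from object detection
        updated = [((1.0 - p_detect) * w, m, P) for w, m, P in predicted]   # missed-detection terms
        for z in measurements:
            terms = []
            for w, m, P in predicted:
                S = H @ P @ H.T + R                              # innovation covariance
                K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
                lik = multivariate_normal.pdf(z, mean=H @ m, cov=S)
                terms.append((p_detect * w * lik,
                              m + K @ (z - H @ m),
                              (np.eye(len(m)) - K @ H) @ P))
            norm = clutter + sum(t[0] for t in terms)
            updated += [(w / norm, m, P) for w, m, P in terms]
        return [(w, m, P) for w, m, P in updated if w > prune]   # prune negligible components

In a filter of this kind the expected number of targets is the sum of the component weights, and target states are typically extracted from components whose weight exceeds roughly 0.5.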


Signal Processing | 2014

Entropy distribution and coverage rate-based birth intensity estimation in GM-PHD filter for multi-target visual tracking

Xiaolong Zhou; Youfu Li; Bingwei He

Tracking multiple moving targets in video is a challenge because of noisy video data, varying numbers of targets, and data association problems. In this paper, a multi-target visual tracking system that combines object detection with the Gaussian mixture probability hypothesis density filter is developed, in which a new birth intensity estimation method based on entropy distribution and coverage rate is proposed. The birth intensity is first initialized by the previously obtained target states and measurements. The measurements are obtained by object detection and are classified into birth measurements and survival measurements. The currently obtained birth measurements are then used to update the birth intensity. In the update stage, the entropy distribution is incorporated to remove noise components within the initialized birth intensity that are irrelevant to the birth measurements. The coverage rate between each birth intensity component and the corresponding birth measurement is computed to further eliminate noise. Experiments on noisy video sequences are conducted to show the good performance of the proposed visual tracking system.
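
A rough sketch of the coverage-rate check described above, assuming axis-aligned detection boxes and NumPy. The entropy-distribution weighting from the paper is omitted, and the threshold, birth weight, and component covariance are illustrative values rather than the authors' settings.

    import numpy as np

    def coverage_rate(box_a, box_b):
        # Area of box_a covered by box_b, normalized by the area of box_a; boxes are (x1, y1, x2, y2).
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        return iw * ih / max((ax2 - ax1) * (ay2 - ay1), 1e-9)

    def birth_intensity(prev_candidates, birth_measurements, weight=0.1, min_coverage=0.5):
        # prev_candidates: boxes carried over from the previous frame that initialize birth components;
        # a component survives only if some current birth measurement covers it sufficiently.
        components = []
        for box in prev_candidates:
            if any(coverage_rate(box, z) >= min_coverage for z in birth_measurements):
                mean = np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0, 0.0, 0.0])
                cov = np.diag([25.0, 25.0, 100.0, 100.0])    # position / velocity uncertainty
                components.append((weight, mean, cov))
        return components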


Intelligent Robots and Systems | 2006

A Next-Best-View Method with Self-Termination in Active Modeling of 3D Objects

Bingwei He; Youfu Li

The objective of view planning in a visual sensing system is to make task-directed decisions for optimal sensing pose selection. The primary focus of the research described in this paper is to propose a new method for creating a complete model of a free-form surface object from multiple range images acquired by a scan sensor at different poses in space. Using the view sphere to limit the number of possible sensor positions, the candidates for the next-best-view (NBV) position are easily determined by detecting and measuring occlusions to the camera's view in an image. Ultimately, the candidate that maximizes the boundary integral value of the vector field over the known partial model is selected as the next-best-view position. We also present a self-termination criterion for judging the completion condition in the measurement and reconstruction process. The termination condition is derived from the change in volume computed from two successive viewpoints. The experimental results show that the method is effective in practical implementation.
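
A minimal sketch of the volume-based self-termination criterion, assuming the reconstructed model volume is re-estimated after each scan; the relative tolerance is an illustrative value, not the paper's.

    def should_terminate(volumes, rel_tol=0.01):
        # volumes: reconstructed model volume after each successive scan.
        # Stop once the latest view adds only a negligible relative volume change,
        # i.e. the model is judged complete.
        if len(volumes) < 2:
            return False
        v_prev, v_curr = volumes[-2], volumes[-1]
        return abs(v_curr - v_prev) / max(abs(v_prev), 1e-9) < rel_tol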


Intelligent Robots and Systems | 2013

Multi-target visual tracking with game theory-based mutual occlusion handling

Xiaolong Zhou; Youfu Li; Bingwei He; Tianxiang Bai

Tracking multiple moving targets in video is still a challenge because of the mutual occlusion problem. This paper presents a Gaussian mixture probability hypothesis density-based visual tracking system with game theory-based mutual occlusion handling. First, a two-step occlusion reasoning algorithm is proposed to determine the occlusion region. Then, a spatial constraint-based appearance model that accounts for the interferences of other interacting targets is built. Finally, an n-person, non-zero-sum, non-cooperative game is constructed to handle the mutual occlusion problem. The individual measurements within the occlusion region are regarded as the players in the constructed game, competing for maximum utility through their strategies. The Nash equilibrium of the game gives the optimal estimate of the locations of the players within the occlusion region. Experiments conducted on publicly available videos demonstrate the good performance of the proposed occlusion handling algorithm.
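
For small discrete strategy sets, a Nash equilibrium of the kind mentioned above can be searched for by iterated best response. The sketch below assumes pure strategies and a user-supplied utility function; it is a generic solver, not the paper's.

    def pure_nash_by_best_response(strategies, utility, max_iters=100):
        # strategies: one list of candidate strategies per player (per measurement in the occlusion region)
        # utility(i, profile): payoff of player i under a full strategy profile
        # Each player in turn switches to its best strategy given the others; a profile from
        # which nobody wants to deviate unilaterally is a pure-strategy Nash equilibrium.
        profile = [options[0] for options in strategies]
        for _ in range(max_iters):
            changed = False
            for i, options in enumerate(strategies):
                best = max(options,
                           key=lambda s: utility(i, profile[:i] + [s] + profile[i + 1:]))
                if best != profile[i]:
                    profile[i], changed = best, True
            if not changed:
                return profile
        return profile   # may not have converged if no pure equilibrium exists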


Robotics and Biomimetics | 2009

Research on new view planning method for automatic reconstruction of unknown 3D objects

Xiaolong Zhou; Bingwei He; Youfu Li

Automatic reconstruction of unknown 3-D objects is a central problem in machine vision, with applications such as robot navigation, medical imaging, and industrial inspection. In this paper, a new view planning method based on the limit visual surface is proposed for reconstructing unknown 3-D models automatically. First, the visual region of the laser-vision system is obtained. The limit visual surface is then modeled from the known boundary data acquired from the initial view and is used to predict the unknown object surface. Next, the visibility of each candidate viewpoint is determined according to the visual region of the system, and the visible area of that viewpoint is computed. Finally, the position that yields the maximum visible area is selected as the next-best-view position. Experimental results on real models show that the method is effective in practical implementation.
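
A minimal sketch of view-sphere sampling and maximum-visible-area selection of the kind described above, in NumPy. The sampling resolution and the predicted_visible_area callback are assumptions that stand in for the limit-visual-surface prediction in the paper.

    import numpy as np

    def sample_view_sphere(radius, n_azimuth=12, n_elevation=5):
        # Candidate sensor positions on a sphere centred on the object.
        poses = []
        for az in np.linspace(0.0, 2.0 * np.pi, n_azimuth, endpoint=False):
            for el in np.linspace(0.2, np.pi - 0.2, n_elevation):
                poses.append(radius * np.array([np.sin(el) * np.cos(az),
                                                np.sin(el) * np.sin(az),
                                                np.cos(el)]))
        return poses

    def next_best_view(candidates, predicted_visible_area):
        # predicted_visible_area(pose): unseen surface area expected to be visible from pose,
        # assumed here to come from the limit visual surface built from the known boundary data.
        return max(candidates, key=predicted_visible_area)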


International Conference on Image and Graphics | 2009

A Novel View Planning Method for Automatic Reconstruction of Unknown 3-D Objects Based on the Limit Visual Surface

Xiaolong Zhou; Bingwei He; Youfu Li

Automatic reconstruction of unknown 3-D objects has been of great importance in the areas of machine vision, object recognition, and automatic modeling. In this paper, a new planning approach for generating 3-D models automatically is proposed. The new algorithm incorporates the limit visual surfaces of the unknown model, which are derived from both the known object boundary and the visual region of the vision system, and evaluates the suitability of candidate viewpoints as the next best view in terms of scanning coverage. The limit visual surfaces are used to predict the maximal attainable information about the unknown model, from which the visibility criterion for the next viewpoint is determined. The position that yields the maximal visible surface area is defined as the next-best-view position. Reconstruction results on a real model show the efficiency of the proposed method in practical implementation.


International Conference on Intelligent Robotics and Applications | 2008

A New View Planning Method for Automatic Modeling of Three Dimensional Objects

Xiaolong Zhou; Bingwei He; Youfu Li

Sensor planning is a critical issue since a typical 3-D sensor can only sample a portion of an object at a single viewpoint. The primary focus of the research described in this paper is to propose a new method of creating a complete model of a free-form surface object from multiple range images acquired by a scan sensor at different poses in space. Candidates for the next-best-view position are determined by detecting and measuring occlusions to the camera's view in an image. Ultimately, the candidate that yields the maximum visible space volume is selected as the next-best-view position. The experimental results show that the method is effective in practical implementation.


IET Computer Vision | 2017

Static map reconstruction and dynamic object tracking for a camera and laser scanner system

Cheng Zou; Bingwei He; Liwei Zhang; Jianwei Zhang

Simultaneous localisation and mapping (SLAM) and navigation in dynamic environments remain highly problematic for vision-based mobile robots. The goal of this study is to reconstruct a static map and track dynamic objects with a camera and laser scanner system. An improved automatic calibration is designed to merge images and laser point clouds. The fused data are then exploited to detect slowly moving objects and reconstruct the static map. Tracking-by-detection requires the correct assignment of noisy detection results to object trajectories. In the proposed method, 3D motion models are combined with object appearance in occluded regions to manage difficulties in crowded scenes. The proposed method was validated by experimental results gathered in a real environment and on publicly available data.
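
Tracking-by-detection assignment of the kind described above is commonly solved with the Hungarian algorithm. The sketch below, using SciPy's linear_sum_assignment, combines hypothetical motion and appearance cost callbacks; it illustrates the general technique rather than the paper's exact formulation.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(tracks, detections, motion_cost, appearance_cost, alpha=0.5, max_cost=1.0):
        # Combine a 3D-motion-model cost with an appearance cost and solve the
        # assignment with the Hungarian algorithm; pairs above max_cost stay unmatched.
        if not tracks or not detections:
            return [], list(range(len(tracks))), list(range(len(detections)))
        cost = np.array([[alpha * motion_cost(t, d) + (1.0 - alpha) * appearance_cost(t, d)
                          for d in detections] for t in tracks])
        rows, cols = linear_sum_assignment(cost)
        matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
        matched_t = {r for r, _ in matches}
        matched_d = {c for _, c in matches}
        unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
        unmatched_dets = [j for j in range(len(detections)) if j not in matched_d]
        return matches, unmatched_tracks, unmatched_dets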


Robotics and Biomimetics | 2016

Task execution based on human-robot dialogue and deictic gestures

Peiqing Yan; Bingwei He; Liwei Zhang; Jianwei Zhang

Service robots can already carry out explicit and simple tasks assigned by human beings, but they still lack the human ability to analyse an assigned task and ask questions to acquire supplementary information that resolves ambiguities in the environment. Motivated by this, we fuse verbal language and pointing gesture information to enable a robot to execute a vague task such as "bring me the book". In this paper, we propose a system integrating human-robot dialogue, mapping and action execution planning in unknown 3D environments. We ground natural language commands to a sequence of low-level instructions that can be executed by the robot. To express the target's location, indicated by the user's pointing gesture, in a global fixed frame, we use a SLAM approach to build the environment map. Experimental results demonstrate that a NAO humanoid robot can acquire this skill in an unknown environment using the proposed approach.
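
A rough sketch of how a spoken class label and a pointing ray expressed in the SLAM map frame could be fused to pick a target, in NumPy. The function, its arguments, and the object-list layout are hypothetical illustrations, not the paper's implementation.

    import numpy as np

    def resolve_target(command_class, map_objects, ray_origin, ray_dir):
        # map_objects: (class_label, position) pairs in the global SLAM map frame;
        # the pointing gesture is a ray (origin, direction) expressed in the same frame.
        # The spoken command names the class, the gesture picks among multiple instances.
        ray_dir = np.asarray(ray_dir, dtype=float)
        ray_dir = ray_dir / np.linalg.norm(ray_dir)
        origin = np.asarray(ray_origin, dtype=float)
        candidates = [(label, np.asarray(pos, dtype=float))
                      for label, pos in map_objects if label == command_class]
        if not candidates:
            return None   # no instance of the named class: the robot should ask a follow-up question

        def ray_distance(pos):
            v = pos - origin
            return np.linalg.norm(v - np.dot(v, ray_dir) * ray_dir)   # perpendicular distance to the ray

        return min(candidates, key=lambda c: ray_distance(c[1]))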


Intelligent Robots and Systems | 2016

Vision-based real-time 3D mapping for UAV with laser sensor

Jinqiao Shi; Bingwei He; Liwei Zhang; Jianwei Zhang

Real-time 3D mapping with an MAV (Micro Aerial Vehicle) in GPS-denied environments is a challenging problem. In this paper, we present an effective vision-based 3D mapping system with a 2D laser scanner. All algorithms necessary for this system run on-board. In this system, two cameras work together with the laser scanner for motion estimation. The distances of the points detected by the laser scanner are transformed and treated as the depths of image features, which improves the robustness and accuracy of the pose estimation. The output of visual odometry is used as an initial pose in the Iterative Closest Point (ICP) algorithm, and the motion trajectory is optimized by the registration result. We finally obtain the MAV's state by fusing IMU data with the pose estimate from the mapping process. This method makes full use of the point cloud information and overcomes the scale ambiguity caused by the lack of depth information in monocular visual odometry. Experimental results show that the method achieves good real-time performance and accuracy.
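
A minimal sketch of using the visual-odometry pose as the initial guess for ICP registration. Open3D is assumed here as a stand-in for the paper's own registration code, and the fitness threshold is an illustrative value.

    import open3d as o3d

    def refine_pose_with_icp(scan_cloud, map_cloud, vo_pose, max_corr_dist=0.2):
        # vo_pose: 4x4 transform from visual odometry, used as the ICP initial guess;
        # the refined pose is kept only when the registration looks trustworthy.
        result = o3d.pipelines.registration.registration_icp(
            scan_cloud, map_cloud, max_corr_dist, vo_pose,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation if result.fitness > 0.3 else vo_pose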

Collaboration


Dive into Bingwei He's collaboration.

Top Co-Authors

Youfu Li
City University of Hong Kong

Xiaolong Zhou
Zhejiang University of Technology

Tianxiang Bai
City University of Hong Kong

Yazhe Tang
City University of Hong Kong