Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jianwei Ding is active.

Publication


Featured research published by Jianwei Ding.


Asian Conference on Computer Vision | 2010

Modeling complex scenes for accurate moving objects segmentation

Jianwei Ding; Min Li; Kaiqi Huang; Tieniu Tan

In video surveillance, it is still a difficult task to segment moving objects accurately in complex scenes, since the most widely used algorithms are based on background subtraction. We propose an online and unsupervised technique to find the optimal segmentation in a Markov Random Field (MRF) framework. To improve accuracy, color, locality, temporal coherence and spatial consistency are fused together in the framework. The models of color, locality and temporal coherence are learned online from complex scenes. A novel mixture of a nonparametric regional model and a parametric pixel-wise model is proposed to approximate the background color distribution. The foreground color distribution for every pixel is learned from neighboring pixels of the previous frame. The locality distributions of background and foreground are approximated with the nonparametric model. Temporal coherence is modeled with a Markov chain. Experiments on challenging videos demonstrate the effectiveness of our algorithm.
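
A minimal sketch (our own simplification, not the authors' code) of the hybrid background colour model: a parametric per-pixel Gaussian mixed with a nonparametric, spatially pooled kernel density over recent frames. The class name, window size, bandwidth and mixing weight are illustrative assumptions.

```python
# Hybrid background colour model: parametric per-pixel Gaussian mixed with a
# nonparametric, spatially pooled kernel density over a window of recent frames.
import numpy as np
from scipy.ndimage import uniform_filter

class HybridBackgroundModel:
    def __init__(self, first_frame, mix=0.5, lr=0.05, bandwidth=15.0):
        self.mean = first_frame.astype(np.float64)        # per-pixel Gaussian mean
        self.var = np.full(first_frame.shape, 100.0)      # per-pixel Gaussian variance
        self.samples = [first_frame.astype(np.float64)]   # frames kept for the KDE term
        self.mix, self.lr, self.bw = mix, lr, bandwidth

    def background_likelihood(self, frame):
        frame = frame.astype(np.float64)
        # Parametric pixel-wise term: Gaussian likelihood per channel, averaged.
        g = np.exp(-0.5 * (frame - self.mean) ** 2 / self.var) / np.sqrt(2 * np.pi * self.var)
        pixel_term = g.mean(axis=-1)
        # Nonparametric regional term: kernel responses pooled over a 5x5 neighbourhood.
        k = [uniform_filter(np.exp(-0.5 * ((frame - s) / self.bw) ** 2).mean(axis=-1), size=5)
             for s in self.samples]
        regional_term = np.mean(k, axis=0)
        return self.mix * pixel_term + (1 - self.mix) * regional_term

    def update(self, frame, fg_mask):
        frame = frame.astype(np.float64)
        bg = ~fg_mask[..., None]                          # only adapt background pixels
        self.mean = np.where(bg, (1 - self.lr) * self.mean + self.lr * frame, self.mean)
        self.var = np.where(bg, (1 - self.lr) * self.var + self.lr * (frame - self.mean) ** 2, self.var)
        self.samples = (self.samples + [frame])[-20:]     # sliding window of frames

# Usage idea: pixels with low background likelihood become foreground candidates
# before the MRF optimisation combines them with the other cues.
```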


Neurocomputing | 2015

Tracking by local structural manifold learning in a new SSIR particle filter

Jianwei Ding; Yunqi Tang; Wei Liu; Yongzhen Huang; Kaiqi Huang

We propose a new object tracking algorithm based on local structural manifold learning in a selective sampling importance resampling (SSIR) particle filter framework. A new local structural manifold learning strategy is designed for invariant appearance modeling in challenging conditions. The appearance of the object, which has a complex structure in the low-dimensional space, is approximated with a set of local structural manifolds. The local structures of the appearance manifold are incrementally learned as the appearance of the object changes. Unlike traditional particle filters, which rely on random re-sampling to generate new particles, we propose a new SSIR particle filter that integrates an auto-regressive filter to improve the sample generation process. The particle samples generated by our method are better distributed than those of traditional techniques. Experimental results on several challenging videos demonstrate the robustness and accuracy of our algorithm compared with other recent state-of-the-art tracking approaches.
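
The sample-generation idea can be illustrated with a short sketch. This is our own interpretation under stated assumptions (the AR coefficient, the ESS-based "selective" resampling rule and the likelihood_fn interface are placeholders), not the paper's SSIR filter.

```python
# Particle filter step whose proposal is a second-order auto-regressive motion
# model rather than a pure random walk after resampling.
import numpy as np

def ar_proposal(particles, prev_particles, noise_std):
    # x_t = x_{t-1} + a * (x_{t-1} - x_{t-2}) + noise
    a = 0.8
    velocity = particles - prev_particles
    return particles + a * velocity + np.random.normal(0.0, noise_std, particles.shape)

def track_step(particles, prev_particles, weights, likelihood_fn, noise_std=2.0):
    # 1. Selective resampling: only resample when the effective sample size drops.
    ess = 1.0 / np.sum(weights ** 2)
    if ess < 0.5 * len(weights):
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        particles, prev_particles = particles[idx], prev_particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    # 2. Propagate particles with the AR proposal.
    new_particles = ar_proposal(particles, prev_particles, noise_std)
    # 3. Re-weight with a user-supplied appearance likelihood.
    weights = weights * likelihood_fn(new_particles)
    weights = weights / weights.sum()
    # 4. State estimate: weighted mean of the particles.
    estimate = (weights[:, None] * new_particles).sum(axis=0)
    return new_particles, particles, weights, estimate
```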


Multimedia Systems | 2016

Robust tracking with adaptive appearance learning and occlusion detection

Jianwei Ding; Yunqi Tang; Huawei Tian; Wei Liu; Yongzhen Huang

It is still challenging to design a robust and efficient tracking algorithm for complex scenes. We propose a new object tracking algorithm with adaptive appearance learning and occlusion detection in an efficient self-tuning particle filter framework. The appearance of an object is modeled with a set of weighted and ordered submanifolds, which guarantees adaptability under fast illumination or pose changes. To overcome the occlusion problem, we use the reconstruction error of the appearance model to extract the occlusion region via graph cuts, and the tracking result is improved with feedback from occlusion detection. The motion model is also made adaptive to overcome the abrupt motion problem. To improve the efficiency of the particle filter, the number of samples is tuned with respect to the scene conditions. Experimental results demonstrate that our algorithm achieves strong robustness, high accuracy and good efficiency in challenging scenes.
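
Two of the ingredients can be sketched under our own assumptions: an occlusion mask thresholded from the per-pixel reconstruction error (the paper refines this map with graph cuts), and an entropy-based rule for tuning the number of particle samples. Function names and constants are illustrative.

```python
import numpy as np

def occlusion_mask(patch, reconstruction, k=3.0):
    # Pixels whose reconstruction error is far above the typical error are
    # treated as occluded candidates.
    error = ((patch.astype(np.float64) - reconstruction) ** 2).mean(axis=-1)
    threshold = error.mean() + k * error.std()
    return error > threshold

def masked_likelihood(patch, reconstruction, mask, sigma=10.0):
    # Feedback from occlusion detection: occluded pixels do not penalise the match.
    error = ((patch.astype(np.float64) - reconstruction) ** 2).mean(axis=-1)
    visible = ~mask
    return np.exp(-error[visible].mean() / (2 * sigma ** 2)) if visible.any() else 1e-6

def tune_sample_count(weights, n_min=100, n_max=600):
    # Fewer particles when the weight distribution is peaked (confident scene),
    # more when it is flat (ambiguous scene), measured by normalised entropy.
    p = weights / weights.sum()
    entropy = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))
    return int(n_min + entropy * (n_max - n_min))
```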


Asian Conference on Pattern Recognition | 2011

Global and local training for moving object classification in surveillance-oriented scene

Xin Zhao; Jianwei Ding; Kaiqi Huang; Tieniu Tan

This paper presents a new training framework for multi-class moving object classification in surveillance-oriented scenes. In many practical multi-class classification tasks, instances with similar features lie close to each other in the input feature space, yet they may have different class labels. Since moving objects vary in view and shape, this phenomenon is common in multi-class moving object classification. In our framework, the input feature space is first divided into several local clusters. Then, global training and local training are carried out sequentially with an efficient online learning algorithm. The induced global classifier assigns candidate instances to the most reliable clusters, while the trained local classifiers within those clusters determine which classes the candidate instances belong to. Our experimental results illustrate the effectiveness of our method for moving object classification in surveillance-oriented scenes.
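
A minimal sketch of the global/local idea, assuming scikit-learn's KMeans and SGDClassifier as stand-ins for the paper's online learner; the routing and per-cluster training follow the description above, not the authors' implementation.

```python
# Global classifier routes an instance to a feature-space cluster; a local
# classifier inside that cluster predicts the object class.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import SGDClassifier

class GlobalLocalClassifier:
    def __init__(self, n_clusters=4):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10)
        self.global_clf = SGDClassifier()      # routes instances to local clusters
        self.local_clfs = {}                   # one classifier (or constant label) per cluster

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        cluster_ids = self.kmeans.fit_predict(X)
        self.global_clf.fit(X, cluster_ids)    # global training
        for c in np.unique(cluster_ids):
            mask = cluster_ids == c            # local training inside cluster c
            if len(np.unique(y[mask])) == 1:
                self.local_clfs[c] = y[mask][0]                  # degenerate single-class cluster
            else:
                self.local_clfs[c] = SGDClassifier().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        X = np.asarray(X)
        clusters = self.global_clf.predict(X)
        return np.array([self.local_clfs[c].predict(x[None, :])[0]
                         if hasattr(self.local_clfs[c], "predict") else self.local_clfs[c]
                         for c, x in zip(clusters, X)])
```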


Advanced Video and Signal Based Surveillance | 2012

Tracking Blurred Object with Data-Driven Tracker

Jianwei Ding; Kaiqi Huang; Tieniu Tan

Motion blur is very common in low-quality image sequences and videos captured by low-speed cameras. Object tracking that does not account for motion blur can easily fail in such videos. We propose a new data-driven tracker in the particle filter framework to address this problem without deblurring the image sequences. Motion blur is detected by examining properties of the blurred input image through Fourier analysis. The appearance model is augmented with a set of motion blur kernels that reflect different blur effects in real scenes. The motion model is improved to be more robust to sudden motion of the target object. To evaluate the proposed algorithm, several challenging videos with significant motion blur are used in the experiments. The experimental results demonstrate the robustness and accuracy of our algorithm.
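
Two pieces of the pipeline can be sketched purely as an illustration: a Fourier-based blur score and a linear motion-blur kernel for augmenting appearance templates. The score definition, kernel parameterisation and scipy usage are our assumptions, not the paper's detector.

```python
import numpy as np
from scipy.ndimage import convolve

def blur_score(gray, radius_ratio=0.25):
    # A heavily blurred frame concentrates its spectrum at low frequencies, so
    # the share of energy outside a central disc drops.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64))))
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    high = spectrum[r > radius_ratio * min(h, w)].sum()
    return high / spectrum.sum()               # small value -> likely motion blurred

def motion_blur_kernel(length=9, angle_deg=0.0):
    # Straight-line blur kernel of the given length and direction.
    kernel = np.zeros((length, length))
    c = length // 2
    t = np.deg2rad(angle_deg)
    for i in np.linspace(-c, c, length):
        y, x = int(round(c + i * np.sin(t))), int(round(c + i * np.cos(t)))
        kernel[y, x] = 1.0
    return kernel / kernel.sum()

def blur_template(template, kernel):
    # Convolve a grayscale appearance template with a candidate blur kernel.
    return convolve(template.astype(np.float64), kernel, mode="nearest")
```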


International Conference on Computer Vision | 2011

Robust object tracking via online learning of adaptive appearance manifold

Jianwei Ding; Yongzhen Huang; Kaiqi Huang; Tieniu Tan

Appearance modeling plays a critical role in robust object tracking and should be adaptive to various appearance changes. We propose a new appearance model based on an adaptive appearance manifold for object tracking. The adaptive appearance manifold consists of several submanifolds, each approximated with a low-dimensional linear subspace. The initial appearance model is constructed using the location of the target object in the first frame, and no prior knowledge is needed. We design an efficient dynamic structure for the adaptive appearance manifold, which reduces the time needed to compare a new observation with the appearance model. The appearance model is incrementally learned online from the input image sequence. We integrate the new appearance model with the particle filtering framework. Several public challenging videos are used to test our tracking algorithm. The experimental results demonstrate that our algorithm is robust to illumination change, pose variation, partial occlusion and cluttered backgrounds, and it also runs at high speed.
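
A rough sketch of an appearance manifold made of low-dimensional linear submanifolds, under our own assumptions (buffer size, subspace dimension, and the threshold for opening a new submanifold); the paper's dynamic structure and incremental update are more elaborate.

```python
import numpy as np

class Submanifold:
    def __init__(self, dim=8, capacity=32):
        self.dim, self.capacity = dim, capacity
        self.buffer = []                        # recent appearance vectors
        self.mean, self.basis = None, None

    def update(self, x):
        self.buffer = (self.buffer + [x])[-self.capacity:]
        data = np.stack(self.buffer)
        self.mean = data.mean(axis=0)
        # Low-dimensional linear subspace from the top right-singular vectors.
        _, _, vt = np.linalg.svd(data - self.mean, full_matrices=False)
        self.basis = vt[: self.dim]

    def reconstruction_error(self, x):
        if self.basis is None:
            return np.inf
        centred = x - self.mean
        proj = self.basis.T @ (self.basis @ centred)
        return float(np.linalg.norm(centred - proj))

class AppearanceManifold:
    def __init__(self, n_submanifolds=5, dim=8):
        self.subs = [Submanifold(dim) for _ in range(n_submanifolds)]

    def observe(self, x, new_threshold=8.0):
        # Assign the observation to the submanifold that reconstructs it best;
        # if even the best fit is poor, fall back to an unused submanifold.
        errors = [s.reconstruction_error(x) for s in self.subs]
        best = int(np.argmin(errors))
        if errors[best] > new_threshold:
            empty = [i for i, s in enumerate(self.subs) if s.basis is None]
            if empty:
                best = empty[0]
        self.subs[best].update(x)
        return best
```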


Asian Conference on Pattern Recognition | 2011

Robust moving object segmentation with two-stage optimization

Jianwei Ding; Xin Zhao; Kaiqi Huang; Tieniu Tan

Inspired by interactive segmentation algorithms, we propose an online and unsupervised technique to extract moving objects from videos captured by stationary cameras. Our method consists of two main optimization steps, from local optimal extraction to global optimal segmentation. In the first stage, reliable foreground and background pixels are extracted from the input image by modeling the distributions of foreground and background with color and motion cues. These reliable pixels provide hard constraints for the subsequent segmentation step. In the second stage, a globally optimal segmentation of the moving object is obtained with graph cuts. Experimental results on several challenging videos demonstrate the effectiveness and robustness of the proposed approach.
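
The second stage can be illustrated with a toy graph-cut formulation in which the reliable pixels from stage one become hard constraints; this sketch uses networkx's minimum_cut on a small pixel grid and is not the paper's solver.

```python
import networkx as nx
import numpy as np

def graph_cut_segmentation(cost_fg, cost_bg, reliable_fg, reliable_bg, pairwise=1.0):
    # cost_fg / cost_bg: per-pixel penalties for labelling a pixel FG or BG.
    h, w = cost_fg.shape
    BIG = 1e9                                   # effectively infinite capacity
    G = nx.Graph()
    src, sink = "FG", "BG"
    for y in range(h):
        for x in range(w):
            p = (y, x)
            # Terminal links; hard constraints from stage one get huge capacity.
            G.add_edge(src, p, capacity=BIG if reliable_fg[y, x] else float(cost_bg[y, x]))
            G.add_edge(p, sink, capacity=BIG if reliable_bg[y, x] else float(cost_fg[y, x]))
            # 4-connected smoothness links.
            if x + 1 < w:
                G.add_edge(p, (y, x + 1), capacity=pairwise)
            if y + 1 < h:
                G.add_edge(p, (y + 1, x), capacity=pairwise)
    _, (source_side, _) = nx.minimum_cut(G, src, sink)
    mask = np.zeros((h, w), dtype=bool)
    for node in source_side:
        if node != src:
            mask[node] = True
    return mask                                 # True where the pixel is foreground
```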


International Conference on Image and Graphics | 2017

An Application Independent Logic Framework for Human Activity Recognition

Wengang Feng; Yanhui Xiao; Huawei Tian; Yunqi Tang; Jianwei Ding

Cameras may be employed to facilitate data collection, to serve as a data source for controlling actuators, or to monitor the status of a process, which includes tracking. In order to recognize interesting events across different domains, in this study we propose a cross-domain framework supported by relevant theory, which leads to an Open Surveillance concept: a systemic organization of components that will streamline future system development. The main contribution is the logic reasoning framework together with a new set of context-free LLEs that can be utilized across different domains. Human action datasets from MSR and a synthetic human interaction dataset are used for the experiments, and the results demonstrate the effectiveness of our approach.
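
As a purely illustrative toy, assuming the LLEs denote time-stamped low-level events, logic-style composition of such events into a higher-level activity might look like the sketch below; the event names, rule and data structures are invented for this sketch and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class LLE:
    name: str        # e.g. "approach", "extend_arm" (hypothetical event labels)
    actor: str
    start: float
    end: float

def before(a: LLE, b: LLE, gap: float = 2.0) -> bool:
    # a ends shortly before b starts (a temporal ordering constraint).
    return 0.0 <= b.start - a.end <= gap

def recognise_handover(lles):
    # Toy rule: actor A approaches, A extends an arm, then another actor extends an arm.
    approaches = [e for e in lles if e.name == "approach"]
    extends = [e for e in lles if e.name == "extend_arm"]
    for ap in approaches:
        for e1 in extends:
            if e1.actor == ap.actor and before(ap, e1):
                for e2 in extends:
                    if e2.actor != ap.actor and before(e1, e2):
                        return ("handover", ap.actor, e2.actor)
    return None
```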


International Conference on Image and Graphics | 2017

Action Graph Decomposition Based on Sparse Coding

Wengang Feng; Huawei Tian; Yanhui Xiao; Jianwei Ding; Yunqi Tang

A video can be thought of as a visual document that may be represented along different dimensions such as frames, objects and other levels of features. Action recognition is one of the most important and popular tasks, and it requires understanding the temporal and spatial cues in videos. What structures do the temporal relationships share within and across classes of actions? What is the best representation for those temporal relationships? We propose a new temporal relationship representation, called action graphs, based on Laplacian matrices and Allen's temporal relationships, together with a recognition framework based on sparse coding, which mimics the human vision system in representing and inferring knowledge. To the best of our knowledge, we are the first to put forward "action graphs" to represent temporal relationships and the first to use sparse graph coding for event analysis.
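
One way to make the construction concrete, under heavy assumptions (the reduced Allen relation set, the relation-to-weight mapping and the fixed-size vectorisation are ours, and scikit-learn's sparse_encode stands in for the paper's sparse coding):

```python
# Build a graph over action intervals from Allen-style relations, take its
# Laplacian as the "action graph", and sparse-code it against a dictionary.
import numpy as np
from sklearn.decomposition import sparse_encode

ALLEN_WEIGHTS = {"before": 1.0, "meets": 2.0, "overlaps": 3.0, "during": 4.0, "equals": 5.0}

def allen_relation(a, b):
    # a, b are (start, end) intervals; a reduced, symmetric subset of Allen's relations.
    a, b = sorted((tuple(a), tuple(b)))        # order so that interval a starts first
    if a[1] < b[0]:
        return "before"
    if a[1] == b[0]:
        return "meets"
    if a == b:
        return "equals"
    if b[1] <= a[1]:
        return "during"
    return "overlaps"

def action_graph_laplacian(intervals):
    n = len(intervals)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w = ALLEN_WEIGHTS[allen_relation(intervals[i], intervals[j])]
            W[i, j] = W[j, i] = w
    D = np.diag(W.sum(axis=1))
    return D - W                               # unnormalised graph Laplacian

def sparse_code_graph(laplacian, dictionary, alpha=0.1):
    # dictionary: rows are vectorised Laplacians of training action graphs.
    x = laplacian.reshape(1, -1)
    return sparse_encode(x, dictionary, algorithm="lasso_lars", alpha=alpha)
```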


Chinese Conference on Pattern Recognition | 2014

Robust Appearance Learning for Object Tracking in Challenging Scenes

Jianwei Ding; Yunqi Tang; Huawei Tian; Yongzhen Huang

This paper studies appearance learning for object tracking in challenging scenes. We propose a new appearance modeling approach in a deep learning architecture for object tracking. A visual prior is learned from a large set of unlabeled images and then transferred to the appearance model during tracking. Traditional trackers usually track before updating the model at every input image, so drift may occur when there are complex appearance variations. We propose to update the appearance model before tracking, which effectively prevents tracking failures when there are complex appearance changes, and motion parameter estimation becomes more accurate with the updated appearance model. Experimental results on challenging videos demonstrate the robustness and accuracy of the proposed algorithm compared with several state-of-the-art approaches.
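
The update-before-track control flow can be sketched as below; the appearance_model, motion_model and crop interfaces (.update, .score, .propose) are hypothetical placeholders, not the paper's components.

```python
# Control loop that adapts the appearance model from the previous result
# *before* estimating the new state, instead of the usual track-then-update order.
class UpdateBeforeTrackTracker:
    def __init__(self, appearance_model, motion_model, crop):
        self.appearance = appearance_model   # assumed interface: .update(patch), .score(patch)
        self.motion = motion_model           # assumed interface: .propose(state) -> candidate states
        self.crop = crop                     # assumed helper: crop(frame, state) -> image patch
        self.state, self.last_patch = None, None

    def start(self, frame, box):
        self.state, self.last_patch = box, self.crop(frame, box)

    def step(self, frame):
        # 1. Update the appearance model first, from the previous tracking result,
        #    so the current frame is searched with an already-adapted model.
        self.appearance.update(self.last_patch)
        # 2. Then track: score motion-model candidates with the updated model.
        candidates = self.motion.propose(self.state)
        scores = [self.appearance.score(self.crop(frame, c)) for c in candidates]
        self.state = candidates[max(range(len(scores)), key=scores.__getitem__)]
        self.last_patch = self.crop(frame, self.state)
        return self.state
```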

Collaboration


Dive into Jianwei Ding's collaborations.

Top Co-Authors

Kaiqi Huang (Chinese Academy of Sciences)
Yunqi Tang (Chinese People's Public Security University)
Huawei Tian (Chinese People's Public Security University)
Tieniu Tan (Chinese Academy of Sciences)
Yongzhen Huang (Chinese Academy of Sciences)
Wengang Feng (Chinese People's Public Security University)
Yanhui Xiao (Chinese People's Public Security University)
Wei Liu (University of North Carolina at Chapel Hill)
Xin Zhao (University of Science and Technology of China)
Min Li (Chinese Academy of Sciences)