Publication


Featured research published by Duansheng Chen.


Applied Soft Computing | 2016

CNNTracker: Online discriminative object tracking via deep convolutional neural network

Yan Chen; Xiangnan Yang; Bineng Zhong; Shengnan Pan; Duansheng Chen; Huizhen Zhang

The object appearance model is a crucial module for object tracking, and numerous schemes have been developed for object representation with impressive performance. Traditionally, the features used in such appearance models are predefined in a handcrafted, offline way and are not tuned for the tracked object. In this paper, we propose a deep learning architecture to learn the most discriminative features dynamically via a convolutional neural network (CNN). In particular, we enhance the discriminative ability of the appearance model in three ways. First, we design a simple yet effective method to transfer the features learned by CNNs on source tasks with large-scale training data to new tracking tasks with limited training data. Second, to alleviate the tracker drifting caused by model updates, we exploit both the ground-truth appearance of the object labeled in the initial frames and the image observations obtained online. Finally, a heuristic scheme is used to decide whether or not to update the object appearance model. Extensive experiments on challenging video sequences from the CVPR2013 tracking benchmark validate the robustness and effectiveness of the proposed tracking method.
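The conservative update strategy described in this abstract can be sketched as below. The confidence gate and the blending of online observations with the initial ground-truth template are illustrative assumptions, not the paper's exact rule, and all hyperparameters are hypothetical:

```python
import numpy as np

def heuristic_update(model, observation, initial_template, confidence,
                     conf_threshold=0.7, lr=0.1, anchor_weight=0.3):
    """Update the appearance model only when tracking confidence is high.

    Blends the online observation with the ground-truth template from the
    initial frame to limit drift. All parameters are illustrative.
    """
    if confidence < conf_threshold:
        return model  # skip the update: likely occlusion or drift
    blended = (1 - anchor_weight) * observation + anchor_weight * initial_template
    return (1 - lr) * model + lr * blended
```

Anchoring updates to the initial ground-truth frames is one common way to keep an online-updated model from drifting onto background clutter.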


Neurocomputing | 2014

Robust tracking via patch-based appearance model and local background estimation

Bineng Zhong; Yan Chen; Yingju Shen; Yewang Chen; Zhen Cui; Rongrong Ji; Xiaotong Yuan; Duansheng Chen; Weibin Chen

In this paper, to simultaneously address the tracker-drift and occlusion problems, we propose a robust visual tracking algorithm based on a patch-based adaptive appearance model driven by local background estimation. Inspired by human visual mechanisms (i.e., context-awareness and attentional selection), an object is represented with a patch-based appearance model in which each patch outputs a confidence map during tracking. These confidence maps are then combined via a robust estimator to obtain more robust and accurate tracking results. Moreover, we present a local spatial co-occurrence-based background modeling approach to automatically estimate the local context background model of an object of interest captured from a single camera, which may be stationary or moving. Finally, we use the local background estimate to supervise the analysis of possible occlusions and the adaptation of the object's patch-based appearance model. Qualitative and quantitative experimental results on challenging videos demonstrate the robustness of the proposed method.
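The fusion of per-patch confidence maps via a robust estimator can be illustrated with a trimmed mean, which downweights outlier patches (e.g., occluded ones). The paper does not specify this particular estimator; it is one standard choice:

```python
import numpy as np

def fuse_confidence_maps(maps, trim=0.25):
    """Fuse per-patch confidence maps with a trimmed mean.

    Discarding the lowest and highest responses at each location
    approximates a robust estimator, so occluded patches (outliers)
    contribute little to the fused map. `trim` is illustrative.
    """
    maps = np.sort(np.asarray(maps), axis=0)      # sort across patches per pixel
    k = int(len(maps) * trim)
    kept = maps[k:len(maps) - k] if k > 0 else maps
    return kept.mean(axis=0)
```

The peak of the fused map would then give the tracked position; a plain mean would instead let a single occluded patch drag the estimate off target.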


Neurocomputing | 2014

Structured partial least squares for simultaneous object tracking and segmentation

Bineng Zhong; Xiaotong Yuan; Rongrong Ji; Yan Yan; Zhen Cui; Xiaopeng Hong; Yan Chen; Tian Wang; Duansheng Chen; Jiaxin Yu

Segmentation-based tracking methods are a class of powerful tracking methods that have been highly successful in alleviating model drift during online learning of the trackers. These methods typically include a detection component and a segmentation component: the tracked objects are first located by detection, and the detection results are then used to guide the segmentation process to reduce the noise in the training data. However, one limitation is that detection and segmentation are treated entirely separately, so drift in detection may corrupt the segmentation results and further aggravate the tracker's drift. In this paper, we propose a novel method that addresses this limitation by incorporating structured labeling information into the partial least squares analysis algorithm for simultaneous object tracking and segmentation. This allows novel structured labeling constraints to be placed directly on the tracked objects, providing a useful contour constraint that alleviates the drifting problem. We show through both visual results and quantitative measurements on challenging sequences that our method produces more robust tracking results while obtaining accurate object segmentation.


Neurocomputing | 2013

Background subtraction driven seeds selection for moving objects segmentation and matting

Bineng Zhong; Yan Chen; Yewang Chen; Rongrong Ji; Ying Chen; Duansheng Chen; Hanzi Wang

In this paper, we address the difficult task of moving-object segmentation and matting in dynamic scenes. Toward this end, we propose a new automatic way to integrate background subtraction (BGS) and an alpha-matting technique via a heuristic seeds-selection scheme. Specifically, our method consists of three main steps. First, we use a novel BGS method as an attention mechanism, generating candidate foreground pixels by tuning it to keep false positives and false negatives as low as possible. Second, a connected-components algorithm produces bounding boxes for the labeled foreground pixels. Finally, matting of the object associated with a given bounding box is performed using a heuristic seeds-selection scheme, guided by top-down knowledge. Experimental results demonstrate the efficiency and effectiveness of our method.
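The second step, extracting bounding boxes of connected foreground regions, can be sketched with a simple 4-connected BFS labeling; the paper does not specify its implementation, and the minimum-area filter is an illustrative assumption:

```python
import numpy as np
from collections import deque

def foreground_boxes(mask, min_area=2):
    """Bounding boxes of connected foreground regions in a binary mask.

    4-connected BFS labeling; each returned box could then seed the
    matting step. `min_area` (illustrative) drops noise specks.
    """
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    boxes = []
    h, w = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        queue, comp = deque([(sy, sx)]), []
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if len(comp) >= min_area:
            ys, xs = zip(*comp)
            boxes.append((int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys))))
    return boxes  # each box as (x0, y0, x1, y1)
```

In practice a library routine (e.g., OpenCV's `connectedComponentsWithStats`) would replace this hand-rolled BFS.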


Neurocomputing | 2015

Online learning 3D context for robust visual tracking

Bineng Zhong; Yingju Shen; Yan Chen; Weibo Xie; Zhen Cui; Hong-Bo Zhang; Duansheng Chen; Tian Wang; Xin Liu; Shu-Juan Peng; Jin Gou; Ji-Xiang Du; Jing Wang; Wenming Zheng

In this paper, we study the challenging problem of tracking a single object in a complex dynamic scene. In contrast to most existing trackers, which exploit only 2D color or gray images to learn the appearance model of the tracked object online, we take a different approach. Inspired by the increased popularity of depth sensors, we put more emphasis on 3D context to prevent model drift and handle occlusion. Specifically, we propose a 3D context-based object tracking method that learns a set of 3D context key-points, which have spatial–temporal co-occurrence correlations with the tracked object, for collaborative tracking in binocular video data. We first learn the 3D context key-points via spatial–temporal constraints on their spatial and depth coordinates. The position of the object of interest is then determined by probability voting from the learned 3D context key-points. Moreover, with depth information, a simple yet effective occlusion-handling scheme is proposed to detect occlusion and recover from it. Qualitative and quantitative experimental results on challenging video sequences demonstrate the robustness of the proposed method.
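The probability-voting step can be illustrated as a weighted average of votes cast by context key-points; the offsets and co-occurrence weights below are hypothetical stand-ins for the learned spatial–temporal model:

```python
import numpy as np

def vote_object_position(keypoints, offsets, weights):
    """Estimate the object position by weighted voting from context key-points.

    Each key-point casts a vote at (its position + its learned offset to
    the object); votes are averaged with co-occurrence weights. All
    inputs here are illustrative stand-ins for the learned model.
    """
    votes = np.asarray(keypoints, dtype=float) + np.asarray(offsets, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * votes).sum(axis=0) / w.sum()
```

When the target is occluded, unoccluded context key-points can still vote, which is what makes such context models useful for occlusion handling.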


PLOS ONE | 2016

Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism

Bineng Zhong; Jun Zhang; Pengfei Wang; Ji-Xiang Du; Duansheng Chen

To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and feature selection) is one of the key issues. A common assumption in visual tracking is that the raw video sequences are clean, whereas real-world data contains significant noise and irrelevant patterns; consequently, the learned features may be noisy and not all relevant. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism, and can therefore adaptively focus on task-relevant patterns (i.e., the target object) while ignoring task-irrelevant patterns (i.e., the surrounding background). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer the rich feature hierarchies from the offline pre-trained CPGDN to online tracking, where the model is fine-tuned to adapt to the specific tracked object. Finally, to alleviate the tracker drifting problem, inspired by the observation that a visual target should be an object, we combine an edge-box-based object proposal method to further improve tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method.
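The core of a point-wise gating mechanism can be sketched as a sigmoid gate that scales each feature into [0, 1] before it reaches the classifier. The weights below are illustrative stand-ins for the learned CPGDN parameters:

```python
import numpy as np

def pointwise_gate(features, gate_weights, gate_bias):
    """Point-wise gating: each feature is scaled by a learned [0, 1] gate.

    The gate suppresses task-irrelevant activations (e.g., background
    clutter) while passing task-relevant ones through. `gate_weights`
    and `gate_bias` stand in for learned parameters.
    """
    gate = 1.0 / (1.0 + np.exp(-(features @ gate_weights + gate_bias)))
    return features * gate
```

Because the gate depends on the input itself, the selection is dynamic: the same network can pass different feature subsets for different frames.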


ACM Transactions on Multimedia Computing, Communications, and Applications | 2017

Sparse Representation-Based Semi-Supervised Regression for People Counting

Hong-Bo Zhang; Bineng Zhong; Qing Lei; Ji-Xiang Du; Jialin Peng; Duansheng Chen; Xiao Ke

Label imbalance and the insufficiency of labeled training samples are major obstacles for most methods that count people in images or videos. In this work, a sparse representation-based semi-supervised regression method is proposed to count people in images with limited labeled data. The basic idea is to predict the unlabeled training data, select reliable samples to expand the labeled training set, and retrain the regression model. In the algorithm, the initial regression model, learned from the labeled training data, is used to predict the number of people in the unlabeled training set. The unlabeled training samples are then regarded as an over-complete dictionary, so that each feature of the labeled training data can be expressed as a sparse linear approximation of the unlabeled data. In turn, the labels of the labeled training data can be estimated from this sparse reconstruction in feature space, and the confidence in labeling an unlabeled sample is estimated from its reconstruction error. The training set is updated by selecting unlabeled samples with minimal reconstruction errors, and the regression model is retrained on the new training set; a co-training-style method is applied during this process. The experimental results demonstrate that the proposed method achieves lower mean square error and mean absolute error than state-of-the-art people-counting methods.
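The sparse linear approximation over the unlabeled dictionary can be computed with any sparse solver; a textbook orthogonal matching pursuit sketch (not necessarily the paper's solver) also yields the reconstruction error used as the confidence score:

```python
import numpy as np

def omp(dictionary, signal, n_atoms=2):
    """Orthogonal matching pursuit: sparse code of `signal` over `dictionary`.

    `dictionary` has one column per (unlabeled) sample. Returns the
    sparse coefficient vector and the residual norm; the latter serves
    as the reconstruction-error confidence. A textbook sketch.
    """
    residual = signal.astype(float)
    support = []
    coef = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        scores = np.abs(dictionary.T @ residual)
        scores[support] = -np.inf                 # do not reselect atoms
        support.append(int(np.argmax(scores)))
        sub = dictionary[:, support]
        sol, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ sol
    coef[support] = sol
    return coef, float(np.linalg.norm(residual))
```

Samples whose features reconstruct with small residuals would be the "reliable" ones moved into the labeled training set.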


SpringerPlus | 2016

Multi-surface analysis for human action recognition in video.

Hong-Bo Zhang; Qing Lei; Bineng Zhong; Ji-Xiang Du; Jialin Peng; Tsung-Chih Hsiao; Duansheng Chen

The majority of methods for recognizing human actions are based on single-view video or multi-camera data. In this paper, we propose a novel multi-surface video analysis strategy. The video is expressed as a three-surface motion feature (3SMF) and a spatio-temporal interest feature. 3SMF is extracted from the motion history image on three different video surfaces: the horizontal–vertical, horizontal–time, and vertical–time surfaces. In contrast to several previous studies, the prior probability is estimated from 3SMF rather than assumed uniform. Finally, we model the relationship score between each video and action as a probabilistic inference to bridge the feature descriptors and action categories. We demonstrate our method by comparing it to several state-of-the-art action recognition benchmarks.
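A common motion history image formulation (assumed here as the basis on which 3SMF is built; the decay and threshold parameters are illustrative) keeps recent motion bright and lets older motion fade:

```python
import numpy as np

def motion_history_image(frames, tau=5, threshold=0.1):
    """Motion history image: recent motion is bright, older motion decays.

    Where the frame difference exceeds `threshold`, the MHI is set to
    `tau`; elsewhere it decays by one per frame, floored at zero.
    Parameters are illustrative.
    """
    mhi = np.zeros_like(frames[0], dtype=float)
    for prev, cur in zip(frames, frames[1:]):
        moving = np.abs(cur.astype(float) - prev.astype(float)) > threshold
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi
```

Slicing such a volume along the time axis gives the horizontal–time and vertical–time surfaces the abstract refers to.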


Neurocomputing | 2016

Higher order partial least squares for object tracking

Bineng Zhong; Xiangnan Yang; Yingju Shen; Cheng Wang; Tian Wang; Zhen Cui; Hong-Bo Zhang; Xiaopeng Hong; Duansheng Chen

Object tracking has a wide range of applications, and great effort has been spent on building object appearance models from image features encoded in a vector as observations. Since a video or image sequence is intrinsically a multi-dimensional matrix, i.e., a higher-order tensor, these methods cannot fully utilize the spatial-temporal correlations within the 2D image ensembles and inevitably lose much useful information. In this paper, we propose a novel 4D object tracking method via higher-order partial least squares (HOPLS), a generalized multilinear regression method. To do so, we first represent each training and testing example as a set of image instances of a target or background object. We then view object tracking as a multi-class classification problem and construct the 4D data matrix and 2D labeling matrix for HOPLS. Furthermore, we use HOPLS to adaptively learn a low-dimensional discriminative feature subspace for object representation. Finally, a simple yet effective update scheme is used to update the object appearance model. Experimental results on challenging video sequences demonstrate the robustness and effectiveness of the proposed 4D tracking method.


BioMed Research International | 2016

Robust Individual-Cell/Object Tracking via PCANet Deep Network in Biomedicine and Computer Vision.

Bineng Zhong; Shengnan Pan; Cheng Wang; Tian Wang; Ji-Xiang Du; Duansheng Chen; Liujuan Cao

Tracking individual cells/objects over time is important for understanding drug-treatment effects on cancer cells and for video surveillance. A fundamental problem of individual-cell/object tracking is to simultaneously handle the appearance variations caused by intrinsic and extrinsic factors. In this paper, inspired by the architecture of deep learning, we propose a robust feature learning method for constructing discriminative appearance models without large-scale pretraining. Specifically, in the initial frames, an unsupervised method is first used to learn an abstract feature representation of the target by combining classic principal component analysis (PCA) with recent deep learning representation architectures. We use the learned PCA eigenvectors as filters and develop a novel algorithm that represents a target with a PCA-based filter-bank layer, a nonlinear layer, and a patch-based pooling layer. Based on this feature representation, a neural network with one hidden layer is then trained in a supervised mode to construct a discriminative appearance model. Finally, to alleviate the tracker drifting problem, a sample update scheme is carefully designed to keep track of the most representative and diverse samples during tracking. We test the proposed method on two standard individual cell/object tracking benchmarks to show our tracker's state-of-the-art performance.
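The PCA filter-bank idea, using leading eigenvectors of the patch covariance as convolution filters in PCANet style, can be sketched as follows; the patch size and filter count are illustrative:

```python
import numpy as np

def pca_filters(patches, n_filters=4):
    """Learn a PCA filter bank from vectorized image patches (one per row).

    The leading eigenvectors of the patch covariance serve as
    convolution filters, as in PCANet-style feature learning; no
    large-scale pretraining is needed. `n_filters` is illustrative.
    """
    patches = patches - patches.mean(axis=0)     # center the patches
    cov = patches.T @ patches / len(patches)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :n_filters].T     # top components, one filter per row
```

Each row, reshaped back to the patch size, acts as one filter of the first layer; the abstract's nonlinear and pooling layers would follow the resulting filter responses.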

Collaboration


Dive into Duansheng Chen's collaborations.

Top Co-Authors

Zhen Cui

Nanjing University of Science and Technology
