
Publications

Featured research published by Jianfang Dou.


Neurocomputing | 2015

Moving object detection based on incremental learning low rank representation and spatial constraint

Jianfang Dou; Jianxun Li; Qin Qin; Zimei Tu

Background modeling and subtraction, the task of detecting moving objects in a scene, is an important step in video analysis. In this paper, we present a novel moving object detection method based on online low-rank matrix recovery and graph cut for monocular video sequences. First, the K-SVD method is used to initialize the dictionary that constructs the background model; foreground detection is performed with augmented Lagrange multipliers (ALM), and the foreground values are refined by a spatial smoothness constraint to separate the background and foreground information. Then, clusters of the foreground and background are obtained by applying mean-shift clustering to each. Third, an S/T network is initialized with the image pixels as nodes (except the S and T nodes), and the data and smoothness terms of the graph are calculated. Finally, max-flow/min-cut segmentation of the S/T network extracts the moving objects. Online dictionary learning is adopted to update the background model. Experimental results on indoor and outdoor videos demonstrate the efficiency of the proposed method.
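As a rough illustration of the low-rank idea behind this method (not the paper's full ALM/K-SVD pipeline), the background of a short clip can be recovered as a rank-1 approximation of the stacked frames, with moving objects left in the residual. The function name, toy data and threshold below are illustrative:

```python
import numpy as np

def lowrank_background(frames, rank=1):
    # frames: (num_frames, H, W) grayscale stack
    n, h, w = frames.shape
    D = frames.reshape(n, h * w).astype(float).T   # pixels x frames data matrix
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # low-rank part = background
    S = D - L                                      # residual = moving objects
    return L.T.reshape(n, h, w), S.T.reshape(n, h, w)

# toy sequence: constant background with one bright pixel moving along a row
frames = np.full((6, 16, 16), 50.0)
for t in range(6):
    frames[t, 2, t] = 255.0
bg, fg = lowrank_background(frames)
mask = np.abs(fg) > 100.0   # threshold the residual to get a foreground mask
```

In the paper this decomposition is computed incrementally and followed by spatial smoothing and graph-cut segmentation; the sketch only shows why the static part lands in the low-rank term.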


Signal, Image and Video Processing | 2017

Background subtraction based on circulant matrix

Jianfang Dou; Qin Qin; Zimei Tu

Detecting moving objects in a scene is a fundamental and critical step for many high-level computer vision tasks. However, background subtraction modeling is still an open and challenging problem, particularly in practical scenarios with drastic illumination changes and dynamic backgrounds. In this paper, we present a novel background modeling method, based on a circular shift operator, that focuses on dealing with complex environments. The background model is constructed by performing circular shifts on the neighborhood of each pixel, which forms a basic region unit. The foreground mask is obtained in two stages: the first subtracts the established background from the current frame to obtain a distance map; the second applies graph cut to the distance map. To adapt to background changes, the background model is updated with an adaptive update rate. Experimental results on indoor and outdoor videos demonstrate the efficiency of the proposed method.
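A minimal sketch of the circular-shift idea, assuming grayscale images and using a 3x3 shift neighborhood as the "region unit": each pixel is compared against every circularly shifted copy of the background, and the distance map keeps the smallest difference. The names and the threshold below are illustrative, not from the paper:

```python
import numpy as np

def circulant_distance_map(background, frame, shifts=(-1, 0, 1)):
    # collect circularly shifted copies of the background as candidate models
    candidates = []
    for dy in shifts:
        for dx in shifts:
            candidates.append(np.roll(np.roll(background, dy, axis=0), dx, axis=1))
    stack = np.stack(candidates)                        # (9, H, W)
    return np.min(np.abs(stack - frame[None]), axis=0)  # distance to nearest shift

bg = np.tile(np.array([[0.0, 100.0]]), (6, 3))   # 6x6 striped background
frame = bg.copy()
frame[3, 3] = 255.0                              # a foreground pixel
dist = circulant_distance_map(bg, frame)
mask = dist > 50.0
```

The paper's second stage replaces the plain threshold with a graph cut on the distance map, which enforces spatial coherence of the mask.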


Neurocomputing | 2015

Robust visual tracking based on incremental discriminative projective non-negative matrix factorization

Jianfang Dou; Jianxun Li; Qin Qin; Zimei Tu

Visual tracking usually requires an object appearance model that is robust to changing illumination, partial occlusion, large pose variation and other factors encountered in video. Most existing visual tracking algorithms tend to drift away from targets, and may even fail to track them, in the presence of significant appearance variation or challenging situations. To address this issue, we propose a robust tracking algorithm based on discriminative projective non-negative matrix factorization and a robust inter-frame matching scheme. The models of the target and the background are represented by the basis matrices of non-negative matrix factorization. To adapt the basis matrices to the variation of foreground and background during tracking, an incremental learning method is employed to update them. Robust inter-frame matching by a bidirectional method and Delaunay triangulation is adopted to improve the proposal distribution of the particle filter, thus enhancing tracking performance. Template matching is used to correct target drift when the result of the discriminative part is unreliable. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experiments on publicly available benchmark video sequences demonstrate the effectiveness and robustness of our approach.
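The factorization at the core of the appearance model can be illustrated with plain multiplicative updates (a basic NMF, not the paper's discriminative projective variant, and batch rather than incremental; all names below are illustrative):

```python
import numpy as np

def nmf(V, rank=2, iters=200, eps=1e-9):
    # basic multiplicative-update NMF: V ~ W @ H with W, H >= 0
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis (appearance model)
    return W, H

V = np.random.default_rng(1).random((8, 5))    # stand-in for vectorized image patches
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the tracker, the columns of W play the role of the target/background basis that incremental learning keeps up to date as new frames arrive.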


Multimedia Tools and Applications | 2017

Robust visual tracking based on generative and discriminative model collaboration

Jianfang Dou; Qin Qin; Zimei Tu

An effective object appearance model is one of the keys to successful visual tracking. Since the appearance of a target and its environment change dynamically, the majority of existing visual tracking algorithms tend to drift away from targets. To address this issue, we propose a robust tracking algorithm that integrates a generative and a discriminative model. The object appearance model consists of a generative target model and a discriminative classifier. For the generative target model, we adopt a weighted structural local sparse appearance model that combines patch-based gray values and Histogram of Oriented Gradients features as the patch dictionary. By sampling positives and negatives, alignment-pooling features are obtained from the patch dictionary through local sparse coding, and a support vector machine is then trained as the discriminative classifier. The proposed method is embedded into a Bayesian inference framework for visual tracking. A combined matching method is adopted to improve the proposal distribution of the particle filter. Moreover, to adapt to appearance changes, the patch dictionary and discriminative classifier are updated by incremental learning every five frames. Experimental results on publicly available benchmark video sequences demonstrate the accuracy and effectiveness of our tracker.
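The discriminative half of the model is a classifier trained on sampled positives and negatives. As a stand-in for that training step (hinge-loss subgradient descent on toy 2-D features rather than the paper's alignment-pooling features), a tiny linear SVM might look like:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    # linear SVM via stochastic subgradient descent on the hinge loss
    # X: (n, d) features, y: labels in {-1, +1}
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                        # point inside margin: push it out
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                 # only regularize
                w -= lr * lam * w
    return w, b

# separable toy data standing in for pooled patch features
rng = np.random.default_rng(2)
pos = rng.normal(2.0, 0.3, size=(20, 2))    # "target" samples
neg = rng.normal(-2.0, 0.3, size=(20, 2))   # "background" samples
X = np.vstack([pos, neg])
y = np.array([1] * 20 + [-1] * 20)
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

In the tracker this classifier scores candidate windows; retraining it every few frames (every five in the paper) is what keeps it current as appearance changes.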


Chinese Control and Decision Conference | 2016

Infrared and visible image registration based on SIFT and sparse representation

Jianfang Dou; Qin Qin; Zimei Tu; Xishuai Peng; Yuanxiang Li

This paper proposes a visible-infrared image registration method based on sparse representation using Scale Invariant Feature Transform (SIFT) features. Firstly, the inverse image of the infrared image is obtained, the visible and infrared images are enhanced through Brightness Preserving Bi-Histogram Equalization (BPHE), and SIFT feature points and descriptors are extracted. Then, the SIFT descriptors of the visible image are sparsely represented by the descriptors of the infrared image, and the initial matches are found via l1 minimization; RANSAC is adopted to filter out the mismatches. Finally, the transform parameters are optimized with an improved Powell algorithm. Experimental results show that the proposed method improves registration performance compared to SIFT-based methods.
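The sparse-representation matching step can be caricatured with a single matching-pursuit step: each visible-image descriptor picks the infrared descriptor ("atom") with the highest correlation. This one-atom sketch stands in for the paper's full l1 minimization, and all names and data below are illustrative:

```python
import numpy as np

def one_atom_match(desc_vis, dict_ir):
    # normalize rows, then pick the most correlated infrared "atom" per descriptor
    A = dict_ir / np.linalg.norm(dict_ir, axis=1, keepdims=True)
    B = desc_vis / np.linalg.norm(desc_vis, axis=1, keepdims=True)
    corr = B @ A.T                   # cosine similarities
    return np.argmax(corr, axis=1)   # a one-atom sparse code per descriptor

rng = np.random.default_rng(3)
dict_ir = rng.random((10, 16))                               # infrared descriptors
desc_vis = dict_ir[[4, 7, 1]] + 0.01 * rng.random((3, 16))   # noisy visible copies
matches = one_atom_match(desc_vis, dict_ir)
```

A real l1 solver would spread each descriptor over several atoms and use the coefficient magnitudes to rank candidate matches before the RANSAC stage.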


Multimedia Tools and Applications | 2018

Image fusion based on wavelet transform with genetic algorithms and human visual system

Jianfang Dou; Qin Qin; Zimei Tu

A novel wavelet-based approach for multi-focus image fusion is presented, developed by taking into account not only the characteristics of the human visual system (HVS) but also the optimization of an image quality index to match human perception. After the multi-focus images to be fused are decomposed by the wavelet transform, different fusion schemes are proposed for combining the coefficients: coefficients in the low-frequency band are fused using genetic algorithms to estimate the optimal weights according to the Edge-Association Index, while coefficients in the high-frequency bands are fused by weights derived from the texture masking of the human visual system. To suppress noise and guarantee the homogeneity of the fused image, all coefficients subsequently undergo a window-based consistency verification process. The fused image is finally constructed by the inverse wavelet transform of the composite coefficients. To quantitatively evaluate the performance of the proposed method, a series of experiments and comparisons with existing fusion methods is carried out. Experimental results on simulated and real images indicate that the proposed method is effective and yields satisfactory fusion results.
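The overall pipeline (decompose, fuse sub-bands, invert) can be sketched with a one-level Haar transform and a common baseline rule, averaging the low band and keeping the larger-magnitude detail coefficients, in place of the paper's genetic-algorithm and HVS-weighted scheme:

```python
import numpy as np

def haar2(img):
    # one-level 2D Haar transform: returns LL, LH, HL, HH sub-bands
    a = (img[0::2] + img[1::2]) / 2   # row averages
    d = (img[0::2] - img[1::2]) / 2   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    # exact inverse of haar2
    h, w = LL.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.zeros((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def fuse(img1, img2):
    # average the low band; keep larger-magnitude detail coefficients
    b1, b2 = haar2(img1), haar2(img2)
    LL = (b1[0] + b2[0]) / 2
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
               for c1, c2 in zip(b1[1:], b2[1:])]
    return ihaar2(LL, *details)

img1 = np.arange(64.0).reshape(8, 8)
img2 = np.flipud(img1)
fused = fuse(img1, img2)
```

The proposed method replaces the fixed averaging weight with GA-optimized weights and the max-abs rule with HVS texture-masking weights, and adds a consistency-verification pass over the chosen coefficients.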


Chinese Control and Decision Conference | 2017

Robust edit propagation based on Hessian Local Linear Embedding

Jianfang Dou; Qin Qin; Zimei Tu

Edit propagation is a technique that propagates image edits (e.g., colorization and recoloring) performed via user strokes to the entire image, based on the similarity of image features. Although manifold preservation has been used for edit propagation, there is still much room for improvement. To this end, this paper proposes an edit propagation method based on Hessian Local Linear Embedding, a modification of locally linear embedding, with the distributed field descriptor as the image feature. We demonstrate that the proposed edit propagation approach achieves better results than previous work.
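The propagation idea, reduced to its simplest form: clamp the pixels covered by user strokes and repeatedly average every other pixel's edit value over its nearest neighbors in feature space. This kNN-graph diffusion is only a stand-in for the Hessian LLE machinery; all names below are illustrative:

```python
import numpy as np

def propagate_edits(features, strokes, k=3, iters=50):
    # features: (n, d) per-pixel feature vectors
    # strokes:  (n,) edit values, NaN where the user drew no stroke
    n = len(features)
    d2 = ((features[:, None] - features[None]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)               # a pixel is not its own neighbor
    nbrs = np.argsort(d2, axis=1)[:, :k]       # k nearest neighbors in feature space
    e = np.where(np.isnan(strokes), 0.0, strokes)
    known = ~np.isnan(strokes)
    for _ in range(iters):
        e = e[nbrs].mean(axis=1)               # diffuse over the kNN graph
        e[known] = strokes[known]              # clamp user-stroked pixels
    return e

# two feature clusters; one stroke per cluster
feats = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
strokes = np.array([1.0, np.nan, np.nan, 0.0, np.nan, np.nan])
edits = propagate_edits(feats, strokes, k=2)
```

Pixels similar in feature space inherit the nearby stroke value; the paper's Hessian LLE step instead learns locally linear reconstruction weights so that the propagation preserves the feature manifold rather than just neighbor averages.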


Chinese Control and Decision Conference | 2017

Circulant structures based moving object detection

Jianfang Dou; Qin Qin; Zimei Tu

Background modeling and subtraction, the task of detecting moving objects in a scene, is a fundamental and critical step for many high-level computer vision tasks. However, background subtraction modeling is still an open and challenging problem, particularly in practical scenarios with drastic illumination changes and dynamic backgrounds. In this paper, we present a novel background modeling method, based on a circular shift operator, that focuses on dealing with complex environments. Firstly, the neighborhood of each pixel forms a region that serves as a basic unit, and circular shifts of this unit construct the background model. Then the established background is subtracted from the current frame to obtain a distance map. Thirdly, graph cut is applied to the distance map to perform foreground detection. The background model is updated with an adaptive update rate. Experimental results on indoor and outdoor videos demonstrate the efficiency of the proposed method.


Pattern Recognition and Image Analysis | 2017

Robust image matching with cascaded outliers removal

Jianfang Dou; Qin Qin; Zimei Tu

Finding feature correspondences between a pair of images is a fundamental problem in computer vision for 3D reconstruction and target recognition. In practice, feature-based matching methods often produce a high percentage of incorrect matches, which decreases the matching accuracy and is unsuitable for subsequent processing. In this paper, we develop a novel algorithm to find more correct correspondences. Firstly, SURF keypoints are detected and SURF descriptors extracted. Then the initial matches are obtained based on the Euclidean distance between SURF descriptors. Thirdly, false matches are removed by sparse representation theory; at the same time, the information of the SURF keypoints, such as scale and orientation, forms geometrical constraints that further remove incorrect matches. Finally, Delaunay triangulation is adopted to refine the matches and obtain the final set. Experimental results on real-world image matching datasets demonstrate the effectiveness and robustness of the proposed method.
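One stage of such a cascade, the keypoint-geometry constraint, can be sketched as follows: matches whose scale ratio or orientation change deviates from the global consensus are discarded. This is a simplified stand-in for the paper's full cascade (sparse representation plus Delaunay refinement); names and tolerances are illustrative:

```python
import numpy as np

def cascade_filter(matches, scales1, scales2, angles1, angles2,
                   scale_tol=0.3, angle_tol=0.3):
    # keep matches whose keypoint scale ratio and orientation difference
    # agree with the global (median) transformation between the images
    m = np.asarray(matches)
    log_ratio = np.log(scales2[m[:, 1]] / scales1[m[:, 0]])   # log scale ratio
    dang = angles2[m[:, 1]] - angles1[m[:, 0]]                # orientation change
    keep = (np.abs(log_ratio - np.median(log_ratio)) < scale_tol) & \
           (np.abs(dang - np.median(dang)) < angle_tol)
    return m[keep]

# eight tentative matches; matches 6 and 7 violate the dominant geometry
matches = [(i, i) for i in range(8)]
scales1 = np.ones(8)
scales2 = 2.0 * np.ones(8); scales2[6] = 5.0     # wrong scale ratio
angles1 = np.zeros(8)
angles2 = 0.5 * np.ones(8); angles2[7] = 2.0     # wrong rotation
good = cascade_filter(matches, scales1, scales2, angles1, angles2)
```

Because a rigid or similarity transform changes all keypoint scales and orientations consistently, outliers reveal themselves as deviations from the median, which is the intuition the paper's geometrical-constraint stage exploits before the final Delaunay refinement.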


LIDAR Imaging Detection and Target Recognition 2017 | 2017

Design of platform for removing screws from LCD display shields

Qin Qin; Jianfang Dou; Tu Zimei; Dongdong Zhu; Yueguang Lv; Jianzhong Su; Wei Gong; Jian Yang; Weimin Bao; Weibiao Chen; Zelin Shi; Jindong Fei; Shensheng Han; Weiqi Jin

Removing the screws on the sides of a shield is a necessary step in disassembling a computer LCD display. To automate this process, a platform has been designed for removing the screws from display shields. The platform uses virtual instrument technology, with LabVIEW as the development environment, to design the mechanical structure together with motion control, human-computer interaction and target recognition technologies. It removes the screws from the sides of an LCD display shield mechanically, thereby facilitating subsequent separation and recycling.

Collaboration


Dive into Jianfang Dou's collaborations.

Top Co-Authors

Qin Qin
Shanghai Second Polytechnic University

Zimei Tu
Shanghai Second Polytechnic University

Jian Yang
China University of Geosciences

Jianxun Li
Shanghai Jiao Tong University

Xishuai Peng
Shanghai Jiao Tong University

Yuanxiang Li
Shanghai Jiao Tong University