Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Chunlei Yang is active.

Publication


Featured research published by Chunlei Yang.


The Visual Computer | 2017

Salient object detection in complex scenes via D-S evidence theory based region classification

Chunlei Yang; Jiexin Pu; Yongsheng Dong; Zhonghua Liu; Lingfei Liang; Xiaohong Wang

In complex scenes, multiple objects are often concealed in cluttered backgrounds. Their saliency is difficult to detect with conventional methods, mainly because color contrast alone cannot serve as the saliency measure; other image features must be involved in saliency detection to obtain more accurate results. This paper presents a novel method based on Dempster-Shafer (D-S) evidence theory based region classification. In the proposed framework, depth feature information extracted from a coarse map is employed to generate initial feature evidences, which indicate the probabilities of regions belonging to the foreground or background. Based on D-S evidence theory, both uncertainty and imprecision are modeled, and conflicts between different feature evidences are properly resolved. Moreover, the method can automatically determine the mass functions of the two-stage evidence fusion for region classification. According to the classification result and region relevance, a more precise saliency map can then be generated by manifold ranking. To further improve the detection results, a guided filter is used to optimize the saliency map. Both qualitative and quantitative evaluations on three challenging public benchmark datasets demonstrate that the proposed method outperforms the compared state-of-the-art methods, especially for detection in complex scenes.
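The evidence fusion the abstract describes rests on Dempster's rule of combination. A minimal sketch over the frame {foreground, background}, with illustrative mass values that are not taken from the paper:

```python
# Dempster's rule of combination over the frame {F (foreground), B (background)},
# the kind of fusion used to merge per-region feature evidences.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions defined on subsets (frozensets) of the frame."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        w = wa * wb
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w
        else:
            conflict += w  # mass falling on the empty set
    # Dempster's normalization: rescale by the non-conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

F, B = frozenset({"F"}), frozenset({"B"})
FB = F | B  # total ignorance

# Hypothetical evidences, e.g. from a color feature and a depth feature
m_color = {F: 0.6, B: 0.1, FB: 0.3}
m_depth = {F: 0.5, B: 0.2, FB: 0.3}

fused = dempster_combine(m_color, m_depth)
print(fused)  # belief that the region is foreground increases after fusion
```

Two weak, agreeing foreground evidences reinforce each other after combination, while the conflicting mass is discarded by the normalization step.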


The Visual Computer | 2018

Multi-scale counting and difference representation for texture classification

Yongsheng Dong; Jinwang Feng; Chunlei Yang; Xiaohong Wang; Lintao Zheng; Jiexin Pu

Multi-scale analysis has been widely used to construct texture descriptors by modeling the coefficients in transformed domains. However, the resulting descriptors are not robust to rotated textures in texture classification. To alleviate this problem, in this paper we propose a multi-scale counting and difference representation (CDR) of image textures for texture classification. In particular, we first extract a single-scale CDR feature consisting of the local counting vector (LCV) and the differential excitation vector (DEV). The LCV is built to capture different types of textural structures using the discrete local counting projection, while the DEV describes the difference information of textures in accordance with the differential excitation projection. Finally, the multi-scale CDR feature of a texture image is constructed by combining CDRs at different scales. Experimental results on the Brodatz, VisTex, and Outex databases demonstrate that the proposed multi-scale CDR-based texture classification method outperforms five representative texture classification methods.
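The multi-scale construction can be sketched independently of the paper's specific LCV/DEV descriptors: extract a fixed-length feature at several image scales and concatenate. Here a plain intensity histogram stands in for the per-scale feature; all names and parameters are illustrative:

```python
# Multi-scale descriptor sketch: per-scale feature vectors concatenated
# across a simple image pyramid. The per-scale feature is a toy histogram,
# not the paper's CDR.
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (crop to even size first)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def scale_feature(img, bins=16):
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()  # normalize so features at different scales are comparable

def multiscale_descriptor(img, n_scales=3):
    feats = []
    for _ in range(n_scales):
        feats.append(scale_feature(img))
        img = downsample(img)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
texture = rng.random((64, 64))
desc = multiscale_descriptor(texture)
print(desc.shape)  # (48,) = 3 scales x 16 bins
```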


IEEE Signal Processing Letters | 2017

Extended Locality-Constrained Linear Self-Coding for Saliency Detection

Chunlei Yang; Jiexin Pu; Guo-Sen Xie; Yongsheng Dong; Zhonghua Liu

In complex scenes, foreground saliency can hardly be detected completely, which may further yield ambiguous object cues for other computer vision tasks. In this letter, an extended locality-constrained linear self-coding (eLLsC) scheme is proposed to help solve the saliency detection problem in complex scenes. The locality of both spatial relation and feature distance is preserved in eLLsC, so the transformed code involved in manifold ranking promotes generation of a saliency map with a more complete foreground and clearer boundaries. Experimental results on three saliency detection benchmarks demonstrate the effectiveness of the proposed hybrid method.
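The coding scheme eLLsC extends is locality-constrained linear coding (LLC), which reconstructs each sample from a codebook while penalizing distant codebook atoms. A minimal sketch of the standard analytical LLC solution, with a toy codebook and illustrative parameters (not the letter's exact formulation):

```python
# Locality-constrained linear coding (LLC) sketch:
# minimize ||x - B^T c||^2 + lam * ||d * c||^2  subject to  1^T c = 1,
# where d grows with the distance from x to each codebook atom.
import numpy as np

def llc_code(x, B, lam=1e-2, sigma=1.0):
    # Locality adaptor: larger penalty for codebook atoms far from x
    dist = np.linalg.norm(B - x, axis=1)
    d = np.exp(dist / sigma)
    Z = B - x                      # shift atoms to the sample
    C = Z @ Z.T                    # data covariance of the shifted atoms
    c = np.linalg.solve(C + lam * np.diag(d * d), np.ones(len(B)))
    return c / c.sum()             # enforce the shift-invariance constraint

B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])  # toy codebook
x = np.array([0.4, 0.1])
c = llc_code(x, B)
print(c)  # the distant atom [5, 5] receives a near-zero weight
```

The locality penalty makes the code approximately sparse and local, which is what lets it improve the adjacency structure used later in manifold ranking.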


The Visual Computer | 2018

Scene classification-oriented saliency detection via the modularized prescription

Chunlei Yang; Jiexin Pu; Yongsheng Dong; Guo-Sen Xie; Yanna Si; Zhonghua Liu

Saliency detection technology has been greatly developed and applied in recent years. However, the performance of current methods is not satisfactory in complex scenes. One reason is that performance improvement is often pursued through complicated mathematical models and multiple features, rather than by classifying the scene complexity and detecting saliency accordingly. To break this unified detection schema and generate better results, we propose in this paper a method of scene classification-oriented saliency detection via the modularized prescription. Different scenes are described by a scene complexity expression model, then analyzed and detected through different pipelines. This process resembles how doctors tailor treatment prescriptions to different symptoms. Moreover, two SVM-based classifiers are trained for scene classification and sky region identification, and the proposed sky-region discrimination and erasure model efficiently decreases the saliency interference caused by the high luminance of background sky regions. Experimental results demonstrate the effectiveness and superiority of the proposed method in both higher precision and better smoothness, especially for detection in structurally complex scenes.


Journal of Visual Communication and Image Representation | 2018

Hybrid of extended locality-constrained linear coding and manifold ranking for salient object detection

Chunlei Yang; Xiangluo Wang; Jiexin Pu; Guo-Sen Xie; Zhonghua Liu; Yongsheng Dong; Lingfei Liang

Recent years have witnessed great progress in salient object detection methods. However, with the emergence of complex scenes, two problems need urgent solutions: quickly locating the foreground while preserving precision, and reducing noise near the foreground boundary in saliency maps. In this paper, a hybrid method is proposed to ameliorate both issues. First, to reduce the runtime of integrating prior knowledge, a novel Prior Knowledge Learning based Region Classification (PKL-RC) method is proposed for classifying image regions and preliminarily locating the foreground; furthermore, to generate more accurate saliency, a Locality-constrained Linear self-Coding based Region Clustering (LLsC-RC) model is proposed to improve the adjacency structure of the similarity graph for Manifold Ranking (MR). Experimental results demonstrate the effectiveness and superiority of the proposed method in both higher precision and better smoothness.
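Manifold ranking, the diffusion step this hybrid builds on, has a standard closed form: given a region-similarity graph and a query vector of seed regions, rank all regions by propagating the query through the normalized graph. A sketch with toy graph weights (not the paper's region graph):

```python
# Manifold ranking (MR) sketch on a region-similarity graph:
# f* = (I - alpha * S)^-1 y  with  S = D^-1/2 W D^-1/2.
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# 4 regions; regions 0-1 are strongly connected, as are regions 2-3
W = np.array([[0.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 1.0],
              [0.1, 0.1, 1.0, 0.0]])
y = np.array([1.0, 0.0, 0.0, 0.0])  # query: region 0 is a known foreground seed
scores = manifold_rank(W, y)
print(scores)  # region 1 ranks above regions 2 and 3
```

Improving the adjacency structure of W (the LLsC-RC step) directly changes how the query mass diffuses, which is why the clustering model sharpens the final saliency map.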


Entropy | 2018

Image Thresholding Segmentation on Quantum State Space

Xiangluo Wang; Chunlei Yang; Guo-Sen Xie; Zhonghua Liu

Aiming to implement image segmentation precisely and efficiently, we exploit new ways to encode images and achieve optimal thresholding on quantum state space. First, the state vector and density matrix are adopted to represent pixel intensities and their probability distribution, respectively. Then, a method based on global quantum entropy maximization (GQEM) is proposed, which has an objective function equivalent to Otsu's but gives a more explicit physical interpretation of image thresholding in the language of quantum mechanics. To reduce the time spent searching for optimal thresholds, a quantum lossy-encoding-based entropy maximization (QLEEM) method is presented, in which the eigenvalues of density matrices give direct clues for thresholding, so the optimal search process can be avoided. Meanwhile, the QLEEM algorithm achieves two additional effects: (1) the upper bound of the thresholding level can be implicitly determined from the eigenvalues; and (2) the proposed approaches ensure that local information in images is retained as much as possible while inter-class separability is maximized in the segmented images. Both effects contribute to the structural characteristics of images, which the human visual system is highly adapted to extract. Experimental results show that the proposed methods achieve competitive thresholding quality and the fastest computation speed compared with state-of-the-art methods.
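Since GQEM's objective is stated to be equivalent to Otsu's, a compact Otsu threshold search illustrates the criterion being maximized (between-class variance). The histogram is a toy example, not the paper's quantum encoding:

```python
# Otsu's method: pick the gray level t that maximizes the between-class
# variance of the two classes {<= t, > t}.
import numpy as np

def otsu_threshold(hist):
    """Return the gray level t maximizing between-class variance."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    omega = np.cumsum(p)          # class-0 probability up to level t
    mu = np.cumsum(p * levels)    # cumulative first moment
    mu_total = mu[-1]
    denom = omega * (1.0 - omega)
    # Guard against empty classes (denom == 0) at the histogram ends
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = np.where(denom > 0, (mu_total * omega - mu) ** 2 / denom, 0.0)
    return int(np.argmax(sigma_b))

# Bimodal toy histogram over 8 gray levels, with modes at levels 1 and 6
hist = np.array([5, 40, 5, 1, 1, 5, 40, 5], dtype=float)
print(otsu_threshold(hist))  # 3: splits the two modes
```

QLEEM's contribution in this framing is to read the split directly from the density-matrix eigenvalues instead of scanning every candidate t as above.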


Acta Biotheoretica | 2018

Graphical Representation and Similarity Analysis of DNA Sequences Based on Trigonometric Functions

Guo-Sen Xie; Xiao-Bo Jin; Chunlei Yang; Jiexin Pu; Zhongxi Mo

In this paper, we propose two four-base related 2D curves of DNA primary sequences (termed F-B curves) and their corresponding single-base related 2D curves (termed A-related, G-related, T-related and C-related curves). The construction of these graphical curves is based on assigning each base to one of four different sinusoidal (or tangent) functions; by connecting all the points on these four functions, we obtain the F-B curves, and by connecting the points on each individual function, we obtain the single-base related 2D curves. The proposed 2D curves are all strictly non-degenerate. An 8-component characteristic vector is then constructed to compare similarity among DNA sequences from different species, based on the normalized geometric centers of the proposed curves. As examples, we examine similarity among the coding sequences of the first exon of the beta-globin gene from eleven species, similarity of cDNA sequences of the beta-globin gene from eight species, and similarity of the whole mitochondrial genomes of 18 eutherian mammals. The experimental results demonstrate the effectiveness of the proposed method.
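The base-to-sinusoid assignment behind the F-B curves can be sketched as follows: each base is plotted on its own sine curve at its sequence position, and the geometric center of the resulting points summarizes the sequence. The phase offsets here are illustrative, not the paper's exact assignment:

```python
# DNA-to-curve sketch: base i of the sequence maps to the 2D point
# (i, sin(i + phase(base))), one phase per base type.
import math

PHASE = {"A": 0.0, "G": math.pi / 2, "T": math.pi, "C": 3 * math.pi / 2}

def curve_points(seq):
    """Map each base of the sequence to a point on its assigned sine curve."""
    return [(i, math.sin(i + PHASE[b])) for i, b in enumerate(seq)]

def geometric_center(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

pts = curve_points("ATGGCATT")
print(geometric_center(pts))
```

Because each point's x-coordinate is its sequence position, the mapping is strictly non-degenerate: distinct sequences of the same length can only collide if every base-wise y-value matches.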


Acta Biotheoretica | 2018

Correction to: Graphical Representation and Similarity Analysis of DNA Sequences Based on Trigonometric Functions

Guo-Sen Xie; Xiao-Bo Jin; Chunlei Yang; Jiexin Pu; Zhongxi Mo

In the original publication of the article, the y-axis labels in Figs. 1a and 2a are incorrect. The correct Figs. 1a and 2a are provided here.


Archive | 2012

Face recognition method and base image synthesis method under illumination change conditions

Zhonghua Liu; Yong Qiu; Chunlei Yang; Tao Huang; Lingfei Liang; Yonggang Chen; Lei Zhang


IEEE Access | 2018

Jumping and Refined Local Pattern for Texture Classification

Tianyu Wang; Yongsheng Dong; Chunlei Yang; Lin Wang; Lingfei Liang; Lintao Zheng; Jiexin Pu

Collaboration


Dive into Chunlei Yang's collaborations.

Top Co-Authors

Jiexin Pu (Henan University of Science and Technology)
Zhonghua Liu (Henan University of Science and Technology)
Guo-Sen Xie (Henan University of Science and Technology)
Yongsheng Dong (Henan University of Science and Technology)
Lingfei Liang (Henan University of Science and Technology)
Lintao Zheng (Henan University of Science and Technology)
Xiao-Bo Jin (Henan University of Technology)
Xiaohong Wang (Henan University of Science and Technology)
Gang Liu (Henan University of Science and Technology)