Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yongsheng Dong is active.

Publication


Featured research published by Yongsheng Dong.


IEEE Transactions on Systems, Man, and Cybernetics | 2015

Texture Classification and Retrieval Using Shearlets and Linear Regression

Yongsheng Dong; Dacheng Tao; Xuelong Li; Jinwen Ma; Jiexin Pu

Statistical modeling of wavelet subbands has frequently been used for image recognition and retrieval. However, traditional wavelets are unsuitable for images containing distributed discontinuities, such as edges. Shearlets are a newly developed extension of wavelets that are better suited to image characterization. Here, we propose novel texture classification and retrieval methods that model adjacent shearlet subband dependences using linear regression. For texture classification, we use two energy features to represent each shearlet subband in order to overcome the limitation that subband coefficients are complex numbers. Linear regression is used to model the features of adjacent subbands; the regression residuals are then used to define the distance from a test texture to a texture class. Texture retrieval consists of two processes: the first is based on statistics in contourlet domains, while the second is performed using a pseudo-feedback mechanism based on linear regression modeling of shearlet subband dependences. Comprehensive validation experiments performed on five large texture datasets reveal that the proposed classification and retrieval methods outperform the current state of the art.
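As a rough illustration of the residual-based classification step, here is a minimal Python sketch, assuming the shearlet decomposition and per-subband energy features are computed elsewhere; the random features and the "brick"/"grass" class names are placeholders, and the one-feature-per-subband layout is a simplification of the paper's two-energy-feature scheme.

```python
# Sketch: classify a texture by linear-regression residuals between adjacent
# subband energy features. Shearlet feature extraction is assumed to have been
# done elsewhere; the features here are random placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subbands, n_train = 12, 40

def subband_energies(n_images):
    # placeholder for per-subband energy features of shearlet coefficients
    return rng.random((n_images, n_subbands))

def fit_class_models(feats):
    # one regression model per pair of adjacent subbands, for one texture class
    return [LinearRegression().fit(feats[:, [s]], feats[:, s + 1])
            for s in range(n_subbands - 1)]

def residual_distance(x, models):
    # sum of squared regression residuals of one test feature vector
    return sum((models[s].predict(x[[s]].reshape(1, -1))[0] - x[s + 1]) ** 2
               for s in range(n_subbands - 1))

classes = {c: fit_class_models(subband_energies(n_train)) for c in ("brick", "grass")}
test = subband_energies(1)[0]
print("assigned class:", min(classes, key=lambda c: residual_distance(test, classes[c])))
```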


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Graph Regularized Non-Negative Low-Rank Matrix Factorization for Image Clustering

Xuelong Li; Guosheng Cui; Yongsheng Dong

Non-negative matrix factorization (NMF) has been one of the most popular methods for feature learning in machine learning and computer vision. Most existing works directly apply NMF to high-dimensional image datasets to compute an effective representation of the raw images. In fact, however, the essential information common to a given class of images is hidden in their low-rank parts. To obtain an effective low-rank data representation, we propose a non-negative low-rank matrix factorization (NLMF) method for image clustering. To improve its robustness for data lying on a manifold, we further propose a graph-regularized NLMF that incorporates the manifold structure information into the objective function. Finally, we develop an efficient alternating iterative algorithm to learn the low-dimensional representation of the low-rank parts of images for clustering. Alternatively, we also incorporate robust principal component analysis into the proposed scheme. Experimental results on four image datasets reveal that our proposed methods outperform four representative methods.
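To make the graph-regularization idea concrete, here is a minimal sketch of plain graph-regularized NMF with multiplicative updates; it is not the paper's NLMF (the low-rank decomposition and the robust-PCA variant are omitted), and the toy affinity graph is an assumption for illustration.

```python
# Sketch: multiplicative updates for graph-regularized NMF, minimizing
# ||X - U V^T||_F^2 + lam * tr(V^T L V). Only the graph-regularization idea is
# shown; the paper's low-rank (NLMF) and RPCA components are not reproduced.
import numpy as np

def graph_nmf(X, W, k, lam=0.1, iters=200, eps=1e-9):
    # X: (d, n) nonnegative data, W: (n, n) symmetric affinity graph
    d, n = X.shape
    D = np.diag(W.sum(axis=1))
    rng = np.random.default_rng(0)
    U = rng.random((d, k))
    V = rng.random((n, k))
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V   # rows of V are the low-dimensional codes used for clustering

# toy usage with a random nonnegative matrix and a Gaussian-kernel affinity
X = np.abs(np.random.default_rng(1).normal(size=(50, 30)))
dist2 = np.square(np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0))
A = np.exp(-dist2 / X.shape[0])
U, V = graph_nmf(X, A, k=5)
```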


ACM Transactions on Intelligent Systems and Technology | 2015

Nonnegative Multiresolution Representation-Based Texture Image Classification

Yongsheng Dong; Dacheng Tao; Xuelong Li

Effective representation of image texture is important for image-classification tasks. Statistical modelling in wavelet domains has been widely used for image texture representation. However, due to the intraclass complexity and interclass diversity of textures, it is hard to use a predefined probability distribution function to adaptively fit all wavelet subband coefficients of different textures. In this article, we propose a novel modelling approach, the Heterogeneous and Incrementally Generated Histogram (HIGH), to indirectly model the wavelet coefficients using four local features in the wavelet subbands. By concatenating the HIGHs of all wavelet subbands of a texture, we construct a nonnegative multiresolution vector (NMV) to represent a texture image. Considering the NMV's high dimensionality and nonnegativity, we further propose a Hessian-regularized discriminative nonnegative matrix factorization to compute a low-dimensional basis of the linear subspace of NMVs. Finally, we present a texture classification approach that projects NMVs onto this low-dimensional basis. Experimental results show that the proposed texture classification method outperforms seven representative approaches.
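Here is a minimal sketch of building a nonnegative multiresolution vector from wavelet subbands. Plain magnitude histograms stand in for the paper's four HIGH features, the PyWavelets library is assumed for the decomposition, and the Hessian-regularized discriminative NMF projection step is omitted.

```python
# Sketch: build a nonnegative multiresolution vector (NMV) for a texture image
# by concatenating per-subband histograms. Simple magnitude histograms stand in
# for the paper's HIGH features; the Hessian-regularized NMF step is omitted.
import numpy as np
import pywt

def nmv(image, wavelet="db2", level=3, bins=16):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    hists = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) per level
        for band in detail:
            mags = np.abs(band).ravel()
            h, _ = np.histogram(mags, bins=bins, range=(0, mags.max() + 1e-9))
            hists.append(h / h.sum())         # nonnegative, normalized
    return np.concatenate(hists)

img = np.random.default_rng(0).random((128, 128))
v = nmv(img)
print(v.shape, bool(v.min() >= 0))            # e.g. (144,) True
```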


IEEE Transactions on Neural Networks | 2018

SCE: A Manifold Regularized Set-Covering Method for Data Partitioning

Xuelong Li; Quanmao Lu; Yongsheng Dong; Dacheng Tao

Cluster analysis plays a very important role in data analysis. In recent years, cluster ensembles, as a cluster analysis tool, have drawn much attention for their robustness, stability, and accuracy. Many efforts have been made to combine different initial clustering results into a single clustering solution with better performance. However, these methods neglect the structure information of the raw data when performing the cluster ensemble. In this paper, we propose a Structural Cluster Ensemble (SCE) algorithm for data partitioning, formulated as a set-covering problem. In particular, we construct a Laplacian-regularized objective function to capture the structure information among clusters. Moreover, considering the importance of the discriminative information underlying the initial clustering results, we add a discriminative constraint to the objective function. Finally, we verify the performance of the SCE algorithm on both synthetic and real data sets. The experimental results show the effectiveness of the proposed SCE algorithm.
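For readers unfamiliar with cluster ensembles, the sketch below shows the general idea of combining several initial clusterings via a co-association (evidence-accumulation) matrix. This is a simpler stand-in, not the paper's set-covering SCE formulation with Laplacian and discriminative terms.

```python
# Sketch: a basic cluster ensemble via a co-association matrix built from
# several k-means runs, followed by a consensus clustering of that matrix.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# several initial clusterings with different seeds
runs = [KMeans(n_clusters=3, n_init=5, random_state=s).fit_predict(X) for s in range(10)]

# co-association matrix: fraction of runs in which two points share a cluster
co = np.mean([(r[:, None] == r[None, :]).astype(float) for r in runs], axis=0)

# consensus partition from the co-association affinity
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(co)
print(np.bincount(labels))
```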


ACM Transactions on Intelligent Systems and Technology | 2017

Refined-Graph Regularization-Based Nonnegative Matrix Factorization

Xuelong Li; Guosheng Cui; Yongsheng Dong

Nonnegative matrix factorization (NMF) is one of the most popular data representation methods in computer vision and pattern recognition. High-dimensional data are usually assumed to be sampled from a submanifold embedded in the original high-dimensional space. To preserve the local geometric structure of the data, a k-nearest neighbor (k-NN) graph is often constructed to encode the near-neighbor layout. However, the k-NN graph is based on Euclidean distance, which is sensitive to noise and outliers. In this article, we propose a refined-graph regularized nonnegative matrix factorization that employs a manifold regularized least-squares regression (MRLSR) method to compute the refined graph. In particular, each sample is represented by the whole dataset, regularized with an ℓ2-norm and a Laplacian regularizer. An MRLSR graph is then constructed from the representation coefficients of each sample. Moreover, we present two optimization schemes that generate refined graphs by employing a hard-thresholding technique. We further propose two refined-graph regularized nonnegative matrix factorization methods and use them for image clustering. Experimental results on several image datasets reveal that they outperform 11 representative methods.
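The sketch below illustrates the refined-graph construction under a simplifying assumption: each sample is represented by the whole dataset with only the ℓ2 (ridge) regularizer, the Laplacian term of MRLSR is omitted, and the NMF stage itself is not shown. The hard-thresholding step keeps the k largest coefficients per sample.

```python
# Sketch: build a "refined" affinity graph from self-representation coefficients
# with an l2 (ridge) regularizer and hard thresholding. The Laplacian term of
# the paper's MRLSR and the subsequent NMF stage are omitted.
import numpy as np

def refined_graph(X, alpha=0.1, k=5):
    # X: (d, n), columns are samples; represent every sample by all samples
    n = X.shape[1]
    C = np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ X)  # ridge self-expression
    np.fill_diagonal(C, 0.0)                  # no self-representation
    C = np.abs(C)
    for j in range(n):                        # hard-threshold: keep k largest per column
        small = np.argsort(C[:, j])[:-k]
        C[small, j] = 0.0
    return 0.5 * (C + C.T)                    # symmetrize into an affinity graph

X = np.random.default_rng(0).normal(size=(20, 60))
W = refined_graph(X)
print(W.shape, (W > 0).sum(axis=0).mean())
```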


IEEE Transactions on Neural Networks | 2018

Patch Alignment Manifold Matting

Xuelong Li; Kang Liu; Yongsheng Dong; Dacheng Tao

Image matting is generally modeled as a space transform from the color space to the alpha space. By estimating the alpha factor of the model, the foreground of an image can be extracted. However, there is some dimensional information redundancy in the alpha space, which usually leads to misjudgments of pixels near the boundary between the foreground and the background. In this paper, a manifold matting framework named Patch Alignment Manifold Matting is proposed for image matting. In particular, we first propose a part modeling of the color space in the local image patch. We then perform whole-alignment optimization to approximate the alpha results using the subspace reconstruction error. Furthermore, we utilize Nesterov's algorithm to solve the optimization problem. Finally, we apply several manifold learning methods within the framework and obtain a number of image matting methods, such as ISOMAP matting and its derived Cascade ISOMAP matting. The experimental results reveal that the manifold matting framework and its two instantiations are effective compared with several representative matting methods.
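Since the abstract names Nesterov's algorithm as the solver, here is a minimal sketch of Nesterov's accelerated gradient method applied to a generic smooth least-squares objective; the matting objective itself is not reproduced, and the random problem data are placeholders.

```python
# Sketch: Nesterov's accelerated gradient method on 0.5 * ||A x - b||^2,
# illustrating only the solver named in the abstract, not the matting model.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
b = rng.normal(size=200)

L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = x_prev = np.zeros(50)
t = 1.0
for _ in range(500):
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x + ((t - 1) / t_next) * (x - x_prev)  # momentum (look-ahead) point
    grad = A.T @ (A @ y - b)
    x_prev, x, t = x, y - grad / L, t_next

print(np.linalg.norm(A.T @ (A @ x - b)))       # gradient norm should be near zero
```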


International Conference on Signal and Information Processing | 2015

How to represent scenes for classification

Jianhua Shi; Xuelong Li; Yongsheng Dong

Object-based scene image representations can effectively capture the semantic meaning of a scene. However, they usually neglect a scene's structure information. In this paper, we propose a novel and effective detector-based scene representation method for scene classification. In particular, we extract object features with object detectors. Using principal component analysis, we obtain a compact representation vector of the objects in a scene image. To capture the scene layout, we then train a large number of deformable part models to form a scene response vector. After concatenating these two vectors, we use a linear support vector machine for scene classification. When combined with DeCAF [1] in a specific way, our method is even more powerful on complex scene categorization. Experimental results on the MIT indoor database show that our approach achieves state-of-the-art performance on scene classification compared with several popular methods.
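The pipeline described above (detector responses compressed by PCA, concatenated with a DPM-style scene response vector, fed to a linear SVM) is sketched below; the detector and DPM features are random placeholders, and the dimensions are assumptions for illustration.

```python
# Sketch: PCA-compressed object-detector features concatenated with a
# deformable-part-model (DPM) response vector, classified by a linear SVM.
# All features here are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_images, n_detectors, n_dpm, n_classes = 200, 512, 64, 5

obj_feats = rng.random((n_images, n_detectors))   # placeholder detector responses
dpm_feats = rng.random((n_images, n_dpm))         # placeholder scene-layout responses
labels = rng.integers(0, n_classes, size=n_images)

obj_compact = PCA(n_components=32).fit_transform(obj_feats)
features = np.hstack([obj_compact, dpm_feats])    # concatenate the two vectors

clf = LinearSVC(C=1.0, max_iter=5000).fit(features, labels)
print(clf.score(features, labels))
```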


Neurocomputing | 2018

Structure Preserving Unsupervised Feature Selection

Quanmao Lu; Xuelong Li; Yongsheng Dong

Spectral analysis is usually used to guide unsupervised feature selection. However, the performance of such methods is not always satisfactory because they may generate continuous pseudo-labels to approximate the discrete real labels. In this paper, a novel unsupervised feature selection method is proposed based on a self-expression model. Unlike existing spectral-analysis-based methods, we utilize the self-expression model to capture the relationships between the features without learning the cluster labels. Specifically, each feature can be reconstructed by a linear combination of all the features in the original feature space, and a representative feature should receive a large weight when reconstructing other features. In addition, a structure-preserving constraint is incorporated into our model to keep the local manifold structure of the data. An efficient alternating iterative algorithm is then used to solve the proposed model, with a theoretical analysis of its convergence. Experimental results on different datasets show the effectiveness of our method.
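A minimal sketch of the self-expression scoring idea follows: each feature is reconstructed from all features via a ridge-regularized linear combination, and features are ranked by the row norms of the coefficient matrix. The paper's structure-preserving constraint is omitted, and the selection size m is a hypothetical parameter.

```python
# Sketch: self-expression-based unsupervised feature scoring. The paper's
# structure-preserving constraint is omitted; a ridge penalty is used instead.
import numpy as np

def select_features(X, alpha=1.0, m=10):
    # X: (n_samples, n_features); solve X ≈ X W with a ridge regularizer
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ X)
    scores = np.linalg.norm(W, axis=1)   # large row norm: feature helps reconstruct others
    return np.argsort(scores)[::-1][:m]  # indices of the m top-scoring features

X = np.random.default_rng(0).normal(size=(100, 40))
print(select_features(X, m=5))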


Neurocomputing | 2018

Subspace clustering guided convex nonnegative matrix factorization

Guosheng Cui; Xuelong Li; Yongsheng Dong

As one of the most important properties of data, geometric structure information is usually modeled by a similarity graph to enhance the effectiveness of nonnegative matrix factorization (NMF). However, a pairwise-distance-based graph is sensitive to noise and cannot capture the subspace structure of the data. A reconstruction-coefficient-based graph can capture the subspace structure, but the procedure for building such a graph is usually independent of the NMF framework. To address this issue, a novel subspace clustering guided convex nonnegative matrix factorization (SC-CNMF) is proposed. In this NMF framework, nonnegative subspace clustering is incorporated to learn the representation-based graph, while the convex nonnegative matrix factorization is updated simultaneously. To reduce the influence of noise, only the k largest entries of each representation are kept in the subspace clustering. To capture the complicated geometric structure of the data, multiple centroids are introduced to describe each cluster. Additionally, a row constraint is used to remove the correlation among the rows of the encoding matrix, which helps to improve the clustering performance of the proposed model. For the proposed NMF framework, two different objective functions with different optimization schemes are designed. Image clustering experiments on several datasets demonstrate the effectiveness of the proposed methods compared with related NMF-based methods, with k-means clustering and PCA as baselines.
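The sketch below covers only the graph-building step mentioned above: each sample gets a nonnegative self-representation over the other samples, only the k largest entries are kept, and the result is symmetrized into an affinity graph. The convex NMF with multiple centroids and the row constraint are not reproduced, and the nonnegative least-squares solver is SciPy's nnls, an assumption for illustration.

```python
# Sketch: nonnegative self-representation graph with k-largest thresholding,
# as a stand-in for the subspace-clustering-guided graph; the joint convex NMF
# update is not reproduced here.
import numpy as np
from scipy.optimize import nnls

def nonneg_subspace_graph(X, k=5):
    # X: (d, n), columns are samples
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        c, _ = nnls(X[:, others], X[:, i])   # nonnegative coefficients over other samples
        keep = np.argsort(c)[-k:]            # keep only the k largest entries
        C[others[keep], i] = c[keep]
    return 0.5 * (C + C.T)

X = np.random.default_rng(0).random((15, 40))
W = nonneg_subspace_graph(X)
print(W.shape, bool((W >= 0).all()))
```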


Chinese Conference on Pattern Recognition | 2016

Subspace Clustering by Capped l1 Norm

Quanmao Lu; Xuelong Li; Yongsheng Dong; Dacheng Tao

Subspace clustering, as an important clustering problem, has drawn much attention in recent years. State-of-the-art methods generally try to design an efficient model to regularize the coefficient matrix while ignoring the influence of the noise model on subspace clustering. However, real data are always contaminated by noise, and the corresponding subspace structures are likely to be corrupted. To solve this problem, we propose a novel subspace clustering algorithm that employs the capped l1 norm to deal with the noise. Consequently, noise terms with large errors can be penalized by the proposed method, making it more robust to noise. Furthermore, the grouping effect of our method is theoretically proved, which means that highly correlated points can be grouped together. Finally, experimental results on two real databases show that our method outperforms state-of-the-art methods.
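To illustrate why capping the l1 penalty bounds the influence of gross noise, here is a small robust line-fitting demo using an IRLS-style reweighting under the capped l1 loss min(|r|, theta); it is a schematic stand-in, not the paper's subspace clustering model, and theta is a hypothetical cap value.

```python
# Sketch: a capped-l1 penalty min(|r|, theta) stops charging residuals once they
# exceed theta, so gross outliers lose influence. Illustrated with an
# IRLS-style robust line fit, not the paper's subspace clustering formulation.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 60)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=60)
y[::10] += 15.0                                # inject gross outliers

A = np.column_stack([x, np.ones_like(x)])
theta, eps = 3.0, 1e-6
coef = np.linalg.lstsq(A, y, rcond=None)[0]    # plain least-squares start

for _ in range(20):
    r = np.abs(y - A @ coef)
    # l1-style weights below the cap; zero weight once a residual exceeds theta,
    # so the capped outliers no longer pull the refit
    w = np.where(r <= theta, 1.0 / np.maximum(r, eps), 0.0)
    Aw = A * np.sqrt(w)[:, None]
    coef = np.linalg.lstsq(Aw, np.sqrt(w) * y, rcond=None)[0]

print("slope, intercept:", coef)               # close to (2.0, 1.0) despite outliers
```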

Collaboration


Dive into Yongsheng Dong's collaborations.

Top Co-Authors

Xuelong Li
Chinese Academy of Sciences

Guosheng Cui
Chinese Academy of Sciences

Kang Liu
Chinese Academy of Sciences

Quanmao Lu
Chinese Academy of Sciences

Jianhua Shi
Chinese Academy of Sciences

Jiexin Pu
Henan University of Science and Technology