
Publication


Featured research published by Chengyin Liu.


IEEE Geoscience and Remote Sensing Letters | 2016

Hyperspectral Image Classification With Robust Sparse Representation

Chang Li; Yong Ma; Xiaoguang Mei; Chengyin Liu; Jiayi Ma

Recently, sparse representation-based classification (SRC) methods have been successfully used for the classification of hyperspectral imagery. They rely on the underlying assumption that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from the whole training dictionary. However, SRC-based methods ignore the sparse representation residuals (i.e., outliers), which may make SRC non-robust to outliers in practice. To overcome this problem, we propose a robust SRC (RSRC) method that can handle outliers. Moreover, we extend the RSRC to a joint robust sparsity model named JRSRC, where pixels in a small neighborhood around the test pixel are simultaneously represented by linear combinations of a few training samples and outliers. The JRSRC can also deal with outliers in hyperspectral classification. Experiments on real hyperspectral images demonstrate that the proposed RSRC and JRSRC perform better than orthogonal matching pursuit (OMP) and simultaneous OMP, respectively. Moreover, the JRSRC outperforms some other popular classifiers.
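For readers unfamiliar with the SRC baseline, a minimal NumPy sketch of the idea (plain OMP plus per-class residuals, not the authors' RSRC, which additionally models outliers; `omp` and `src_classify` are illustrative names) might look like:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of D to represent y."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, k=2):
    """Assign y to the class whose selected atoms give the smallest residual."""
    x = omp(D, y, k)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

The robust variants in the letter additionally estimate an outlier term alongside the sparse code, so that gross residuals do not distort the class-wise reconstruction comparison.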


Information Sciences | 2017

Feature guided Gaussian mixture model with semi-supervised EM and local geometric constraint for retinal image registration

Jiayi Ma; Junjun Jiang; Chengyin Liu; Yansheng Li

Retinal image registration, which can be formulated as matching two sets of sparse features extracted from two observed retinal images, is a crucial step in the diagnosis and treatment of various eye diseases. Existing methods either miss true correspondences or do not fully exploit local appearance information, which makes it difficult to match low-quality retinal images that lack reliable features. In addition, the relationship between a retinal image pair is usually modeled by a linear transformation, such as an affine transformation, which cannot produce accurate alignments under large viewpoint changes because the eyeball surface is nonplanar. To address these issues, a feature-guided Gaussian mixture model (GMM) is proposed for the non-rigid registration of retinal images. We formulate the problem as the estimation of a feature-guided mixture of densities: a GMM is fitted to one point set, in which the centers of the Gaussian densities, characterized by spatial positions associated with local appearance descriptors, are constrained to coincide with the other point set. The problem is solved under a maximum-likelihood framework, and semi-supervised expectation-maximization is used to iteratively estimate the feature correspondence and spatial transformation, initialized by a set of confident feature matches obtained beforehand. The non-rigid transformation is specified in a reproducing kernel Hilbert space, and a local geometric constraint is imposed on the transformation estimation to obtain a meaningful solution. A fast implementation based on sparse approximation is also provided, reducing the time complexity from cubic to quadratic. Moreover, we use the edge map, from which more reliable features can be extracted, as a uniform representation of retinal images. Experimental results on publicly available retinal images show that our approach is robust across different registration tasks and outperforms several competing approaches, especially when the data is severely degraded.
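The EM loop behind such GMM point-set registration can be sketched as follows. This toy version uses purely spatial Gaussians and an affine M-step, omitting the paper's feature guidance, semi-supervision, and kernel-space non-rigid transformation; the function name and defaults are illustrative:

```python
import numpy as np

def em_affine_register(X, Y, n_iter=60, sigma2=1.0):
    """Toy EM point-set registration: fit an affine map taking Y onto X.
    X: (N,2) fixed points; Y: (M,2) moving points (the GMM centers)."""
    M = len(Y)
    A, t = np.eye(2), np.zeros(2)
    Yh = np.hstack([Y, np.ones((M, 1))])          # homogeneous coordinates
    for _ in range(n_iter):
        T = Y @ A.T + t                           # transformed centers
        # E-step: responsibility of each center for each fixed point
        d2 = ((X[:, None, :] - T[None, :, :]) ** 2).sum(-1)
        P = np.exp(-d2 / (2 * sigma2))
        P /= P.sum(axis=1, keepdims=True) + 1e-12
        # M-step: weighted least squares for the affine parameters
        W = P.sum(axis=0) + 1e-9                  # per-center weights
        Xbar = (P.T @ X) / W[:, None]             # weighted target per center
        sw = np.sqrt(W)[:, None]
        sol, *_ = np.linalg.lstsq(sw * Yh, sw * Xbar, rcond=None)
        A, t = sol[:2].T, sol[2]
    return A, t
```

The paper replaces the hard affine parameterization with a function in a reproducing kernel Hilbert space and anchors the EM iterations with pre-matched feature pairs, which is what makes the estimation semi-supervised.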


Remote Sensing | 2016

Hyperspectral Unmixing with Robust Collaborative Sparse Regression

Chang Li; Yong Ma; Xiaoguang Mei; Chengyin Liu; Jiayi Ma

Recently, sparse unmixing (SU) of hyperspectral data has received particular attention for analyzing remote sensing images. However, most SU methods are based on the commonly admitted linear mixing model (LMM), which ignores possible nonlinear effects (i.e., nonlinearity). In this paper, we propose a new method named robust collaborative sparse regression (RCSR), based on the robust LMM (rLMM), for hyperspectral unmixing. The rLMM takes the nonlinearity into consideration, treating it as an outlier term with an underlying sparse property. The RCSR simultaneously accounts for the collaborative sparse property of the abundance and the sparsely distributed additive property of the outlier, which can be formulated as a robust joint sparse regression problem. The inexact augmented Lagrangian method (IALM) is used to optimize the proposed RCSR. Qualitative and quantitative experiments on synthetic datasets and real hyperspectral images demonstrate that the proposed RCSR is efficient for solving the hyperspectral SU problem compared with four other state-of-the-art algorithms.
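The "collaborative" sparsity at the heart of RCSR comes from the l2,1 norm, whose proximal operator shrinks whole rows of the abundance matrix at once. A hedged sketch under simplifying assumptions (no outlier term, plain ISTA instead of the paper's IALM; function names are illustrative):

```python
import numpy as np

def prox_l21(X, tau):
    """Proximal operator of tau*||X||_{2,1}: shrink each row's l2 norm.
    Whole rows (library atoms) are kept or discarded jointly across all
    pixels (columns) -- this joint row sparsity is the collaborative part."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1 - tau / np.maximum(norms, 1e-12), 0)

def collaborative_sparse_unmix(E, Y, lam=0.01, n_iter=500):
    """ISTA for min_A 0.5*||E A - Y||_F^2 + lam*||A||_{2,1}.
    E: (bands, atoms) spectral library; Y: (bands, pixels) observations."""
    L = np.linalg.norm(E, 2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((E.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        A = prox_l21(A - E.T @ (E @ A - Y) / L, lam / L)
    return A
```

The full RCSR adds a column-sparse outlier matrix to the data term and alternates updates of both variables inside the IALM framework.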


IEEE Geoscience and Remote Sensing Letters | 2016

GBM-Based Unmixing of Hyperspectral Data Using Bound Projected Optimal Gradient Method

Chang Li; Yong Ma; Jun Huang; Xiaoguang Mei; Chengyin Liu; Jiayi Ma

The generalized bilinear model (GBM) has been widely used for the nonlinear unmixing of hyperspectral images, and traditional GBM solvers include the Bayesian algorithm, the gradient descent algorithm, and the semi-nonnegative-matrix-factorization algorithm. However, they suffer from one of the following problems: high computational cost, sensitivity to initialization, or pixelwise processing that prevents application to large hyperspectral images. In this letter, we apply Nesterov's optimal gradient method to solve the least-squares problem under a bound constraint, which we name the bound projected optimal gradient method (BPOGM). The BPOGM achieves the optimal convergence rate of O(1/k²), with k denoting the number of iterations. We further apply the BPOGM to solve the GBM-based unmixing problem. Experiments on both synthetic datasets and real hyperspectral images demonstrate that the BPOGM is efficient for solving the GBM-based unmixing problem.
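The method described above is essentially Nesterov's accelerated gradient with a box projection after each step. A minimal sketch, assuming a plain least-squares objective and [lo, hi] bounds rather than the full GBM formulation (the function name is illustrative):

```python
import numpy as np

def bpogm(A, b, lo=0.0, hi=1.0, n_iter=2000):
    """Nesterov-accelerated projected gradient for
    min_x 0.5*||A x - b||^2  s.t.  lo <= x <= hi,
    giving the O(1/k^2) rate quoted in the abstract."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = np.clip(y - grad / L, lo, hi)   # projected gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

Because the feasible set is a box, the projection is just a clip, so each iteration costs no more than ordinary gradient descent while the momentum term accelerates convergence.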


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2016

Retinal image registration via feature-guided Gaussian mixture model

Chengyin Liu; Jiayi Ma; Yong Ma; Jun Huang

Registration of retinal images taken at different times, from different perspectives, or with different modalities is a critical prerequisite for the diagnosis and treatment of various eye diseases. This problem can be formulated as the registration of two sets of sparse feature points extracted from the given images. It is typically solved either by first creating a set of putative correspondences and then removing false matches while estimating the spatial transformation between the image pair, or by jointly estimating the correspondence and transformation in an iterative process. However, the former strategy suffers from missing true correspondences, and the latter does not make full use of local appearance information, which can be problematic for low-quality retinal images lacking reliable features. In this paper, we propose a feature-guided Gaussian mixture model (GMM) to address these issues. We formulate point registration as the estimation of a feature-guided mixture of densities: a GMM is fitted to one point set, such that both the centers and local features of the Gaussian densities are constrained to coincide with the other point set. The problem is solved under a unified maximum-likelihood framework together with an iterative expectation-maximization algorithm initialized by confident feature correspondences, where the image transformation is modeled by an affine function. Extensive experiments on various retinal images show the robustness of our approach, which consistently outperforms other state-of-the-art methods, especially when the data is badly degraded.


Journal of Remote Sensing | 2015

Infrared ultraspectral signature classification based on a restricted Boltzmann machine with sparse and prior constraints

Xiaoguang Mei; Yong Ma; Fan Fan; Chang Li; Chengyin Liu; Jun Huang; Jiayi Ma

State-of-the-art ultraspectral technology brings new hope for high-precision applications due to its high spectral resolution. However, the improvement in spectral resolution also brings new challenges, such as the Hughes phenomenon and overfitting, and our work aims to address these problems. Restricted Boltzmann machines (RBMs), a class of Markov random field (MRF) models, have been used as generative models in many pattern recognition and artificial intelligence applications, showing promising performance. In this article, we propose a new method for infrared ultraspectral signature classification based on RBMs, which adopts regularization-based techniques to improve classification accuracy and robustness to noise compared with traditional RBMs. First, we add an arctan-like term to the objective function as a sparse constraint to improve classification accuracy. Second, we utilize a Gaussian prior to avoid the overfitting problem. Third, to further improve classification performance, a multi-layer RBM model, a deep belief network (DBN), is adopted for infrared ultraspectral signature classification. Experiments using different spectral libraries provided by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Environmental Protection Agency (EPA) were performed to evaluate the proposed method against other traditional methods, including spectral coding-based classifiers (binary coding (BC), spectral feature-based binary coding (SFBC), and spectral derivative feature coding (SDFC) matching methods), a feature extraction method termed crosscut feature extraction matching (CF), and three machine learning methods (artificial deoxyribonucleic acid (DNA)-based spectral matching (ADSM), DBN, and sparse deep belief network (SparseDBN)). Experimental results demonstrate that the proposed method is superior to the compared methods and can simultaneously improve the accuracy and robustness of classification.
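As context for the regularizers above, the vanilla Bernoulli RBM update they modify is one step of contrastive divergence (CD-1). A toy sketch with illustrative names; the paper's arctan sparsity term and Gaussian prior would be added to the gradient below:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, bv, bh, v0, rng, lr=0.1):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM.
    v0: (batch, visible) binary data; W: (visible, hidden) weights."""
    h0 = sigmoid(v0 @ W + bh)                        # positive-phase hidden probs
    h_s = (rng.random(h0.shape) < h0).astype(float)  # sample hidden states
    v1 = sigmoid(h_s @ W.T + bv)                     # one-step reconstruction
    h1 = sigmoid(v1 @ W + bh)                        # negative-phase hidden probs
    n = len(v0)
    W += lr * (v0.T @ h0 - v1.T @ h1) / n            # approx. log-likelihood gradient
    bv += lr * (v0 - v1).mean(axis=0)
    bh += lr * (h0 - h1).mean(axis=0)
    return W, bv, bh
```

Stacking several such layers and training them greedily yields the DBN variant the article also evaluates.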


Sensors | 2017

Hyperspectral Image Classification with Spatial Filtering and ℓ2,1 Norm

Hao Li; Chang Li; Cong Zhang; Zhe Liu; Chengyin Liu

Recently, sparse representation-based classification methods have received particular attention in the classification of hyperspectral imagery. However, current sparse representation-based classification models do not consider all the test pixels simultaneously. In this paper, we propose a hyperspectral classification method with spatial filtering and the ℓ2,1 norm (SFL) that can deal with all the test pixels simultaneously. The ℓ2,1-norm regularization is used to select relevant training samples from the whole training set with joint sparsity. In addition, the ℓ2,1-norm loss function is adopted for robustness to samples that deviate significantly from the rest. Moreover, to take spatial information into consideration, a spatial filtering step is applied in which all training and test samples are spatially averaged with their nearest neighbors. Furthermore, a non-negativity constraint is added to the sparse representation matrix, motivated by hyperspectral unmixing. Finally, the alternating direction method of multipliers is used to solve SFL. Experiments on real hyperspectral images demonstrate that the proposed SFL method obtains better classification performance than some other popular classifiers.
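The spatial filtering step described above is just a neighborhood mean over the image plane. A minimal sketch, assuming a (height, width, bands) cube and a square window with edge-aware counts (the function name is illustrative):

```python
import numpy as np

def spatial_filter(cube, radius=1):
    """Average each pixel's spectrum with its spatial neighbors (mean filter
    over a (2*radius+1)^2 window, shrunk at the image borders)."""
    H, W, _ = cube.shape
    out = np.zeros_like(cube, dtype=float)
    count = np.zeros((H, W, 1))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # destination region and the matching shifted source region
            ys = slice(max(dy, 0), H + min(dy, 0))
            xs = slice(max(dx, 0), W + min(dx, 0))
            ys_src = slice(max(-dy, 0), H + min(-dy, 0))
            xs_src = slice(max(-dx, 0), W + min(-dx, 0))
            out[ys, xs] += cube[ys_src, xs_src]
            count[ys, xs] += 1
    return out / count    # divide by the number of valid neighbors per pixel
```

Averaging the spectra before classification lets the joint-sparsity model exploit the smoothness of homogeneous regions without changing the optimization itself.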


International Conference on Multimedia and Expo | 2016

Robust image matching via feature guided Gaussian mixture model

Jiayi Ma; Junjun Jiang; Yuan Gao; Jun Chen; Chengyin Liu

In this paper, we propose a novel feature-guided Gaussian mixture model (FG-GMM) for image matching, which typically requires matching two sets of feature points extracted from the given images. We formulate the problem as the estimation of a feature-guided mixture of densities: a GMM is fitted to one point set, such that both the centers and local features of the Gaussian densities are constrained to coincide with the other point set. The problem is solved under a unified maximum-likelihood framework together with an iterative semi-supervised expectation-maximization (EM) algorithm initialized by confident feature correspondences. The image transformation is specified in a reproducing kernel Hilbert space, and a sparse approximation is adopted to achieve a fast implementation. Extensive experiments on various real images show the robustness of our approach, which consistently outperforms other state-of-the-art methods.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2016

Nonrigid registration of remote sensing images via sparse and dense feature matching.

Jun Chen; Linbo Luo; Chengyin Liu; Jin-Gang Yu; Jiayi Ma

In this paper, we propose a novel formulation for building pixelwise alignments between remote sensing images under nonrigid transformation, based on matching both sparsely and densely sampled features. Our formulation contains two coupled variables: the nonrigid geometric transformation and the discrete dense flow field. To match sparse features, we fit a geometric transformation specified in a reproducing kernel Hilbert space and impose a locally linear constraint to regularize the transformation. To match dense features, we compute a dense flow field using a formulation analogous to scale-invariant feature transform (SIFT) flow, which allows nonrigid matching across different scene appearances. An additional term is introduced to ensure coherence between the two variables, and we alternately solve for one variable under the assumption that the other is known. Extensive experiments on both synthetic and real remote sensing images demonstrate that our approach greatly outperforms state-of-the-art methods, particularly when the data contain severe degradations.


Visual Communications and Image Processing | 2016

Multimodal retinal image registration using edge map and feature guided Gaussian mixture model

Jiayi Ma; Junjun Jiang; Jun Chen; Chengyin Liu; Chang Li

In this paper, we propose a method for multimodal retinal image registration based on a feature-guided Gaussian mixture model (GMM) and edge maps. We extract two sets of feature points from the edge maps of the two images and formulate image registration as the estimation of a feature-guided mixture of densities: a GMM is fitted to one point set, such that both the centers and local features of the Gaussian densities are constrained to coincide with the other point set. The problem is solved under a maximum-likelihood framework together with an iterative EM algorithm initialized by confident feature matches, where the image transformation is modeled by an affine function. Extensive experiments on various retinal images show the robustness of our method, which consistently outperforms other state-of-the-art methods, especially when the data is badly degraded.

Collaboration


Dive into Chengyin Liu's collaborations.

Top Co-Authors

Chang Li
Huazhong University of Science and Technology

Jun Chen
China University of Geosciences

Junjun Jiang
China University of Geosciences

Jin-Gang Yu
Huazhong University of Science and Technology

Linbo Luo
China University of Geosciences