
Publication


Featured research published by Zhijing Yang.


Neurocomputing | 2016

Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging

Jaime Zabalza; Jinchang Ren; Jiangbin Zheng; Huimin Zhao; Chunmei Qing; Zhijing Yang; Peijun Du; Stephen Marshall

Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have recently been proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, the complexity of the process increases, leading to limited abstraction and performance. As such, the segmented SAE (S-SAE) is proposed, which partitions the original features into smaller data segments that are separately processed by different smaller SAEs. This results in reduced complexity yet improved efficacy of data abstraction and accuracy of data classification.
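The segmentation idea can be sketched as follows. This is a minimal illustration only: the "encoder" here is a hypothetical fixed random projection with a tanh nonlinearity standing in for a trained stacked autoencoder (the layer-wise training is omitted), and all dimensions are illustrative rather than taken from the paper.

```python
import math
import random

def make_encoder(in_dim, out_dim, seed):
    """Stand-in for a trained SAE encoder: fixed random projection + tanh.
    A real S-SAE would train a small autoencoder on each segment."""
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in range(in_dim)] for _ in range(out_dim)]
    def encode(x):
        return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]
    return encode

def s_sae_features(spectrum, n_segments, latent_dim):
    """Split the spectral vector into segments, encode each segment with its
    own small encoder, and concatenate the reduced features."""
    seg_len = len(spectrum) // n_segments
    features = []
    for s in range(n_segments):
        segment = spectrum[s * seg_len:(s + 1) * seg_len]
        features.extend(make_encoder(seg_len, latent_dim, seed=s)(segment))
    return features

pixel = [math.sin(0.1 * b) for b in range(200)]   # a toy 200-band spectrum
reduced = s_sae_features(pixel, n_segments=4, latent_dim=5)
print(len(reduced))  # 20 features instead of 200
```

Each small encoder only ever sees 50 inputs instead of 200, which is the source of the reduced complexity the abstract describes.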


The Visual Computer | 2017

Blind inpainting using the fully convolutional neural network

Nian Cai; Zhenghang Su; Zhineng Lin; Han Wang; Zhijing Yang; Bingo Wing-Kuen Ling

Most existing inpainting techniques require knowing beforehand where the damaged pixels are, i.e., they are non-blind inpainting methods. However, in many applications, such information may not be readily available. In this paper, we propose a novel blind inpainting method based on a fully convolutional neural network, which we term the blind inpainting convolutional neural network (BICNN). It cascades three convolutional layers to directly learn an end-to-end mapping from corrupted subimages to their ground truths, using a pre-acquired dataset of corrupted/ground-truth subimage pairs. Stochastic gradient descent with standard backpropagation is used to train the BICNN. Once trained, the BICNN can automatically identify and remove the corrupting patterns from a corrupted image without knowing the specific regions. The learned BICNN takes a corrupted image of any size as input and directly produces a clean output in a single forward pass. Experimental results indicate that the proposed method achieves better inpainting performance than existing inpainting methods for various corrupting patterns.
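The three-layer cascade can be sketched as a forward pass. This is a simplified, assumed structure: a single feature map per layer with random (untrained) 3x3 kernels, whereas the real BICNN uses multiple trained feature maps; it only illustrates why padded convolutions let the network accept an image of any size and return an output of the same size.

```python
import random

def conv2d_same(img, kernel):
    """2-D convolution with zero padding, so the output matches the input size."""
    h, w = len(img), len(img[0])
    k = len(kernel); p = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in range(k):
                for dj in range(k):
                    y, x = i + di - p, j + dj - p
                    if 0 <= y < h and 0 <= x < w:
                        acc += kernel[di][dj] * img[y][x]
            out[i][j] = acc
    return out

def relu(img):
    return [[max(0.0, v) for v in row] for row in img]

def bicnn_forward(img, kernels):
    """Cascade of three conv layers (ReLU after the first two), mirroring the
    BICNN structure; the kernels here are random, not trained weights."""
    x = relu(conv2d_same(img, kernels[0]))
    x = relu(conv2d_same(x, kernels[1]))
    return conv2d_same(x, kernels[2])

rng = random.Random(0)
kernels = [[[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(3)]
           for _ in range(3)]
corrupted = [[rng.random() for _ in range(8)] for _ in range(8)]
restored = bicnn_forward(corrupted, kernels)
print(len(restored), len(restored[0]))  # same spatial size as the input: 8 8
```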


IEEE Transactions on Instrumentation and Measurement | 2013

Joint Empirical Mode Decomposition and Sparse Binary Programming for Underlying Trend Extraction

Zhijing Yang; Bingo Wing-Kuen Ling; Chris Bingham

This paper presents a novel methodology for extracting the underlying trends of signals via a joint empirical mode decomposition (EMD) and sparse binary programming approach. The EMD is applied to the signals and the corresponding intrinsic mode functions (IMFs) are obtained. The underlying trends of the signals are obtained as sums of the IMFs, where each IMF is either selected or discarded. The total number of selected IMFs is minimized subject to a specification on the maximum absolute difference between the denoised signals (signals obtained by discarding the first IMFs) and the underlying trends. Since the total number of selected IMFs is minimized, the obtained solutions are sparse and only a few IMFs are selected. The selected IMFs correspond to the components of the underlying trend of the signals. On the other hand, the L∞ norm specification guarantees that the maximum absolute difference between the underlying trends and the denoised signals is bounded by an acceptable level. This forces the underlying trends to follow the global changes of the signals. As the IMFs are either selected or discarded, the coefficients are either zero or one. This is actually a sparse binary programming problem with an L0 norm objective function subject to an L∞ norm constraint. The problem is nonconvex, nonsmooth, and NP-hard; solving it exactly requires an exhaustive search, whose computational effort is too heavy to be practical. To address these difficulties, we approximate the L0 norm objective function by the L1 norm objective function, and the solution of the sparse binary programming problem is obtained by applying zero-one quantization to the solution of the corresponding continuous-valued L1 norm optimization problem.
Since the isometry condition is satisfied and the number of IMFs is small for most practical signals, this approximation is valid, as verified via our experiments on practical data. As the L1 norm optimization problem can be reformulated as a linear programming problem, for which many efficient algorithms such as simplex or interior point methods are available, our proposed method can be implemented in real time. Also, unlike previously reported techniques that require precursor models or parameter specifications, our proposed adaptive method does not make any assumption on the characteristics of the original signals. Hence, it can be applied to extract the underlying trends of more general signals. The results show that our proposed method outperforms existing EMD, classical lowpass filtering, and wavelet methods in terms of efficacy.
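The optimization problem itself can be illustrated on a toy signal. Since only three hand-made "IMFs" are involved here, this sketch solves the binary program by direct enumeration, which is exactly the exhaustive search the paper avoids; the paper's contribution is the L1/LP relaxation that makes the selection tractable in real time. The IMF shapes and the tolerance are illustrative assumptions.

```python
import math
from itertools import combinations

def select_imfs(imfs, denoised, eps):
    """Minimize the number of selected IMFs subject to an L-infinity bound
    between their sum and the denoised signal. Enumeration by increasing
    subset size returns the sparsest feasible selection."""
    n = len(imfs)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            trend = [sum(imfs[k][i] for k in subset) for i in range(len(denoised))]
            if max(abs(t - d) for t, d in zip(trend, denoised)) <= eps:
                return list(subset)
    return list(range(n))

n = 64
t = [i / n for i in range(n)]
imfs = [
    [0.05 * math.sin(2 * math.pi * 16 * ti) for ti in t],  # fast, noise-like
    [0.20 * math.sin(2 * math.pi * 4 * ti) for ti in t],   # medium oscillation
    t,                                                     # slow underlying trend
]
denoised = [imfs[1][i] + imfs[2][i] for i in range(n)]     # first IMF discarded
print(select_imfs(imfs, denoised, eps=0.25))  # the trend IMF alone suffices: [2]
```

With the tolerance eps set just above the amplitude of the medium oscillation, the slow trend alone satisfies the L∞ constraint, so the sparsest selection contains a single IMF.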


Optical Engineering | 2010

Empirical mode decomposition-based facial pose estimation inside video sequences

Chunmei Qing; Jianmin Jiang; Zhijing Yang

We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all these negative effects are minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.


Pattern Recognition | 2017

Joint bilateral filtering and spectral similarity-based sparse representation: a generic framework for effective feature extraction and data classification in hyperspectral imaging

Tong Qiao; Zhijing Yang; Jinchang Ren; Peter Yuen; Huimin Zhao; Genyun Sun; Stephen Marshall; Jon Atli Benediktsson

Classification of hyperspectral images (HSI) has been a challenging problem under active investigation for years, especially due to the extremely high data dimensionality and the limited number of samples available for training. It is found that hyperspectral image classification can generally be improved only if both the feature extraction technique and the classifier are addressed. In this paper, a novel classification framework for hyperspectral images based on the joint bilateral filter and sparse representation classification (SRC) is proposed. By employing the first principal component as the guidance image for the joint bilateral filter, spatial features can be extracted with minimal edge blurring, thus improving the quality of the band-to-band images. For this reason, the joint bilateral filter is shown in this work to perform better than the conventional bilateral filter. In addition, the spectral similarity-based joint SRC (SS-JSRC) is proposed to overcome the weakness of the traditional JSRC method. By combining joint bilateral filtering and SS-JSRC, the superiority of the proposed classification framework is demonstrated with respect to several state-of-the-art spectral-spatial classification approaches commonly employed in the HSI community, achieving better classification accuracy and Kappa coefficient.
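The joint bilateral filter at the core of this framework can be sketched in one dimension. This is an illustrative simplification: in the paper the filter runs in 2-D on each spectral band with the first principal component as the guide, whereas here a clean step signal plays the role of the guidance. The point it demonstrates is that the range weights come from the guide, so edges present in the guide survive the smoothing.

```python
import math

def joint_bilateral_1d(signal, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """1-D joint bilateral filter: spatial weights from pixel distance, range
    weights from differences in the *guidance* signal, so guide edges are
    preserved in the filtered output."""
    out = []
    for i in range(len(signal)):
        acc = norm = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2)))
            acc += w * signal[j]
            norm += w
        out.append(acc / norm)
    return out

# Noisy step edge; the guide is the clean step (the PC1 analogue).
guide = [0.0] * 10 + [1.0] * 10
noisy = [g + 0.05 * ((-1) ** i) for i, g in enumerate(guide)]
smoothed = joint_bilateral_1d(noisy, guide)
```

Samples on opposite sides of the guide's step receive near-zero range weight, so the noise is averaged out within each flat region while the edge stays sharp.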


Digital Signal Processing | 2015

Nonlinear and adaptive undecimated hierarchical multiresolution analysis for real valued discrete time signals via empirical mode decomposition approach

Weichao Kuang; Zhijing Yang; Bingo Wing-Kuen Ling; Charlotte Yuk-Fan Ho; Qingyun Dai

Hierarchical multiresolution analysis is an important tool for the analysis of signals. Since this multiresolution representation provides a pyramid-like framework for representing signals, it can extract signal information effectively level by level. On the other hand, a signal can be nonlinearly and adaptively represented as a sum of intrinsic mode functions (IMFs) via the empirical mode decomposition (EMD) algorithm. Nevertheless, as the IMFs are obtained only when the EMD algorithm converges, no further iterative sifting process will be performed when the EMD algorithm is applied directly to an IMF. As a result, the same IMF is returned and further level decompositions of the IMFs cannot be obtained directly by the EMD algorithm. In other words, hierarchical multiresolution analysis cannot be performed via the EMD algorithm directly. This paper addresses this issue by performing a nonlinear and adaptive hierarchical multiresolution analysis based on the EMD algorithm via a frequency domain approach. First, an IMF is expressed in the frequency domain by applying the discrete Fourier transform (DFT) to it. Next, zeros are inserted into the DFT sequence and a conjugate symmetric zero-padded DFT sequence is obtained. Then, the inverse discrete Fourier transform (IDFT) is applied to the zero-padded DFT sequence and a new signal expressed in the time domain is obtained. The next-level IMFs can then be obtained by applying the EMD algorithm to this signal. However, the lengths of these next-level IMFs are increased. To reduce these lengths, the DFT is first applied to each next-level IMF. Second, the DFT coefficients of each next-level IMF at the positions where the zeros were inserted are removed. Finally, by applying the IDFT to the shortened DFT sequence of each next-level IMF, the final set of next-level IMFs is obtained. It is shown in this paper that the original IMF can be perfectly reconstructed.
Moreover, computer numerical simulation results show that our proposed method can reach a component with fewer levels of decomposition than the conventional linear and nonadaptive wavelet and filter bank approaches. Also, as no filter is involved in our proposed method, it introduces no spectral leakage at any level of decomposition, whereas the wavelet and filter bank approaches can introduce significant leakage components at various levels of decomposition.
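The zero-pad/remove round trip described above can be demonstrated directly, leaving out the EMD step itself. The sketch below uses a naive DFT on a tiny odd-length signal (odd length avoids splitting a Nyquist bin, a detail chosen here for simplicity): zeros are inserted in the middle of the spectrum so conjugate symmetry is preserved, and removing those bins and inverting recovers the original samples exactly, which is the perfect-reconstruction property the abstract claims.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def zero_pad_spectrum(X, M):
    """Insert zeros in the middle of an odd-length DFT sequence; conjugate
    symmetry is preserved, so the longer time-domain signal stays real."""
    N = len(X)
    h = (N + 1) // 2                       # non-negative-frequency bins
    return X[:h] + [0j] * (M - N) + X[h:]

x = [0.3, -1.2, 0.7, 2.0, -0.5]            # length-5 stand-in for an IMF
X = dft(x)
Y = zero_pad_spectrum(X, 15)
y = [v * 15 / 5 for v in idft(Y)]          # length-15 interpolated signal

# Reconstruction: drop the inserted zero bins and invert.
Z = dft([v / 3 for v in y])
rec = idft(Z[:3] + Z[-2:])
err = max(abs(r - v) for r, v in zip(rec, x))
print(err < 1e-9)  # True: the original samples are perfectly recovered
```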


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2012

Applied sensor fault detection and validation using transposed input data PCA and ANNs

Yu Zhang; Christopher M. Bingham; Michael Gallimore; Zhijing Yang; Jun Chen

The paper presents an efficient approach for applied sensor fault detection based on an integration of principal component analysis (PCA) and artificial neural networks (ANNs). Specifically, PCA-based y-indices are introduced to measure the differences between groups of sensor readings in a rolling time window, and the relative merits of three types of ANNs are compared for operation classification. Unlike previously reported PCA techniques (commonly based on the squared prediction error (SPE)), which can wrongly detect a sensor fault when the system data is subject to bias or drift as a result of power or loading changes, it is shown here that the proposed methodologies are capable of detecting and identifying the emergence of sensor faults during transient conditions. The efficacy and capability of the proposed approach are demonstrated through its application to measurement data taken from an industrial generator.
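The paper's exact y-index definition is not reproduced here, but the PCA building block it relies on can be sketched: over a rolling window of steady-state readings, a biased or drifting sensor contributes most of the variance, so it dominates the leading principal component. The drift magnitude, window size, and sensor count below are illustrative assumptions.

```python
import random

def first_pc(data):
    """First principal component of column-centered data via power iteration
    on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    c = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(c[i][a] * c[i][b] for i in range(n)) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(100):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Five sensors reading a steady value; sensor 3 drifts upward (a fault).
rng = random.Random(1)
window = []
for i in range(50):
    row = [10.0 + rng.gauss(0, 0.02) for _ in range(5)]
    row[3] += i / 50.0                      # injected drift on sensor 3
    window.append(row)

loadings = first_pc(window)
faulty = max(range(5), key=lambda j: abs(loadings[j]))
print(faulty)  # 3: the drifting sensor dominates the first PC
```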


IEEE Transactions on Circuits and Systems for Video Technology | 2010

Normalized Co-Occurrence Mutual Information for Facial Pose Detection Inside Videos

Chunmei Qing; Jianmin Jiang; Zhijing Yang

Human faces captured inside videos are often presented with variable poses, making them difficult to recognize, and thus pose detection becomes crucial for face recognition under non-controlled environments. While existing mutual information (MI) measures primarily consider the relationship between corresponding individual pixels, we propose a normalized co-occurrence mutual information in this letter to capture the information embedded not only in corresponding pixel values but also in their geographical locations. In comparison with the existing MIs, the proposed measure presents an essential advantage in that both the marginal entropy and the joint entropy can be optimally exploited in measuring the similarity between two given images. When developed into a facial pose detection algorithm for video sequences, we show, through extensive experiments, that such a design is capable of achieving the best performance among all the representative existing techniques compared.
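For context, the standard normalized mutual information the paper builds on can be computed from a joint intensity histogram, as below. This sketch is the pixel-wise baseline only; the paper's co-occurrence variant, which additionally encodes pixel locations, is not reproduced here, and the quantization level is an illustrative choice.

```python
import math
from collections import Counter

def entropy(counts, total):
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def normalized_mi(a, b, levels=8):
    """Normalized mutual information between two equally sized gray images
    (flattened, intensities in [0, 1)), quantized to a few levels:
    NMI = (H(A) + H(B)) / H(A, B); it equals 2 for identical images."""
    qa = [min(int(v * levels), levels - 1) for v in a]
    qb = [min(int(v * levels), levels - 1) for v in b]
    n = len(a)
    ha = entropy(Counter(qa).values(), n)
    hb = entropy(Counter(qb).values(), n)
    hab = entropy(Counter(zip(qa, qb)).values(), n)
    return (ha + hb) / hab

img = [((i * 37) % 100) / 100 for i in range(256)]   # toy gray image
print(round(normalized_mi(img, img), 6))             # 2.0 for a perfect match
```

As a similarity measure for pose estimation, the candidate pose whose reference image maximizes this score is selected.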


Communication Systems, Networks and Digital Signal Processing | 2012

Sensor fault detection for industrial gas turbine system by using principal component analysis based y-distance indexes

Yu Zhang; Christopher M. Bingham; Zhijing Yang; Michael Gallimore; Wing-Kuen Ling

The paper presents a readily implementable and computationally efficient method for sensor fault detection based upon an extension to principal component analysis (PCA) and y-distance indexes. The proposed extension is applied to system data from a sub-15 MW industrial gas turbine, with explanations of the eigenvalue/eigenvector problem and the definitions of z-scores and principal component (PC) scores. The y-distance index is introduced to measure the differences between sensor reading datasets. It is shown, using real-time operational data, that in-operation sensor faults can be detected by the proposed y-distance indexes. The efficacy of the approach is demonstrated through experimental trials on Siemens industrial gas turbines.
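The z-scores mentioned in the abstract are the standardization step applied to each sensor channel before PCA; a minimal sketch (with made-up readings) is:

```python
def z_scores(readings):
    """Standardize one sensor channel: subtract the mean and divide by the
    (population) standard deviation, giving zero mean and unit variance."""
    n = len(readings)
    mean = sum(readings) / n
    std = (sum((x - mean) ** 2 for x in readings) / n) ** 0.5
    return [(x - mean) / std for x in readings]

z = z_scores([10.1, 9.9, 10.3, 9.8, 10.4, 9.5])
print(round(abs(sum(z)), 9), round(sum(v * v for v in z) / len(z), 9))  # 0.0 1.0
```

Standardizing each channel this way prevents sensors with larger raw magnitudes from dominating the subsequent eigenvalue/eigenvector computation.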


Sensors | 2017

Linear vs. Nonlinear Extreme Learning Machine for Spectral-Spatial Classification of Hyperspectral Images

Faxian Cao; Zhijing Yang; Jinchang Ren; Mengying Jiang; Wing-Kuen Ling

As a new machine learning approach, the extreme learning machine (ELM) has received much attention due to its good performance. However, when directly applied to hyperspectral image (HSI) classification, its recognition rate is low because ELM does not use spatial information, which is very important for HSI classification. In view of this, this paper proposes a new framework for the spectral-spatial classification of HSI by combining ELM with loopy belief propagation (LBP). The original ELM is linear, and the nonlinear ELMs (or kernel ELMs) are an improvement of linear ELM (LELM). However, based on extensive experiments and analysis, it is found that LELM is a better choice than nonlinear ELM for the spectral-spatial classification of HSI. Furthermore, we exploit the marginal probability distribution, which uses the whole of the information in the HSI, and learn this distribution using LBP. The proposed method not only maintains the fast speed of ELM, but also greatly improves the classification accuracy. Experimental results on the well-known HSI datasets Indian Pines and Pavia University demonstrate the good performance of the proposed method.
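The speed of ELM comes from its training procedure: the hidden-layer weights are random and fixed, so only the output weights need to be solved, in closed form, by least squares. The sketch below shows that core idea on a toy regression problem; it is a generic ELM sketch (with a small ridge term added for numerical stability), not the paper's spectral-spatial pipeline, and all sizes are illustrative.

```python
import math
import random

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def elm_train(X, y, hidden=10, ridge=1e-8, seed=0):
    """ELM: random sigmoid hidden layer, output weights solved in closed form
    via ridge-regularized least squares (normal equations)."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1, 1) for _ in range(len(X[0]) + 1)] for _ in range(hidden)]
    def hidden_out(x):
        return [1.0] + [1 / (1 + math.exp(-(w[0] + sum(wi * xi
                        for wi, xi in zip(w[1:], x))))) for w in W]
    H = [hidden_out(x) for x in X]
    d = hidden + 1
    HtH = [[sum(h[a] * h[b] for h in H) + (ridge if a == b else 0.0)
            for b in range(d)] for a in range(d)]
    Hty = [sum(H[i][a] * y[i] for i in range(len(H))) for a in range(d)]
    beta = solve(HtH, Hty)
    return lambda x: sum(b * h for b, h in zip(beta, hidden_out(x)))

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(40)]
y = [x0 + x1 for x0, x1 in X]               # toy target
model = elm_train(X, y)
mse = sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)
var = sum((t - sum(y) / len(y)) ** 2 for t in y) / len(y)
print(mse < var)  # True: the closed-form fit beats a constant predictor
```

Because no iterative weight updates are needed, training cost is dominated by one linear solve, which is why the abstract stresses that the combined framework keeps ELM's speed.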

Collaboration


Dive into Zhijing Yang's collaboration.

Top Co-Authors

Bingo Wing-Kuen Ling
Guangdong University of Technology

Yu Zhang
University of Lincoln

Faxian Cao
Guangdong University of Technology

Jinchang Ren
University of Strathclyde

Qingyun Dai
Guangdong University of Technology

Weichao Kuang
Guangdong University of Technology