Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jinchang Ren is active.

Publication


Featured research published by Jinchang Ren.


IEEE Transactions on Geoscience and Remote Sensing | 2015

Object Detection in Optical Remote Sensing Images Based on Weakly Supervised Learning and High-Level Feature Learning

Junwei Han; Dingwen Zhang; Gong Cheng; Lei Guo; Jinchang Ren

The abundant spatial and contextual information provided by advanced remote sensing technology has facilitated subsequent automatic interpretation of optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining weakly supervised learning (WSL) and high-level feature learning. First, a deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features, so as to effectively describe objects in optical RSIs. Then, a novel WSL approach to object detection is presented, in which the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and to start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and stop the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach when benchmarked against several state-of-the-art supervised-learning-based object detection approaches.
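
As a rough illustration of this iterative weakly supervised loop (not the paper's deep Boltzmann machine features or Bayesian initialization), the sketch below seeds a detector from the most salient candidate regions of positively labelled images, re-mines positives from its own detections, and stops when the score ranking drifts too far; all feature inputs and the drift test are simplified placeholders.

import numpy as np
from sklearn.svm import LinearSVC

def initial_positives(region_feats, saliency, k=50):
    # Seed positives: the k most salient candidate regions from positively
    # labelled images (stand-in for the Bayesian initialization step).
    idx = np.argsort(saliency)[::-1][:k]
    return region_feats[idx]

def iterative_wsl(region_feats, saliency, neg_feats, rounds=5, drift_tol=0.3):
    pos = initial_positives(region_feats, saliency)
    prev_scores, clf = None, None
    for _ in range(rounds):
        X = np.vstack([pos, neg_feats])
        y = np.hstack([np.ones(len(pos)), np.zeros(len(neg_feats))])
        clf = LinearSVC(C=1.0).fit(X, y)
        scores = clf.decision_function(region_feats)
        # Re-mine: keep the highest-scoring regions as refined positives.
        pos = region_feats[np.argsort(scores)[::-1][:len(pos)]]
        # Crude stand-in for the model-drift criterion: stop when the score
        # ranking decorrelates from the previous round.
        if prev_scores is not None and np.corrcoef(scores, prev_scores)[0, 1] < 1 - drift_tol:
            break
        prev_scores = scores
    return clf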


Computerized Medical Imaging and Graphics | 2010

Medical image analysis with artificial neural networks

Jianmin Jiang; Paul R. Trundle; Jinchang Ren

Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure can be applied to resolve a medical imaging problem; (ii) how medical images can be analysed, processed, and characterised by neural networks; and (iii) how neural networks can be expanded further to resolve problems relevant to medical imaging. In the concluding section, a comparison of many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Background Prior-Based Salient Object Detection via Deep Reconstruction Residual

Junwei Han; Dingwen Zhang; Xintao Hu; Lei Guo; Jinchang Ren; Feng Wu

Detection of salient objects from images has been gaining increasing research interest in recent years, as it can substantially facilitate a wide range of content-based multimedia applications. Based on the assumption that foreground salient regions are distinctive within a certain context, most conventional approaches rely on a number of hand-designed features, and their distinctiveness is measured using local or global contrast. Although these approaches have been shown to be effective in dealing with simple images, their limited capability may cause difficulties when dealing with more complicated images. This paper proposes a novel framework for saliency detection by first modeling the background and then separating salient objects from the background. We develop stacked denoising autoencoders with deep learning architectures to model the background, in which latent patterns are explored and more powerful representations of the data are learned in an unsupervised and bottom-up manner. Afterward, we formulate the separation of salient objects from the background as the problem of measuring the reconstruction residuals of deep autoencoders. Comprehensive evaluations on three benchmark datasets and comparisons with nine state-of-the-art algorithms demonstrate the superiority of the proposed approach.
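
A minimal sketch of the reconstruction-residual idea, assuming background patches are already extracted as row vectors: a single-hidden-layer autoencoder (scikit-learn's MLPRegressor trained to reproduce its input) stands in for the paper's stacked denoising autoencoders, and patches that the background model reconstructs poorly are scored as salient.

import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_background_autoencoder(bg_patches, hidden=64):
    # Autoencoder: input equals target, so the hidden layer learns a compact
    # representation of background appearance only.
    ae = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=500)
    ae.fit(bg_patches, bg_patches)
    return ae

def saliency_from_residual(ae, patches):
    # Reconstruction residual per patch; a large residual marks a likely
    # salient (foreground) patch.
    recon = ae.predict(patches)
    return np.mean((patches - recon) ** 2, axis=1)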


IEEE Transactions on Geoscience and Remote Sensing | 2015

Effective and Efficient Midlevel Visual Elements-Oriented Land-Use Classification Using VHR Remote Sensing Images

Gong Cheng; Junwei Han; Lei Guo; Zhenbao Liu; Shuhui Bu; Jinchang Ren

Land-use classification using remote sensing images covers a wide range of applications. With more detailed spatial and textural information provided in very high resolution (VHR) remote sensing images, a greater range of objects and spatial patterns can be observed than ever before. This offers a new opportunity for advancing the performance of land-use classification. In this paper, we first introduce an effective midlevel visual elements-oriented land-use classification method based on “partlets,” which are a library of pretrained part detectors used for midlevel visual element discovery. By taking advantage of midlevel visual elements rather than low-level image features, the partlets-based method represents images by computing their responses to a large number of part detectors. As the number of part detectors grows, a main obstacle to the broader application of this method is its computational cost. To address this problem, we next propose a novel framework to train coarse-to-fine shared intermediate representations, termed “sparselets,” from a large number of pretrained part detectors. This is achieved by building a single-hidden-layer autoencoder and a single-hidden-layer neural network with an L0-norm sparsity constraint, respectively. Comprehensive evaluations on a publicly available 21-class VHR land-use data set and comparisons with state-of-the-art approaches demonstrate the effectiveness and superiority of the proposed methods.
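
The sparselets idea can be sketched as sharing computation across a large bank of linear part detectors: learn a small shared basis from the stacked detector weights, compute image responses against that basis once, and recombine them to approximate all detector responses. The toy version below uses a truncated SVD for the shared basis rather than the paper's autoencoder or L0-constrained network.

import numpy as np

def build_shared_basis(W, n_basis=64):
    # W: (K, D) stacked weights of K pretrained part detectors.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    basis = Vt[:n_basis]                     # (n_basis, D) shared filters
    coeffs = U[:, :n_basis] * S[:n_basis]    # (K, n_basis) per-detector mixing
    return basis, coeffs

def approx_detector_responses(X, basis, coeffs):
    # X: (N, D) image features. N x n_basis basis responses are computed once
    # and recombined to approximate the full N x K response map X @ W.T.
    return (X @ basis.T) @ coeffs.T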


IEEE Transactions on Geoscience and Remote Sensing | 2015

Classification of Hyperspectral Images by Exploiting Spectral–Spatial Information of Superpixel via Multiple Kernels

Leyuan Fang; Shutao Li; Wuhui Duan; Jinchang Ren; Jon Atli Benediktsson

For the classification of hyperspectral images (HSIs), this paper presents a novel framework to effectively utilize the spectral-spatial information of superpixels via multiple kernels, termed superpixel-based classification via multiple kernels (SC-MK). In an HSI, each superpixel can be regarded as a shape-adaptive region consisting of a number of spatially neighboring pixels with very similar spectral characteristics. First, the proposed SC-MK method adopts an oversegmentation algorithm to cluster the HSI into many superpixels. Then, three kernels are separately employed to exploit the spectral information, as well as the spatial information, within and among superpixels. Finally, the three kernels are combined and incorporated into a support vector machine classifier. Experimental results on three widely used real HSIs indicate that the proposed SC-MK approach outperforms several well-known classification methods.
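
The multiple-kernel step can be sketched as follows, assuming the spectral, within-superpixel and among-superpixel feature matrices are already computed (the oversegmentation and the exact kernel definitions of SC-MK are not reproduced): build one RBF kernel per feature type, combine them with fixed weights, and pass the result to an SVM with a precomputed kernel.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feats_a, feats_b, weights, gamma=1.0):
    # feats_a / feats_b: lists of feature matrices (rows = samples), one per
    # information source; the composite kernel is their weighted sum.
    return sum(w * rbf_kernel(A, B, gamma=gamma)
               for w, A, B in zip(weights, feats_a, feats_b))

# Hypothetical usage with training features Xs (spectral), Xw (within-superpixel),
# Xa (among-superpixel) and labels y:
# K_train = combined_kernel([Xs, Xw, Xa], [Xs, Xw, Xa], weights=[0.4, 0.4, 0.2])
# clf = SVC(kernel="precomputed").fit(K_train, y)
# K_test = combined_kernel([Ts, Tw, Ta], [Xs, Xw, Xa], weights=[0.4, 0.4, 0.2])
# y_pred = clf.predict(K_test)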


Pattern Recognition Letters | 2011

Offline handwritten Arabic cursive text recognition using Hidden Markov Models and re-ranking

Jawad Hasan Yasin AlKhateeb; Jinchang Ren; Jianmin Jiang; Husni Al-Muhtaseb

Recognition of handwritten Arabic cursive text is a complex task due to the similarities between letters under different writing styles. In this paper, a word-based off-line recognition system using Hidden Markov Models (HMMs) is proposed. The method involves three stages, namely preprocessing, feature extraction and classification. First, words from the input scripts are segmented and normalized. Then, a set of intensity features is extracted from each segmented word, based on a sliding window moving across each mirrored word image. Meanwhile, structure-like features are also extracted, including the number of subwords and diacritical marks. Finally, these features are applied in a combined scheme for classification: the intensity features are used to train an HMM classifier, whose results are re-ranked using the structure-like features for an improved recognition rate. To validate the proposed techniques, extensive experiments were carried out using the IFN/ENIT database, which contains 32,492 handwritten Arabic words. The proposed algorithm yields superior accuracy in comparison with several typical methods.
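
A sketch of the sliding-window intensity features, assuming a height-normalized binary word image (ink = 1, background = 0); the HMM training/decoding and the structure-based re-ranking are only noted in comments, and the window sizes are illustrative.

import numpy as np

def sliding_window_features(word_img, win=4, step=2):
    # Mirror the image so the window sequence follows the right-to-left
    # writing direction, then summarize each column window by simple
    # pixel-density statistics. One feature vector per window is produced,
    # forming the observation sequence fed to the HMM classifier.
    img = word_img[:, ::-1]
    feats = []
    for c in range(0, img.shape[1] - win + 1, step):
        window = img[:, c:c + win]
        row_density = window.mean(axis=1)            # per-row ink density
        feats.append(np.concatenate([[window.mean()], row_density]))
    return np.asarray(feats)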


Neurocomputing | 2016

Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging

Jaime Zabalza; Jinchang Ren; Jiangbin Zheng; Huimin Zhao; Chunmei Qing; Zhijing Yang; Peijun Du; Stephen Marshall

Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have recently been proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. Because hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, the complexity of the process increases, leading to limited abstraction and performance. As such, a segmented SAE (S-SAE) is proposed, which partitions the original features into smaller data segments that are separately processed by different, smaller SAEs. This results in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.
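
A minimal single-hidden-layer stand-in for the segmented scheme, assuming pixel spectra are stacked as rows of X: the bands are split into contiguous segments, a small autoencoder is trained on each segment, and the hidden activations are concatenated as the reduced feature vector.

import numpy as np
from sklearn.neural_network import MLPRegressor

def segmented_autoencoder_features(X, n_segments=4, hidden=10):
    # X: (N, B) matrix of N pixel spectra with B bands.
    segments = np.array_split(np.arange(X.shape[1]), n_segments)
    features = []
    for idx in segments:
        Xs = X[:, idx]
        # Autoencoder on this spectral segment only (input == target).
        ae = MLPRegressor(hidden_layer_sizes=(hidden,), activation="relu",
                          max_iter=500).fit(Xs, Xs)
        # Hidden-layer (ReLU) activations of the trained autoencoder.
        H = np.maximum(0.0, Xs @ ae.coefs_[0] + ae.intercepts_[0])
        features.append(H)
    return np.hstack(features)   # (N, n_segments * hidden) reduced features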


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Real-Time Modeling of 3-D Soccer Ball Trajectories From Multiple Fixed Cameras

Jinchang Ren; Ming Xu; James Orwell; Graeme A. Jones

In this paper, model-based approaches for real-time 3-D soccer ball tracking are proposed, using image sequences from multiple fixed cameras as input. The main challenges include filtering false alarms, tracking through missing observations, and estimating 3-D positions from single or multiple cameras. The key innovations are: (1) incorporating motion cues and temporal hysteresis thresholding in ball detection; (2) modeling each ball trajectory as curve segments in successive virtual vertical planes so that the 3-D position of the ball can be determined from a single camera view; and (3) introducing four motion phases (rolling, flying, in possession, and out of play) and employing phase-specific models to estimate ball trajectories, which enables high-level semantics to be applied in low-level tracking. In addition, unreliable or missing ball observations are recovered using spatio-temporal constraints and temporal filtering. The system's accuracy and robustness are evaluated by comparing the estimated ball positions and phases with manually ground-truthed data from real soccer sequences.
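
The temporal hysteresis thresholding mentioned in (1) can be sketched as a two-threshold switch on a per-frame ball-likeness score (the score itself, derived from motion cues, is assumed given): a candidate is accepted once the score exceeds the high threshold and stays accepted while it remains above the lower one, so brief dips do not break a track and brief spikes do not start one.

import numpy as np

def hysteresis_detect(scores, t_high=0.7, t_low=0.4):
    # scores: per-frame ball-likeness values in [0, 1]; returns a boolean
    # detection flag per frame.
    active = False
    flags = np.zeros(len(scores), dtype=bool)
    for i, s in enumerate(scores):
        if not active and s >= t_high:
            active = True
        elif active and s < t_low:
            active = False
        flags[i] = active
    return flags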


IEEE Transactions on Geoscience and Remote Sensing | 2015

Novel two dimensional singular spectrum analysis for effective feature extraction and data classification in hyperspectral imaging

Jaime Zabalza; Jinchang Ren; Jiangbin Zheng; Junwei Han; Huimin Zhao; Shutao Li; Stephen Marshall

Feature extraction is of high importance for effective data classification in hyperspectral imaging (HSI). Considering the high correlation among band images, spectral-domain feature extraction is widely employed. For effective spatial information extraction, a 2-D extension (2D-SSA) of singular spectrum analysis, a recent technique for generic data mining and temporal signal analysis, is proposed. With 2D-SSA applied to HSI, each band image is decomposed into varying trends, oscillations, and noise. Using the trend and the selected oscillations as features, the reconstructed signal, with noise highly suppressed, becomes more robust and effective for data classification. Three publicly available data sets for HSI remote sensing data classification are used in our experiments. Comprehensive results using a support vector machine classifier have quantitatively evaluated the efficacy of the proposed approach. Benchmarked with several state-of-the-art methods, including 2-D empirical mode decomposition (2D-EMD), the proposed 2D-SSA approach is found to generate the best results in most cases. Unlike 2D-EMD, which requires sequential transforms to obtain a detailed decomposition, 2D-SSA extracts all components simultaneously. As a result, the execution time for feature extraction can also be dramatically reduced. The superiority of 2D-SSA in terms of enhanced discrimination ability is further validated when a relatively weak classifier, i.e., the k-nearest neighbor, is used for data classification. In addition, the combination of 2D-SSA with 1-D principal component analysis (2D-SSA-PCA) has generated the best results among several other approaches, demonstrating the great potential of combining 2D-SSA with other approaches for effective spatial-spectral feature extraction and dimension reduction in HSI.
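
A simplified, patch-based sketch of the 2D-SSA pipeline on one band image (embed, decompose, group, reconstruct): overlapping L x L windows form the columns of a trajectory matrix, a truncated SVD keeps the leading components (trend plus selected oscillations), and the reconstructed windows are averaged back into an image. The exact Hankel-block-Hankel embedding and diagonal averaging of 2D-SSA are not reproduced here.

import numpy as np

def ssa2d_reconstruct(band, L=5, n_components=3):
    # band: 2-D array for a single spectral band.
    H, W = band.shape
    patches, coords = [], []
    for i in range(H - L + 1):
        for j in range(W - L + 1):
            patches.append(band[i:i + L, j:j + L].ravel())
            coords.append((i, j))
    T = np.array(patches).T                       # (L*L, n_patches) trajectory matrix
    U, S, Vt = np.linalg.svd(T, full_matrices=False)
    T_low = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components]
    recon = np.zeros_like(band, dtype=float)
    counts = np.zeros_like(band, dtype=float)
    for k, (i, j) in enumerate(coords):
        recon[i:i + L, j:j + L] += T_low[:, k].reshape(L, L)
        counts[i:i + L, j:j + L] += 1.0
    return recon / counts                         # noise-suppressed band image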


Computer Vision and Image Understanding | 2009

Tracking the soccer ball using multiple fixed cameras

Jinchang Ren; James Orwell; Graeme A. Jones; Ming Xu

This paper demonstrates innovative techniques for estimating the trajectory of a soccer ball from multiple fixed cameras. Since the ball is nearly always moving and frequently occluded, its size and shape appearance vary over time and between cameras. Knowledge about the soccer domain is utilized and expressed in terms of field, object and motion models to distinguish the ball from other movements in the tracking and matching processes. Using ground-plane velocity, longevity, normalized size and color features, each track obtained from a Kalman filter is assigned a likelihood measure of representing the ball. This measure is further refined by reasoning through occlusions and by back-tracking in the track history, which is demonstrated to improve the accuracy and continuity of the results. Finally, a simple 3D trajectory model is presented, and the estimated 3D ball positions are fed back to constrain the 2D processing for more efficient and robust detection and tracking. Experimental results with quantitative evaluations on several long sequences are reported.
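
A minimal constant-velocity Kalman filter for a single 2-D ball track is sketched below, assuming image-plane observations with None for occluded frames; the domain-specific likelihood features (ground-plane velocity, longevity, size, colour) and the 3D feedback loop are not reproduced.

import numpy as np

def kalman_track(observations, dt=1.0, q=1e-2, r=1.0):
    # State (x, y, vx, vy) with a constant-velocity model; observations is a
    # list of (x, y) positions or None for frames with no measurement. The
    # first frame is assumed to have a valid observation.
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    Hm = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([observations[0][0], observations[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in observations:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        if z is not None:                       # update (skipped when occluded)
            y = np.asarray(z, float) - Hm @ x
            S = Hm @ P @ Hm.T + R
            K = P @ Hm.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(4) - K @ Hm) @ P
        track.append(x[:2].copy())
    return np.array(track)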

Collaboration


Dive into Jinchang Ren's collaboration.

Top Co-Authors


Jaime Zabalza

University of Strathclyde


Jiangbin Zheng

Northwestern Polytechnical University


Tong Qiao

University of Strathclyde


Genyun Sun

China University of Petroleum
