Jiangyong Duan
Chinese Academy of Sciences
Publication
Featured research published by Jiangyong Duan.
International Conference on Computer Vision | 2013
Gaofeng Meng; Ying Wang; Jiangyong Duan; Shiming Xiang; Chunhong Pan
Images captured in foggy weather conditions often suffer from poor visibility. In this paper, we propose an efficient regularization method to remove haze from a single input image. Our method benefits from exploiting an inherent boundary constraint on the transmission function. This constraint, combined with a weighted L1-norm based contextual regularization, is modeled into an optimization problem to estimate the unknown scene transmission. An efficient algorithm based on variable splitting is also presented to solve the problem. The proposed method requires only a few general assumptions and can restore a high-quality haze-free image with faithful colors and fine image details. Experimental results on a variety of hazy images demonstrate the effectiveness and efficiency of the proposed method.
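The boundary-constraint idea can be illustrated with a short sketch. Under the haze imaging model I = J·t + A·(1−t), requiring the recovered radiance J to stay within a plausible intensity range [C0, C1] yields a per-pixel lower bound on the transmission t. The function name, the default bound values, and the [0, 1] image range below are illustrative assumptions, not the paper's exact constants; the weighted L1 contextual regularization and variable-splitting solver are omitted.

```python
import numpy as np

def boundary_constrained_transmission(image, airlight, c0=30/255, c1=300/255):
    """Per-pixel lower bound on the transmission t(x).

    From I = J*t + A*(1-t) we get J = (I - A)/t + A. Requiring
    c0 <= J <= c1 per channel gives, depending on the sign of I - A:
        t >= (I - A)/(c1 - A)   when I > A
        t >= (A - I)/(A - c0)   when I < A

    image:    H x W x 3 float array with values in [0, 1]
    airlight: length-3 array, the estimated atmospheric light A
    """
    t_b = np.maximum((airlight - image) / np.maximum(airlight - c0, 1e-6),
                     (image - airlight) / np.maximum(c1 - airlight, 1e-6))
    # The tightest bound across the three channels, clipped to a valid range.
    return np.clip(t_b.max(axis=2), 0.0, 1.0)
```

In the full method this bound serves as a hard constraint while the contextual regularizer smooths the transmission map before recovering J.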
IEEE Geoscience and Remote Sensing Letters | 2015
Zisha Zhong; Bin Fan; Jiangyong Duan; Lingfeng Wang; Kun Ding; Shiming Xiang; Chunhong Pan
We propose to integrate spectral-spatial feature extraction and tensor discriminant analysis for hyperspectral image classification. First, we apply effective spectral-spatial feature extraction approaches to the hyperspectral cube to extract a feature tensor for each pixel. Then, based on class label information, local tensor discriminant analysis is used to remove redundant information for the subsequent classification procedure. The approach not only extracts sufficient spectral-spatial features from the original hyperspectral images but also obtains a better feature representation owing to the tensor framework. Comparative results on two benchmarks demonstrate the effectiveness of our method.
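The first stage, forming a spatial-spectral feature tensor per pixel, can be sketched as follows. Here each pixel's tensor is simply its surrounding spatial patch across all bands; the window size and reflective border handling are illustrative assumptions, and the tensor discriminant analysis stage is not shown.

```python
import numpy as np

def extract_feature_tensors(cube, window=2):
    """For every pixel of a hyperspectral cube (H x W x B), extract the
    surrounding (2*window+1) x (2*window+1) x B spatial-spectral patch
    as its feature tensor. Edge pixels use reflective padding."""
    H, W, B = cube.shape
    p = np.pad(cube, ((window, window), (window, window), (0, 0)),
               mode="reflect")
    s = 2 * window + 1
    out = np.empty((H, W, s, s, B), dtype=cube.dtype)
    # Gather each spatial offset of the window as one slice of the tensor.
    for dy in range(s):
        for dx in range(s):
            out[:, :, dy, dx, :] = p[dy:dy + H, dx:dx + W, :]
    return out
```

Each resulting third-order tensor (rows x columns x bands) is then the input to the discriminant-analysis step, which projects it to a compact representation using class labels.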
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012
Gaofeng Meng; Chunhong Pan; Shiming Xiang; Jiangyong Duan; Nanning Zheng
In this paper, we propose a metric rectification method to restore a single camera-captured document image. The core idea is to construct an isometric image mesh by exploiting the geometry of the page surface and the camera. Our method uses a general cylindrical surface (GCS) to model the curved page shape. Under a few proper assumptions, the printed horizontal text lines are shown to be line-convergent symmetric. This property is then used to constrain the estimation of various model parameters under perspective projection. We also introduce a paraperspective projection to approximate the nonlinear perspective projection. A set of closed-form formulas is thus derived for estimating the GCS directrix and the document aspect ratio. Our method provides a straightforward framework for image metric rectification. It is insensitive to camera positions, viewing angles, and the shapes of document pages. To evaluate the proposed method, we conducted comprehensive experiments on both synthetic and real-captured images. The results demonstrate the efficiency of our method. We also carried out a comparative experiment on the public CBDAR2007 data set. The experimental results show that our method outperforms the state-of-the-art methods in terms of OCR accuracy and rectification errors.
Neurocomputing | 2014
Jiangyong Duan; Gaofeng Meng; Shiming Xiang; Chunhong Pan
This paper presents a novel region-based framework for multifocus image fusion. The core idea is to segment the in-focus regions from the input images and merge them to produce an all-in-focus image. To this end, we propose three intuitive constraints on the fusion process and model them as three energy terms, i.e., reconstruction error, out-of-focus energy, and smoothness regularization. The three terms are then formulated into an optimization problem to solve for a segmentation map. We also propose a greedy algorithm to minimize the objective function, which alternately updates each pixel in the segmentation map using a coarse-to-fine strategy. The fused image is finally generated by combining the segmented in-focus regions of each source image via the segmentation map. Our approach can yield a seamless result with far fewer ringing artifacts. Comparative experiments on various synthesized and real images demonstrate that our approach outperforms the state-of-the-art methods.
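The segmentation-map idea can be illustrated with a toy sketch: pick, per pixel, the source image with the highest local focus measure, then copy pixels through the resulting map. This is only a crude stand-in for the paper's energy minimization (the reconstruction-error, out-of-focus, and smoothness terms and the greedy coarse-to-fine solver are omitted); the gradient-energy focus measure and window size are illustrative assumptions.

```python
import numpy as np

def fuse_multifocus(images, window=4):
    """Toy multifocus fusion on grayscale float images of equal shape:
    per-pixel argmax of a windowed gradient-energy focus measure."""
    def local_energy(im):
        gy, gx = np.gradient(im.astype(float))
        e = gx ** 2 + gy ** 2
        # Crude box smoothing via shifted sums (keeps the sketch numpy-only).
        acc = np.zeros_like(e)
        for dy in range(-window, window + 1):
            for dx in range(-window, window + 1):
                acc += np.roll(np.roll(e, dy, axis=0), dx, axis=1)
        return acc

    focus = np.stack([local_energy(im) for im in images])
    seg = focus.argmax(axis=0)                 # segmentation map
    stack = np.stack([im.astype(float) for im in images])
    fused = np.take_along_axis(stack, seg[None], axis=0)[0]
    return fused, seg
```

A hard argmax map like this is exactly what produces the boundary-seam and ringing artifacts that the paper's energy terms are designed to suppress.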
Journal of Remote Sensing | 2016
Haichang Li; Ying Wang; Shiming Xiang; Jiangyong Duan; Feiyun Zhu; Chunhong Pan
In this article, a label propagation approach with automatic seed selection is developed for hyperspectral image classification. The core idea is to combine pixel-wise classification results with spatial information described by a data graph. Using only a support vector machine (SVM) classifier on spectral features for the hyperspectral classification task produces results with a salt-and-pepper appearance. To overcome this limitation, spatial information is incorporated by label propagation. The performance of label propagation depends on two points: the seeds and the connection graph. Generally, only a limited number of labelled samples are available, and these are used as seeds in label propagation. However, such limited seeds result in poor label propagation. Therefore, pseudo-seeds are automatically selected in local windows. Specifically, pixels whose initial SVM labels are consistent with those of most of their spatial neighbours are selected as seeds. Through seed selection, the number of seeds is greatly increased. Then, the label information of the selected seeds is propagated to their spatial neighbours using a data graph constructed according to the local structures in the image. Through seed selection and label propagation on the graph, the salt-and-pepper problem is solved elegantly: noisy labels are highly suppressed and most of the structures are preserved. Competitive experimental results on a variety of hyperspectral data sets demonstrate the effectiveness of the proposed method.
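The automatic seed-selection step can be sketched directly: a pixel becomes a pseudo-seed when its initial (e.g. SVM-predicted) label agrees with most of its spatial neighbours in a local window. The function name, window size, and agreement threshold below are illustrative assumptions.

```python
import numpy as np

def select_pseudo_seeds(labels, window=1, agree_ratio=0.8):
    """Mark a pixel as a pseudo-seed if its initial label matches at
    least `agree_ratio` of its neighbours in a (2*window+1)^2 window.

    labels: H x W integer array of initial per-pixel class labels.
    Returns an H x W boolean seed mask.
    """
    h, w = labels.shape
    # Pad with -1 so border neighbours outside the image never count.
    pad = np.full((h + 2 * window, w + 2 * window), -1, dtype=labels.dtype)
    pad[window:window + h, window:window + w] = labels
    agree = np.zeros((h, w))
    total = np.zeros((h, w))
    for dy in range(2 * window + 1):
        for dx in range(2 * window + 1):
            if dy == window and dx == window:
                continue  # skip the centre pixel itself
            nb = pad[dy:dy + h, dx:dx + w]
            valid = nb >= 0
            total += valid
            agree += valid & (nb == labels)
    return agree >= agree_ratio * total
```

Isolated mislabelled pixels fail the agreement test and are excluded, so only spatially consistent predictions feed the subsequent graph propagation.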
International Conference on Acoustics, Speech, and Signal Processing | 2013
Jiangyong Duan; Gaofeng Meng; Shiming Xiang; Chunhong Pan
This paper presents a new deblurring method to remove out-of-focus blur using similar image pairs. The method is motivated by the observation that a blurred structure in one image often has a clear counterpart in a similar sharp image. Our method first extracts patch pairs from the input images by SIFT matching. Then, constraints on the patch pairs are used to estimate the blur kernel via the RANSAC algorithm. Finally, non-blind deconvolution is applied to restore the blurred image. The main advantage is that the deblurring result can be improved with the help of additional similar clear images, which are available in many practical applications. Our method is validated on synthetic and real images by comparison with state-of-the-art methods.
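The kernel-estimation idea for a single matched patch pair can be sketched in the frequency domain: if the blurred patch B is (circularly) the clear patch C convolved with kernel K, then a regularized division recovers K. The regularizer eps, the crop size, and the single-pair setting are illustrative assumptions; the paper's SIFT matching and RANSAC inlier selection over many pairs are omitted.

```python
import numpy as np

def estimate_kernel(clear, blurred, ksize=9, eps=1e-2):
    """Estimate a blur kernel from one clear/blurred patch pair via
    regularized frequency-domain division:
        B = K (*) C  =>  K_hat ~ B_hat * conj(C_hat) / (|C_hat|^2 + eps)
    Patches must have the same shape; the model assumes circular
    convolution, which is adequate for small kernels and large patches."""
    C = np.fft.fft2(clear)
    B = np.fft.fft2(blurred)
    K = np.real(np.fft.ifft2(B * np.conj(C) / (np.abs(C) ** 2 + eps)))
    K = np.fft.fftshift(K)                 # move the kernel to the centre
    cy, cx = np.array(K.shape) // 2
    r = ksize // 2
    k = K[cy - r:cy + r + 1, cx - r:cx + r + 1]
    k = np.clip(k, 0, None)                # kernels are non-negative
    return k / k.sum()                     # and sum to one
```

In the full method, per-pair estimates like this one are scored across all SIFT-matched pairs, and RANSAC keeps only the mutually consistent ones before the final non-blind deconvolution.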
International Conference on Pattern Recognition | 2014
Haichang Li; Jiangyong Duan; Shiming Xiang; Lingfeng Wang; Chunhong Pan
Classification of hyperspectral images is an important issue in remote sensing image processing systems. Hyperspectral images have advantages in pixel-wise classification owing to their high spectral resolution. However, the pixel-wise classification result often exhibits a salt-and-pepper appearance because of the complex noise introduced by the atmosphere and the instrument. An effective way to overcome this phenomenon is to exploit spatial information. This paper proposes a method that addresses the above problem using spatial similarity information. First, to avoid the effect of noisy pixels and mixed pixels, reliable seeds are selected in local windows according to the agreement between the central pixel and its spatial neighbors. Then, the information of the reliable seeds is propagated to their spatial neighbors by a graph Laplacian. Specifically, the graph Laplacian is designed to propagate information among spatial neighbors with close similarity relationships, so that small or long, thin objects can still be identified. Through the seed selection and local reliable information propagation, the problem of noisy labels is solved elegantly. Experiments on three real hyperspectral data sets with different spatial resolutions, spectral resolutions, and land covers demonstrate the effectiveness of our method.
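The propagation step can be illustrated with the standard closed-form graph label propagation (normalized affinity, as in Zhou et al.'s framework); this is a generic stand-in, not the paper's exact Laplacian design, and the affinity matrix W is taken as given rather than built from spatial similarity as the paper does.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.5):
    """Graph label propagation in closed form:
        F = (I - alpha * S)^{-1} Y,   S = D^{-1/2} W D^{-1/2}
    W: n x n symmetric non-negative affinity matrix.
    Y: n x c one-hot seed labels (all-zero rows for unlabelled nodes).
    Returns the predicted class index per node (row-wise argmax of F)."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)
    return F.argmax(axis=1)
```

Because influence decays with graph distance, each unlabelled pixel takes the class of the nearest cluster of reliable seeds, which is how the salt-and-pepper noise is smoothed away.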
International Conference on Image Processing | 2013
Haichang Li; Ying Wang; Jiangyong Duan; Shiming Xiang; Chunhong Pan
In this paper, we propose a novel group-sparsity-based semi-supervised band selection method. There are three key features in our method. First, it fulfills the band selection task by imposing group sparsity on the regression coefficients of a robust linear regression-for-classification model, so that the selected bands yield lower classification errors. Second, a spatial smoothness prior is incorporated to preserve the similarity of spatial neighbors during band selection. Third, the objective function is efficiently optimized via an alternating iteration algorithm. Comparative results on two hyperspectral data sets validate the effectiveness of our method, showing higher classification accuracies.
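The group-sparsity mechanism can be sketched with a plain group-lasso regression solved by proximal gradient descent: each row of the coefficient matrix corresponds to one band, and rows zeroed by the group soft-threshold are the discarded bands. The function name and solver are illustrative; the paper's robust loss, spatial-smoothness prior, and alternating optimization are omitted.

```python
import numpy as np

def select_bands(X, Y, lam=0.1, lr=None, iters=500, k=None):
    """Toy group-sparse band selection: solve
        min_W  0.5 * ||X W - Y||_F^2 + lam * sum_b ||W[b, :]||_2
    by ISTA with a row-wise group soft-threshold.
    X: n x d (d bands), Y: n x c targets (e.g. one-hot class labels).
    Returns the indices of the top-k bands by coefficient norm,
    or all bands with nonzero rows when k is None."""
    d = X.shape[1]
    W = np.zeros((d, Y.shape[1]))
    if lr is None:
        lr = 1.0 / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant
    for _ in range(iters):
        G = X.T @ (X @ W - Y)                   # gradient of the data term
        V = W - lr * G
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        shrink = np.maximum(1 - lr * lam / np.maximum(norms, 1e-12), 0)
        W = shrink * V                          # group soft-threshold rows
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(-scores)[:k] if k else np.nonzero(scores)[0]
```

The row-wise (rather than element-wise) threshold is what makes entire bands drop out together, which is the point of using group sparsity for band selection.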
International Conference on Image Processing | 2013
Jixia Zhang; Haibo Wang; Shaoguo Liu; Jiangyong Duan; Ying Wang; Chunhong Pan
In this paper, we present an Integrated Semi-Supervised Graph (IntSSG) approach to automatically segment faces from color-depth video. In the first step, IntSSG performs skin color detection and online SIFT matching to initialize some face and non-face pixels. Then, the labels of these pixels are refined by adaptive depth thresholding. Finally, based on a semi-supervised graph framework, IntSSG segments the face by propagating the refined labels to the remaining pixels. Experimental results show that IntSSG is able to accurately segment faces in difficult situations such as large pose changes and illumination variations.
Asian Conference on Pattern Recognition | 2013
Jiangyong Duan; Gaofeng Meng; Shiming Xiang; Chunhong Pan
This paper presents a new method for multifocus image fusion. We formulate the problem as an optimization framework with three terms that model common visual artifacts. A reconstruction error term is used to remove boundary seam artifacts, and an out-of-focus energy term is used to remove ringing artifacts. Together with an additional smoothness term, these three terms define the objective function of our framework. The objective function is then minimized by an efficient greedy iteration algorithm. Our method produces high-quality fusion results with few visual artifacts. Comparative results demonstrate the effectiveness of our method.