Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jingwen Yan is active.

Publication


Featured research published by Jingwen Yan.


World Congress on Intelligent Control and Automation | 2008

Image fusion algorithm based on orientation information motivated Pulse Coupled Neural Networks

Xiaobo Qu; Changwei Hu; Jingwen Yan

The Pulse Coupled Neural Network (PCNN) is a visual-cortex-inspired neural network characterized by global coupling and pulse synchronization of neurons. It has been proven suitable for image processing and has been successfully employed in image fusion. In most PCNN-based fusion algorithms, however, only a single pixel value is used to motivate each PCNN neuron. This is not effective enough, because human vision is sensitive to features, not only to pixel values. In this paper, novel orientation information is used as the feature that motivates the PCNN. Visual observation and objective performance evaluation criteria demonstrate that the proposed algorithm outperforms typical wavelet-based, Laplacian pyramid-based, and PCNN-based fusion algorithms.
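
A minimal sketch of the kind of fusion rule the abstract describes: a simplified PCNN whose stimulus is an orientation-style gradient feature rather than the raw pixel value, with the image whose neurons fire more often winning each pixel. The feature definition, the PCNN parameters, and the selection rule below are illustrative assumptions, not the paper's reported settings.

```python
# Hedged sketch: a simplified PCNN driven by an orientation-based feature
# (gradient energy) instead of the raw pixel value. Parameter values and the
# exact feature are assumptions made for illustration.
import numpy as np
from scipy import ndimage

def orientation_feature(img):
    """Assumed stand-in for the paper's orientation information:
    local gradient magnitude."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

def pcnn_firing_map(stimulus, n_iter=30, alpha_t=0.2, v_t=20.0,
                    beta=0.1, v_l=1.0):
    """Count how often each neuron fires when driven by `stimulus`."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    y = np.zeros_like(stimulus)
    theta = np.ones_like(stimulus)
    fire_count = np.zeros_like(stimulus)
    for _ in range(n_iter):
        link = v_l * ndimage.convolve(y, kernel, mode="nearest")
        u = stimulus * (1.0 + beta * link)            # internal activity
        y = (u > theta).astype(float)                 # pulse output
        theta = np.exp(-alpha_t) * theta + v_t * y    # dynamic threshold
        fire_count += y
    return fire_count

def fuse(img_a, img_b):
    """Pixel-wise selection: keep the pixel whose PCNN fires more often."""
    fa = pcnn_firing_map(orientation_feature(img_a))
    fb = pcnn_firing_map(orientation_feature(img_b))
    return np.where(fa >= fb, img_a, img_b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))
    b = rng.random((64, 64))
    print(fuse(a, b).shape)
```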


Journal of Sensors | 2015

Undersampled Hyperspectral Image Reconstruction Based on Surfacelet Transform

Lei Liu; Jingwen Yan; Di Guo; Yunsong Liu; Xiaobo Qu

Hyperspectral imaging is a crucial technique for military and environmental monitoring. However, limited hardware resources severely constrain the transmission and storage of the huge amount of data in hyperspectral images. This limitation can potentially be overcome by compressive sensing (CS), which allows images to be reconstructed from undersampled measurements with low error. Sparsity and incoherence are two essential requirements for CS. In this paper, we introduce the surfacelet, a directional multiresolution transform for 3D data, to sparsify hyperspectral images. In addition, Gram-Schmidt orthogonalization is applied to the CS random encoding matrix, and two-dimensional and three-dimensional orthogonal CS random encoding matrices as well as a patch-based CS encoding scheme are designed. The proposed surfacelet-based hyperspectral image reconstruction problem is solved with a fast iterative shrinkage-thresholding algorithm. Experiments demonstrate that the reconstruction of spectral lines and spatial images is significantly better with the proposed method than with conventional three-dimensional wavelets, and that increasing the randomness of the encoding matrix further improves the quality of the hyperspectral data. The patch-based CS encoding strategy can handle large data volumes because data in different patches can be sampled independently.
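
The reconstruction step mentioned in the abstract, a fast iterative shrinkage-thresholding algorithm (FISTA) with a sparsifying 3-D transform, can be sketched as below. Since no standard Python surfacelet implementation is assumed here, a 3-D DCT stands in for the surfacelet transform and a dense Gaussian matrix stands in for the orthogonal random encoding; the sizes, regularization weight, and iteration count are illustrative.

```python
# Hedged sketch: FISTA for  min_x  0.5||Ax - y||^2 + lam * ||Psi x||_1,
# where Psi is a placeholder orthonormal 3-D transform (DCT, not surfacelet)
# and A is an assumed random encoding of the flattened cube.
import numpy as np
from scipy.fft import dctn, idctn

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, shape, lam=0.01, n_iter=100):
    """FISTA with an orthonormal transform prior (3-D DCT placeholder)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        v = (z - grad / L).reshape(shape)
        # proximal step: soft-threshold the transform coefficients
        c = soft(dctn(v, norm="ortho"), lam / L)
        x_new = idctn(c, norm="ortho").ravel()
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x.reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shape = (8, 8, 16)                     # tiny x-y-spectral test cube
    cube = rng.random(shape)
    n = cube.size
    A = rng.standard_normal((n // 3, n)) / np.sqrt(n)   # undersampled encoding
    y = A @ cube.ravel()
    rec = fista(A, y, shape)
    print(rec.shape)
```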


Optical Engineering | 2015

Karhunen-Loève transform for compressive sampling hyperspectral images

Lei Liu; Jingwen Yan; Xianwei Zheng; Hong Peng; Di Guo; Xiaobo Qu

Compressed sensing (CS) is a joint sampling and compression technology for remote sensing. In hyperspectral imaging, a typical CS method encodes the two-dimensional (2-D) spatial information of each spectral band, or additionally encodes the third, spectral, dimension. However, encoding the spatial information is much easier than encoding the spectral information. It is therefore crucial to exploit the spectral information to improve the compression rate of 2-D CS data. We propose to encode the spectral dimension with an adaptive Karhunen-Loève transform. With a mathematical proof, we show that interspectral correlations are preserved among the 2-D randomly encoded spatial data. This property means that 2-D CS data can be compressed effectively with a Karhunen-Loève transform. Experiments demonstrate that the proposed method reconstructs both spectral curves and spatial images better than traditional compression methods at bit rates from 0 to 1.
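
A rough illustration of the spectral decorrelation idea: an adaptive Karhunen-Loève transform (the eigenvectors of the inter-band covariance) is applied across the 2-D CS measurements of all bands, and only the strongest components are kept. The measurement model, the synthetic spectra, and the truncation rule below are assumptions made for the sketch, not the paper's exact coding pipeline.

```python
# Hedged sketch: adaptive KLT across the spectral dimension of 2-D CS
# measurements, keeping only the top components.
import numpy as np

def klt_compress(measurements, keep):
    """measurements: (m, bands) array, one column of 2-D CS samples per band.
    Returns truncated KLT coefficients, the basis, and the band means."""
    mean = measurements.mean(axis=0, keepdims=True)
    centered = measurements - mean
    cov = centered.T @ centered / centered.shape[0]   # inter-band covariance
    w, v = np.linalg.eigh(cov)                        # ascending eigenvalues
    basis = v[:, ::-1][:, :keep]                      # top-`keep` eigenvectors
    return centered @ basis, basis, mean

def klt_decompress(coeffs, basis, mean):
    return coeffs @ basis.T + mean

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    bands, pixels, m = 32, 64 * 64, 1200
    # correlated synthetic spectra: smooth random walks along the bands
    spectra = np.cumsum(rng.standard_normal((pixels, bands)), axis=1)
    A = rng.standard_normal((m, pixels)) / np.sqrt(pixels)
    Y = A @ spectra                                   # 2-D CS samples per band
    coeffs, basis, mean = klt_compress(Y, keep=8)
    err = np.linalg.norm(klt_decompress(coeffs, basis, mean) - Y) / np.linalg.norm(Y)
    print(f"relative error keeping 8 of {bands} components: {err:.3f}")
```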


Congress on Image and Signal Processing | 2008

A Novel Video Denoising Method Based on Surfacelet Transform

Jingwen Yan; Hongzhi Xiao; Xiaobo Qu

In this paper, we propose a novel video denoising method that applies a 3D context model in the surfacelet transform (ST) domain (3DCMST). Because of its directional decomposition, perfect reconstruction, and low redundancy, the ST has become a powerful tool in image processing and analysis. To take full advantage of the characteristics of the coefficients in the ST domain, the context model is extended from 2D to 3D, which enables true 3D denoising. The ST coefficients are divided into several classes according to their energy distribution by the 3D context model, and each class receives its own energy estimate and threshold. Experimental results show that 3DCMST delivers good denoising performance in both quality and fidelity, and it is especially suitable for video sequences with intense motion and abundant texture.
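
A hedged sketch of the class-wise shrinkage that a 3-D context model enables: coefficients (a surfacelet subband in the paper; here just a generic 3-D array) are grouped by local energy, and each group is thresholded with its own value. The context size, the number of classes, and the BayesShrink-style threshold formula are illustrative assumptions, not the paper's estimator.

```python
# Hedged sketch: 3-D context-model thresholding on a generic coefficient array.
import numpy as np
from scipy import ndimage

def context_threshold(coeffs, sigma_noise, n_classes=4):
    """Class-wise soft thresholding driven by a 3-D local-energy context."""
    energy = ndimage.uniform_filter(coeffs ** 2, size=3)   # 3x3x3 context
    edges = np.quantile(energy, np.linspace(0, 1, n_classes + 1))
    out = np.zeros_like(coeffs)
    for k in range(n_classes):
        mask = (energy >= edges[k]) & (energy <= edges[k + 1])
        if not mask.any():
            continue
        # per-class signal variance estimate (clipped at zero)
        var_signal = max(coeffs[mask].var() - sigma_noise ** 2, 1e-12)
        thresh = sigma_noise ** 2 / np.sqrt(var_signal)    # BayesShrink-style
        out[mask] = np.sign(coeffs[mask]) * np.maximum(
            np.abs(coeffs[mask]) - thresh, 0.0)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # synthetic sparse "subband": a few large coefficients in a noise floor
    clean = np.zeros((16, 32, 32))
    clean[::4, ::4, ::4] = 5.0
    noisy = clean + 0.5 * rng.standard_normal(clean.shape)
    den = context_threshold(noisy, sigma_noise=0.5)
    print(np.abs(noisy - clean).mean(), np.abs(den - clean).mean())
```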


Chinese Optics Letters | 2007

A novel image fusion algorithm based on bandelet transform

Xiaobo Qu; Jingwen Yan; Guofu Xie; Ziqian Zhu; Bengang Chen


International Conference on Wavelet Analysis and Pattern Recognition | 2007

Image fusion algorithm based on neighbors and cousins information in nonsubsampled contourlet transform domain

Xiaobo Qu; Guofu Xie; Jingwen Yan; Ziqian Zhu; Bengang Chen


Optics Communications | 2015

High quality multi-focus image fusion using self-similarity and depth information

Di Guo; Jingwen Yan; Xiaobo Qu


Optics and Precision Engineering | 2009

Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian

Xiaobo Qu; Jingwen Yan; Gui-De Yang


Bio-Inspired Computing: Theories and Applications | 2007

Multi-focus Image Fusion Algorithm Based on Regional Firing Characteristic of Pulse Coupled Neural Networks

Xiaobo Qu; Jingwen Yan


Advances in Applied Mathematics and Mechanics | 2015

Bessel Sequences and Its F-Scalability

Lei Liu; Xianwei Zheng; Jingwen Yan; Xiao-Dong Niu

Collaboration


Dive into Jingwen Yan's collaborations.

Top Co-Authors

Guofu Xie
Chinese Academy of Sciences

Di Guo
Xiamen University of Technology