
Publication


Featured research published by Pan Lin.


Signal, Image and Video Processing | 2017

Technique for multi-focus image fusion based on fuzzy-adaptive pulse-coupled neural network

Yong Yang; Yue Que; Shuying Huang; Pan Lin

Multi-focus image fusion techniques can solve the problem that not all targets in an image are clear when imaging the same scene. In this paper, a novel multi-focus image fusion technique is presented, which is developed by using the nonsubsampled contourlet transform (NSCT) and a proposed fuzzy-logic-based adaptive pulse-coupled neural network (PCNN) model. In our method, the sum-modified Laplacian (SML) is calculated as the motivation for PCNN neurons in the NSCT domain. Since the linking strength plays an important role in the PCNN, we propose an adaptive fuzzy way to determine it by computing each coefficient’s importance relative to the surrounding coefficients. Combined with human visual perception characteristics, the fuzzy membership value is employed to automatically obtain the degree of importance of each coefficient, which is utilized as the linking strength in the PCNN model. Experimental results on simulated and real multi-focus images show that the proposed technique outperforms a series of existing fusion methods.
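The SML focus measure that motivates the PCNN neurons can be sketched in plain NumPy. The 3×3 summation window is an illustrative assumption, and the paper computes the measure on NSCT coefficients rather than raw pixels:

```python
import numpy as np

def sum_modified_laplacian(img, window=3):
    """Sum-modified Laplacian (SML): a focus measure.  A plain-NumPy
    sketch; window size and unit step are illustrative assumptions."""
    img = img.astype(np.float64)
    # Modified Laplacian: |2I - I_up - I_down| + |2I - I_left - I_right|
    ml = np.zeros_like(img)
    ml[1:-1, :] += np.abs(2 * img[1:-1, :] - img[:-2, :] - img[2:, :])
    ml[:, 1:-1] += np.abs(2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])
    # Sum the ML values over a local window
    pad = window // 2
    padded = np.pad(ml, pad, mode="edge")
    sml = np.zeros_like(img)
    for dy in range(window):
        for dx in range(window):
            sml += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return sml

# A sharp edge yields a positive SML response; a flat region yields zero
flat = np.full((8, 8), 100.0)
edge = np.tile(np.repeat([0.0, 255.0], 4), (8, 1))
print(sum_modified_laplacian(flat).max(), sum_modified_laplacian(edge).max() > 0)
```

In the fused image, the coefficient whose SML response is larger (i.e., the more in-focus source) wins the selection.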


IEEE Transactions on Instrumentation and Measurement | 2017

Multiple Visual Features Measurement With Gradient Domain Guided Filtering for Multisensor Image Fusion

Yong Yang; Yue Que; Shuying Huang; Pan Lin

Multisensor image fusion technologies, which convey image information from different sensor modalities to a single image, have been a growing interest in recent research. In this paper, we propose a novel multisensor image fusion method based on multiple visual features measurement with gradient domain guided filtering. First, a Gaussian smoothing filter is employed to decompose each source image into two components: approximate component formed by homogeneous regions and detail component with sharp edges. Second, an effective decision map construction model is presented by measuring three key visual features of the input sensor image: contrast saliency, sharpness, and structure saliency. Third, a gradient domain guided filtering-based decision map optimization technique is proposed to make full use of spatial consistency and generate weight maps. Finally, the resultant image is fused with the weight maps and then is experimentally verified through multifocus image, multimodal medical image, and infrared-visible image fusion. The experimental results demonstrate that the proposed method can achieve better performance than state-of-the-art methods in terms of subjective visual effect and objective evaluation.
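The first decomposition step above is straightforward to sketch; `sigma` is an illustrative assumption, as the abstract does not specify the smoothing scale:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_decompose(img, sigma=2.0):
    """Split a source image into an approximate (base) component of
    homogeneous regions and a detail component holding sharp edges,
    via Gaussian smoothing.  sigma=2.0 is an assumption."""
    base = gaussian_filter(img.astype(np.float64), sigma=sigma)
    detail = img - base  # sharp edges and fine texture
    return base, detail

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(32, 32))
base, detail = two_scale_decompose(img)
# The two components reconstruct the source exactly
print(np.allclose(base + detail, img))
```

Because the split is additive, fusing the base and detail components separately and summing them yields a valid fused image.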


IEEE Access | 2017

A Hybrid Method for Multi-Focus Image Fusion Based on Fast Discrete Curvelet Transform

Yong Yang; Song Tong; Shuying Huang; Pan Lin; Yuming Fang

This paper presents a fast discrete Curvelet transform (FDCT)-based technique for multi-focus image fusion to address two problems: texture selection in the FDCT domain and the block effect in spatial-based fusion. First, we present a frequency-based model by performing FDCT on the input images. Considering the characteristics of the human visual system, a union of pulse-coupled neural network and sum-modified-Laplacian algorithms is proposed to extract the detailed information of the frequencies. Then, we construct a hybrid spatial-based model. Unlike other spatial-based methods, we combine the image difference and the detailed information extracted from the input images to detect the focused region. Finally, to evaluate the robustness of the proposed method, we design a comprehensive evaluation process considering misregistration, noise error, and conditional focus situations. Experimental results indicate that the proposed method improves fusion performance and has lower computational complexity compared with various existing frequency-based and spatial-based fusion methods.


IEEE Access | 2017

Multi-Frame Super-Resolution Reconstruction Based on Gradient Vector Flow Hybrid Field

Shuying Huang; Jun Sun; Yong Yang; Yuming Fang; Pan Lin

In this paper, we propose a novel multi-frame super-resolution (SR) method, which is developed by incorporating image enhancement and denoising into the SR process. For image enhancement, a gradient vector flow hybrid field (GVFHF) algorithm, which is robust to noise, is first designed to capture image edges more accurately. Then, by replacing the gradient of the anisotropic diffusion shock filter (ADSF) with GVFHF, a GVFHF-based ADSF (GVFHF-ADSF) model is proposed, which can effectively achieve image denoising and enhancement. In addition, a difference-curvature-based spatial weight factor is defined in the GVFHF-ADSF model to obtain an adaptive weight between denoising and enhancement in flat and edge regions. Finally, a GVFHF-ADSF-based multi-frame SR method is presented by employing the GVFHF-ADSF model as a regularization term, and the steepest descent algorithm is adopted to solve the inverse SR problem. Experimental results and comparisons with existing methods demonstrate that the proposed GVFHF-ADSF-based SR algorithm can effectively suppress both Gaussian and salt-and-pepper noise while enhancing the edges of the reconstructed image.
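The overall steepest-descent SR loop can be sketched as below. The paper's GVFHF-ADSF regularizer is replaced here with a plain Laplacian smoothness term, and registration and blur are omitted, so this is only the skeleton of the method under those assumptions:

```python
import numpy as np

def sr_steepest_descent(lr_frames, scale=2, n_iter=50, beta=0.1, lam=0.05):
    """Bare-bones multi-frame SR by steepest descent.  The degradation
    model is plain block-averaging downsampling; a Tikhonov-style
    smoothness term stands in for the (far more involved) GVFHF-ADSF
    regularizer.  All parameter values are illustrative assumptions."""
    h, w = lr_frames[0].shape
    hr = np.kron(lr_frames[0], np.ones((scale, scale)))  # initial guess
    for _ in range(n_iter):
        grad = np.zeros_like(hr)
        for y in lr_frames:
            # Forward model: block-average the HR estimate down to LR size
            down = hr.reshape(h, scale, w, scale).mean(axis=(1, 3))
            resid = down - y
            # Back-project the residual to HR resolution
            grad += np.kron(resid, np.ones((scale, scale))) / scale**2
        # Simple smoothness regularizer (discrete Laplacian)
        lap = (np.roll(hr, 1, 0) + np.roll(hr, -1, 0)
               + np.roll(hr, 1, 1) + np.roll(hr, -1, 1) - 4 * hr)
        grad -= lam * lap
        hr -= beta * grad
    return hr

# Smooth ramp as ground truth; three identical LR observations for brevity
hr_true = np.linspace(0, 1, 16 * 16).reshape(16, 16)
lr = hr_true.reshape(8, 2, 8, 2).mean(axis=(1, 3))
hr_rec = sr_steepest_descent([lr] * 3)
print(hr_rec.shape)
```

Swapping the Laplacian term for an edge-aware regularizer like the paper's GVFHF-ADSF is what lets the real method denoise flat regions while keeping edges sharp.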


Proceedings of the 2nd International Conference on Computer Science and Application Engineering - CSAE '18 | 2018

Rapid Detection of Targets from Complex Backgrounds Based on Eye-tracking Data

Yichuan Jiang; Sheng Ge; Xinyu Chen; Hui Liu; Yue Leng; Yuankui Yang; Pan Lin; Junfeng Gao; Ruiming Wang; Keiji Iramina

In this study, we used a remote eye-tracker in a head-free setting to measure target detection in visual scenes. Participants underwent two kinds of tasks that were designed to simulate different situations and to study the validity and accuracy of the remote eye-tracking system. We found that the average target detection rate in the simulation task reached 88.95%, whereas in the real scene task the average accuracy was 83.20%. Our results show that the remote eye-tracker possesses enough precision to be used for the measurement of target detection in complex visual scenes.


IEEE Journal of Biomedical and Health Informatics | 2018

Sinusoidal Signal Assisted Multivariate Empirical Mode Decomposition for Brain–Computer Interfaces

Sheng Ge; Yan Hua Shi; Ruimin Wang; Pan Lin; Jun Feng Gao; Gao Peng Sun; Keiji Iramina; Yuan Kui Yang; Yue Leng; Hai Xian Wang; Wenming Zheng

A brain–computer interface (BCI) is a communication approach that permits cerebral activity to control computers or external devices. Brain electrical activity recorded with electroencephalography (EEG) is most commonly used for BCI. Noise-assisted multivariate empirical mode decomposition (NA-MEMD) is a data-driven time-frequency analysis method that can be applied to nonlinear and nonstationary EEG signals for BCI data processing. However, because white Gaussian noise occupies a broad range of frequencies, some redundant components are introduced. To solve this leakage problem, in this study, we propose using a sinusoidal assisted signal that occupies the same frequency ranges as the original signals to improve MEMD performance. To verify the effectiveness of the proposed sinusoidal signal assisted MEMD (SA-MEMD) method, we compared the decomposition performances of MEMD, NA-MEMD, and the proposed SA-MEMD using synthetic signals and a real-world BCI dataset. The spectral decomposition results indicate that the proposed SA-MEMD can avoid the generation of redundant components and over-decomposition, thus substantially reducing the mode mixing and misalignment that occur in MEMD and NA-MEMD. Moreover, using SA-MEMD as a signal preprocessing method instead of MEMD or NA-MEMD can significantly improve BCI classification accuracy and reduce calculation time, which indicates that SA-MEMD is a powerful spectral decomposition method for BCI.


international congress on image and signal processing | 2017

Action understanding based on a combination of one-versus-rest and one-versus-one multi-classification methods

Hui Liu; Wengming Zheng; Gaopeng Sun; Yanhua Shi; Yue Leng; Pan Lin; Ruimin Wang; Yuankui Yang; Jun Feng Gao; Haixian Wang; Keiji Iramina; Sheng Ge

When people observe the actions of others, they naturally try to understand the underlying intentions. This behavior is called action understanding, and it has an important influence on mental development, language comprehension, and socialization. In this study, we used functional near-infrared spectroscopy (fNIRS) to obtain brain signals related to action understanding and then classified different intentions. Aiming to overcome the drawbacks of traditional multiclass classification methods of one-versus-rest (OVR) and one-versus-one (OVO), in this paper, we propose a new effective method to solve multiclass classification that is a combination of OVR and OVO. Compared with OVO, this new method effectively improved the accuracy of four-class classification from 25% to 48%.
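The abstract does not publish the exact combination rule, but one plausible OVR+OVO hybrid is to let the OVR scores shortlist the two most likely classes and then let the corresponding pairwise (OVO) classifier make the final call. The sketch below applies that assumption to synthetic data with linear SVMs; the classifier choice and dataset are illustrative, not the fNIRS pipeline from the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def ovr_ovo_predict(X_train, y_train, X_test):
    """One hypothetical OVR+OVO combination: OVR decision scores pick
    the two best candidate classes, then the pairwise OVO classifier
    for that pair decides between them."""
    classes = np.unique(y_train)
    # OVR stage: one binary SVM per class, keep decision scores
    ovr = {c: SVC(kernel="linear").fit(X_train, (y_train == c).astype(int))
           for c in classes}
    scores = np.column_stack([ovr[c].decision_function(X_test) for c in classes])
    top2 = np.argsort(scores, axis=1)[:, -2:]  # two best OVR candidates
    # OVO stage: pairwise SVMs decide between the shortlisted classes
    pair_clf = {}
    preds = np.empty(len(X_test), dtype=classes.dtype)
    for i, (a, b) in enumerate(top2):
        a, b = sorted((classes[a], classes[b]))
        if (a, b) not in pair_clf:
            mask = np.isin(y_train, [a, b])
            pair_clf[(a, b)] = SVC(kernel="linear").fit(X_train[mask], y_train[mask])
        preds[i] = pair_clf[(a, b)].predict(X_test[i:i + 1])[0]
    return preds

X, y = make_classification(n_samples=200, n_classes=4, n_informative=6,
                           random_state=0)
preds = ovr_ovo_predict(X[:160], y[:160], X[160:])
print((preds == y[160:]).mean())
```

The shortlist step sidesteps OVR's ambiguous multi-positive regions, while the final pairwise vote avoids OVO's full round-robin of votes.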


international congress on image and signal processing | 2017

One class support vector machine based filter for improving the classification accuracy of SSVEP BCI

Gaopeng Sun; Hui Liu; Yanhua Shi; Yue Leng; Pan Lin; Ruimin Wang; Yuankui Yang; Junfeng Gao; Haixian Wang; Keiji Iramina; Sheng Ge

Canonical correlation analysis (CCA) has been proven effective in the detection of steady-state visual evoked potential (SSVEP) signals. However, the CCA method only chooses the frequency of the reference set that corresponds to the maximum correlation value as the target, which may make the CCA output less robust. In this study, we propose a one-class support vector machine based filter to filter the sequences of correlation values during the detection of SSVEP signals. The results demonstrate that the classification accuracy improved over different time windows for all subjects, and the improvement reached approximately 10% for some subjects. Moreover, the ratio of instructions that were filtered incorrectly was relatively low (less than 5%) when the SSVEP signals were generated effectively.
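The filtering idea can be illustrated with scikit-learn's `OneClassSVM`: train on correlation profiles from trials where the SSVEP was evoked effectively, then reject profiles that do not fit that distribution. The synthetic "correlation" data below is purely illustrative, not the paper's EEG data:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Valid trials: one stimulus frequency clearly dominates the CCA correlations
valid = rng.uniform(0.1, 0.3, size=(50, 4))
valid[:, 0] += 0.5
# Unreliable trials: flat correlation profile, no dominant frequency
flat = rng.uniform(0.1, 0.3, size=(10, 4))

# Train only on reliable profiles; nu bounds the training-outlier fraction
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(valid)
print(ocsvm.predict(valid[:5]))  # mostly +1: accepted
print(ocsvm.predict(flat[:5]))   # mostly -1: filtered out
```

Filtered-out trials can then be discarded or re-presented instead of being forced into a possibly wrong SSVEP class.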


IEEE Access | 2017

Temporal-Spatial Features of Intention Understanding Based on EEG-fNIRS Bimodal Measurement

Sheng Ge; Meng Yuan Ding; Zheng Zhang; Pan Lin; Jun Feng Gao; Ruimin Wang; Gao Peng Sun; Keiji Iramina; Hui Hua Deng; Yuan Kui Yang; Yue Leng

Understanding the actions of other people is a key component of social interaction. This paper used an electroencephalography and functional near infrared spectroscopy (EEG-fNIRS) bimodal system to investigate the temporal-spatial features of action intention understanding. We measured brain activation while participants observed three actions: 1) grasping a cup for drinking; 2) grasping a cup for moving; and 3) no meaningful intention. Analysis of EEG maximum standardized current density revealed that brain activation transitioned from the left to the right hemisphere. EEG-fNIRS source analysis results revealed that both the mirror neuron system and theory of mind network are involved in action intention understanding, and the extent to which these two systems are engaged appears to be determined by the clarity of the observed intention. These findings indicate that action intention understanding is a complex and dynamic process.


international conference on information science and control engineering | 2016

Evaluating the Feasibility of a Novel Approach for SSVEP Detection Accuracy Improvement Using Phase Shifts

Gaopeng Sun; Ruimin Wang; Yue Leng; Yuankui Yang; Pan Lin; Sheng Ge

The canonical correlation analysis (CCA), double-partial least-squares (DPLS), and least absolute shrinkage and selection operator (LASSO) methods have been proven effective in detecting the steady-state visual evoked potential (SSVEP) in SSVEP-based brain-computer interface systems. However, the accuracy of SSVEP classification can be affected by phase shifts in the electroencephalography data, so we explored the possibility of improving SSVEP detection with these methods at different phase shifts. After calculating the accuracy at different phases, we found that phase shifts could indeed affect SSVEP classification accuracy; with the CCA method, classification accuracy improved by up to about 1.1%. We also compared the three methods and observed differences between CCA, DPLS, and LASSO at the different phase shifts. The results indicated that, on the one hand, the accuracy of SSVEP detection improved as the phase changed; on the other hand, although all three methods achieved high classification accuracy, DPLS and LASSO showed larger fluctuations than CCA as the phase of each participant's electroencephalography data, or their average, changed.

Collaboration

Pan Lin's top co-authors:

Sheng Ge, Southeast University
Yue Leng, Southeast University
Yong Yang, Jiangxi University of Finance and Economics
Shuying Huang, Jiangxi University of Finance and Economics
Jun Feng Gao, South Central University for Nationalities
Junfeng Gao, South Central University for Nationalities
Yue Que, Jiangxi University of Finance and Economics