Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yuanhua Qiao is active.

Publication


Featured research published by Yuanhua Qiao.


Cognitive Computation | 2017

Motor Imagery EEG Classification Based on Kernel Hierarchical Extreme Learning Machine

Lijuan Duan; Menghu Bao; Song Cui; Yuanhua Qiao; Jun Miao

As connections from the brain to an external device, brain-computer interface (BCI) systems are a crucial tool for assisted communication and control. When equipped with well-designed feature extraction and classification approaches, such systems can accurately acquire information from the brain. The Hierarchical Extreme Learning Machine (HELM) is an effective and accurate classification approach owing to its deep structure and extreme learning mechanism. We propose a classification system for motor imagery EEG signals based on the HELM combined with a kernel, termed the Kernel Hierarchical Extreme Learning Machine (KHELM). Principal Component Analysis (PCA) is used to reduce the dimensionality of the data, and Linear Discriminant Analysis (LDA) is introduced to increase the separation between features of different classes. To demonstrate its performance, the proposed system is applied to BCI Competition 2003 Dataset Ia and compared with state-of-the-art methods; the accuracy reaches 94.54%.
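The described pipeline can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: PCA and LDA come from scikit-learn, the hierarchical feature-learning layers of HELM are omitted, and the kernel step is reduced to the standard kernel-ELM output formula with an RBF kernel; all parameter values are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics.pairwise import rbf_kernel

def fit_pipeline(X, y, n_components=30, gamma=0.1, C=1.0):
    """X: (n_trials, n_features) EEG feature vectors; y: integer labels 0..n_classes-1."""
    pca = PCA(n_components=n_components).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
    Z = lda.transform(pca.transform(X))
    # Kernel-ELM-style output weights: beta = (K + I/C)^(-1) T, with one-hot targets T.
    K = rbf_kernel(Z, Z, gamma=gamma)
    T = np.eye(y.max() + 1)[y]
    beta = np.linalg.solve(K + np.eye(len(y)) / C, T)
    return pca, lda, Z, beta, gamma

def predict(model, X_new):
    pca, lda, Z_train, beta, gamma = model
    Z_new = lda.transform(pca.transform(X_new))
    return (rbf_kernel(Z_new, Z_train, gamma=gamma) @ beta).argmax(axis=1)
```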


International Symposium on Neural Networks | 2009

A Method of Human Skin Region Detection Based on PCNN

Lijuan Duan; Zhiqiang Lin; Jun Miao; Yuanhua Qiao

A method of human skin region detection based on PCNN is proposed in this paper. First, the input image is converted from the RGB color space to the YIQ color space, and the I-channel image is extracted. Second, the synchronous pulse-firing mechanism of the pulse-coupled neural network (PCNN) is used to simulate the skin-region detection mechanism of the human eye: skin and non-skin regions fire at different times, so the skin regions can be separated. Comparison with other methods shows that the proposed method produces more accurate segmentation results.
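As a rough illustration of the two steps described, the sketch below converts an RGB image to the YIQ I channel and runs a simplified PCNN, recording the iteration at which each pixel first fires; pixels that fire together can then be grouped as candidate skin regions. The simplified PCNN form and all parameter values are generic choices, not the paper's.

```python
import numpy as np

def rgb_to_i_channel(rgb):
    """RGB image (H, W, 3) with values in [0, 1] -> I channel of the YIQ color space."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.596 * r - 0.274 * g - 0.322 * b

def pcnn_fire_map(stim, n_iter=10, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Simplified PCNN: return the iteration at which each pixel first fires (-1 = never)."""
    H, W = stim.shape
    Y = np.zeros((H, W))                        # pulse output
    theta = np.full((H, W), stim.max())         # dynamic threshold
    fired_at = np.full((H, W), -1, dtype=int)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])        # 8-neighbour linking weights
    for t in range(n_iter):
        pad = np.pad(Y, 1)
        link = sum(kernel[i, j] * pad[i:i + H, j:j + W]
                   for i in range(3) for j in range(3))
        U = stim * (1.0 + beta * link)          # internal activity
        Y = (U > theta).astype(float)           # neurons whose activity exceeds the threshold fire
        fired_at[(Y > 0) & (fired_at < 0)] = t
        theta = theta * np.exp(-alpha_theta) + v_theta * Y
    return fired_at
```

In practice the I channel would first be normalized to [0, 1]; pixels sharing the same firing iteration are then grouped, and the group whose firing time matches typical skin intensities is kept as the skin region.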


Journal of Integrative Neuroscience | 2016

A neural network model for visual selection and shifting.

Yuanhua Qiao; Xiaojie Liu; Jun Miao; Lijuan Duan

In this paper, a two-layer network is built to simulate the mechanism of visual selection and shifting, based on a mapping dynamic model for instantaneous frequency. Unlike differential-equation models that use a limit cycle to simulate neuron oscillation, we build an instantaneous-frequency mapping dynamic model to describe changes in neuron frequency, avoiding the difficulty of generating a limit cycle. The activity of each neuron is reconstructed from its instantaneous frequency: the first layer of neurons performs image segmentation, and the second layer acts as a visual selector. The frequency of the second-layer (central) neuron changes continuously; when the central neuron resonates with the neurons corresponding to an object, that object is selected. As the central neuron's frequency continues to change, the selected object loses attention, the next object is selected, and the process repeats.
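The selection-and-shifting loop can be caricatured as follows, purely to illustrate the resonance idea: each segmented object is assumed (hypothetically) to be represented by a group oscillating at a fixed frequency, and the central neuron sweeps its frequency, "selecting" whichever group it currently resonates with. The object names, frequencies, and tolerance are invented for the example; the paper's mapping dynamic model is not reproduced.

```python
import numpy as np

# Hypothetical group frequencies for three segmented objects (Hz, illustrative only).
object_freqs = {"object_A": 8.0, "object_B": 12.0, "object_C": 15.0}
tolerance = 0.5   # resonance tolerance around the central neuron's frequency

def selected_object(central_freq):
    """Return the object whose group frequency the central neuron resonates with."""
    for name, f in object_freqs.items():
        if abs(central_freq - f) < tolerance:
            return name
    return None

# Sweep the central neuron's frequency and record which object holds attention.
for central_freq in np.arange(6.0, 17.0, 0.5):
    print(f"{central_freq:5.1f} Hz -> {selected_object(central_freq)}")
```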


Neural Computing and Applications | 2012

Qualitative analysis and application of locally coupled neural oscillator network

Yuanhua Qiao; Yong Meng; Lijuan Duan; Faming Fang; Jun Miao

This paper investigates a locally coupled neural oscillator autonomous system qualitatively. By applying an approximation method, we give a set of parameter values for which an asymptotically stable limit cycle exists, and we establish sufficient conditions on the coupling parameters that guarantee asymptotic global synchronization under the same external input. A gradational classifier is introduced to detect synchronization, and the network model based on the analytical results is applied to image segmentation. Its performance is comparable to that of other segmentation methods.
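For intuition about locally coupled oscillators synchronizing under a common input, here is a small simulation sketch. It uses a ring of van der Pol oscillators with nearest-neighbour diffusive coupling as a stand-in; the oscillator equations, coupling form, and parameters in the paper differ, so this only illustrates the qualitative behaviour that identically driven, sufficiently coupled units tend to pull toward a common trajectory.

```python
import numpy as np

def simulate_ring(n=10, coupling=0.5, external=1.0, mu=1.0,
                  dt=0.01, steps=20000, seed=0):
    """Euler-integrate a ring of diffusively coupled van der Pol oscillators."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n)    # activity of each oscillator
    y = rng.uniform(-1, 1, n)    # recovery variable
    for _ in range(steps):
        # Nearest-neighbour diffusive coupling on the activity variable (ring topology).
        neighbours = np.roll(x, 1) + np.roll(x, -1) - 2 * x
        dx = mu * (x - x**3 / 3 - y) + external + coupling * neighbours
        dy = x / mu
        x, y = x + dt * dx, y + dt * dy
    return x, y

x, y = simulate_ring(coupling=0.5)
print("spread of final activities (small => synchronized):", np.ptp(x))
```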


International Symposium on Neural Networks | 2010

Visual selection and attention shifting based on FitzHugh-Nagumo equations

Haili Wang; Yuanhua Qiao; Lijuan Duan; Faming Fang; Jun Miao; Bingpeng Ma

In this paper, we analyze the FitzHugh-Nagumo model and improve it to build a neural network that implements visual selection and attention shifting. Each group of neurons representing one object in a visual input is synchronized, while groups representing different objects are desynchronized. A cooperation and competition mechanism is introduced to accelerate the oscillation frequency of the salient object and to slow down the others, so that the most salient object jumps to high-frequency oscillation while all other objects remain silent. The object oscillating at high frequency is selected; the selected object is then inhibited, and the remaining neurons continue to oscillate to select the next salient object.
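As background, here is a minimal sketch of integrating the FitzHugh-Nagumo equations and estimating the resulting oscillation frequency for a few input currents; the network relies on the dependence of firing rate on input to make the most salient object oscillate fastest. Parameter values are standard textbook choices (a = 0.7, b = 0.8, epsilon = 0.08), not those of the paper, and the coupling and competition mechanisms are not modeled.

```python
import numpy as np

def fhn_frequency(I, a=0.7, b=0.8, eps=0.08, dt=0.05, steps=40000):
    """Euler-integrate one FitzHugh-Nagumo neuron and estimate its spike frequency."""
    v, w = -1.0, -0.5
    crossings = 0
    prev_v = v
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I        # fast membrane variable
        dw = eps * (v + a - b * w)       # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        if prev_v < 0.0 <= v:            # count upward zero crossings as spikes
            crossings += 1
        prev_v = v
    return crossings / (steps * dt)      # spikes per unit time

for I in (0.4, 0.6, 0.8, 1.0):
    print(f"I = {I:.1f}: approx. frequency {fhn_frequency(I):.3f}")
```

Counting upward zero crossings of v is a crude but serviceable spike detector for this parameter regime, where the limit cycle spans well beyond v = 0.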


International Conference on Natural Computation | 2009

A Texture Images Segmentation Method Based on ICA Filters

Lijuan Duan; Jicai Ma; Jun Miao; Yuanhua Qiao

In this paper we present a feature extraction approach using an ICA filter bank, which consists of ICA basis images learned from training images. Because the filter bank captures the inherent properties of textured images, we use it as a template model to extract texture features for segmentation. Experiments based on clustering and classification demonstrate the feasibility of this method.
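A rough sketch of the feature-extraction step is given below, assuming grayscale training images larger than the patch size and scikit-learn's FastICA; the patch size, filter count, and patch sampling are illustrative choices of my own, and the clustering and classification experiments are not reproduced.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import FastICA

def learn_ica_filters(images, patch=8, n_filters=16, n_patches=5000, seed=0):
    """Learn ICA basis images from random patches and return them as filters."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - patch)
        c = rng.integers(img.shape[1] - patch)
        patches.append(img[r:r + patch, c:c + patch].ravel())
    ica = FastICA(n_components=n_filters, random_state=seed)
    ica.fit(np.array(patches))
    return ica.components_.reshape(n_filters, patch, patch)

def texture_features(image, filters):
    """Per-pixel feature vector: rectified responses to each ICA filter."""
    responses = [np.abs(convolve2d(image, f, mode="same")) for f in filters]
    return np.stack(responses, axis=-1)   # (H, W, n_filters), ready for clustering
```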


Archive | 2011

Qualitative Analysis in Locally Coupled Neural Oscillator Network

Yong Meng; Yuanhua Qiao; Jun Miao; Lijuan Duan; Faming Fang

The paper investigates a locally coupled neural oscillator autonomous system qualitatively. To obtain analytical results, we apply an approximation method to find the set of parameter values for which an asymptotically stable limit cycle exists, and then give sufficient conditions on the coupling parameters that guarantee asymptotic global synchronization of the oscillators given the same external input. These results are potentially useful for analytical and numerical work on the binding problem in perceptual grouping and pattern segmentation.


International Symposium on Neural Networks | 2008

Image segmentation using dynamic mechanism based PCNN model

Yuanhua Qiao; Jun Miao; Lijuan Duan; Yunfeng Lu

Pulse-coupled neural networks (PCNN) can be applied efficiently to image segmentation. However, segmentation performance depends on suitable PCNN parameters, which are usually obtained by manual experimentation, and segmentation quality needs improvement for noisy images. In this paper, a dynamic-mechanism-based PCNN (DMPCNN) is proposed to simulate the integrate-and-fire mechanism and is applied to segment noisy images effectively; parameter selection is based on the dynamic mechanism. Experimental results for image segmentation show its validity and robustness.


Archive | 2019

Seizure Prediction for iEEG Signal with Bag-of-Wave Model and Extreme Learning Machine

Song Cui; Lijuan Duan; Yuanhua Qiao; Xing Su

Long-term epileptic seizure prediction has the potential to transform epilepsy care and treatment. However, the accuracy of seizure prediction still falls short of what practical application requires. In this paper, a seizure prediction system is proposed based on a Bag-of-Wave model and the Extreme Learning Machine (ELM). To obtain representations of segments in iEEG signals, an interictal codebook and a preictal codebook are constructed with a clustering algorithm. Histogram features are then extracted by projecting the waves within a sliding window onto the two codebooks. Finally, the features are classified into interictal and preictal phases with an ELM. Experiments on the Kaggle Seizure Prediction Challenge dataset show that the proposed approach is effective for seizure prediction.
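The pipeline lends itself to a compact sketch: short wave segments are clustered into a codebook with k-means, each sliding window becomes a normalized word histogram, and a basic single-hidden-layer ELM separates the two phases. The codebook size, window handling, hidden-layer width, and the use of a single codebook (rather than separate interictal and preictal codebooks) are simplifications of my own, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(segments, n_words=64, seed=0):
    """segments: (n, seg_len) array of short waves cut from iEEG channels."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(segments)

def bow_histogram(window_segments, codebook):
    """Normalized word histogram for all segments falling inside one sliding window."""
    words = codebook.predict(window_segments)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

class SimpleELM:
    """Basic single-hidden-layer ELM with a regularized least-squares output layer."""
    def __init__(self, n_hidden=200, C=1.0, seed=0):
        self.n_hidden, self.C, self.rng = n_hidden, C, np.random.default_rng(seed)
    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)
    def fit(self, X, y):
        # y: integer labels 0..n_classes-1 (e.g., 0 = interictal, 1 = preictal).
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        T = np.eye(len(np.unique(y)))[y]          # one-hot targets
        self.beta = np.linalg.solve(H.T @ H + np.eye(self.n_hidden) / self.C,
                                    H.T @ T)
        return self
    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```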


Neurocomputing | 2018

Stereoscopic Saliency Model using Contrast and Depth-Guided-Background Prior

Fangfang Liang; Lijuan Duan; Wei Ma; Yuanhua Qiao; Zhi Cai; Laiyun Qing

Many successful saliency models have been proposed to detect salient regions in 2D images. Because stereopsis, with its distinctive depth information, influences human viewing, stereoscopic saliency detection needs to consider depth information as an additional cue. In this paper, we propose a 3D stereoscopic saliency model based on both contrast and a depth-guided-background prior. First, the depth-guided-background prior is detected from the disparity map, in addition to the conventional prior that assumes boundary super-pixels are background. Then, disparity-based saliency guided by the proposed prior is used to weight the contrasts among super-pixels. In addition, a scheme is presented to combine the contrast of disparity with the contrast of color. Finally, 2D spatial dissimilarity features are employed to refine the saliency map. Experimental results on the PSU stereo saliency benchmark dataset (SSB) show that the proposed method performs better than existing saliency models.
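For orientation, the sketch below shows a generic contrast-plus-boundary-background-prior saliency computation over superpixels using scikit-image's SLIC. It does not include the paper's depth-guided prior, disparity contrast, or refinement stages; the segment count, Lab-color contrast, and border-based background set are stand-in choices.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab

def boundary_prior_saliency(rgb, n_segments=200):
    """rgb: (H, W, 3) color image -> per-pixel saliency map in [0, 1]."""
    labels = slic(rgb, n_segments=n_segments, start_label=0)
    lab = rgb2lab(rgb)
    n = labels.max() + 1
    # Mean Lab color of each superpixel.
    means = np.array([lab[labels == i].mean(axis=0) for i in range(n)])
    # Superpixels touching the image border are treated as likely background.
    border = np.unique(np.concatenate([labels[0], labels[-1],
                                       labels[:, 0], labels[:, -1]]))
    is_bg = np.isin(np.arange(n), border)
    # Saliency of a superpixel = mean color distance to the background set.
    dists = np.linalg.norm(means[:, None, :] - means[None, is_bg, :], axis=2)
    sal = dists.mean(axis=1)
    sal = (sal - sal.min()) / (np.ptp(sal) + 1e-8)
    return sal[labels]   # broadcast superpixel saliency back to pixels
```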

Collaboration


Dive into Yuanhua Qiao's collaboration.

Top Co-Authors

Lijuan Duan, Beijing University of Technology
Jun Miao, Chinese Academy of Sciences
Faming Fang, Beijing University of Technology
Laiyun Qing, Chinese Academy of Sciences
Chunpeng Wu, Beijing University of Technology
Song Cui, Beijing University of Technology
Zhen Yang, Beijing University of Technology
Xiaojie Liu, Beijing University of Technology
Yong Meng, Beijing University of Technology
Yunfeng Lu, Beijing University of Technology