Xiaoqin Zeng
Hohai University
Publications
Featured research published by Xiaoqin Zeng.
Neural Networks | 2013
Yan Xu; Xiaoqin Zeng; Lixin Han; Jing Yang
We use a supervised multi-spike learning algorithm for spiking neural networks (SNNs) with temporal encoding, in which the SNN output spike trains are encoded by firing times, to simulate the learning mechanism of biological neurons. We first analyze why existing gradient-descent-based learning methods for SNNs have difficulty achieving multi-spike learning. We then propose a new multi-spike learning method for SNNs based on gradient descent that solves the problems of error function construction and interference among multiple output spikes during learning. The method can be applied both to single spiking neurons, to learn desired output spike trains, and to multilayer SNNs, to solve classification problems. By overcoming learning interference among multiple spikes, the method achieves high learning accuracy even when a relatively large number of output spikes must be learned. We also develop an output encoding strategy for multiple spikes in classification problems, which effectively improves the classification accuracy of multi-spike learning over that of single-spike learning.
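The following sketch illustrates the flavor of gradient-based multi-spike learning on a single neuron; it is not the paper's algorithm. It uses a discrete-time leaky integrate-and-fire neuron, pairs actual output spikes with desired ones in order, and converts each timing error into a weight update through the classic SpikeProp-style linearization ∂t_spike/∂w_i ≈ -x_i/(du/dt). All constants, the input encoding, and the missing-spike handling are illustrative assumptions.

```python
# Hedged sketch: SpikeProp-style multi-spike learning for one LIF neuron.
# Constants, input encoding, clipping, and spike pairing are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, tau, theta, eta = 100, 10.0, 1.0, 0.01   # horizon, time constant, threshold, rate
n_in = 30
input_spikes = rng.random((n_in, T)) < 0.05  # fixed random input spike trains
w = rng.normal(0.12, 0.02, n_in)
desired = [20, 50, 80]                       # desired output spike times (steps)

def run(w):
    """Simulate; return, for each output spike, (time, PSP trace, du/dt)."""
    u = 0.0
    x = np.zeros(n_in)                       # per-synapse PSP trace, so u = w @ x
    spikes = []
    for t in range(T):
        u_prev = u
        x = x * np.exp(-1.0 / tau) + input_spikes[:, t]
        u = w @ x
        if u >= theta:
            spikes.append((t, x.copy(), max(u - u_prev, 1e-3)))
            u, x = 0.0, np.zeros(n_in)       # reset after firing
    return spikes

for epoch in range(300):
    spikes = run(w)
    if len(spikes) < len(desired):
        w += 0.01                            # too few output spikes: excite the neuron
    for (t_a, x, du_dt), t_d in zip(spikes, desired):  # pair spikes in order
        err = t_a - t_d                      # positive -> this spike fired too late
        delta = eta * err * x / du_dt        # dt_spike/dw_i ~ -x_i / (du/dt)
        w += np.clip(delta, -0.05, 0.05)     # clipped for stability
print("learned spike times:", [t for t, _, _ in run(w)])
```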
Neural Computation | 2013
Yan Xu; Xiaoqin Zeng; Shuiming Zhong
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing, through the adjustment of synaptic weights, the times of desired output spikes from all other times during the neuron's run, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning task into a classification problem and then solves it using the perceptron learning rule. The experimental results show that the proposed method has higher learning accuracy and efficiency than existing learning methods, making it better suited to complex, real-time problems.
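A minimal sketch of the letter's central idea, under assumed discrete-time dynamics and exponential PSP traces: each time step becomes a binary classification (the potential should reach threshold exactly at the desired firing times), and misclassified steps trigger a perceptron update. The reset after firing is omitted for brevity.

```python
# Hedged sketch: supervised spike-time learning as per-time-step classification
# solved with the perceptron rule. Constants and encoding are assumptions.
import numpy as np

rng = np.random.default_rng(1)
T, tau, theta, eta = 100, 10.0, 1.0, 0.01
n_in = 30
input_spikes = rng.random((n_in, T)) < 0.05
w = rng.normal(0.1, 0.02, n_in)
desired = {20, 50, 80}                      # desired firing time steps

for epoch in range(100):
    x = np.zeros(n_in)                      # per-synapse PSP trace
    errors = 0
    for t in range(T):
        x = x * np.exp(-1.0 / tau) + input_spikes[:, t]
        fired = w @ x >= theta              # "fire" class if above threshold
        should_fire = t in desired
        if fired != should_fire:            # misclassified time step
            w += eta * (1 if should_fire else -1) * x   # perceptron update
            errors += 1
    if errors == 0:                         # every time step classified correctly
        break
```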
Information Processing and Management | 2009
Shengli Wu; Yaxin Bi; Xiaoqin Zeng; Lixin Han
In data fusion, the linear combination method is very flexible since different weights can be assigned to different systems. However, which weighting schema to use remains an open question. Some previous investigations and experiments used a simple weighting schema: a system's weight is its average performance over a group of training queries. It is not clear, however, whether this schema is good. Other investigations used numerical optimisation methods to search for appropriate weights for the component systems, but one major problem with those methods is their low efficiency; they may not be feasible in some situations, for example in dynamic environments where system weights must be updated from time to time to maintain reasonably good performance. In this paper, we investigate the weighting issue through extensive experiments. The key point is to find the relation between the performances of component systems and the corresponding weights that lead to good fusion performance. We demonstrate that a series of power functions of average performance, which can be implemented as efficiently as the simple weighting schema, is more effective than the simple schema for the linear data fusion method. Some other features of the power-function weighting schema and the linear combination method are also investigated. The observations obtained from this study can be used directly in fusion applications of component retrieval results; they are also very useful for helping optimisation methods choose better starting points and thereby obtain more effective weights more quickly.
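A minimal sketch of power-function weighting for the linear combination method. The score normalization, the performance measure (e.g., average precision over training queries), and the exponent value are assumptions; the paper studies a series of powers.

```python
# Hedged sketch: linear data fusion with power-function weights w_i = p_i^power.
from collections import defaultdict

def fuse(system_scores, performances, power=2.0):
    """system_scores: one dict doc_id -> normalized score per system.
    performances: average training performance per system (same order)."""
    weights = [p ** power for p in performances]
    fused = defaultdict(float)
    for w, scores in zip(weights, system_scores):
        for doc, s in scores.items():
            fused[doc] += w * s               # linear combination
    return sorted(fused, key=fused.get, reverse=True)

# Example: the stronger system's opinion dominates more as `power` grows.
ranking = fuse(
    [{"d1": 0.9, "d2": 0.3}, {"d1": 0.2, "d2": 0.8}],
    performances=[0.25, 0.35],
)
print(ranking)    # ['d2', 'd1']
```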
IEEE Transactions on Neural Networks | 2012
Shuiming Zhong; Xiaoqin Zeng; Shengli Wu; Lixin Han
This paper proposes a set of adaptive learning rules for binary feedforward neural networks (BFNNs) by means of a sensitivity measure established to investigate the effect of a BFNN's weight variation on its output. The rules are based on three basic adaptive learning principles: the benefit principle, the minimal disturbance principle, and the burden-sharing principle. To follow the benefit principle and the minimal disturbance principle, a neuron selection rule and a weight adaptation rule are developed; in addition, a learning control rule is developed to follow the burden-sharing principle. The advantage of the rules is that they can effectively guide BFNN learning toward constructive adaptations and away from destructive ones. With these rules, a sensitivity-based adaptive learning (SBALR) algorithm for BFNNs is presented. Experimental results on a number of benchmark data sets demonstrate that the SBALR algorithm has better learning performance than the Madaline rule II and backpropagation algorithms.
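The following sketch conveys only the sensitivity idea, not the paper's actual rules: the probability that a candidate weight change inverts a binary neuron's output is estimated by Monte Carlo sampling over bipolar inputs, and the minimal disturbance principle would favor the candidate with the smaller inversion probability. The sampling scheme and candidate updates are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo estimate of a binary neuron's sensitivity to a
# candidate weight change (probability its sign output inverts).
import numpy as np

rng = np.random.default_rng(2)

def sensitivity(w, dw, n_samples=10_000):
    """Estimate P(sign(w.x) != sign((w+dw).x)) over random bipolar inputs x."""
    X = rng.choice([-1.0, 1.0], size=(n_samples, len(w)))
    return np.mean(np.sign(X @ w) != np.sign(X @ (w + dw)))

w = np.array([0.5, -0.2, 0.8, 0.15])
candidates = [np.array([0.3, 0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0, 0.3])]
scores = [sensitivity(w, dw) for dw in candidates]
print("output-inversion probabilities:", scores)
# The minimal disturbance principle would pick the lower-probability candidate.
```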
Neural Computing and Applications | 2009
Xiaoqin Zeng; Jing Shao; Yingfeng Wang; Shuiming Zhong
Architecture design is a very important issue in neural network research. One popular way to find the proper size of a network is to prune an oversized trained network to a smaller one while keeping its established performance. This paper presents a sensitivity-based approach to pruning hidden Adalines from a Madaline that causes as little performance loss as possible, making the loss easy to compensate for. The approach is novel in setting up a relevance measure, by means of an Adaline's sensitivity measure, to locate the least relevant Adaline in a Madaline. The sensitivity measure is the probability of an Adaline's output inverting due to input variation, taken over all input patterns, and the relevance measure is defined as the product of the Adaline's sensitivity value and the sum of the absolute values of the Adaline's outgoing weights. Based on the relevance measure, a pruning algorithm can be simply programmed: it iteratively prunes the Adaline with the least relevance value from the hidden layer of a given Madaline and then conducts some compensation until no more Adalines can be removed under a given performance requirement. The effectiveness of the pruning approach is verified by experimental results.
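A minimal sketch of the relevance measure described above. The paper defines sensitivity over all input patterns; here it is estimated by sampling, and the network shape, perturbation scheme, and constants are illustrative assumptions.

```python
# Hedged sketch: relevance = (estimated output-inversion probability under
# input variation) * (sum of absolute outgoing weights); prune the minimum.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden, n_out = 8, 5, 2
W_in = rng.normal(size=(n_hidden, n_in))      # input -> hidden Adaline weights
W_out = rng.normal(size=(n_hidden, n_out))    # hidden -> output weights

def sensitivity(j, n_samples=5000, flip_prob=0.1):
    """P(hidden Adaline j's output inverts when input bits flip with flip_prob)."""
    X = rng.choice([-1.0, 1.0], size=(n_samples, n_in))
    mask = rng.random((n_samples, n_in)) < flip_prob
    Xp = np.where(mask, -X, X)                # perturbed input patterns
    return np.mean(np.sign(X @ W_in[j]) != np.sign(Xp @ W_in[j]))

relevance = np.array([sensitivity(j) * np.abs(W_out[j]).sum()
                      for j in range(n_hidden)])
prune = int(np.argmin(relevance))             # least relevant hidden Adaline
print("relevance:", relevance.round(3), "-> prune Adaline", prune)
```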
Neurocomputing | 2015
Caigen Zhou; Xiaoqin Zeng; Haibo Jiang; Lixin Han
This paper presents a novel method for designing associative memories based on discrete recurrent neural networks that accurately memorize the networks' external inputs. In the method, a generalized model is proposed for bipolar auto-associative memory, and an exponential stability criterion is established for the networks. The model is general in that it considers time delay and introduces a tunable-slope activation function, and it can robustly recall the memorized external input patterns in an auto-associative way. Experimental verification demonstrates that the proposed method is more effective and more general than existing ones.
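A rough illustration of input-driven auto-associative recall with a discrete recurrent network and a tunable-slope activation; the storage rule (Hebbian outer products), the input gain, and all constants are assumptions rather than the paper's model, and time delay is not modeled.

```python
# Hedged sketch: input-driven recall in a discrete recurrent network.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]], dtype=float)
n = patterns.shape[1]
W = patterns.T @ patterns / n                # assumed Hebbian storage
np.fill_diagonal(W, 0.0)

def recall(external_input, slope=2.0, gain=0.3, steps=30):
    # `gain` scales the external input so the stored correlations
    # can override a corrupted component of it
    x = np.zeros(n)
    for _ in range(steps):
        x = np.tanh(slope * (W @ x + gain * external_input))  # tunable slope
    return np.sign(x)

noisy = patterns[0].copy()
noisy[1] *= -1                               # corrupt one component
print(recall(noisy))                         # recovers patterns[0]
```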
Neurocomputing | 2016
Caigen Zhou; Xiaoqin Zeng; Jianjiang Yu; Haibo Jiang
A unified associative memory model, together with a novel method for designing associative memories, is presented in this paper. Based on continuous recurrent neural networks, bipolar patterns presented as external inputs drive the network outputs to the memorized patterns. In the method, two conditions on the external inputs are derived to ensure that the network states converge to a stable interval, and an exponential stability criterion is proposed under which the network acts as a bipolar associative memory with high recall speed. By introducing a tunable-slope activation function and considering time delay, the proposed model is general and can recall the memorized patterns in both auto-associative and hetero-associative ways, while more robust and more flexible memory can be obtained through the proposed method. Experimental verification demonstrates the effectiveness and generality of the proposed method.
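For the continuous-time case, a correspondingly rough sketch: Euler integration of dx/dt = -x + W f(x) + I with a tunable-slope tanh activation f, again with assumed Hebbian storage and illustrative constants, and without the paper's time delay.

```python
# Hedged sketch: continuous recurrent recall via Euler integration.
import numpy as np

patterns = np.array([[1, -1, 1, -1],
                     [-1, 1, 1, -1]], dtype=float)
n = patterns.shape[1]
W = patterns.T @ patterns / n                # assumed Hebbian storage
np.fill_diagonal(W, 0.0)

def recall(I, slope=3.0, gain=0.5, dt=0.05, steps=400):
    x = np.zeros(n)
    for _ in range(steps):
        x += dt * (-x + W @ np.tanh(slope * x) + gain * I)  # Euler step
    return np.sign(x)

print(recall(patterns[1]))   # the external input recalls the matching pattern
```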
Journal of the Association for Information Science and Technology | 2014
Shengli Wu; Jieyu Li; Xiaoqin Zeng; Yaxin Bi
Data fusion is currently used extensively in information retrieval for various tasks. It has proved to be a useful technology because it frequently improves retrieval performance. However, almost all prior research in data fusion has assumed static search environments; dynamic search environments have generally not been considered. In this article, we investigate adaptive data fusion methods that can change their behavior when the search environment changes. Three adaptive data fusion methods are proposed and investigated. To test these methods properly, we generate a benchmark from a historic Text REtrieval Conference (TREC) data set. Experiments with the benchmark show that two of the proposed methods perform well and may potentially be used in practice.
International Conference on Machine Learning and Cybernetics | 2012
Ding-Ding Ma; Xiaoqin Zeng
In this paper, an improved codebook generation algorithm called SLVQ (Speaker-Level Vector Quantization) is proposed, which can improve the accuracy of speaker-independent isolated word recognition. The Linde-Buzo-Gray (LBG) algorithm is the most commonly used codebook design method. The idea behind LBG is to find an optimal codebook that minimizes the distortion between the training words and the codebook. However, this does not guarantee that the test words will also have minimum distortion, as the training words do. To address the problem of producing poor codebooks for test words in speaker-independent speech recognition, the proposed method exploits the diversity of different speakers by randomly selecting some speakers and their pronounced words during the codebook design procedure to optimize the codebooks. An evaluation experiment was conducted to compare the speech recognition performance of the codebooks produced by LBG, LVQ (learning vector quantization), and SLVQ. The results clearly show that the SLVQ method performs better than the other two.
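A minimal sketch of the LBG baseline the paper builds on: codebook design by codeword splitting followed by k-means-style refinement that minimizes distortion on the training vectors. The SLVQ speaker-sampling step is not shown; the feature dimension and parameters are illustrative.

```python
# Hedged sketch: LBG codebook design (split + k-means refinement).
import numpy as np

rng = np.random.default_rng(4)

def lbg(train, codebook_size, eps=0.01, n_iter=20):
    codebook = train.mean(axis=0, keepdims=True)       # start with 1 codeword
    while len(codebook) < codebook_size:
        codebook = np.vstack([codebook * (1 + eps),    # split every codeword
                              codebook * (1 - eps)])
        for _ in range(n_iter):                        # k-means refinement
            d = np.linalg.norm(train[:, None] - codebook[None], axis=2)
            nearest = d.argmin(axis=1)                 # nearest codeword per vector
            for k in range(len(codebook)):
                members = train[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

features = rng.normal(size=(500, 12))   # stand-in for MFCC-like feature vectors
cb = lbg(features, codebook_size=8)
print(cb.shape)                         # (8, 12)
```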
Database and Expert Systems Applications | 2011
Shengli Wu; Yaxin Bi; Xiaoqin Zeng
In information retrieval, data fusion has been investigated by many researchers. Previous investigations and experiments demonstrate that the linear combination method is an effective data fusion method for combining multiple information retrieval results. One advantage is its flexibility, since different weights can be assigned to different component systems so as to obtain better fusion results. However, how to obtain suitable weights for all the component retrieval systems remains an open problem. In this paper, we use the multiple linear regression technique to obtain optimum weights for all the component systems involved. Optimality is in the least-squares sense: the weights minimize the difference between the scores estimated for all documents by linear combination and the judged scores of those documents. Our experiments with four groups of runs submitted to TREC show that the linear combination method with such weights steadily outperforms the best component system and other major data fusion methods, such as CombSum, CombMNZ, and the linear combination method with performance-level or performance-square weighting schemas, by large margins.
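A minimal sketch of the regression idea, on synthetic data: stack each system's document scores as columns of a matrix S, take the judged relevance scores as the target y, and solve for the weights minimizing ||Sw - y||^2 by least squares.

```python
# Hedged sketch: least-squares fusion weights from training judgments.
# The data here is synthetic; real use would take scores from TREC runs.
import numpy as np

rng = np.random.default_rng(5)
n_docs, n_systems = 200, 4
y = rng.integers(0, 2, n_docs).astype(float)                  # judged scores (0/1)
S = 0.6 * y[:, None] + 0.4 * rng.random((n_docs, n_systems))  # per-system scores

w, *_ = np.linalg.lstsq(S, y, rcond=None)   # weights minimizing ||S w - y||^2
fused = S @ w                               # linear-combination scores
print("weights:", w.round(3))
```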