Featured Research

Neural And Evolutionary Computing

A Multiple Classifier Approach for Concatenate-Designed Neural Networks

This article introduces a multiple classifier method to improve the performance of concatenate-designed neural networks, such as ResNet and DenseNet, with the aim of alleviating the pressure on the final classifier. We present the design of the classifiers, which collect the features produced between the network sets, and describe the constituent layers and the activation function used to compute each classifier's score. We use L2 normalization instead of Softmax normalization to obtain the classifier scores, and we determine the conditions that can enhance convergence. As a result, the proposed classifiers significantly improve accuracy in the experimental cases, showing that the method not only outperforms the original models but also converges faster. Moreover, our classifiers are general and can be applied to any classification-oriented concatenate-designed network model.
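
As a rough illustration of the idea, the sketch below (PyTorch) attaches an auxiliary classifier head to intermediate features and scores classes by cosine similarity between L2-normalized features and L2-normalized class weights. The pooling choice, layer layout, and names are assumptions, not the paper's exact design.

```python
# Hypothetical sketch: an auxiliary classifier head attached to the
# intermediate features of a concatenate-designed backbone (e.g., ResNet).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxClassifier(nn.Module):
    """Pools features from one network stage and scores the classes.
    L2-normalizing both features and class weights replaces the usual
    Softmax-logit scaling; the layer layout here is an assumption."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.weight = nn.Parameter(torch.randn(num_classes, in_channels))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        x = self.pool(feats).flatten(1)            # (B, C)
        x = F.normalize(x, p=2, dim=1)             # L2-normalize features
        w = F.normalize(self.weight, p=2, dim=1)   # L2-normalize class weights
        return x @ w.t()                           # cosine-similarity scores

# Usage: attach one AuxClassifier per stage and, e.g., sum their losses.
scores = AuxClassifier(256, 10)(torch.randn(4, 256, 8, 8))  # shape (4, 10)
```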

Read more
Neural And Evolutionary Computing

A Nature-Inspired Feature Selection Approach based on Hypercomplex Information

Feature selection for a given model can be cast as an optimization task: the essential idea is to find the most suitable subset of features according to some criterion. Nature-inspired optimization can mitigate this problem by producing compelling yet straightforward solutions when dealing with complicated fitness functions. Additionally, new mathematical representations, such as quaternions and octonions, are being used to handle higher-dimensional spaces. In this context, we introduce a meta-heuristic optimization framework for hypercomplex-based feature selection, where hypercomplex numbers are mapped to real-valued solutions and then transferred onto a boolean hypercube by a sigmoid function. The proposed hypercomplex feature selection is tested with several meta-heuristic algorithms and hypercomplex representations, achieving results comparable to some state-of-the-art approaches. These results make it a promising tool for feature selection research.
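
A minimal sketch of the hypercomplex-to-hypercube step as the abstract describes it, under stated assumptions: each feature is represented by a quaternion, collapsed to a real value via its norm, squashed by a sigmoid, and thresholded against a uniform draw (a common trick in binary meta-heuristics) to obtain a feature mask.

```python
# Hypothetical sketch: mapping one quaternion-valued search agent to a
# boolean feature mask. The norm-based collapse is an assumption.
import numpy as np

rng = np.random.default_rng(0)
n_features = 8

# One search agent: a quaternion (four hypercomplex components) per feature.
agent = rng.uniform(0.0, 1.0, size=(n_features, 4))

# Collapse each quaternion to a real value via its norm, squash with a
# sigmoid, and threshold against a uniform draw to land on a corner of the
# boolean hypercube.
real_valued = np.linalg.norm(agent, axis=1)
probs = 1.0 / (1.0 + np.exp(-real_valued))
mask = probs > rng.uniform(size=n_features)   # True = feature selected

print(mask.astype(int))
```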

Read more
Neural And Evolutionary Computing

A Neural Architecture Search based Framework for Liquid State Machine Design

The Liquid State Machine (LSM), also known as the recurrent version of Spiking Neural Networks (SNNs), has attracted great research interest thanks to its high computational power, biological plausibility, simple structure, and low training complexity. By exploring the design space of network architectures and parameters, recent works have demonstrated great potential for improving the accuracy of the LSM model at low complexity. However, these works rely on manually defined network architectures or predefined parameters. Given the diversity and uniqueness of brain structure, the design of LSM models should be explored in the largest search space possible. In this paper, we propose a Neural Architecture Search (NAS) based framework that explores both the architecture and the parameter design space to automatically produce dataset-oriented LSM models. To handle the exponentially growing design space, we adopt a three-step search: multi-liquid architecture search, variation of the number of neurons, and search over parameters such as percentage connectivity and excitatory neuron ratio within each liquid. We use a Simulated Annealing (SA) algorithm to implement this three-step heuristic search. Three datasets, the image datasets MNIST and NMNIST and the speech dataset FSDD, are used to test the effectiveness of the proposed framework. Simulation results show that the framework produces dataset-oriented optimal LSM models with high accuracy and low complexity: the best classification accuracies on the three datasets are 93.2%, 92.5%, and 84%, respectively, with only 1000 spiking neurons, and network connections are reduced by 61.4% on average compared with a single LSM. Moreover, we find that the total number of neurons in the optimal LSM models on the three datasets can be further reduced by 20% with only about 0.5% accuracy loss.
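
The following is a minimal sketch of a Simulated Annealing search over an LSM configuration covering the three kinds of decisions the abstract mentions. The fitness function is a stand-in placeholder (in practice it would train and evaluate an LSM on the target dataset), and the step sizes, bounds, and cooling schedule are assumptions.

```python
# Hypothetical sketch: SA over liquid count, neuron count, connectivity,
# and excitatory ratio. The fitness is a toy surrogate, not a real LSM.
import math, random

random.seed(0)

def fitness(cfg):
    # Placeholder: in practice, train/evaluate an LSM with this config.
    return -abs(cfg["neurons"] - 800) / 800 - abs(cfg["connectivity"] - 0.2)

def neighbor(cfg):
    new = dict(cfg)
    step = random.choice(["liquids", "neurons", "connectivity", "exc_ratio"])
    if step == "liquids":
        new["liquids"] = max(1, cfg["liquids"] + random.choice([-1, 1]))
    elif step == "neurons":
        new["neurons"] = max(100, cfg["neurons"] + random.choice([-100, 100]))
    elif step == "connectivity":
        new["connectivity"] = min(1.0, max(0.05, cfg["connectivity"] + random.uniform(-0.05, 0.05)))
    else:
        new["exc_ratio"] = min(0.95, max(0.5, cfg["exc_ratio"] + random.uniform(-0.05, 0.05)))
    return new

cfg = {"liquids": 1, "neurons": 1000, "connectivity": 0.1, "exc_ratio": 0.8}
best, temp = cfg, 1.0
for _ in range(500):
    cand = neighbor(cfg)
    delta = fitness(cand) - fitness(cfg)
    if delta > 0 or random.random() < math.exp(delta / temp):
        cfg = cand                     # accept better or, sometimes, worse
    if fitness(cfg) > fitness(best):
        best = cfg
    temp *= 0.99                       # geometric cooling

print(best)
```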

Read more
Neural And Evolutionary Computing

A Neural Network with Local Learning Rules for Minor Subspace Analysis

The development of neuromorphic hardware and the modeling of biological neural networks require algorithms with local learning rules. Artificial neural networks using local learning rules to perform principal subspace analysis (PSA) and clustering have recently been derived from principled objective functions. However, no biologically plausible networks exist for minor subspace analysis (MSA), a fundamental signal processing task that extracts the lowest-variance subspace of the input signal covariance matrix. Here, we introduce a novel similarity matching objective for extracting the minor subspace, Minor Subspace Similarity Matching (MSSM). Moreover, we derive an adaptive MSSM algorithm that naturally maps onto a novel neural network with local learning rules, and we give numerical results showing that our method converges at a competitive rate.
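
For context, the snippet below implements a classical online minor-component rule with a local, anti-Hebbian update and explicit renormalization. It is a textbook-style baseline illustrating what an MSA network must accomplish, not the paper's MSSM algorithm.

```python
# Hypothetical sketch: a sign-flipped Oja-style rule that converges to the
# lowest-variance direction, using only local quantities (input x, output y).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data whose smallest-variance direction is the last coordinate.
C = np.diag([3.0, 2.0, 0.1])
X = rng.multivariate_normal(np.zeros(3), C, size=5000)

w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x                      # neuron output (a local quantity)
    w -= eta * y * (x - y * w)     # anti-Hebbian step toward the minor direction
    w /= np.linalg.norm(w)         # keep the weight vector on the unit sphere

print(np.round(np.abs(w), 2))      # ~ [0, 0, 1]: the minor eigenvector
```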

Read more
Neural And Evolutionary Computing

A Neuromorphic Paradigm for Online Unsupervised Clustering

A computational paradigm based on neuroscientific concepts is proposed and shown to be capable of online unsupervised clustering. Because it is an online method, it is readily amenable to streaming real-time applications and can adjust dynamically to macro-level input changes. All operations, both training and inference, are localized and efficient. The paradigm is implemented as a cognitive column that incorporates five key elements: 1) temporal coding, 2) an excitatory neuron model for inference, 3) winner-take-all inhibition, 4) a column architecture that combines excitation and inhibition, and 5) localized training via spike-timing-dependent plasticity (STDP). These elements are described and discussed, and a prototype column is given. The prototype column is simulated on a semi-synthetic benchmark and shown to have performance characteristics on par with classic k-means. Simulations reveal the inner operation and capabilities of the column, with emphasis on excitatory neuron response functions and STDP implementations.
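
A heavily simplified sketch of the column's inner loop, abstracting away spike timing: excitatory responses, winner-take-all inhibition, and an STDP-like local update applied only to the winner. In this simplified form the dynamics resemble online k-means, consistent with the abstract's comparison.

```python
# Hypothetical sketch: WTA inhibition plus a winner-only local update.
# Temporal (spike-time) coding from the actual column is abstracted away.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs = 3, 2
W = rng.uniform(0, 1, size=(n_neurons, n_inputs))     # synaptic weights

def present(x, lr=0.05):
    responses = W @ x                    # excitatory responses
    winner = int(np.argmax(responses))   # WTA inhibition silences the rest
    # STDP-like local rule: move only the winner's synapses toward the input.
    W[winner] += lr * (x - W[winner])
    return winner

# Stream samples drawn around two cluster centers.
centers = np.array([[0.9, 0.1], [0.1, 0.9]])
for _ in range(2000):
    c = centers[rng.integers(2)]
    present(c + 0.05 * rng.normal(size=2))

print(np.round(W, 2))   # winning rows converge near the cluster centers
```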

Read more
Neural And Evolutionary Computing

A New Artificial Neuron Proposal with Trainable Simultaneous Local and Global Activation Function

The activation function plays a fundamental role in the learning process of an artificial neural network. However, there is no obvious choice or procedure for determining the best activation function, which depends on the problem. This study proposes a new artificial neuron, named the global-local neuron, with a trainable activation function composed of two components, a global one and a local one. The global component is a mathematical function describing a general feature present across the whole problem domain, while the local component is a function that can represent localized behavior, such as a transient or a perturbation. The new neuron learns the relative importance of each activation component during training: depending on the problem, training yields a purely global, purely local, or mixed global-local activation function. Here, the trigonometric sine function is employed for the global component and the hyperbolic tangent for the local component. The proposed neuron is tested on problems whose targets are purely global functions, purely local functions, or compositions of global and local functions, across two classes of test problems: regression and the solution of differential equations. The experimental tests demonstrate the superior performance of the global-local neuron network compared with simple neural networks using sine or hyperbolic tangent activation functions, and with a hybrid network that combines these two simple networks.
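
One plausible reading of the global-local neuron, sketched in PyTorch: a trainable per-neuron mix of a sine (global) term and a tanh (local) term. The exact parameterization is an assumption.

```python
# Hypothetical sketch: trainable mixing weights alpha (global) and beta
# (local) per neuron; training can drive either to zero, yielding a purely
# global, purely local, or mixed activation.
import torch
import torch.nn as nn

class GlobalLocalActivation(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(num_features))  # global weight
        self.beta = nn.Parameter(torch.ones(num_features))   # local weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * torch.sin(x) + self.beta * torch.tanh(x)

layer = nn.Sequential(nn.Linear(1, 32), GlobalLocalActivation(32), nn.Linear(32, 1))
print(layer(torch.randn(4, 1)).shape)   # torch.Size([4, 1])
```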

Read more
Neural And Evolutionary Computing

A New Neuromorphic Computing Approach for Epileptic Seizure Prediction

Several seizure prediction methods with high specificity and sensitivity based on convolutional neural networks (CNNs) have been reported. However, CNNs are computationally expensive and power hungry, which makes CNN-based methods hard to implement on wearable devices. Motivated by energy-efficient spiking neural networks (SNNs), this work proposes a neuromorphic computing approach for seizure prediction. The approach uses a purpose-designed Gaussian random discrete encoder to generate spike sequences from the EEG samples and makes predictions with a spiking convolutional neural network (Spiking-CNN) that combines the advantages of CNNs and SNNs. The experimental results show that sensitivity, specificity, and AUC remain at 95.1%, 99.2%, and 0.912, respectively, while computational complexity is reduced by 98.58% compared to the CNN, indicating that the proposed Spiking-CNN is hardware friendly and highly accurate.
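
One plausible reading of a Gaussian random encoder, sketched below under assumptions: each EEG sample is compared against thresholds drawn from a Gaussian fitted to the signal, so larger amplitudes fire more often. The paper's exact encoder may differ.

```python
# Hypothetical sketch: probabilistic spike encoding of an EEG window via
# Gaussian random thresholds. This is an assumption, not the paper's design.
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.normal(size=256)                  # stand-in 1-D EEG window
T = 20                                      # spike time steps per sample

mu, sigma = eeg.mean(), eeg.std()
thresholds = rng.normal(mu, sigma, size=(T, eeg.size))
spikes = (eeg[None, :] > thresholds).astype(np.uint8)   # (T, 256) spike train

# Larger sample values exceed more thresholds, so firing rate encodes amplitude.
print(spikes.mean(axis=0)[:8])
```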

Read more
Neural And Evolutionary Computing

A Novel DNN Training Framework via Data Sampling and Multi-Task Optimization

Conventional DNN training paradigms typically rely on one training set and one validation set, obtained by partitioning an annotated dataset, the gross training set, in some fixed way. The training set is used to train the model, while the validation set is used to estimate the generalization performance of the trained model as training proceeds, so as to avoid over-fitting. There are two major issues with this paradigm. First, the validation set can hardly guarantee an unbiased estimate of generalization performance, due to potential mismatch with the test data. Second, training a DNN corresponds to solving a complex optimization problem that is prone to getting trapped in inferior local optima, leading to undesired training results. To address these issues, we propose a novel DNN training framework. It generates multiple pairs of training and validation sets from the gross training set via random splitting, trains a DNN model of a pre-specified structure on each pair while transferring useful knowledge (e.g., promising network parameters) between the model training processes via multi-task optimization, and outputs the trained model with the best overall performance across the validation sets of all pairs. The knowledge-transfer mechanism featured in this framework not only enhances training effectiveness by helping each training process escape from local optima, but also improves generalization via the implicit regularization that the training processes impose on one another. We implement the proposed framework, parallelize the implementation on a GPU cluster, and apply it to train several widely used DNN models. Experimental results demonstrate the superiority of the proposed framework over the conventional training paradigm.
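
Structurally, the outer loop might look like the sketch below: several random train/validation splits, one model per split, and selection by performance averaged over all validation sets. The multi-task knowledge transfer is reduced to a placeholder comment, and all names are hypothetical.

```python
# Hypothetical sketch of the framework's outer loop. Training, transfer,
# and evaluation are stubs standing in for real DNN machinery.
import random

random.seed(0)
gross = list(range(1000))            # indices into the annotated dataset

def make_split(data, val_frac=0.2):
    idx = data[:]
    random.shuffle(idx)
    cut = int(len(idx) * val_frac)
    return idx[cut:], idx[:cut]      # (train indices, val indices)

splits = [make_split(gross) for _ in range(5)]

models = []
for train_idx, val_idx in splits:
    model = {"params": None}         # stand-in for a DNN of fixed structure
    # ... train `model` on train_idx; periodically exchange promising
    # parameters with the other runs (multi-task optimization) ...
    models.append(model)

def mean_val_accuracy(model):
    # Placeholder: average the model's accuracy over *all* validation sets.
    return random.random()

best = max(models, key=mean_val_accuracy)
```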

Read more
Neural And Evolutionary Computing

A Novel Graphic Bending Transformation on Benchmark

Classical benchmark problems use multiple transformation techniques to increase optimization difficulty, e.g., shift to counter centering effects and rotation to counter dimension sensitivity. While these operations test transformation invariance, they do not really change the landscape's "shape" but rather its "viewpoint". For instance, after rotation, ill-conditioned problems change orientation but keep their proportional components, which, to some extent, does not create much of an obstacle for optimization. In this paper, inspired by image processing, we investigate a novel graphic conformal mapping transformation on benchmark problems to deform the function shape. The bending operation does not alter a function's basic properties, e.g., a unimodal function can largely maintain its unimodality after bending, but it can modify the shape of the region of interest in the search space. Experiments indicate that the same optimizer spends more search budget and encounters more failures on the conformally bent functions than on the rotated versions. Several parameters of the proposed transformation are also analyzed to reveal the performance sensitivity of the evolutionary algorithms.
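
As a toy illustration of conformal bending (not the paper's transformation), one can treat a 2-D point as a complex number, push it through an analytic map, and evaluate the original benchmark at the bent location:

```python
# Hypothetical toy example: composing a benchmark with an analytic map.
# The specific map below is illustrative only.
import numpy as np

def sphere(p):
    return float(np.sum(np.square(p)))

def bend(p, a=0.3):
    z = complex(p[0], p[1])
    w = z + a * z**2        # conformal wherever the derivative 1 + 2az != 0
    return np.array([w.real, w.imag])

p = np.array([1.0, 0.5])
print(sphere(p), sphere(bend(p)))   # same point, bent landscape value
```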

Read more
Neural And Evolutionary Computing

A Novel Meta-Heuristic Optimization Algorithm Inspired by the Spread of Viruses

According to the no-free-lunch theorem, no single meta-heuristic algorithm can optimally solve all optimization problems, which motivates researchers to continuously develop new optimization algorithms. In this paper, a novel nature-inspired meta-heuristic optimization algorithm called virus spread optimization (VSO) is proposed. VSO loosely mimics the spread of viruses among hosts and can be effectively applied to many challenging continuous optimization problems. We devise a new representation scheme and viral operations that differ radically from previously proposed virus-based optimization algorithms. First, the viral RNA of each host in VSO encodes a potential solution, and different viral operations diversify the search strategies to greatly enhance solution quality. In addition, an imported infection mechanism, which inherits the optima found by another colony, is introduced to help avoid premature convergence when solving complex problems. VSO has an excellent capability to conduct adaptive neighborhood searches around discovered optima to achieve better solutions and, with its flexible infection mechanism, can quickly escape from local optima. To demonstrate both its effectiveness and efficiency, VSO is critically evaluated on a series of well-known benchmark functions, and its applicability is validated on two real-world examples: financial portfolio optimization and the optimization of support vector machine hyper-parameters for classification problems. The results show that VSO attains superior performance in terms of solution fitness, convergence rate, scalability, reliability, and flexibility compared with conventional as well as state-of-the-art meta-heuristic optimization algorithms.
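
A loose, hypothetical sketch of the flavor of such a loop, with infection-style perturbation around strong hosts and an occasional "imported infection" from a second colony; none of these operators are taken from the VSO paper.

```python
# Hypothetical sketch: a virus-style metaheuristic on a toy 1-D problem.
import random

random.seed(0)

def objective(x):                      # minimize (x - 3)^2
    return (x - 3.0) ** 2

def evolve(colony, radius):
    best = min(colony, key=objective)
    # "Infection": each host is re-infected with a mutated copy of the best.
    return [best + random.gauss(0, radius) for _ in colony]

colony_a = [random.uniform(-10, 10) for _ in range(20)]
colony_b = [random.uniform(-10, 10) for _ in range(20)]
for gen in range(100):
    colony_a = evolve(colony_a, radius=2.0 * 0.97 ** gen)  # shrinking search
    colony_b = evolve(colony_b, radius=2.0 * 0.97 ** gen)
    if gen % 20 == 19:                 # "imported infection" between colonies
        colony_a[0] = min(colony_b, key=objective)

print(round(min(colony_a, key=objective), 3))   # ~ 3.0
```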

Read more
