Featured Research

Neural And Evolutionary Computing

A multi-agent model for growing spiking neural networks

Artificial Intelligence has long looked to biological systems as a source of inspiration. Although many aspects of the brain remain to be discovered, neuroscience has found evidence that the connections between neurons continuously grow and reshape as part of the learning process. This differs from the design of Artificial Neural Networks, which learn by adjusting the weights of their synapses while their topology stays unaltered over time. This project explores rules for growing the connections between neurons in Spiking Neural Networks as a learning mechanism. These rules have been implemented in a multi-agent system for creating simple logic functions, establishing a base for building more complex systems and architectures. Results in a simulation environment show that, for a given set of parameters, it is possible to reach topologies that reproduce the tested functions. The project also opens the door to techniques such as genetic algorithms for obtaining the best-suited values of the model parameters, and hence to creating neural networks that can adapt to different functions.
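
The abstract does not give the specific growth rules, so the following is only a minimal sketch, under the assumption of a simple activity-dependent rule (create or strengthen a connection when two neurons fire in the same time step). The neuron model, parameters, and demo pattern are illustrative, not taken from the paper.

import random

class Neuron:
    def __init__(self, name, threshold=1.0):
        self.name = name
        self.threshold = threshold
        self.potential = 0.0
        self.fired = False

    def integrate(self, inp):
        self.potential += inp
        self.fired = self.potential >= self.threshold
        if self.fired:
            self.potential = 0.0          # reset after a spike
        return self.fired

def grow_step(neurons, synapses, growth_rate=0.2):
    """If two neurons fired in the same step, add or strengthen a synapse."""
    for pre in neurons:
        for post in neurons:
            if pre is post:
                continue
            if pre.fired and post.fired:
                key = (pre.name, post.name)
                synapses[key] = synapses.get(key, 0.0) + growth_rate
    return synapses

# Tiny demo: two input neurons driven by a random binary pattern, one output neuron.
a, b, out = Neuron("A"), Neuron("B"), Neuron("OUT", threshold=1.5)
synapses = {}
for step in range(20):
    xa, xb = random.randint(0, 1), random.randint(0, 1)
    a.integrate(xa); b.integrate(xb)
    out.integrate(synapses.get(("A", "OUT"), 0.0) * a.fired
                  + synapses.get(("B", "OUT"), 0.0) * b.fired
                  + 0.6 * (xa and xb))     # weak external drive, only for the demo
    grow_step([a, b, out], synapses)

print(synapses)   # connections that co-activity has grown so far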

Neural And Evolutionary Computing

A summary of the prevalence of Genetic Algorithms in Bioinformatics from 2015 onwards

In recent years, machine learning has seen an increasing presence in a large variety of fields, especially in health care and bioinformatics. Within bioinformatics, one class of machine learning methods that has found particularly wide application is Genetic Algorithms (GA). The objective of this paper is to conduct a survey of articles published from 2015 onwards that deal with GAs and how they are used in this field. To achieve this objective, a scoping review was conducted that used Google Scholar alongside Publish or Perish and the Scimago Journal & Country Rank to search for reputable sources. Upon analyzing 31 articles from the field of bioinformatics, it became apparent that genetic algorithms rarely form a full application on their own; instead, they rely on other vital algorithms such as support vector machines. Indeed, support vector machines were the algorithms most often used alongside genetic algorithms; however, while this combination contributes to the heavy focus on accuracy in GA programs, it often sidelines computation time in the process. In fact, most applications employing GAs for classification and feature selection are at or near 100% success rates, so the focus of future GA development should be directed elsewhere. Population-based searches such as GAs are often combined with other machine learning algorithms, and in this scoping review the combination of genetic algorithms with support vector machines was found to perform best. The performance metric evaluated most often was accuracy; measuring accuracy, however, avoids measuring the main weakness of GAs, which is computational time. The future of genetic algorithms could lie in open-ended evolutionary algorithms, which attempt to increase complexity and find diverse solutions rather than optimize a fitness function and converge to a single best solution from the initial population.
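
As a rough illustration of the GA-plus-SVM pattern the review found most common, here is a minimal sketch of GA-based feature selection with a cross-validated SVM as the fitness function. The dataset, population size, and rates are placeholders, not values from any surveyed paper.

import random
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    # Cross-validated SVM accuracy on the selected feature subset.
    if not any(mask):
        return 0.0
    cols = [i for i, bit in enumerate(mask) if bit]
    return cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=3).mean()

def evolve(pop_size=20, generations=10, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, n_features)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - b if random.random() < mut_rate else b for b in child]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

mask, acc = evolve()
print(f"selected {sum(mask)} of {n_features} features, CV accuracy {acc:.3f}")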

Neural And Evolutionary Computing

A synthetic biology approach for the design of genetic algorithms with bacterial agents

Bacteria have been a source of inspiration for the design of evolutionary algorithms. At the beginning of the 21st century, synthetic biology was born, a discipline whose goal is the design of biological systems that do not exist in nature, for example programmable synthetic bacteria. In this paper, we introduce, as a novelty, the design of evolutionary algorithms in which all the steps are conducted by synthetic bacteria. To this end, we designed a genetic algorithm, which we have named BAGA, and illustrate its utility by solving simple instances of optimization problems such as function optimization, the 0/1 knapsack problem, and the Hamiltonian path problem. The results obtained open the possibility of conceiving evolutionary algorithms inspired by principles, mechanisms, and genetic circuits from synthetic biology. In summary, we conclude that synthetic biology is a source of inspiration both for the design of whole evolutionary algorithms and for some of their steps, as shown by the results of our simulation experiments.
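
For reference, the steps that BAGA carries out with bacterial agents (selection, crossover, mutation) correspond to those of a conventional in-silico genetic algorithm for the 0/1 knapsack problem, sketched below. The instance data and parameters are made up for illustration and do not come from the paper.

import random

values   = [60, 100, 120, 80, 30]
weights  = [10, 20, 30, 15, 5]
capacity = 50

def fitness(bits):
    v = sum(v for v, b in zip(values, bits) if b)
    w = sum(w for w, b in zip(weights, bits) if b)
    return v if w <= capacity else 0          # infeasible solutions score zero

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def run_ga(pop_size=30, generations=50, mut_rate=0.1):
    pop = [[random.randint(0, 1) for _ in values] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(pop), tournament(pop)
            cut = random.randrange(1, len(values))          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - b if random.random() < mut_rate else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = run_ga()
print(best, fitness(best))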

Neural And Evolutionary Computing

A thermodynamically consistent chemical spiking neuron capable of autonomous Hebbian learning

We propose a fully autonomous, thermodynamically consistent set of chemical reactions that implements a spiking neuron. This chemical neuron (CN) is able to learn input patterns in a Hebbian fashion, and the system is scalable to arbitrarily many input channels. We demonstrate its performance in learning frequency biases in the input as well as correlations between different input channels. Efficient computation of time correlations requires a highly non-linear activation function, and we discuss the resource requirements of such a function. In addition to the thermodynamically consistent model of the CN, we also propose a biologically plausible version that could be engineered in a synthetic biology context.
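
The learning behaviour described (input weights growing for frequently active channels) can be illustrated with an ordinary numerical Hebbian update on a thresholded spiking unit, as in the sketch below. This is only an abstract stand-in with invented parameters, not the chemical reaction network proposed in the paper.

import random

n_channels = 4
rates = [0.8, 0.4, 0.2, 0.1]          # channel 0 spikes most often
weights = [0.5] * n_channels
threshold, lr, decay = 1.0, 0.05, 0.01

for t in range(2000):
    spikes = [1 if random.random() < r else 0 for r in rates]
    drive = sum(w * s for w, s in zip(weights, spikes))
    out = 1 if drive >= threshold else 0
    for i in range(n_channels):
        # Hebbian: strengthen inputs active when the unit fired,
        # with a small decay so weights stay bounded.
        weights[i] += lr * out * spikes[i] - decay * weights[i]

print([round(w, 2) for w in weights])  # largest weight ends up on the most active channel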

Neural And Evolutionary Computing

A threshold search based memetic algorithm for the disjunctively constrained knapsack problem

The disjunctively constrained knapsack problem (DCKP) consists in packing a subset of pairwise compatible items into a capacity-constrained knapsack such that the total profit of the selected items is maximized. The DCKP has numerous applications but is computationally challenging (NP-hard). In this work, we present a threshold search based memetic algorithm for the DCKP that combines the memetic framework with threshold search to find high-quality solutions. Extensive computational assessments on two sets of benchmark instances from the literature (6340 instances in total) demonstrate that the proposed algorithm is highly competitive with state-of-the-art methods. In particular, we report 24 and 354 improved best-known results (new lower bounds) for Set I (100 instances) and Set II (6240 instances), respectively. We analyze the key algorithmic components and shed light on their roles in the performance of the algorithm. The code of our algorithm will be made publicly available.
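
A generic sketch of the two ingredients named in the abstract follows: threshold search (accept a move if it does not worsen the solution by more than a threshold T) inside a memetic loop (crossover plus local search). The toy instance, the crude repair step, and all parameters are placeholders, not the authors' implementation.

import random

profits   = [10, 8, 12, 6, 9, 7]
weights   = [4, 3, 5, 2, 4, 3]
capacity  = 12
conflicts = {(0, 2), (1, 4)}          # item pairs that cannot be packed together

def feasible(sol):
    if sum(w for w, s in zip(weights, sol) if s) > capacity:
        return False
    return all(not (sol[i] and sol[j]) for i, j in conflicts)

def profit(sol):
    return sum(p for p, s in zip(profits, sol) if s)

def threshold_search(sol, T=3, steps=200):
    best = sol[:]
    for _ in range(steps):
        cand = sol[:]
        cand[random.randrange(len(sol))] ^= 1          # flip one item in or out
        if feasible(cand) and profit(cand) >= profit(sol) - T:
            sol = cand                                  # threshold acceptance
            if profit(sol) > profit(best):
                best = sol[:]
    return best

def memetic(pop_size=8, generations=20):
    pop = [threshold_search([0] * len(profits)) for _ in range(pop_size)]
    for _ in range(generations):
        p1, p2 = random.sample(pop, 2)
        cut = random.randrange(1, len(profits))
        child = p1[:cut] + p2[cut:]                     # one-point crossover
        if not feasible(child):
            child = [0] * len(profits)                  # crude repair: restart empty
        child = threshold_search(child)
        pop.sort(key=profit)
        pop[0] = child                                  # replace the worst member
    return max(pop, key=profit)

best = memetic()
print(best, profit(best))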

Neural And Evolutionary Computing

AMPSO: Artificial Multi-Swarm Particle Swarm Optimization

In this paper we propose a novel artificial multi-swarm PSO (AMPSO) consisting of an exploration swarm, an artificial exploitation swarm, and an artificial convergence swarm. The exploration swarm is a set of equal-sized sub-swarms randomly distributed over the search space; the exploitation swarm is artificially generated from a perturbation of the best particle of the exploration swarm for a fixed number of iterations; and the convergence swarm is artificially generated from a Gaussian perturbation of the best particle in the exploitation swarm once it stagnates. The exploration and exploitation operations are carried out alternately until the evolution rate of the exploitation phase falls below a threshold or the maximum number of iterations is reached. An adaptive inertia weight strategy is applied to the different swarms to maintain their exploration and exploitation performance. To guarantee the accuracy of the results, a novel diversity scheme based on the positions and fitness values of the particles is proposed to control the exploration, exploitation, and convergence processes of the swarms. To mitigate the inefficiency introduced by the diversity scheme, two swarm update techniques are proposed to discard poor particles so that good results can be achieved within a fixed number of iterations. The effectiveness of AMPSO is validated on all the functions in the CEC2015 test suite by comparison with a comprehensive set of 16 algorithms, including recent well-performing PSO variants and several non-PSO optimization algorithms.
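
A much-simplified sketch of the multi-swarm idea is given below: a standard PSO exploration swarm plus an "artificial" exploitation swarm generated by perturbing the current best particle. The diversity control, adaptive inertia weights, and convergence swarm of AMPSO are omitted, and the objective and coefficients are illustrative only.

import numpy as np

def sphere(x):                       # toy objective to minimise
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_explore, n_exploit, iters = 5, 20, 10, 100
w, c1, c2 = 0.7, 1.5, 1.5

pos = rng.uniform(-5, 5, (n_explore, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for t in range(iters):
    # Exploration swarm: standard PSO velocity and position update.
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([sphere(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

    # Artificial exploitation swarm: small perturbations of the best particle.
    exploit = gbest + rng.normal(0.0, 0.1, (n_exploit, dim))
    ef = np.array([sphere(p) for p in exploit])
    if ef.min() < sphere(gbest):
        gbest = exploit[ef.argmin()].copy()

print("best value:", sphere(gbest))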

Neural And Evolutionary Computing

ASBSO: An Improved Brain Storm Optimization With Flexible Search Length and Memory-Based Selection

Brain storm optimization (BSO) is a recently proposed population-based optimization algorithm that uses a logarithmic sigmoid transfer function to adjust its search range during the convergence process. However, this adjustment varies only with the current iteration number and lacks flexibility and variety, which results in poor search efficiency and robustness. To alleviate this problem, we propose incorporating an adaptive step-length structure together with a success-memory selection strategy into BSO. The proposed method, adaptive step-length BSO based on memory selection (ASBSO), applies multiple step lengths to modify the generation of new solutions, thus providing a search that can flexibly adapt to different problems and convergence stages. A novel memory mechanism, which evaluates and stores the degree of improvement of solutions, is used to determine the selection probability of the step lengths. A set of 57 benchmark functions is used to test ASBSO's search ability, and four real-world problems are adopted to show its practical value. All these test results indicate a remarkable improvement in the solution quality, scalability, and robustness of ASBSO.
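
The step-length mechanism on its own can be sketched as follows: several candidate step lengths are chosen with probabilities driven by a memory of how much improvement each one produced recently. The brain storm clustering and idea-generation machinery of BSO/ASBSO is left out, and the objective and constants are placeholders.

import numpy as np

def objective(x):                        # toy function to minimise
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
steps = np.array([1.0, 0.3, 0.05])       # candidate step lengths
memory = np.ones(len(steps))             # accumulated improvement per step length
x = rng.uniform(-5, 5, 8)
fx = objective(x)

for t in range(500):
    probs = memory / memory.sum()        # memory-based selection probability
    k = rng.choice(len(steps), p=probs)
    cand = x + steps[k] * rng.normal(size=x.shape)
    fc = objective(cand)
    if fc < fx:
        memory[k] += fx - fc             # reward the step length that improved the solution
        x, fx = cand, fc
    memory *= 0.999                      # slowly forget old successes

print("best value:", fx, "selection probabilities:", np.round(memory / memory.sum(), 2))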

Neural And Evolutionary Computing

Accelerating Deep Neuroevolution on Distributed FPGAs for Reinforcement Learning Problems

Reinforcement learning, augmented by the representational power of deep neural networks, has shown promising results on high-dimensional problems such as game playing and robotic control. However, the sequential nature of these problems poses a fundamental challenge for computational efficiency. Recently, alternative approaches such as evolution strategies and deep neuroevolution have demonstrated competitive results with faster training times on distributed CPU cores. Here, we report record training times (about 1 million frames per second) for Atari 2600 games using deep neuroevolution implemented on distributed FPGAs. The acceleration comes from a combined hardware implementation of the game console, image pre-processing, and the neural network in an optimized pipeline, multiplied by system-level parallelism. These results are the first application demonstration on the IBM Neural Computer, a custom-designed system that consists of 432 Xilinx FPGAs interconnected in a 3D mesh network topology. In addition to high performance, the experiments also showed an improvement in accuracy for all games compared to a CPU implementation of the same algorithm.
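
The base loop being accelerated here is a mutation-only genetic algorithm over network weights, which the sketch below illustrates on a trivial stand-in task. The game console, image pipeline, and FPGA distribution that provide the speed-up are not modelled, and the task and parameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, 16)            # stand-in for "good" policy weights

def episode_return(weights):
    # Placeholder for running a full game episode with a policy network.
    return -float(np.sum((weights - target) ** 2))

pop_size, n_elite, sigma, generations = 64, 8, 0.1, 50
population = [rng.uniform(-1, 1, 16) for _ in range(pop_size)]

for g in range(generations):
    # This evaluation loop is the part that runs in parallel across many workers.
    scores = [episode_return(w) for w in population]
    elite_idx = np.argsort(scores)[-n_elite:]
    elites = [population[i] for i in elite_idx]
    population = [elites[-1]] + [                       # keep the best unchanged
        elites[rng.integers(n_elite)] + sigma * rng.normal(size=16)
        for _ in range(pop_size - 1)
    ]

print("best return:", max(episode_return(w) for w in population))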

Neural And Evolutionary Computing

Accelerating Reinforcement Learning Agent with EEG-based Implicit Human Feedback

Providing Reinforcement Learning (RL) agents with human feedback can dramatically improve various aspects of learning. However, previous methods require the human observer to give inputs explicitly (e.g., by pressing buttons or using a voice interface), burdening the human in the loop of the RL agent's learning process. Further, it is sometimes difficult or impossible to obtain explicit human advice (feedback), e.g., in autonomous driving or rehabilitation for the disabled. In this work, we investigate capturing a human's intrinsic reactions as implicit (and natural) feedback through EEG in the form of error-related potentials (ErrPs), providing a natural and direct way for humans to improve the RL agent's learning. Human intelligence can thus be integrated with RL algorithms via implicit feedback to accelerate the learning of the RL agent. We develop three reasonably complex 2D discrete navigational games to experimentally evaluate the overall performance of the proposed work. The major contributions of our work are as follows: (i) we propose and experimentally validate zero-shot learning of ErrPs, in which ErrPs learned for one game transfer to other, unseen games; (ii) we propose a novel RL framework for integrating implicit human feedback via ErrPs with the RL agent, improving label efficiency and robustness to human mistakes; and (iii) compared to prior work, we scale the application of ErrPs to reasonably complex environments and demonstrate the significance of our approach for accelerated learning through real user experiments.
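
The general integration pattern can be sketched as follows: a binary error signal, standing in for an ErrP classifier run on EEG, is turned into an extra negative reward for Q-learning. The "detector" below is a noisy simulated oracle, and the corridor task and constants are invented; the paper's actual framework, games, and zero-shot ErrP decoding are not reproduced.

import random

n_states, goal = 10, 9                  # 1-D corridor, goal at the right end
actions = [-1, +1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps, errp_penalty = 0.5, 0.95, 0.1, 0.5

def simulated_errp(state, action):
    """Noisy stand-in for an EEG ErrP detector: flags moves away from the goal."""
    wrong = (action == -1)
    return wrong if random.random() < 0.8 else not wrong   # 80% detection accuracy

for episode in range(200):
    s = 0
    while s != goal:
        a = random.choice(actions) if random.random() < eps \
            else max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == goal else 0.0
        if simulated_errp(s, a):
            r -= errp_penalty            # implicit human feedback as reward shaping
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})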

Neural And Evolutionary Computing

Accuracy of neural networks for the simulation of chaotic dynamics: precision of training data vs precision of the algorithm

We explore the influence of the precision of the data and of the algorithm on the simulation of chaotic dynamics with neural network techniques. For this purpose, we simulate the Lorenz system at different precisions using three neural network techniques adapted to time series, namely reservoir computing (echo state networks, ESN), long short-term memory (LSTM) networks, and temporal convolutional networks (TCN), for both short- and long-term predictions, and assess their efficiency and accuracy. Our results show that the ESN is best at predicting the dynamics of the system accurately, and that in all cases the precision of the algorithm matters more for the accuracy of the predictions than the precision of the training data. This result supports the idea that neural networks can perform time-series predictions in many practical applications where the data are necessarily of limited precision, in line with recent results. It also suggests that, for a given data set, the reliability of the predictions can be significantly improved by using a network with higher precision than that of the data.
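
As a rough illustration of the reservoir-computing setup compared in the paper, the sketch below trains a minimal echo state network for one-step-ahead prediction of the Lorenz system. Casting the data to a lower precision (e.g. float32) mimics the "precision of training data" experiments; the network size, spectral radius, and ridge parameter are arbitrary choices, not the paper's.

import numpy as np

rng = np.random.default_rng(0)

def lorenz(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xyz = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        x, y, z = xyz
        xyz = xyz + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        out[i] = xyz
    return out

data = lorenz(3000).astype(np.float32)          # training-data precision is set here
u, target = data[:-1], data[1:]

n_res, leak, ridge = 300, 0.3, 1e-6
Win = rng.uniform(-0.5, 0.5, (n_res, 3))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u)):
    x = (1 - leak) * x + leak * np.tanh(Win @ u[t] + W @ x)
    states[t] = x

# Ridge-regression readout (the only trained part of an ESN).
Wout = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)
pred = states @ Wout
print("one-step RMSE:", float(np.sqrt(np.mean((pred - target) ** 2))))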

