Featured Research

Neural And Evolutionary Computing

Continuous Learning in a Single-Incremental-Task Scenario with Spike Features

Deep Neural Networks (DNNs) have two key deficiencies: their dependence on high-precision computing, and their inability to learn sequentially, that is, when a DNN is trained on a first task and then trained on the next task, it forgets the first task. This phenomenon of forgetting previously learned tasks is referred to as catastrophic forgetting. The mammalian brain, on the other hand, outperforms DNNs in terms of energy efficiency and in its ability to learn sequentially without catastrophic forgetting. Here, we use bio-inspired Spike Timing Dependent Plasticity (STDP) in the feature-extraction layers of the network, with instantaneous neurons, to extract meaningful features. In the classification layers of the network, we use a modified synaptic-intelligence measure, which we refer to as the cost-per-synapse metric, as a regularizer to immunize the network against catastrophic forgetting in the Single-Incremental-Task (SIT) scenario. In this study, we use the MNIST handwritten-digit dataset, divided into five sub-tasks.
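To make the regularizer concrete, here is a minimal sketch of a synaptic-intelligence-style quadratic penalty in PyTorch. The function name, the importance-accumulation rule mentioned in the docstring, and the coefficient c are illustrative assumptions; the paper's exact cost-per-synapse metric may differ.

```python
import torch

def cost_per_synapse_penalty(params, old_params, omega, c=0.1):
    """Quadratic penalty discouraging changes to weights that mattered
    for earlier sub-tasks (synaptic-intelligence style).

    old_params: snapshot of the weights taken after the previous sub-task.
    omega: per-parameter importance, e.g. a running sum of
           -gradient * weight-update accumulated during earlier training.
    """
    penalty = sum(((w - w0) ** 2 * om).sum()
                  for w, w0, om in zip(params, old_params, omega))
    return c * penalty

# Used as: total_loss = task_loss + cost_per_synapse_penalty(ps, old_ps, omega)
```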

Read more
Neural And Evolutionary Computing

Continuous Optimization Benchmarks by Simulation

Benchmark experiments are required to test, compare, tune, and understand optimization algorithms. Ideally, benchmark problems closely reflect real-world problem behavior. Yet, real-world problems are not always readily available for benchmarking. For example, evaluation costs may be too high, or resources are unavailable (e.g., software or equipment). As a solution, data from previous evaluations can be used to train surrogate models, which are then used for benchmarking. The goal is to generate test functions on which the performance of an algorithm is similar to that on the real-world objective function. However, predictions from data-driven models tend to be smoother than the ground truth from which the training data is derived. This is especially problematic when the training data becomes sparse. The resulting benchmarks may not reflect the landscape features of the ground truth, may be too easy, and may lead to biased conclusions. To resolve this, we use simulation of Gaussian processes instead of estimation (or prediction). This retains the covariance properties estimated during model training. While previous research suggested a decomposition-based approach for a small-scale, discrete problem, we show that the spectral simulation method enables simulation for continuous optimization problems. In a set of experiments with an artificial ground truth, we demonstrate that this yields more accurate benchmarks than simply predicting with the Gaussian process model.
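As an illustration of the spectral approach, the sketch below draws an approximate sample path of a zero-mean Gaussian process with a squared-exponential kernel via random Fourier features, whose frequencies are sampled from the kernel's spectral density. This shows unconditional simulation only; the function and parameter names are ours, and a faithful benchmark generator would additionally condition the path on the training data.

```python
import numpy as np

def simulate_gp_sample(dim, lengthscale=0.3, n_features=500, seed=0):
    """Approximate sample path f(x) = sqrt(2/m) * sum_i a_i cos(w_i.x + b_i)
    of a zero-mean GP with a squared-exponential kernel: the w_i are drawn
    from the kernel's spectral density N(0, lengthscale^-2 I)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / lengthscale, size=(n_features, dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    a = rng.normal(size=n_features)

    def f(x):
        phi = np.sqrt(2.0 / n_features) * np.cos(np.atleast_2d(x) @ W.T + b)
        return phi @ a

    return f

# Each seed yields a different non-smoothed test function:
test_fn = simulate_gp_sample(dim=2, seed=42)
print(test_fn([0.0, 0.0]))
```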

Read more
Neural And Evolutionary Computing

Correspondence between neuroevolution and gradient descent

We show analytically that training a neural network by stochastic mutation or "neuroevolution" of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise. Averaged over independent realizations of the learning process, neuroevolution is equivalent to gradient descent on the loss function. We use numerical simulation to show that this correspondence can be observed for finite mutations, for shallow and deep neural networks. Our results provide a connection between two distinct types of neural-network training, and provide justification for the empirical success of neuroevolution.
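A toy version of the mutation scheme makes the correspondence easy to probe numerically. This is an illustrative sketch of ours (a greedy accept-if-not-worse rule), not the paper's exact protocol; for small sigma its trajectory tracks noisy gradient descent on the loss.

```python
import numpy as np

def neuroevolve(loss, w, sigma=1e-3, steps=10_000, seed=0):
    """Train by stochastic weight mutation: propose w' = w + sigma * noise
    and keep the proposal if the loss does not increase."""
    rng = np.random.default_rng(seed)
    best = loss(w)
    for _ in range(steps):
        trial = w + sigma * rng.normal(size=w.shape)
        trial_loss = loss(trial)
        if trial_loss <= best:
            w, best = trial, trial_loss
    return w, best

# Quadratic loss, whose gradient-descent behaviour is known in closed form:
w, final = neuroevolve(lambda v: float(np.sum(v ** 2)), np.ones(5))
print(final)  # approaches 0, as gradient descent would
```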

Read more
Neural And Evolutionary Computing

Creative AI Through Evolutionary Computation: Principles and Examples

The main power of artificial intelligence is not in modeling what we already know, but in creating solutions that are new. Such solutions exist in extremely large, high-dimensional, and complex search spaces. Population-based search techniques, i.e. variants of evolutionary computation, are well suited to finding them. These techniques make it possible to find creative solutions to practical problems in the real world, making creative AI through evolutionary computation the likely "next deep learning."

Read more
Neural And Evolutionary Computing

Critical Analysis: Bat Algorithm based Investigation and Application on Several Domains

In recent years, several swarm-optimization algorithms have emerged, among them the Bat Algorithm (BA), proposed by Xin-She Yang in 2010. The idea of the algorithm is taken from the echolocation ability of bats. Purpose: The purpose of this study is to provide the reader with a full study of the Bat Algorithm, including its limitations, the fields to which the algorithm has been applied, versatile optimization problems in different domains, and all the studies that assess its performance against other meta-heuristic algorithms. Approach: The Bat Algorithm is covered in depth in terms of its background, characteristics, and limitations. The study also surveys the algorithms that have been hybridized with BA (K-Medoids, back-propagation neural networks, the Harmony Search Algorithm, Differential Evolution strategies, Enhanced Particle Swarm Optimization, and the Cuckoo Search Algorithm) and their theoretical results, as well as the modifications that have been made to the algorithm (Modified Bat Algorithm (MBA), Enhanced Bat Algorithm (EBA), Bat Algorithm with Mutation (BAM), Uninhabited Combat Aerial Vehicle-Bat Algorithm with Mutation (UCAV-BAM), Nonlinear Optimization)... Findings: The survey sheds light on the advantages and disadvantages of this algorithm across all the research that has dealt with it, in addition to the fields and applications it has addressed, in the hope that this will help scientists understand and develop it further. Originality/value: To the best of the research community's knowledge, no comprehensive survey covering all aspects of this algorithm has previously been conducted. Keywords: Swarm Intelligence; Nature-Inspired Algorithms; Metaheuristic Algorithms; Optimization Algorithms; Bat Algorithm.
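For readers unfamiliar with BA, a compact sketch of the canonical algorithm follows: a frequency-tuned velocity update toward the current best, a local random walk gated by the pulse rate, and a loudness-gated acceptance rule. All parameter values are illustrative defaults, not settings from any study in the survey, and the sketch assumes k is at most the population size.

```python
import numpy as np

def bat_algorithm(obj, dim, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                  alpha=0.9, gamma=0.9, lo=-5.0, hi=5.0, seed=0):
    """Minimal sketch of Yang's (2010) Bat Algorithm, minimizing obj."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_bats, dim))    # bat positions
    v = np.zeros((n_bats, dim))               # velocities
    A = np.ones(n_bats)                       # loudness, decays on success
    r = np.zeros(n_bats)                      # pulse emission rate, grows
    fit = np.array([obj(xi) for xi in x])
    b = fit.argmin()
    best, best_fit = x[b].copy(), fit[b]
    for t in range(1, iters + 1):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()        # frequency
            v[i] += (x[i] - best) * f
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > r[i]:                        # local walk near best
                cand = np.clip(best + 0.01 * A.mean() * rng.normal(size=dim),
                               lo, hi)
            fc = obj(cand)
            if fc <= fit[i] and rng.random() < A[i]:       # accept; get quieter
                x[i], fit[i] = cand, fc
                A[i] *= alpha
                r[i] = 1.0 - np.exp(-gamma * t)
            if fit[i] < best_fit:
                best, best_fit = x[i].copy(), fit[i]
    return best, best_fit

# e.g. bat_algorithm(lambda z: float(np.sum(z ** 2)), dim=5)
```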

Read more
Neural And Evolutionary Computing

Crossmodal Language Grounding in an Embodied Neurocognitive Model

Human infants are able to acquire natural language with seeming ease at an early age. Their language learning seems to occur simultaneously with learning other cognitive functions as well as with playful interactions with the environment and caregivers. From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities, and acquired by means of crossmodal integration. However, characterising the underlying mechanisms in the brain is difficult, and explaining the grounding of language in crossmodal perception and action remains challenging. In this paper, we present a neurocognitive model for language grounding which reflects bio-inspired mechanisms such as an implicit adaptation of timescales as well as end-to-end multimodal abstraction. It addresses developmental robotic interaction and extends its learning capabilities using larger-scale knowledge-based data. In our scenario, we utilise the humanoid robot NICO to obtain the EMIL data collection, in which the cognitive robot interacts with objects in a children's playground environment while receiving linguistic labels from a caregiver. The model analysis shows that crossmodally integrated representations are sufficient for acquiring language merely from sensory input through interaction with objects in an environment. The representations self-organise hierarchically and embed temporal and spatial information through composition and decomposition. This model can also provide the basis for further crossmodal integration of perceptually grounded cognitive representations.

Read more
Neural And Evolutionary Computing

Cyber Kittens, or Some First Steps Towards Categorical Cybernetics

We define a categorical notion of cybernetic system as a dynamical realisation of a generalized open game, along with a coherence condition. We show that this notion captures a wide class of cybernetic systems in computational neuroscience and statistical machine learning, exposes their compositional structure, and gives an abstract justification for the bidirectional structure empirically observed in cortical circuits. Our construction is built on the observation that Bayesian updates compose optically, a fact which we prove along the way, via a fibred category of state-dependent stochastic channels.
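For orientation, one conventional way to state the key fact, that Bayesian inversion reverses composition with the prior pushed forward along the way, which is exactly the composition law of a lens or optic, is sketched below. The notation (f with a dagger subscript pi for the Bayesian inverse of a channel f at prior pi, and the lower star for the pushforward) is standard but not necessarily the paper's.

```latex
% Sketch, under standard definitions of Bayesian inversion:
% for stochastic channels f : X -> Y, g : Y -> Z and a prior \pi on X,
\[
  (g \circ f)^{\dagger}_{\pi}
  \;=\;
  f^{\dagger}_{\pi} \circ g^{\dagger}_{f_{*}\pi} .
\]
% The backward (update) maps compose in reverse order while the forward
% maps thread the prior along: the lens/optic composition law.
```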

Read more
Neural And Evolutionary Computing

DIET-SNN: Direct Input Encoding With Leakage and Threshold Optimization in Deep Spiking Neural Networks

Bio-inspired spiking neural networks (SNNs), operating with asynchronous binary signals (or spikes) distributed over time, can potentially lead to greater computational efficiency on event-driven hardware. State-of-the-art SNNs, however, suffer from high inference latency resulting from inefficient input encoding and sub-optimal settings of the neuron parameters (firing threshold and membrane leak). We propose DIET-SNN, a low-latency deep spiking network trained with gradient descent to optimize the membrane leak and the firing threshold along with the other network parameters (weights). The membrane leak and threshold of each layer of the SNN are optimized with end-to-end backpropagation to achieve competitive accuracy at reduced latency. The analog pixel values of an image are applied directly to the input layer of DIET-SNN without conversion to a spike train. The first convolutional layer is trained to convert inputs into spikes: leaky-integrate-and-fire (LIF) neurons integrate the weighted inputs and generate an output spike when the membrane potential crosses the trained firing threshold. The trained membrane leak controls the flow of input information and attenuates irrelevant inputs, increasing activation sparsity in the convolutional and dense layers of the network. The reduced latency combined with high activation sparsity yields large improvements in computational efficiency. We evaluate DIET-SNN on image-classification tasks from the CIFAR and ImageNet datasets on VGG and ResNet architectures. We achieve 69% top-1 accuracy with 5 timesteps (inference latency) on the ImageNet dataset with 12x less compute energy than an equivalent standard ANN. Additionally, DIET-SNN performs inference 20-500x faster than other state-of-the-art SNN models.
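A single timestep of the kind of LIF dynamics described above can be sketched in a few lines of PyTorch. This is our illustrative reading (soft reset by subtraction), not the authors' exact code, and training end-to-end would additionally require a surrogate gradient for the non-differentiable spike function.

```python
import torch

def lif_step(current, mem, leak, threshold):
    """One timestep of a leaky-integrate-and-fire layer.

    current: weighted input to the layer at this timestep.
    leak, threshold: per-layer trainable parameters (0 < leak < 1).
    """
    mem = leak * mem + current                 # leak, then integrate
    spike = (mem >= threshold).float()         # fire on threshold crossing
    mem = mem - spike * threshold              # soft reset by subtraction
    return spike, mem
```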

Read more
Neural And Evolutionary Computing

Decomposition in Decision and Objective Space for Multi-Modal Multi-Objective Optimization

Multi-modal multi-objective optimization problems (MMMOPs) have multiple subsets within the Pareto-optimal set, each independently mapping to the same Pareto front. Prevalent multi-objective evolutionary algorithms are not purely designed to search for multiple solution subsets, whereas algorithms designed for MMMOPs demonstrate degraded performance in the objective space. This motivates the design of better algorithms for addressing MMMOPs. The present work identifies the crowding illusion problem, which originates from using crowding distance globally over the entire decision space. Subsequently, an evolutionary framework, called graph Laplacian based Optimization using Reference vector assisted Decomposition (LORD), is proposed, which uses decomposition in both objective and decision space for dealing with MMMOPs. Its filtering step is further extended to present the LORD-II algorithm, which demonstrates its dynamics on multi-modal many-objective problems. The efficacies of the frameworks are established by comparing their performance on test instances from the CEC 2019 multi-modal multi-objective test suite and polygon problems against the state-of-the-art algorithms for MMMOPs and other multi- and many-objective evolutionary algorithms. The manuscript concludes by noting the limitations of the proposed frameworks and future directions for designing still better algorithms for MMMOPs. The source code is available at this https URL.
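The "crowding illusion" is easiest to see against the classic crowding-distance computation, sketched below: applied globally in decision space, two well-separated Pareto subsets can mask each other's density, which is why LORD instead works per decomposed region. The implementation is the standard NSGA-II formula, not code from the paper.

```python
import numpy as np

def crowding_distance(points):
    """NSGA-II crowding distance of each row of `points` (n x m).
    Boundary points get infinite distance; interior points sum the
    normalized gaps between their neighbours along each coordinate."""
    n, m = points.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(points[:, j])
        d[order[0]] = d[order[-1]] = np.inf
        span = points[order[-1], j] - points[order[0], j]
        if span > 0:
            d[order[1:-1]] += (points[order[2:], j]
                               - points[order[:-2], j]) / span
    return d
```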

Read more
Neural And Evolutionary Computing

Decomposition-Based Multi-Objective Evolutionary Algorithm Design under Two Algorithm Frameworks

The development of efficient and effective evolutionary multi-objective optimization (EMO) algorithms has been an active research topic in the evolutionary computation community. Over the years, many EMO algorithms have been proposed. The existing EMO algorithms are mainly developed based on the final population framework, in which the final population of an EMO algorithm is presented to the decision maker. Thus, the final population produced by an EMO algorithm is required to be a good solution set. Recently, the use of a solution-selection framework was suggested for the design of EMO algorithms. This framework maintains an unbounded external archive that stores all the examined solutions. A pre-specified number of solutions are selected from the archive as the final solutions presented to the decision maker. When the solution-selection framework is used, EMO algorithms can be designed more flexibly, since the final population itself does not need to be a good solution set. In this paper, we examine the design of MOEA/D under these two frameworks. We use an offline genetic algorithm-based hyper-heuristic method to find the optimal configuration of MOEA/D in each framework. The DTLZ and WFG test suites and their minus versions are used in our experiments. The experimental results suggest that a more flexible, robust, and higher-performance MOEA/D algorithm can be obtained when the solution-selection framework is used.
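The difference between the two frameworks comes down to where the final solution set comes from. The sketch below shows the solution-selection side: an unbounded archive of every evaluated solution, from which k final solutions are picked. The greedy max-min-distance selector here is a stand-in of ours; a real study might select by hypervolume or IGD contribution instead.

```python
import numpy as np

def select_from_archive(archive_objs, k):
    """Pick k well-spread solutions (by objective vectors) from an
    unbounded archive of all examined solutions. Assumes k <= archive size."""
    objs = np.asarray(archive_objs, dtype=float)
    chosen = [int(objs.sum(axis=1).argmin())]          # seed with a good point
    dist = np.linalg.norm(objs - objs[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(dist.argmax())                       # farthest from chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(objs - objs[nxt], axis=1))
    return chosen
```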

Read more
