
Publication


Featured research published by Ammar Mohemmed.


International Journal of Neural Systems | 2012

SPAN: Spike Pattern Association Neuron for Learning Spatio-Temporal Spike Patterns

Ammar Mohemmed; Stefan Schliebs; Satoshi Matsuda; Nikola Kasabov

Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN - a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow-Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN regarding two related algorithms, ReSuMe and Chronotron, are discussed.
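
A minimal sketch of the core SPAN idea under stated assumptions: spike trains are convolved with an alpha kernel on a discretized time grid, and the Widrow-Hoff rule is applied to the resulting analog traces. The kernel shape, time resolution and learning rate below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def alpha_kernel(t, tau=5.0):
    """Alpha-shaped kernel used to turn discrete spikes into an analog trace."""
    return np.where(t >= 0, (t / tau) * np.exp(1 - t / tau), 0.0)

def convolve_spikes(spike_times, t_grid, tau=5.0):
    """Sum of kernels centered at each spike time, evaluated on a time grid."""
    trace = np.zeros_like(t_grid)
    for s in spike_times:
        trace += alpha_kernel(t_grid - s, tau)
    return trace

def span_delta_w(input_spikes, desired_spikes, actual_spikes,
                 t_grid, lr=0.01, tau=5.0):
    """Widrow-Hoff style update on kernel-convolved spike trains:
    dw_i = lr * integral x_i(t) * (y_d(t) - y_a(t)) dt (discretized)."""
    dt = t_grid[1] - t_grid[0]
    err = (convolve_spikes(desired_spikes, t_grid, tau)
           - convolve_spikes(actual_spikes, t_grid, tau))
    return np.array([lr * dt * np.sum(convolve_spikes(s, t_grid, tau) * err)
                     for s in input_spikes])

# Toy usage: three input synapses, one desired and one actual output train (ms).
t = np.arange(0.0, 100.0, 0.1)
dw = span_delta_w(input_spikes=[[10, 40], [20], [5, 70]],
                  desired_spikes=[30, 60], actual_spikes=[35], t_grid=t)
print(dw)
```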


International Conference on Neural Information Processing | 2011

Evolving Probabilistic Spiking Neural Networks for Spatio-temporal Pattern Recognition: A Preliminary Study on Moving Object Recognition

Nikola Kasabov; Kshitij Dhoble; Nuttapod Nuntalid; Ammar Mohemmed

This paper proposes a novel architecture for continuous spatio-temporal data modeling and pattern recognition utilizing evolving probabilistic spiking neural network ‘reservoirs’ (epSNNr). The paper demonstrates, on simple experimental data for moving object recognition, that: (1) the epSNNr approach is more accurate and flexible than using a standard SNN; (2) the use of probabilistic neuronal models is superior in several aspects when compared with traditional deterministic SNN models, including better performance on noisy data.
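
As a rough illustration of the kind of probabilistic neuronal model contrasted here with deterministic SNN, below is a toy stochastic leaky integrate-and-fire neuron whose firing probability grows with the membrane potential. The sigmoidal firing function and all parameters are assumptions made for this sketch, not the models used in the paper.

```python
import numpy as np

def stochastic_lif(input_current, dt=1.0, tau_m=20.0, v_thresh=1.0,
                   beta=5.0, rng=np.random.default_rng(0)):
    """Leaky integrate-and-fire neuron with probabilistic firing:
    the closer the membrane potential is to threshold, the more likely a spike."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau_m + i_in)                           # leaky integration
        p_fire = 1.0 / (1.0 + np.exp(-beta * (v - v_thresh)))   # sigmoidal firing prob.
        if rng.random() < p_fire:
            spikes.append(step * dt)
            v = 0.0                                             # reset after a spike
    return spikes

# Toy usage: constant input current for 200 ms; prints the emitted spike times.
print(stochastic_lif(np.full(200, 0.08)))
```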


International Symposium on Neural Networks | 2011

Are probabilistic spiking neural networks suitable for reservoir computing?

Stefan Schliebs; Ammar Mohemmed; Nikola Kasabov

This study employs networks of stochastic spiking neurons as reservoirs for liquid state machines (LSM). We experimentally investigate the separation property of these reservoirs and show their ability to generalize over classes of input signals. Similar to traditional LSM, probabilistic LSM (pLSM) have the separation property that enables them to distinguish between different classes of input stimuli. Furthermore, our results indicate some potential advantages of non-deterministic LSM, which can improve the separation ability of the liquid. Three non-deterministic neural models are considered, and for each of them several parameter configurations are explored. We demonstrate some of the characteristics of pLSM and compare them to their deterministic counterparts. pLSM offer more flexibility due to their probabilistic parameters, which for some parameter values results in better performance.
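
One simple way to probe the separation property mentioned above is to compare the liquid (reservoir) state vectors collected for two input classes; the sketch below uses the distance between class-centroid states as a crude proxy. The state vectors here are randomly generated stand-ins, not outputs of an actual pLSM.

```python
import numpy as np

def separation(states_a, states_b):
    """Distance between class-centroid liquid states: a simple proxy for the
    reservoir's separation property (larger = classes easier to tell apart)."""
    mu_a = np.mean(states_a, axis=0)
    mu_b = np.mean(states_b, axis=0)
    return np.linalg.norm(mu_a - mu_b)

# Toy usage: 50 liquid-state vectors per class, 100 reservoir neurons each.
rng = np.random.default_rng(1)
class_a = rng.normal(0.0, 1.0, size=(50, 100))
class_b = rng.normal(0.5, 1.0, size=(50, 100))
print(separation(class_a, class_b))
```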


Computational Intelligence Methods for Bioinformatics and Biostatistics | 2011

Modelling the Effect of Genes on the Dynamics of Probabilistic Spiking Neural Networks for Computational Neurogenetic Modelling

Nikola Kasabov; Stefan Schliebs; Ammar Mohemmed

Computational neuro-genetic models (CNGM) combine two dynamic models: a gene regulatory network (GRN) model at a lower level and a spiking neural network (SNN) model at a higher level, in order to model the dynamic interaction between genes and spiking patterns of activity under certain conditions. The paper demonstrates that it is possible to model and trace over time the effect of a gene on the total spiking behavior of the SNN when the gene controls a parameter of a stochastic spiking neuron model used to build the SNN. Such CNGM can potentially be used to study neurodegenerative diseases or to develop CNGM for cognitive robotics.
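
A toy sketch of the CNGM coupling: a small gene regulatory network updates gene expression levels over time, and one gene's expression is assumed to scale a parameter of the spiking neuron model (here a membrane time constant). The interaction matrix and the gene-to-parameter mapping are purely illustrative and not taken from the paper.

```python
import numpy as np

def grn_step(expr, W, dt=0.1):
    """One discrete-time update of a simple gene regulatory network:
    expression levels interact through weight matrix W and stay in [0, 1]."""
    return np.clip(expr + dt * np.tanh(W @ expr), 0.0, 1.0)

# Toy GRN of 3 genes; gene 0 is assumed to modulate the neuron's time constant.
W = np.array([[0.0, 0.8, -0.5],
              [-0.3, 0.0, 0.6],
              [0.4, -0.7, 0.0]])
expr = np.array([0.5, 0.2, 0.8])

for step in range(100):
    expr = grn_step(expr, W)
    tau_m = 10.0 + 20.0 * expr[0]   # gene expression scales a neuron parameter
    # ...the SNN would be simulated here with the current value of tau_m...

print(expr, tau_m)
```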


EANN/AIAI (1) | 2011

Method for Training a Spiking Neuron to Associate Input-Output Spike Trains

Ammar Mohemmed; Stefan Schliebs; Satoshi Matsuda; Nikola Kasabov

We propose a novel supervised learning rule that allows training a spiking neuron to exhibit a precise input-output behavior. A single neuron can be trained to associate (map) different output spike trains with multiple different input spike trains. Spike trains are transformed into continuous functions through appropriate kernels and then the Delta rule is applied. The main advantage of the method is its algorithmic simplicity, which promotes its straightforward application to building spiking neural networks (SNN) for engineering problems. We experimentally demonstrate the suitability of the method for spatio-temporal classification on a synthetic benchmark problem. The obtained results show promising efficiency and precision of the proposed method.


International Symposium on Neural Networks | 2011

Optimization of Spiking Neural Networks with dynamic synapses for spike sequence generation using PSO

Ammar Mohemmed; Satoshi Matsuda; Stefan Schliebs; Kshitij Dhoble; Nikola Kasabov

We present a method based on Particle Swarm Optimization (PSO) for training a Spiking Neural Network (SNN) with dynamic synapses to generate precisely timed spike sequences. The similarity between the desired spike sequence and the actual output sequence is measured by a simple leaky integrate-and-fire spiking neuron. This measurement is used as a fitness function for the PSO algorithm to tune the dynamic synapses until the desired output spike sequence is obtained when a certain input spike sequence is presented. Simulations are presented to illustrate the performance of the proposed method.
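
A minimal PSO loop over a vector of synaptic parameters, in the spirit of the method described above. To keep the sketch self-contained, the spike-train simulation and the LIF-based similarity measure are replaced by a stand-in quadratic fitness; in the actual method the fitness would come from comparing the network's output spike sequence with the desired one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in fitness: in the method above this would be the spike-train similarity
# between the desired sequence and the SNN output for a candidate parameter set.
target = rng.uniform(-1, 1, 8)
def fitness(params):
    return float(np.sum((params - target) ** 2))

def pso(fitness, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm optimizer over a vector of synaptic parameters."""
    x = rng.uniform(-1, 1, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, fitness(gbest)

best, best_f = pso(fitness, dim=8)
print(best_f)   # should be close to zero after optimization
```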


International Conference on Neural Information Processing | 2012

Evaluating SPAN incremental learning for handwritten digit recognition

Ammar Mohemmed; Guoyu Lu; Nikola Kasabov

In a previous work [12, 11], the authors proposed SPAN: a learning algorithm based on temporal coding for Spiking Neural Networks (SNN). The algorithm trains a neuron to associate target spike patterns with input spatio-temporal spike patterns. In this paper we present the details of an experiment to evaluate the feasibility of SPAN learning on a real-world dataset: classifying images of handwritten digits. As spike encoding is an important issue in using SNN for practical applications, we discuss a few methods for converting images into spike patterns. The experiment yields encouraging results for considering SPAN learning in practical temporal pattern recognition applications.
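
One commonly used option for converting images into spike patterns is time-to-first-spike (latency) encoding, where brighter pixels fire earlier. The sketch below illustrates this idea; it is an assumed example and not necessarily one of the encodings evaluated in the paper.

```python
import numpy as np

def latency_encode(image, t_max=100.0, threshold=0.05):
    """Time-to-first-spike encoding: each pixel above threshold becomes one spike
    whose timing is earlier for higher intensity; near-zero pixels emit no spike."""
    img = image.astype(float) / image.max()
    times = (1.0 - img) * t_max                    # bright -> early, dark -> late
    return [(i, t) for i, (t, p) in enumerate(zip(times.ravel(), img.ravel()))
            if p > threshold]

# Toy 4x4 "image"; each pixel maps to one input neuron.
img = np.array([[0, 32, 64, 96],
                [128, 160, 192, 224],
                [255, 224, 192, 160],
                [128, 96, 64, 32]], dtype=np.uint8)
print(latency_encode(img)[:5])   # list of (input neuron index, spike time in ms)
```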


International Conference on Neural Information Processing | 2011

SPAN: a neuron for precise-time spike pattern association

Ammar Mohemmed; Stefan Schliebs; Nikola Kasabov

In this paper we propose SPAN, a LIF spiking neuron that is capable of learning input-output spike pattern associations using a novel learning algorithm. The main idea of SPAN is to transform the spike trains into analog signals on which the error can be computed easily. As demonstrated in an experimental analysis, the proposed method is both simple and efficient, achieving reliable training results even in the presence of noise.


International Symposium on Neural Networks | 2012

Incremental learning algorithm for spatio-temporal spike pattern classification

Ammar Mohemmed; Nikola Kasabov

In a previous work (Mohemmed et al. [11]), the authors proposed a supervised learning algorithm to train a spiking neuron to associate input/output spike patterns. In this paper, the association learning rule is applied to training a single layer of spiking neurons to classify multiclass spike patterns, whereby the neurons are trained to recognize an input spike pattern by emitting a predetermined spike train. The training is performed in an incremental fashion, i.e. the synaptic weights are adjusted after each presentation of a training pattern. The individual neurons are trained independently of the other neurons and on patterns from a single class. A spike train comparison criterion is used to decode the output spike trains into class labels. The results of simulation experiments on a synthetic dataset of spike patterns show high efficiency in solving the considered classification task.
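
The decoding step can be illustrated with a van Rossum-style comparison: smooth the output spike train and each class's predefined target train with an exponential kernel and pick the class with the smallest distance. This is an assumed stand-in for the spike train comparison criterion, not necessarily the exact criterion used in the paper.

```python
import numpy as np

def smooth(spikes, t_grid, tau=5.0):
    """Exponentially filtered spike train (van Rossum-style smoothing)."""
    trace = np.zeros_like(t_grid)
    for s in spikes:
        trace += np.where(t_grid >= s, np.exp(-(t_grid - s) / tau), 0.0)
    return trace

def decode(output_spikes, class_targets, t_grid, tau=5.0):
    """Assign the class whose predefined target spike train is closest
    (in smoothed-trace distance) to the neuron's output spike train."""
    out = smooth(output_spikes, t_grid, tau)
    dists = {label: np.sum((out - smooth(target, t_grid, tau)) ** 2)
             for label, target in class_targets.items()}
    return min(dists, key=dists.get)

# Toy usage: three classes with predefined target trains (ms); output resembles class 0.
t = np.arange(0.0, 100.0, 0.1)
targets = {0: [20, 60], 1: [40, 80], 2: [10, 50, 90]}
print(decode([22, 58], targets, t))   # -> 0
```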


Neurocomputing | 2013

Training spiking neural networks to associate spatio-temporal input-output spike patterns

Ammar Mohemmed; Stefan Schliebs; Satoshi Matsuda; Nikola Kasabov

Collaboration


Dive into Ammar Mohemmed's collaborations.

Top Co-Authors

Nikola Kasabov, Auckland University of Technology
Stefan Schliebs, Auckland University of Technology
Kshitij Dhoble, Auckland University of Technology
Masaki Ishii, Akita Prefectural University
Nuttapod Nuntalid, Auckland University of Technology