Saeed Afshar
University of Western Sydney
Publications
Featured research published by Saeed Afshar.
Frontiers in Neuroscience | 2013
Jonathan Tapson; Greg Kevin Cohen; Saeed Afshar; Klaus M. Stiefel; Yossi Buskila; Runchun Mark Wang; Tara Julia Hamilton; André van Schaik
The advent of large scale neural computational platforms has highlighted the lack of algorithms for synthesis of neural structures to perform predefined cognitive tasks. The Neural Engineering Framework (NEF) offers one such synthesis, but it is most effective for a spike rate representation of neural information, and it requires a large number of neurons to implement simple functions. We describe a neural network synthesis method that generates synaptic connectivity for neurons which process time-encoded neural signals, and which makes very sparse use of neurons. The method allows the user to specify—arbitrarily—neuronal characteristics such as axonal and dendritic delays, and synaptic transfer functions, and then solves for the optimal input-output relationship using computed dendritic weights. The method may be used for batch or online learning and has an extremely fast optimization process. We demonstrate its use in generating a network to recognize speech which is sparsely encoded as spike times.
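The core of the synthesis step, solving for the optimal input-output relationship via computed dendritic weights, amounts to a single least-squares solve. A minimal sketch of the idea (the input signal, delay range, transfer function, and target below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 neurons, each seeing the input through its own
# arbitrary dendritic delay and a saturating synaptic transfer function.
n_neurons, n_samples = 20, 500
t = np.linspace(0, 1, n_samples)
delays = rng.uniform(0, 0.05, n_neurons)            # arbitrary dendritic delays
responses = np.array(
    [np.tanh(5 * np.sin(2 * np.pi * 3 * (t - d))) for d in delays]
).T                                                 # one column per neuron

y_target = np.sin(2 * np.pi * 3 * t - 0.5)          # desired output signal

# Solve for the optimal dendritic weights in one linear step:
# the pseudoinverse gives the least-squares input-output mapping.
w = np.linalg.pinv(responses) @ y_target
y_hat = responses @ w

rms_err = np.sqrt(np.mean((y_hat - y_target) ** 2))
print(f"RMS error: {rms_err:.4f}")
```

Because the solve is a single linear operation, it can be run in batch (as here) or updated incrementally for online learning.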
Proceedings of the IEEE | 2014
Tara Julia Hamilton; Saeed Afshar; André van Schaik; Jonathan Tapson
As advances in integrated circuit (IC) fabrication technology reduce feature sizes to dimensions on the order of nanometers, IC designers are facing many of the problems that evolution has had to overcome in order to perform meaningful and accurate computations in biological neural circuits. In this paper, we explore the current state of IC technology, including the many new and exciting opportunities “beyond CMOS.” We review the role of noise in both biological and engineered systems and discuss how “stochastic facilitation” can be used to perform useful and precise computation. We explore nondeterministic methodologies for computation in hardware and introduce the concept of stochastic electronics (SE): a new way to design circuits and increase performance in highly noisy and mismatched fabrication environments. This approach is illustrated with several circuit examples whose results demonstrate its exciting potential.
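The flavor of computation involved can be sketched in a few lines: when a value is encoded as the probability of a 1 in a random bit stream, a single AND gate multiplies two such values, and individual bit flips barely perturb the result. This is only an illustrative software sketch, not one of the paper's circuits:

```python
import numpy as np

rng = np.random.default_rng(42)
n_bits = 100_000                      # longer streams give more precision

def to_stream(p, n, rng):
    """Encode a probability p in [0, 1] as a random bit stream."""
    return rng.random(n) < p

a = to_stream(0.8, n_bits, rng)
b = to_stream(0.5, n_bits, rng)

# One AND gate multiplies the encoded values:
# P(a AND b) = P(a) * P(b) for independent streams.
product = np.mean(a & b)
print(f"0.8 * 0.5 ≈ {product:.3f}")

# Flipping 100 random bits shifts the decoded value by at most 0.001:
# the robustness to bit errors that motivates stochastic electronics.
flip = rng.choice(n_bits, size=100, replace=False)
noisy = (a & b).copy()
noisy[flip] ^= True
print(f"after 100 bit flips: {np.mean(noisy):.3f}")
```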
IEEE Transactions on Biomedical Circuits and Systems | 2015
Saeed Afshar; Libin George; Chetan Singh Thakur; Jonathan Tapson; André van Schaik; Philip de Chazal; Tara Julia Hamilton
We have added a simplified neuromorphic model of Spike Time Dependent Plasticity (STDP) to the previously described Synapto-dendritic Kernel Adapting Neuron (SKAN), a hardware-efficient neuron model capable of learning spatio-temporal spike patterns. The resulting neuron model is the first to perform synaptic encoding of afferent signal-to-noise ratio in addition to the unsupervised learning of spatio-temporal spike patterns. The neuron model is particularly suitable for implementation in digital neuromorphic hardware as it does not use any complex mathematical operations and uses a novel shift-based normalization approach to achieve synaptic homeostasis. The neuron model's noise compensation properties are characterized and tested on random spatio-temporal spike patterns as well as on a noise-corrupted subset of the zero images of the MNIST handwritten digit dataset. Results show the neuron model simultaneously learning common patterns in its input data while dynamically weighting individual afferents based on their signal-to-noise ratio. Despite its simplicity, the interesting behaviors of the neuron model and the resulting computational power may also offer insights into biological systems.
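Shift-based normalization replaces division by a bitwise shift, which is essentially free in digital hardware. A minimal sketch of the idea (the saturation threshold and weight values are illustrative, not from the paper):

```python
import numpy as np

def normalize_shift(weights, w_max=256):
    """Halve all weights with a bitwise right shift whenever any weight
    saturates -- homeostasis with no division or multiplication."""
    weights = np.asarray(weights, dtype=np.int32)
    while weights.max() >= w_max:
        weights >>= 1              # one shift per synapse, nothing more
    return weights

print(normalize_shift([300, 120, 40, 8]))   # -> [150  60  20   4]
print(normalize_shift([600, 8]))            # -> [150   2]
```

Halving every weight preserves their ratios, so the learnt pattern is unchanged while the total synaptic drive stays bounded.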
Frontiers in Neuroscience | 2014
Saeed Afshar; Libin George; Jonathan Tapson; André van Schaik; Tara Julia Hamilton
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale, neurons are locked in a race with each other, with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
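The role of the synapto-dendritic kernels can be seen in a toy model: if each synapse contributes a triangular ramp whose learnt delay aligns the ramps for a known pattern, the summed potential peaks highest when that pattern recurs. This is a hedged analogy with invented numbers, not the published SKAN rule-set:

```python
import numpy as np

def response(spike_times, learnt_delays, width=10, T=100):
    """Peak of the summed membrane potential for one input pattern."""
    v = np.zeros(T)
    for t, d in zip(spike_times, learnt_delays):
        peak = t + d                               # when this ramp tops out
        for u in range(T):
            v[u] += max(0, width - abs(u - peak))  # triangular kernel
    return v.max()

learnt = [12, 8, 4]                    # delays tuned to one spike pattern
print(response([0, 4, 8], learnt))     # matching pattern: all ramps align
print(response([0, 9, 2], learnt))     # shuffled pattern: weaker peak
```

The first call peaks at the full summed height because every ramp tops out at the same instant; the shuffled pattern spreads the peaks and never reaches threshold-level amplitude.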
Frontiers in Neuroscience | 2016
Chetan Singh Thakur; Saeed Afshar; Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the probability of an external distractor (noise) interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margins, the effects of high-energy cosmic rays, and the low supply voltages. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
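The Bayesian recursive equation solved online by the tracker has a compact software form: the new belief is the element-wise product of the observation likelihood with the transition-propagated prior, renormalized. A minimal sketch with fixed (rather than learnt) models and invented parameter values:

```python
import numpy as np

n = 5                                   # discrete target positions (wrap-around)
# Assumed models -- the hardware learns these online; here they are fixed:
eye = np.eye(n)
trans = 0.8 * eye + 0.1 * (np.roll(eye, 1, axis=1) + np.roll(eye, -1, axis=1))

def likelihood(obs):
    lik = np.full(n, 0.05)              # noisy sensor: small chance anywhere
    lik[obs] = 0.8                      # high probability at the observed cell
    return lik

belief = np.full(n, 1 / n)              # uniform prior over positions
for obs in [2, 2, 3, 3, 3]:             # a stream of noisy observations
    belief = likelihood(obs) * (trans @ belief)   # predict, then weight
    belief /= belief.sum()              # renormalize the posterior

print(belief.argmax())                  # most probable target position
```

Each loop iteration is one pass of the recursive equation; the belief shifts from the early observations at position 2 toward the later ones at position 3.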
International Symposium on Circuits and Systems | 2014
Richard James Sofatzis; Saeed Afshar; Tara Julia Hamilton
In this paper we present a biologically inspired rotationally-invariant end-to-end recognition system demonstrated in hardware with a bitmap camera and a Field Programmable Gate Array (FPGA). The system integrates the Ripple Pond Network (RPN), a neural network that performs image transformation from two dimensions to one dimensional rotationally invariant temporal patterns (TPs), and the Synaptic Kernel Adaptation Network (SKAN), a neural network capable of unsupervised learning of a spatio-temporal pattern of input spikes. Our results demonstrate rapid learning and recognition of simple hand gestures with no prior training and minimal usage of FPGA hardware.
International Symposium on Neural Networks | 2012
Saeed Afshar; Omid Kavehei; André van Schaik; Jonathan Tapson; Stan Skafidas; Tara Julia Hamilton
Recent work in neuroscience is revealing how the blowfly rapidly detects orientation using neural circuits distributed directly behind its photoreceptors. These circuits, like all biological systems, rely on timing, competition, feedback, and energy optimization. The recent realization of the passive memristor device, the so-called fourth fundamental passive element of circuit theory, makes low-power, biologically inspired, parallel analog computation achievable. Building on these developments, we present a memristor-based neuromorphic competitive control (mNCC) circuit, which utilizes a single sensor and can control the output of N actuators, delivering optimal, scalable performance and immunity to device variation and environmental noise.
Frontiers in Neuroscience | 2015
Chetan Singh Thakur; Runchun Mark Wang; Saeed Afshar; Tara Julia Hamilton; Jonathan Tapson; Shihab A. Shamma; André van Schaik
The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a front-end sound analyser to extract spectral information of the sound input, which then passes through band-pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB).
This system may be easily extended to the segregation of complex speech signals, and may thus find various applications in electronic devices for sound segregation and speech recognition.
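The correlation test at the heart of the temporal-coherence principle is easy to sketch in software: channels whose envelopes correlate strongly with an attended channel are grouped into the foreground mask, the rest into the background. The envelopes, modulation rates, and threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
t = np.linspace(0, 1, T)
# Two invented sources with different modulation rates, each driving
# two cochlear channels, plus a little sensor noise.
env_a = np.abs(np.sin(2 * np.pi * 4 * t))        # 4 Hz modulated source
env_b = np.abs(np.sin(2 * np.pi * 7 * t))        # 7 Hz modulated source
envelopes = np.stack([env_a, env_a, env_b, env_b])
envelopes = envelopes + 0.05 * rng.standard_normal(envelopes.shape)

# Channels strongly positively correlated with the attended channel are
# assigned to the foreground stream; the rest form the background.
attended = 0
corr = np.corrcoef(envelopes)[attended]
mask = corr > 0.5                                # binary foreground mask
print(mask)                                      # -> [ True  True False False]
```

In the full system the mask is applied per channel to reconstruct the target waveform; here it only labels which channels share the attended source's modulation.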
Frontiers in Neuroscience | 2013
Saeed Afshar; Greg Kevin Cohen; Runchun Mark Wang; André van Schaik; Jonathan Tapson; Torsten Lehmann; Tara Julia Hamilton
We present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network which performs a transformation converting two-dimensional images to one-dimensional temporal patterns (TPs) suitable for recognition by temporal coding learning and memory networks. The RPN has been developed as a hardware solution linking previously implemented neuromorphic vision and memory structures such as frameless vision sensors and neuromorphic temporal coding spiking neural networks. Working together, such systems are potentially capable of delivering end-to-end, high-speed, low-power and low-resolution recognition for mobile and autonomous applications where slow, highly sophisticated and power hungry signal processing solutions are ineffective. Key aspects of the proposed approach include utilizing the spatial properties of physically embedded neural networks, and the propagating waves of activity therein, for information processing; the dimensional collapse of imagery information into amenable TPs; and the use of asynchronous frames for information binding.
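One simple way to see how a 2D image can collapse into a rotationally invariant 1D signature is to integrate intensity over concentric rings around the image centroid. The RPN achieves a related collapse with propagating waves of spiking activity, so the sketch below is only an analogy with invented parameters:

```python
import numpy as np

def radial_signature(img, n_rings=8):
    """Sum image intensity in concentric rings around the centroid."""
    h, w = img.shape
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()                 # intensity centroid
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)                # distance from centroid
    ring = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int),
                      n_rings - 1)
    return np.bincount(ring.ravel(), weights=img.ravel(), minlength=n_rings)

img = np.zeros((32, 32))
img[8:24, 14:18] = 1.0                            # a vertical bar
sig = radial_signature(img)
rot = radial_signature(img.T)                     # the same bar rotated 90°
print(np.allclose(sig, rot))                      # -> True
```

Rotating the image permutes pixels within each ring but leaves every ring's total unchanged, so the 1D signature is identical for the bar and its 90-degree rotation.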
International Symposium on Circuits and Systems | 2014
Richard James Sofatzis; Saeed Afshar; Tara Julia Hamilton
In this paper we present the Synaptic Kernel Adaptation Network (SKAN) circuit, a dynamic circuit that implements Spike Timing Dependent Plasticity (STDP), not by adjusting synaptic weights but via dynamic synaptic kernels. SKAN performs unsupervised learning of the most common spatio-temporal pattern of input spikes using simple analog or digital circuits. It features tunable robustness to temporal jitter and will unlearn a pattern that has not been present for a period of time, using tunable “forgetting” parameters. It is compact and scalable for use as a building block in a larger network to form a multilayer, hierarchical, unsupervised memory system which develops models based on the temporal statistics of its environment. Here we show results from simulations as well as digital and analog implementations. Our results show that SKAN is fast, accurate, and robust to noise and jitter.