Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tara Julia Hamilton is active.

Publication


Featured research published by Tara Julia Hamilton.


Frontiers in Neuroscience | 2011

Neuromorphic silicon neuron circuits

Giacomo Indiveri; Bernabé Linares-Barranco; Tara Julia Hamilton; André van Schaik; Ralph Etienne-Cummings; Tobi Delbruck; Shih-Chii Liu; Piotr Dudek; Philipp Häfliger; Sylvie Renaud; Johannes Schemmel; Gert Cauwenberghs; John V. Arthur; Kai Hynna; Fopefolu Folowosele; Sylvain Saïghi; Teresa Serrano-Gotarredona; Jayawan H. B. Wijekoon; Yingxue Wang; Kwabena Boahen

Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.
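
The models surveyed above range from conductance-based Hodgkin-Huxley circuits to generalized adaptive integrate-and-fire neurons. For orientation only, here is a minimal software sketch of one such two-variable adaptive exponential integrate-and-fire neuron; the parameter values are illustrative assumptions and do not describe any of the VLSI circuits in the paper.

import numpy as np

# Illustrative AdEx parameters (SI units); not taken from the paper.
C, gL, EL = 200e-12, 10e-9, -70e-3            # capacitance, leak conductance, leak reversal
VT, delta_T, Vreset, Vpeak = -50e-3, 2e-3, -58e-3, 0.0
a, b, tau_w = 2e-9, 60e-12, 120e-3            # adaptation coupling, spike increment, adaptation time constant
dt, T, I = 1e-4, 0.5, 300e-12                 # time step (s), duration (s), constant input current (A)

V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL*(V - EL) + gL*delta_T*np.exp((V - VT)/delta_T) - w + I) / C
    dw = (a*(V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= Vpeak:                            # threshold crossing: emit spike and reset
        spikes.append(step * dt)
        V = Vreset
        w += b

print(f"{len(spikes)} spikes in {T} s")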


IEEE Transactions on Biomedical Circuits and Systems | 2008

An Active 2-D Silicon Cochlea

Tara Julia Hamilton; Craig Jin; André van Schaik; Jonathan Tapson

In this paper, we present an analog integrated circuit design for an active 2-D cochlea and measurement results from a fabricated chip. The design includes a quality factor control loop that incorporates some of the nonlinear behavior exhibited in the real cochlea. This control loop varies the gain and the frequency selectivity of each cochlear resonator based on the amplitude of the input signal.
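
For intuition, the quality-factor control loop described above can be sketched in software as a band-pass resonator whose Q is lowered as a running estimate of the input amplitude grows, so loud inputs see less gain and less frequency selectivity. This is a behavioural sketch with illustrative constants, not a model of the fabricated analog circuit.

import numpy as np

fs, f0 = 16000.0, 1000.0                # sample rate and resonator centre frequency (illustrative)
q_max, q_min = 8.0, 1.5                 # high Q for faint inputs, low Q for loud inputs
f_coef = 2.0 * np.sin(np.pi * f0 / fs)

def resonator_step(x, lp, bp, q):
    # One step of a Chamberlin state-variable filter; bp is the band-pass output.
    lp += f_coef * bp
    hp = x - lp - bp / q
    bp += f_coef * hp
    return lp, bp

t = np.arange(0, 0.1, 1.0 / fs)
x = 0.05 * np.sin(2 * np.pi * f0 * t)   # a quiet tone at the centre frequency
lp = bp = env = 0.0
out = []
for sample in x:
    env = 0.995 * env + 0.005 * abs(sample)                  # track input amplitude
    q = q_max - (q_max - q_min) * min(env / 0.05, 1.0)       # louder input -> lower Q
    lp, bp = resonator_step(sample, lp, bp, q)
    out.append(bp)

print("peak band-pass output:", max(abs(v) for v in out))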


Frontiers in Neuroscience | 2013

An FPGA Implementation of a Polychronous Spiking Neural Network with Delay Adaptation

Runchun Mark Wang; Gregory Cohen; Klaus M. Stiefel; Tara Julia Hamilton; Jonathan Tapson; André van Schaik

We present an FPGA implementation of a re-configurable, polychronous spiking neural network with a large capacity for spatial-temporal patterns. The proposed neural network generates delay paths de novo, so that only connections that actually appear in the training patterns will be created. This allows the proposed network to use all the axons (variables) to store information. Spike Timing Dependent Delay Plasticity is used to fine-tune and add dynamics to the network. We use a time multiplexing approach allowing us to achieve 4096 (4k) neurons and up to 1.15 million programmable delay axons on a Virtex 6 FPGA. Test results show that the proposed neural network is capable of successfully recalling more than 95% of all spikes for 96% of the stored patterns. The tests also show that the neural network is robust to noise from random input spikes.
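
The delay adaptation used here (Spike Timing Dependent Delay Plasticity) can be illustrated with a minimal sketch: each programmable axonal delay is nudged so that the pre-synaptic spike arrives at the post-synaptic firing time. The update rule and constants below are assumptions for illustration, not the FPGA implementation itself.

def adapt_delay(delay, t_pre, t_post, lr=0.5, d_min=1, d_max=16):
    """One STDDP-style update; times and delay are in integer time steps."""
    arrival = t_pre + delay
    error = t_post - arrival              # positive: the spike arrived too early
    delay += round(lr * error)            # move the arrival time toward the post spike
    return max(d_min, min(d_max, delay))

# Example: a spike sent at t=3 through a 5-step axon arrives at t=8; the
# post-synaptic neuron fired at t=10, so the delay is lengthened from 5
# toward the ideal value of 7 (one update brings it to 6).
print(adapt_delay(5, t_pre=3, t_post=10))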


international symposium on circuits and systems | 2010

A log-domain implementation of the Izhikevich neuron model

André van Schaik; Craig Jin; Alistair McEwan; Tara Julia Hamilton

We present an implementation of the Izhikevich neuron model which uses two first-order log-domain low-pass filters and two translinear multipliers. The neuron consists of a leaky-integrate-and-fire core, a slow adaptive state variable and quadratic positive feedback. Simulation results show that this neuron can emulate different spiking behaviours observed in biological neurons.
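
The circuit emulates Izhikevich's two-variable model; for reference, the model itself (dimensionless form, regular-spiking parameters) can be simulated in a few lines of Python. This shows the target model, not the log-domain circuit.

# Izhikevich model, regular-spiking parameters; 0.5 ms Euler steps as in the
# original formulation.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, I = 0.5, 10.0
v = -65.0
u = b * v
spike_times = []
for step in range(2000):                   # 1 s of simulated time
    v += dt * (0.04*v*v + 5.0*v + 140.0 - u + I)
    u += dt * a * (b*v - u)
    if v >= 30.0:                          # spike: reset membrane, bump recovery variable
        spike_times.append(step * dt)
        v, u = c, u + d
print(f"{len(spike_times)} spikes in 1 s")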


Frontiers in Neuroscience | 2013

Synthesis of neural networks for spatio-temporal spike pattern recognition and processing

Jonathan Tapson; Greg Kevin Cohen; Saeed Afshar; Klaus M. Stiefel; Yossi Buskila; Runchun Mark Wang; Tara Julia Hamilton; André van Schaik

The advent of large scale neural computational platforms has highlighted the lack of algorithms for synthesis of neural structures to perform predefined cognitive tasks. The Neural Engineering Framework (NEF) offers one such synthesis, but it is most effective for a spike rate representation of neural information, and it requires a large number of neurons to implement simple functions. We describe a neural network synthesis method that generates synaptic connectivity for neurons which process time-encoded neural signals, and which makes very sparse use of neurons. The method allows the user to specify—arbitrarily—neuronal characteristics such as axonal and dendritic delays, and synaptic transfer functions, and then solves for the optimal input-output relationship using computed dendritic weights. The method may be used for batch or online learning and has an extremely fast optimization process. We demonstrate its use in generating a network to recognize speech which is sparsely encoded as spike times.
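
The core synthesis step, solving for the optimal input-output relationship with computed weights, is in the spirit of a regularised linear least-squares solve over the responses of the neuronal stage. The sketch below shows that batch step on synthetic data; the matrices, the regularisation constant, and the variable names are assumptions for illustration, not the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(0)
n_patterns, n_neurons = 200, 50

# A: responses of the (time-encoding) neuronal stage to each input pattern;
# y: the desired output for each pattern. Both are synthetic here.
A = rng.standard_normal((n_patterns, n_neurons))
y = rng.standard_normal(n_patterns)

# Batch solution: regularised least squares for the output (dendritic) weights.
lam = 1e-3
w = np.linalg.solve(A.T @ A + lam * np.eye(n_neurons), A.T @ y)

print("relative training error:", np.linalg.norm(A @ w - y) / np.linalg.norm(y))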


Proceedings of the IEEE | 2014

Stochastic Electronics: A Neuro-Inspired Design Paradigm for Integrated Circuits

Tara Julia Hamilton; Saeed Afshar; André van Schaik; Jonathan Tapson

As advances in integrated circuit (IC) fabrication technology reduce feature sizes to dimensions on the order of nanometers, IC designers are facing many of the problems that evolution has had to overcome in order to perform meaningful and accurate computations in biological neural circuits. In this paper, we explore the current state of IC technology including the many new and exciting opportunities “beyond CMOS.” We review the role of noise in both biological and engineered systems and discuss how “stochastic facilitation” can be used to perform useful and precise computation. We explore nondeterministic methodologies for computation in hardware and introduce the concept of stochastic electronics (SE): a new way to design circuits and increase performance in highly noisy and mismatched fabrication environments. This approach is illustrated with several circuit examples whose results demonstrate its exciting potential.
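
A textbook illustration of the "stochastic facilitation" idea discussed above: a sub-threshold signal that a hard-threshold detector cannot see at all becomes detectable when a moderate amount of noise is added, and is degraded again when the noise is too large. The detector, signal, and noise levels below are illustrative assumptions, not circuits from the paper.

import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 1.0, 1e-4)
signal = 0.8 * np.sin(2 * np.pi * 5 * t)      # sub-threshold sine (threshold is 1.0)
threshold = 1.0

def detection_quality(noise_std):
    # Hard-threshold detector: outputs 1 whenever signal + noise crosses the threshold.
    x = signal + rng.normal(0.0, noise_std, t.size)
    events = (x > threshold).astype(float)
    if events.std() == 0.0:                   # never crossed: nothing detected
        return 0.0
    return float(np.corrcoef(events, signal)[0, 1])

for sigma in (0.0, 0.2, 0.5, 2.0):
    print(f"noise std {sigma:.1f} -> signal correlation {detection_quality(sigma):.3f}")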


international symposium on circuits and systems | 2014

An FPGA design framework for large-scale spiking neural networks

Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik

We present an FPGA design framework for large-scale spiking neural networks, particularly those with a high density of connections or all-to-all connectivity. The proposed framework is based on a reconfigurable neural layer, implemented with a time-multiplexing approach to achieve up to 200,000 virtual neurons per physical neuron using only a fraction of the hardware resources in commercial off-the-shelf FPGAs (even entry-level ones). Rather than using a mathematical computational model, the physical neuron was efficiently implemented with a conductance-based model whose parameters were randomised between neurons to emulate the variance in biological neurons. In addition, the proposed time-multiplexed reconfigurable neural layer has an address buffer that generates a fixed random weight for each connection on the fly as spikes arrive, which greatly reduces memory usage. After presenting the architecture of the proposed neural layer, we present a network of 23 such neural layers, each containing 64k neurons, yielding 1.5M neurons and 92G synapses with a total spike throughput of 1.2T spikes/s, while running in real time on a Virtex 6 FPGA.
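
Two ideas in this framework lend themselves to a short software sketch: time-multiplexing one physical neuron over many virtual neuron states, and deriving each connection's fixed random weight from its address instead of storing it. The hash-based weight and the update loop below are software analogues under assumed names and constants, not the FPGA circuits themselves.

import hashlib

def connection_weight(pre_id: int, post_id: int, w_max: float = 1.0) -> float:
    # Derive a fixed pseudo-random weight from the connection address, so no
    # per-synapse weight memory is needed (the weight is recreated on the fly).
    key = f"{pre_id}:{post_id}".encode()
    h = int.from_bytes(hashlib.blake2b(key, digest_size=4).digest(), "little")
    return w_max * h / 0xFFFFFFFF

# Time-multiplexing: one physical update routine serves many virtual neurons by
# iterating over their stored states.
states = [0.0] * 1024                         # membrane state of 1024 virtual neurons

def update_all(input_spikes):                 # input_spikes: list of (pre_id, post_id)
    for pre, post in input_spikes:
        states[post] += connection_weight(pre, post)

update_all([(17, 42), (99, 42)])
print(states[42])                             # the same addresses always give the same weights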


Frontiers in Neuroscience | 2014

A mixed-signal implementation of a polychronous spiking neural network with delay adaptation

Runchun Mark Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik

We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits.


international symposium on circuits and systems | 2010

A log-domain implementation of the Mihalas-Niebur neuron model

André van Schaik; Craig Jin; Alistair McEwan; Tara Julia Hamilton; Stefan Mihalas; Ernst Niebur

We present an electronic neuron that uses first-order log-domain low-pass filters to implement the Mihalas-Niebur model. The neuron consists of a leaky-integrate-and-fire core and building blocks to implement an adaptive threshold and spike induced currents. Simulation results show that this modular neuron can emulate different spiking behaviours observed in biological neurons.
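
For reference, the Mihalas-Niebur dynamics the circuit targets combine a leaky membrane, exponentially decaying spike-induced currents, and a threshold with its own linear dynamics; on a spike the membrane is reset, the threshold is floored, and the spike-induced current is bumped. The parameter values in this sketch are illustrative, not those of the paper or the chip.

# Units are arbitrary; the time step dt is in ms.
dt = 0.1
C, G, E_L = 1.0, 0.05, -70.0                  # capacitance, leak conductance, resting potential
a, b, theta_inf = 0.005, 0.01, -50.0          # adaptive-threshold dynamics
k1, R1, A1 = 0.2, 0.0, 5.0                    # spike-induced current: decay, reset multiplier, jump
V_r, theta_r, I_ext = -70.0, -60.0, 1.5

V, theta, I1, spikes = E_L, theta_inf, 0.0, 0
for step in range(20000):                     # 2 s of simulated time
    dV = (I_ext + I1 - G * (V - E_L)) / C
    dtheta = a * (V - E_L) - b * (theta - theta_inf)
    V += dt * dV
    theta += dt * dtheta
    I1 += dt * (-k1 * I1)
    if V >= theta:                            # spike: reset V, floor the threshold, bump I1
        V, theta, I1 = V_r, max(theta_r, theta), R1 * I1 + A1
        spikes += 1
print(spikes, "spikes in 2 s")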


Frontiers in Neuroscience | 2015

A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik

We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
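
The "synaptic plasticity adaptor" concept can be sketched as an element that sees only the pre- and post-synaptic spike times assigned to it and applies either a weight rule (STDP) or a delay rule (STDDP) before passing the spike on. The exponential STDP window, the delay rule, and all constants below are illustrative assumptions, not the digital or mixed-signal circuits of the paper.

import math

class PlasticityAdaptor:
    def __init__(self, rule="stdp", weight=0.5, delay=5):
        self.rule, self.weight, self.delay = rule, weight, delay

    def on_spike_pair(self, t_pre, t_post):
        dt = t_post - t_pre
        if self.rule == "stdp":
            # Exponential STDP window: potentiate when pre precedes post, else depress.
            dw = 0.05 * math.exp(-abs(dt) / 20.0) * (1.0 if dt > 0 else -1.0)
            self.weight = min(1.0, max(0.0, self.weight + dw))
        else:
            # STDDP: nudge the axonal delay toward the observed pre-to-post interval.
            self.delay = max(1, min(16, self.delay + round(0.5 * (dt - self.delay))))

adaptor = PlasticityAdaptor("stdp")
adaptor.on_spike_pair(t_pre=100, t_post=105)   # pre 5 ms before post -> weight increases
print(adaptor.weight)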

Collaboration


Dive into Tara Julia Hamilton's collaborations.

Top Co-Authors

Torsten Lehmann, University of New South Wales
Runchun Wang, University of Western Sydney
Julian Jenkins, University of New South Wales
Saeed Afshar, University of Western Sydney
Andrew Nicholson, University of New South Wales
Libin George, University of New South Wales