Publication


Featured research published by Jonathan Tapson.


IEEE Transactions on Biomedical Circuits and Systems | 2008

An Active 2-D Silicon Cochlea

Tara Julia Hamilton; Craig Jin; A. van Schaik; Jonathan Tapson

In this paper, we present an analog integrated circuit design for an active 2-D cochlea and measurement results from a fabricated chip. The design includes a quality factor control loop that incorporates some of the nonlinear behavior exhibited in the real cochlea. This control loop varies the gain and the frequency selectivity of each cochlear resonator based on the amplitude of the input signal.
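The amplitude-dependent gain and Q control described above lends itself to a simple behavioural sketch. The Python snippet below is not the authors' analog circuit; the filter form, envelope follower, and all constants are illustrative assumptions. It shows a single band-pass resonator whose quality factor drops as the input envelope grows, so quiet tones receive more gain than loud ones:

```python
import numpy as np

fs = 16000.0          # sample rate (Hz), illustrative
f0 = 1000.0           # resonator centre frequency (Hz)
q_max, q_min = 20.0, 2.0

def bandpass_coeffs(f0, q, fs):
    """RBJ biquad band-pass, constant-skirt-gain variant (peak gain = Q)."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([np.sin(w0) / 2, 0.0, -np.sin(w0) / 2])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

def active_resonator(x):
    y = np.zeros_like(x)
    env = 0.0
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        env = max(abs(xn), 0.999 * env)              # crude envelope follower
        q = q_max - (q_max - q_min) * min(env, 1.0)  # louder input -> lower Q
        b, a = bandpass_coeffs(f0, q, fs)
        yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        x1, x2, y1, y2 = xn, x1, yn, y1
        y[n] = yn
    return y

t = np.arange(4000) / fs
quiet = active_resonator(0.01 * np.sin(2 * np.pi * f0 * t))
loud  = active_resonator(1.00 * np.sin(2 * np.pi * f0 * t))
print(quiet.max() / 0.01, loud.max() / 1.00)  # gain is higher for the quiet tone
```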


Frontiers in Neuroscience | 2013

An FPGA Implementation of a Polychronous Spiking Neural Network with Delay Adaptation

Runchun Mark Wang; Gregory Cohen; Klaus M. Stiefel; Tara Julia Hamilton; Jonathan Tapson; André van Schaik

We present an FPGA implementation of a re-configurable, polychronous spiking neural network with a large capacity for spatio-temporal patterns. The proposed neural network generates delay paths de novo, so that only connections that actually appear in the training patterns will be created. This allows the proposed network to use all the axons (variables) to store information. Spike Timing Dependent Delay Plasticity is used to fine-tune and add dynamics to the network. We use a time-multiplexing approach allowing us to achieve 4096 (4k) neurons and up to 1.15 million programmable delay axons on a Virtex 6 FPGA. Test results show that the proposed neural network is capable of successfully recalling more than 95% of all spikes for 96% of the stored patterns. The tests also show that the neural network is robust to noise from random input spikes.
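The delay-adaptation idea can be sketched in a few lines. The update rule and learning rate below are illustrative assumptions, not the paper's exact Spike Timing Dependent Delay Plasticity rule: an axonal delay is nudged so the presynaptic spike arrives at the moment the postsynaptic neuron fires.

```python
# Minimal sketch of spike-timing-dependent delay adaptation. If the delayed
# spike arrives after the postsynaptic neuron fired, the delay is shortened;
# if it arrives too early, the delay is lengthened.

eta = 0.1  # learning rate, illustrative

def adapt_delay(delay_ms, pre_spike_ms, post_spike_ms):
    """Nudge the axonal delay so the spike arrives when the neuron fires."""
    arrival = pre_spike_ms + delay_ms
    error = post_spike_ms - arrival       # >0: arrived too early, lengthen
    return max(0.0, delay_ms + eta * error)

delay = 5.0
for _ in range(50):                       # repeated presentations converge
    delay = adapt_delay(delay, pre_spike_ms=10.0, post_spike_ms=18.0)
print(round(delay, 2))                    # -> approaches 8.0 ms
```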


PLOS ONE | 2015

Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

Mark D. McDonnell; Migel D. Tissera; Tony Vladusich; André van Schaik; Jonathan Tapson

Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
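The core recipe (random, sparse 'receptive field' input weights, a nonlinear hidden layer, and a one-shot pseudoinverse solve for the output weights) is easy to sketch. The snippet below uses random stand-in data rather than MNIST, and all sizes are placeholder assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, n_samples = 784, 800, 10, 2000

X = rng.standard_normal((n_samples, n_in))             # stand-in for images
T = np.eye(n_out)[rng.integers(0, n_out, n_samples)]   # one-hot targets

# Random input weights, masked so each hidden unit sees only a random
# square patch of the 28x28 image (most weights end up zero).
W = rng.standard_normal((n_in, n_hidden))
mask = np.zeros((n_in, n_hidden))
for j in range(n_hidden):
    size = rng.integers(5, 15)                         # random patch size
    r0 = rng.integers(0, 28 - size)
    c0 = rng.integers(0, 28 - size)
    patch = np.zeros((28, 28))
    patch[r0:r0 + size, c0:c0 + size] = 1.0
    mask[:, j] = patch.ravel()
W *= mask

H = np.tanh(X @ W)                                     # hidden activations
beta = np.linalg.pinv(H) @ T                           # one-shot solve
acc = (np.argmax(H @ beta, axis=1) == np.argmax(T, axis=1)).mean()
print(f"training accuracy: {acc:.3f}")
```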


Frontiers in Neuroscience | 2013

Synthesis of neural networks for spatio-temporal spike pattern recognition and processing

Jonathan Tapson; Greg Kevin Cohen; Saeed Afshar; Klaus M. Stiefel; Yossi Buskila; Runchun Mark Wang; Tara Julia Hamilton; André van Schaik

The advent of large scale neural computational platforms has highlighted the lack of algorithms for synthesis of neural structures to perform predefined cognitive tasks. The Neural Engineering Framework (NEF) offers one such synthesis, but it is most effective for a spike rate representation of neural information, and it requires a large number of neurons to implement simple functions. We describe a neural network synthesis method that generates synaptic connectivity for neurons which process time-encoded neural signals, and which makes very sparse use of neurons. The method allows the user to specify—arbitrarily—neuronal characteristics such as axonal and dendritic delays, and synaptic transfer functions, and then solves for the optimal input-output relationship using computed dendritic weights. The method may be used for batch or online learning and has an extremely fast optimization process. We demonstrate its use in generating a network to recognize speech which is sparsely encoded as spike times.
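At its heart, the synthesis step is a linear solve: input spike trains are filtered through user-chosen synaptic kernels, and the weights mapping the filtered signals onto a desired output are found by least squares. Below is a minimal sketch assuming an exponential kernel and toy random spike trains, not the paper's speech data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_steps = 20, 500
dt = 1e-3

# Random input spike trains (roughly 20 Hz Poisson-like, illustrative).
spikes = (rng.random((n_inputs, n_steps)) < 0.02).astype(float)

# Exponential synaptic kernel; the method lets the user choose this shape.
tau = 20e-3
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
psp = np.array([np.convolve(s, kernel)[:n_steps] for s in spikes])

target = np.sin(np.linspace(0, 4 * np.pi, n_steps))   # desired output

w, *_ = np.linalg.lstsq(psp.T, target, rcond=None)    # solve for weights
output = psp.T @ w
print("relative residual:",
      np.linalg.norm(output - target) / np.linalg.norm(target))
# More input channels give the solver more basis signals and a closer fit.
```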


Proceedings of the IEEE | 2014

Stochastic Electronics: A Neuro-Inspired Design Paradigm for Integrated Circuits

Tara Julia Hamilton; Saeed Afshar; André van Schaik; Jonathan Tapson

As advances in integrated circuit (IC) fabrication technology reduce feature sizes to dimensions on the order of nanometers, IC designers are facing many of the problems that evolution has had to overcome in order to perform meaningful and accurate computations in biological neural circuits. In this paper, we explore the current state of IC technology including the many new and exciting opportunities “beyond CMOS.” We review the role of noise in both biological and engineered systems and discuss how “stochastic facilitation” can be used to perform useful and precise computation. We explore nondeterministic methodologies for computation in hardware and introduce the concept of stochastic electronics (SE): a new way to design circuits and increase performance in highly noisy and mismatched fabrication environments. This approach is illustrated with several circuit examples whose results demonstrate its exciting potential.
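Stochastic facilitation is easy to demonstrate in simulation: a subthreshold periodic signal that a noiseless threshold element never detects becomes detectable at moderate noise levels, and is drowned out again when the noise is too large. The toy demo below uses illustrative amplitudes and thresholds, not circuits from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 10000)
signal = 0.8 * np.sin(2 * np.pi * 5 * t)   # peak 0.8, below threshold 1.0
threshold = 1.0
in_peak = signal > 0.7                     # samples near the signal's peaks

def peak_contrast(noise_std):
    """How much more often the threshold fires near peaks than elsewhere."""
    noisy = signal + rng.normal(0.0, noise_std, t.size)
    events = noisy > threshold
    return events[in_peak].mean() - events[~in_peak].mean()

for sigma in [0.0, 0.1, 0.3, 1.0, 3.0]:
    print(f"noise std {sigma:4.1f}: peak contrast {peak_contrast(sigma):+.3f}")
# With no noise the element never fires; moderate noise makes the firing
# track the signal (large contrast); heavy noise drowns the signal again.
```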


IEEE Transactions on Neural Networks | 2010

Optimization Methods for Spiking Neurons and Networks

Alexander F. Russell; Garrick Orchard; Yi Dong; Ş Mihalaş; Ernst Niebur; Jonathan Tapson; Ralph Etienne-Cummings

Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time-consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron's output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron, we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas-Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate-and-fire neurons with adaptation. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip.
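The GA half of the approach can be sketched with a deliberately simple example: evolving two parameters of a leaky integrate-and-fire neuron until its firing rate matches a target. The neuron model, fitness function, and GA operators here are plain illustrations, not the paper's Mihalas-Niebur setup:

```python
import numpy as np

rng = np.random.default_rng(3)
TARGET_RATE = 40.0  # desired spikes per second, illustrative

def firing_rate(tau_ms, v_thresh):
    """Simulate 1 s of a LIF neuron driven by a constant current."""
    dt, v, spikes, i_in = 1e-3, 0.0, 0, 1.5
    for _ in range(1000):
        v += dt / (tau_ms * 1e-3) * (-v + i_in)
        if v >= v_thresh:
            v, spikes = 0.0, spikes + 1
    return float(spikes)

def fitness(genome):
    tau_ms, v_thresh = genome
    if tau_ms <= 1.0 or v_thresh <= 0.1:      # reject unphysical genomes
        return -np.inf
    return -abs(firing_rate(tau_ms, v_thresh) - TARGET_RATE)

pop = rng.uniform([5.0, 0.5], [50.0, 1.4], size=(30, 2))
for gen in range(40):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.5, (20, 2))
    pop = np.vstack([parents, children])               # elitism + mutation

best = pop[np.argmax([fitness(g) for g in pop])]
print("tau (ms), v_thresh:", best, "-> rate:", firing_rate(*best))
```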


Ultrasonics | 2001

High power resonant tracking amplifier using admittance locking

B.J.P. Mortimer; T. du Bruyn; J.R. Davies; Jonathan Tapson

A high power resonance tracking ultrasonic amplifier is described. The amplifier is a class D type inverter, configured as a half-bridge in which the output MOSFETs are driven into saturation when on. The resonance tracking system makes use of a new method of frequency locking; admittance locking is used to track the optimum power conversion frequency for the transducer. This new arrangement offers some advantages over phase locking and motional feedback methods. The system is capable of delivering up to 3 kW at up to 25 kHz in resonance tracking operation.
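Conceptually, admittance locking is a maximization loop: the controller perturbs the drive frequency and keeps whichever direction increases the measured admittance |I|/|V|, holding the drive at the transducer's optimum power-conversion frequency. The sketch below uses a stand-in transducer model and a simple hill-climbing rule; the real amplifier's analog control law is different:

```python
def admittance(f_hz, f_res=24_500.0, q=80.0):
    """Stand-in transducer model: |Y| peaks at series resonance f_res."""
    detune = q * (f_hz / f_res - f_res / f_hz)
    return 1.0 / (1.0 + detune ** 2) ** 0.5

f, step = 23_000.0, 50.0                    # start off-resonance
y_prev = admittance(f)
for _ in range(200):
    f_try = f + step
    y_try = admittance(f_try)
    if y_try >= y_prev:                     # improvement: keep direction
        f, y_prev = f_try, y_try
    else:                                   # worse: reverse, shrink step
        step = -0.8 * step
print(f"locked near {f:.0f} Hz (model resonance 24500 Hz)")
```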


International Symposium on Circuits and Systems | 2014

An FPGA design framework for large-scale spiking neural networks

Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik

We present an FPGA design framework for large-scale spiking neural networks, particularly those with a high density of connections or all-to-all connections. The proposed FPGA design framework is based on a reconfigurable neural layer, which is implemented using a time-multiplexing approach to achieve up to 200,000 virtual neurons per physical neuron using only a fraction of the hardware resources of commercial off-the-shelf FPGAs (even entry-level ones). Rather than using a mathematical computational model, the physical neuron was efficiently implemented with a conductance-based model, whose parameters were randomised between neurons to emulate the variance in biological neurons. In addition to these building blocks, the proposed time-multiplexed reconfigurable neural layer has an address buffer, which generates a fixed random weight for each connection on the fly as spikes arrive. This structure effectively reduces memory usage. After presenting the architecture of the proposed neural layer, we present a network with 23 such neural layers, each containing 64k neurons, yielding 1.5 M neurons and 92 G synapses with a total spike throughput of 1.2 T spikes/s, while running in real time on a Virtex 6 FPGA.
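The address-buffer trick (deriving a fixed random weight from the connection address rather than storing one weight per synapse) can be illustrated in software. The hash below is an assumption chosen for clarity; hardware would use something far cheaper, such as an LFSR seeded by the address:

```python
import hashlib

def synapse_weight(pre_id: int, post_id: int, scale: float = 1.0) -> float:
    """Derive a fixed pseudo-random weight from the connection address."""
    digest = hashlib.sha256(f"{pre_id}:{post_id}".encode()).digest()
    raw = int.from_bytes(digest[:4], "little")          # 32-bit value
    return scale * (raw / 0xFFFFFFFF)                   # map to [0, scale]

# Deterministic: the same (pre, post) pair always yields the same weight,
# so an incoming spike can be weighted with no per-synapse storage at all.
assert synapse_weight(3, 1017) == synapse_weight(3, 1017)
print(synapse_weight(3, 1017), synapse_weight(4, 1017))
```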


Frontiers in Neuroscience | 2014

A mixed-signal implementation of a polychronous spiking neural network with delay adaptation

Runchun Mark Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik

We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits.


Neurocomputing | 2015

Online and adaptive pseudoinverse solutions for ELM weights

André van Schaik; Jonathan Tapson

The ELM method has become widely used for classification and regression problems as a result of its accuracy, simplicity and ease of use. The solution of the hidden layer weights by means of a matrix pseudoinverse operation is a significant contributor to the utility of the method; however, the conventional calculation of the pseudoinverse by means of a singular value decomposition (SVD) is not always practical for large data sets or for online updates to the solution. In this paper we discuss incremental methods for computing the pseudoinverse which are suitable for ELM. We show that careful choice of methods allows us to optimize for accuracy, ease of computation, or adaptability of the solution.
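One family of incremental methods replaces the batch SVD with a rank-one, recursive-least-squares style update of the output weights, processed one sample at a time. The sketch below illustrates that class of update; it is not a transcription of the paper's algorithms, and all sizes and the synthetic data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(4)
n_hidden, n_out = 100, 3

true_beta = rng.standard_normal((n_hidden, n_out))   # mapping to recover
beta = np.zeros((n_hidden, n_out))
P = np.eye(n_hidden) * 1e3          # inverse-correlation estimate

for _ in range(2000):               # stream of (hidden activation, target)
    h = rng.standard_normal((n_hidden, 1))
    t = h.T @ true_beta + 0.01 * rng.standard_normal((1, n_out))
    k = P @ h / (1.0 + h.T @ P @ h)     # gain vector
    P = P - k @ (h.T @ P)               # rank-one update of the inverse
    beta = beta + k @ (t - h.T @ beta)  # correct weights with new sample

print(np.linalg.norm(beta - true_beta))  # shrinks as samples stream in
```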

Collaboration


Dive into Jonathan Tapson's collaborations.

Top Co-Authors

Runchun Wang

University of Western Sydney

Saeed Afshar

University of Western Sydney
