Runchun Wang
University of Western Sydney
Publication
Featured research published by Runchun Wang.
International Symposium on Circuits and Systems | 2011
Runchun Wang; Craig Jin; Alistair McEwan; André van Schaik
We present an implementation of a programmable axonal propagation delay circuit that uses a single first-order log-domain low-pass filter. Delays can be programmed in the 5–50 ms range. The circuit is designed to be a building block for time-delay spiking neural networks and consists of a leaky integrate-and-fire core, a spike generator circuit, and a delay adaptation circuit.
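To make the mechanism concrete, here is a minimal discrete-time sketch of how a first-order low-pass filter can realise a programmable delay: an input spike starts charging the filter, and an output spike is emitted when the filter state crosses a threshold, so the filter's time constant sets the delay. All names, parameter values, and the threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: an input spike charges a first-order low-pass filter;
# an output spike fires when the filter state crosses a threshold, so
# the time constant programs the delay. Parameter values are assumed.

def axonal_delay(spike_time_ms, tau_ms, threshold=0.5, dt_ms=0.01, t_max_ms=100.0):
    """Return the output spike time for a single input spike."""
    v = 0.0
    t = spike_time_ms
    while t < t_max_ms:
        # First-order low-pass filter charging towards 1 after the spike.
        v += (1.0 - v) * dt_ms / tau_ms
        t += dt_ms
        if v >= threshold:
            return t  # output spike: delay = t - spike_time_ms
    return None

# For threshold = 0.5, delay = tau * ln(2), so tau in roughly the
# 7-72 ms range yields delays spanning the 5-50 ms range quoted above.
for tau in (7.2, 28.8, 72.1):
    print(f"tau = {tau:5.1f} ms -> delay = {axonal_delay(0.0, tau):5.1f} ms")
```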
International Symposium on Circuits and Systems | 2014
Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik
We present an FPGA design framework for large-scale spiking neural networks, particularly those with a high density of connections or all-to-all connectivity. The framework is based on a reconfigurable neural layer, implemented using a time-multiplexing approach that achieves up to 200,000 virtual neurons per physical neuron while using only a fraction of the hardware resources of commercial off-the-shelf FPGAs (even entry-level ones). Rather than using a mathematical computational model, the physical neuron is efficiently implemented with a conductance-based model whose parameters are randomised between neurons to emulate the variance of biological neurons. In addition to these building blocks, the proposed time-multiplexed reconfigurable neural layer has an address buffer that generates a fixed random weight for each connection on the fly for incoming spikes, effectively reducing memory usage. After presenting the architecture of the proposed neural layer, we present a network of 23 such layers, each containing 64k neurons, yielding 1.5M neurons and 92G synapses with a total spike throughput of 1.2T spikes/s, running in real time on a Virtex 6 FPGA.
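The two key tricks, time-multiplexing one physical neuron update over many virtual neuron states held in memory, and generating a fixed pseudo-random weight per connection on the fly from the connection's address instead of storing it, can be sketched in software as follows. This is a behavioural sketch under assumed sizes and update rules, not the paper's FPGA design; all function and parameter names are illustrative.

```python
# Behavioural sketch: one shared "physical" neuron update is applied to
# each virtual neuron state in turn, and per-connection weights are
# derived deterministically from the address pair (no weight memory).

import hashlib

N_VIRTUAL = 1024           # stand-in for the paper's 64k-200k virtual neurons
state = [0.0] * N_VIRTUAL  # membrane "memory" read/written each time slot

def connection_weight(pre, post, scale=0.08):
    """Fixed pseudo-random weight from the connection address, mimicking
    the address-buffer trick that avoids storing weights."""
    h = hashlib.blake2b(f"{pre}->{post}".encode(), digest_size=2).digest()
    return scale * int.from_bytes(h, "big") / 0xFFFF

def physical_neuron_update(v, input_current, leak=0.9, threshold=1.0):
    """One shared update rule applied to every virtual neuron in turn."""
    v = leak * v + input_current
    spiked = v >= threshold
    return (0.0 if spiked else v), spiked

def tick(incoming_spikes):
    """One time-multiplexed pass: visit each virtual neuron once."""
    out = []
    for post in range(N_VIRTUAL):
        i = sum(connection_weight(pre, post) for pre in incoming_spikes)
        state[post], spiked = physical_neuron_update(state[post], i)
        if spiked:
            out.append(post)
    return out

print(len(tick(incoming_spikes=range(32))), "neurons spiked this tick")
```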
Frontiers in Neuroscience | 2015
Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, including both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and show that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
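The adaptor concept can be illustrated with a small software sketch: an adaptor sits between a pre- and a post-synaptic neuron, observes their spike times, and adjusts either a weight (STDP) or a delay (STDDP). The exponential STDP window and the delay-shift rule below are common textbook forms used here as assumptions; the paper's exact hardware update rules may differ.

```python
# Sketch of a generic plasticity adaptor: the same element can perform
# weight adaptation (STDP) or delay adaptation (STDDP) from pre/post
# spike times. Window shape and learning rate are assumed, not measured.

import math

class PlasticityAdaptor:
    def __init__(self, weight=0.5, delay_ms=10.0, tau_ms=20.0, lr=0.05):
        self.weight, self.delay_ms = weight, delay_ms
        self.tau_ms, self.lr = tau_ms, lr

    def stdp(self, t_pre_ms, t_post_ms):
        """Pre-before-post potentiates, post-before-pre depresses."""
        dt = t_post_ms - t_pre_ms
        dw = self.lr * math.copysign(math.exp(-abs(dt) / self.tau_ms), dt)
        self.weight = min(1.0, max(0.0, self.weight + dw))

    def stddp(self, t_pre_ms, t_post_ms):
        """Move the axonal delay towards the observed pre->post interval,
        so the delayed pre-spike arrives when the post-neuron fires."""
        self.delay_ms += self.lr * ((t_post_ms - t_pre_ms) - self.delay_ms)

a = PlasticityAdaptor()
for _ in range(50):
    a.stdp(t_pre_ms=0.0, t_post_ms=5.0)    # causal pairing: weight grows
    a.stddp(t_pre_ms=0.0, t_post_ms=15.0)  # delay converges towards 15 ms
print(f"weight = {a.weight:.2f}, delay = {a.delay_ms:.1f} ms")
```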
International Symposium on Neural Networks | 2015
Chetan Singh Thakur; Tara Julia Hamilton; Runchun Wang; Jonathan Tapson; André van Schaik
In the biological nervous system, large neuronal populations work collaboratively to encode sensory stimuli. These populations are characterised by a diverse distribution of tuning curves, ensuring that the entire range of input stimuli is encoded. Based on these principles, we have designed a neuromorphic system called a Trainable Analogue Block (TAB), which encodes given input stimuli using a large population of neurons with a heterogeneous tuning curve profile. Heterogeneity of tuning curves is achieved by exploiting random device mismatch in the VLSI (Very Large Scale Integration) process and by adding a systematic offset to each hidden neuron. Here, we present measurement results from a single test cell fabricated in a 65nm technology to verify the TAB framework. We mimicked a large population of neurons by re-using measurement results from the test cell while varying the offset, and thus demonstrate the learning capability of the system for various regression tasks. The TAB system may pave the way to improved analogue circuit design for commercial applications by rendering circuits insensitive to the random mismatch that arises from the manufacturing process.
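The TAB encoding-plus-readout scheme can be sketched as a population of hidden neurons with heterogeneous tuning curves (random gains standing in for device mismatch, plus systematic offsets) followed by a trained linear readout. Population size, the tanh nonlinearity, and the least-squares training below are illustrative assumptions.

```python
# Sketch of the TAB idea: heterogeneous tuning curves from random gains
# (mismatch) plus systematic offsets, with a least-squares linear
# readout performing regression on the population code.

import numpy as np

rng = np.random.default_rng(0)
n_hidden = 200

# Heterogeneous tuning curves: random gain (mismatch) + systematic offset.
gains = rng.normal(1.0, 0.3, n_hidden)
offsets = np.linspace(-2.0, 2.0, n_hidden)

def hidden_layer(x):
    """Encode scalar stimuli x with the whole neuron population."""
    return np.tanh(np.outer(x, gains) + offsets)

# Regression task: learn y = sin(3x) from the population code.
x_train = rng.uniform(-1, 1, 500)
y_train = np.sin(3 * x_train)
H = hidden_layer(x_train)
w_out = np.linalg.lstsq(H, y_train, rcond=None)[0]  # linear readout

x_test = np.linspace(-1, 1, 5)
print(np.round(hidden_layer(x_test) @ w_out, 3))  # should track sin(3x)
print(np.round(np.sin(3 * x_test), 3))
```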
International Symposium on Circuits and Systems | 2014
Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik
We present a compact mixed-signal implementation of synaptic plasticity for both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). The proposed mixed-signal implementation consists of an aVLSI time window generator and a digital adaptor. The weight and delay values are stored in a digital memory, and the adaptor sends these values to the time window generator using a digital spike whose duration is modulated according to these values. The analogue time window generator then generates the time window required for the implementation of STDP and STDDP, and the digital adaptor carries out the weight/delay adaptation using this window. The aVLSI time window generator is compact (50 μm² in the IBM 130nm process), and we use a time-multiplexing approach to achieve up to 65536 (64k) virtual digital adaptors with one physical adaptor, consuming only a fraction of the hardware resources on a Virtex 6 FPGA. Since the digital adaptor is implemented on an FPGA, it can easily be reconfigured for different adaptation algorithms, leaving it open for future development. Our mixed-signal implementation is therefore practical for implementing synaptic plasticity in large-scale spiking neural networks running in real time. We show circuit simulation results illustrating both weight and delay adaptation.
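The signal path can be sketched behaviourally: a stored digital weight modulates a pulse width, and the pulse drives an analogue window generator whose output gates adaptation. The pulse scaling and the exponential window shape below are assumptions, not the measured circuit behaviour.

```python
# Behavioural sketch of the mixed-signal path: digital weight -> pulse
# width -> analogue time window. Scaling and time constants are assumed.

import math

def spike_duration_us(weight_8bit, base_us=1.0, step_us=0.1):
    """Digital adaptor: encode an 8-bit stored weight as pulse width."""
    return base_us + step_us * weight_8bit

def time_window(t_us, pulse_us, tau_us=50.0):
    """Analogue generator: window charged for the pulse duration, then
    decaying exponentially; adaptation is gated by this value."""
    if t_us <= pulse_us:
        return 1.0 - math.exp(-t_us / tau_us)
    peak = 1.0 - math.exp(-pulse_us / tau_us)
    return peak * math.exp(-(t_us - pulse_us) / tau_us)

pulse = spike_duration_us(weight_8bit=128)  # 13.8 us pulse
for t in (5.0, pulse, 40.0, 120.0):
    print(f"t = {t:6.1f} us -> window = {time_window(t, pulse):.3f}")
```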
IEEE Transactions on Biomedical Circuits and Systems | 2017
Runchun Wang; Chetan Singh Thakur; Gregory Cohen; Tara Julia Hamilton; Jonathan Tapson; André van Schaik
We present a hardware architecture that uses the Neural Engineering Framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for massively parallel, real-time pattern recognition. The NEF is a framework capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was based on a compact digital neural core consisting of 64 neurons instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores. As a proof of concept, we developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a resource-efficient means of performing high-speed, neuromorphic, massively parallel pattern recognition and classification tasks.
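The NEF principle underlying the architecture can be sketched in a few lines: each neuron encodes a value through a tuning curve, and least-squares decoders recover a function of that value from the population activity. The rectified-linear rate function and the parameter distributions below are illustrative assumptions.

```python
# Minimal NEF sketch: neurons encode a 1-D value x via tuning curves
# a_i(x) = G(alpha_i * e_i * x + b_i); least-squares decoders recover a
# target function of x from the population activity.

import numpy as np

rng = np.random.default_rng(1)
n_neurons = 64                      # matches the size of one neural core

encoders = rng.choice([-1.0, 1.0], n_neurons)  # e_i for a 1-D value
gains = rng.uniform(0.5, 2.0, n_neurons)       # alpha_i
biases = rng.uniform(-1.0, 1.0, n_neurons)     # b_i

def rates(x):
    """Population firing rates for represented values x (rectified linear)."""
    return np.maximum(0.0, np.outer(x, gains * encoders) + biases)

# Solve for decoders of the target function f(x) = x**2 by least squares.
x_eval = np.linspace(-1, 1, 100)
decoders = np.linalg.lstsq(rates(x_eval), x_eval**2, rcond=None)[0]

x_test = np.array([-0.8, 0.0, 0.5])
print(np.round(rates(x_test) @ decoders, 3))  # should approximate x**2
```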
Frontiers in Neuroscience | 2016
Chetan Singh Thakur; Saeed Afshar; Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware from a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem in which an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the probability of an external distractor (noise) interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometres, owing to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits does not affect system performance because information is encoded in a bit stream.
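The recursive estimation performed by the BEAST can be sketched as a standard grid-based HMM filter: predict with a transition model, then update with the observation likelihood. The grid size, random-walk transition, and sensor noise model below are assumptions standing in for the models the hardware learns.

```python
# Grid-based HMM filter sketch: predict the target's position with a
# transition model, update with the likelihood of a noisy observation.

import numpy as np

rng = np.random.default_rng(2)
n_states = 32
belief = np.full(n_states, 1.0 / n_states)  # uniform prior over positions

# Transition model: target does a small random walk (stay/left/right).
T = np.zeros((n_states, n_states))
for s in range(n_states):
    for s2, p in ((s, 0.6), (max(s - 1, 0), 0.2), (min(s + 1, n_states - 1), 0.2)):
        T[s2, s] += p

def observation_likelihood(obs, p_correct=0.7):
    """Noisy sensor: reports the true position with prob. p_correct."""
    lik = np.full(n_states, (1.0 - p_correct) / (n_states - 1))
    lik[obs] = p_correct
    return lik

true_pos = 10
for step in range(20):
    true_pos = int(np.clip(true_pos + rng.choice([-1, 0, 1]), 0, n_states - 1))
    obs = true_pos if rng.random() < 0.7 else rng.integers(n_states)
    belief = T @ belief                     # predict
    belief *= observation_likelihood(obs)   # update
    belief /= belief.sum()                  # normalise

print("true:", true_pos, " estimate:", int(belief.argmax()))
```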
International Symposium on Circuits and Systems | 2012
Runchun Wang; Jonathan Tapson; Tara Julia Hamilton; André van Schaik
We present measurements from an aVLSI programmable axonal propagation delay circuit, intended for use in the implementation of polychronous spiking neural networks. The delay can be programmed by presenting an input spike followed by a training spike at the desired delay. To fine-tune and maintain the delay using an analogue memory, we use continuous spike timing dependent delay adaptation. The measurements presented here show that the axon circuit is capable of learning and retaining delays in the 2.5–20 ms range, as long as the neuron is stimulated at least once every few seconds.
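The training protocol can be sketched behaviourally: each input/training spike pairing nudges the stored delay towards the desired interval, while the analogue memory slowly drifts between stimulations (hence the need for stimulation every few seconds). The learning and drift rates below are illustrative assumptions.

```python
# Sketch of the delay-training protocol: repeated pairings pull the
# stored delay towards the target interval; the analogue memory leaks
# between stimulations. Rates are assumed, not measured.

def train_delay(delay_ms, target_ms, lr=0.2):
    """One spike-timing-dependent delay adaptation step."""
    return delay_ms + lr * (target_ms - delay_ms)

def drift(delay_ms, idle_s, drift_ms_per_s=0.05):
    """Analogue memory leakage while the neuron is not stimulated."""
    return max(0.0, delay_ms - drift_ms_per_s * idle_s)

delay = 2.5                 # start at the bottom of the 2.5-20 ms range
for _ in range(30):         # repeated pairings at the desired 12 ms
    delay = train_delay(delay, target_ms=12.0)
    delay = drift(delay, idle_s=1.0)
print(f"learned delay = {delay:.2f} ms")
```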
International Symposium on Circuits and Systems | 2014
Runchun Wang; Tara Julia Hamilton; Jonathan Tapson; André van Schaik
We present an analogue Very Large Scale Integration (aVLSI) implementation that uses first-order log-domain low-pass filters to implement a generalised conductance-based silicon neuron. It consists of a single synapse, capable of linearly summing the excitatory and inhibitory post-synaptic currents (EPSCs and IPSCs) generated by spikes arriving from different sources; a soma with a positive feedback circuit; refractory period and spike-frequency adaptation circuits; and a high-speed synchronous Address Event Representation (AER) handshaking circuit. To increase programmability, the inputs to the neuron are digital spikes whose durations are modulated according to their weights. The proposed neuron is a compact design (~170 μm² in the IBM 130nm process), making our aVLSI generalised conductance-based neuron practical for large-scale reconfigurable spiking neural networks running in real time. Circuit simulations show that this neuron can emulate different spiking behaviours observed in biological neurons.
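A discrete-time sketch of the model: first-order low-pass filters shape the excitatory and inhibitory post-synaptic currents, which are linearly summed and integrated by a leaky soma with a refractory period. Time constants, weights, the threshold, and the input protocol are assumptions, and the positive-feedback and adaptation circuits are omitted for brevity.

```python
# Discrete-time sketch: EPSC/IPSC from first-order low-pass filters,
# linearly summed, integrated by a leaky soma with a refractory period.

dt, tau_syn, tau_mem = 0.1, 5.0, 20.0        # all times in ms
w_exc, w_inh, threshold = 5.0, 5.0, 0.8
v = epsc = ipsc = 0.0
refractory, refr_steps = 0, int(2.0 / dt)    # 2 ms refractory period
spikes = []

for step in range(5000):                     # 500 ms of simulated time
    exc_in = w_exc if step % 50 == 0 else 0.0   # excitatory spike every 5 ms
    inh_in = w_inh if step % 400 == 0 else 0.0  # inhibitory spike every 40 ms

    # First-order low-pass filters shape the post-synaptic currents.
    epsc += (-epsc + exc_in / dt) * dt / tau_syn
    ipsc += (-ipsc + inh_in / dt) * dt / tau_syn

    if refractory > 0:
        refractory -= 1
        continue
    v += (-v + (epsc - ipsc)) * dt / tau_mem    # linear summation of PSCs
    if v >= threshold:                          # spike and reset
        spikes.append(step * dt)
        v, refractory = 0.0, refr_steps

print(f"{len(spikes)} spikes in 500 ms")
```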
International Conference on Intelligent Sensors, Sensor Networks and Information Processing | 2011
Runchun Wang; Jonathan Tapson; Tara Julia Hamilton; André van Schaik
We present an analogue VLSI implementation of a polychronous network of spiking neurons. The network is capable of storing and retrieving spatio-temporal spike patterns. It consists of 14 leaky integrate-and-fire neurons and corresponding axonal connections with programmable delays.
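The polychronous principle the network exploits can be shown in a toy sketch: a spatio-temporal pattern is stored as axonal delays chosen so that, for the stored pattern only, all delayed spikes arrive at a coincidence-detecting neuron simultaneously. The pattern values and coincidence tolerance below are illustrative assumptions.

```python
# Toy polychronous sketch: delays are set so the stored spike pattern
# produces simultaneous arrivals at a coincidence-detecting neuron.

pattern_ms = {0: 0.0, 1: 3.0, 2: 7.0, 3: 12.0}  # neuron id -> spike time
arrival_ms = 15.0                                # common arrival time
delays_ms = {n: arrival_ms - t for n, t in pattern_ms.items()}

def detector_fires(spike_times_ms, tol_ms=0.5, n_required=4):
    """Coincidence detection: enough delayed spikes within tolerance."""
    arrivals = [spike_times_ms[n] + delays_ms[n] for n in spike_times_ms]
    return sum(abs(a - arrival_ms) <= tol_ms for a in arrivals) >= n_required

print(detector_fires(pattern_ms))                         # True: stored pattern
print(detector_fires({0: 0.0, 1: 6.0, 2: 2.0, 3: 12.0}))  # False: wrong timing
```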