Aaron Voelker
University of Waterloo
Publications
Featured research published by Aaron Voelker.
Frontiers in Neuroinformatics | 2014
Trevor Bekolay; James Bergstra; Eric Hunsberger; Travis DeWolf; Terrence C. Stewart; Daniel Rasmussen; Xuan Choo; Aaron Voelker; Chris Eliasmith
Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
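For readers unfamiliar with the syntax in question, the sketch below shows what a minimal model looks like in the Python API that succeeded Nengo 1.4. The neuron counts, input signal, and filter constant are arbitrary illustrations, not values from the paper.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    # A time-varying input signal
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    # Two populations of spiking neurons, each representing a scalar
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    # Connections: feed the stimulus into `a`, and compute x**2 from a to b
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=np.square)
    # Record b's decoded output, filtered with a 10 ms synapse
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

decoded = sim.data[probe]  # simulation results collected by the probe
```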
international conference on robotics and automation | 2016
Ken E. Friedl; Aaron Voelker; Angelika Peer; Chris Eliasmith
Giving robots the ability to classify surface textures requires appropriate sensors and algorithms. Inspired by the biology of human tactile perception, we implement a neurorobotic texture classifier with a recurrent spiking neural network, using a novel semisupervised approach for classifying dynamic stimuli. Input to the network is supplied by accelerometers mounted on a robotic arm. The sensor data are encoded by a heterogeneous population of neurons, modeled to match the spiking activity of mechanoreceptor cells. This activity is convolved by a hidden layer using bandpass filters to extract nonlinear frequency information from the spike trains. The resulting high-dimensional feature representation is then continuously classified using a neurally implemented support vector machine. We demonstrate that our system classifies 18 metal surface textures scanned in two opposite directions at a constant velocity. We also demonstrate that our approach significantly improves upon a baseline model that does not use the described feature extraction. This method can be performed in real time using neuromorphic hardware, and can be extended to other applications that process dynamic stimuli online.
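The bandpass feature-extraction stage lends itself to a short sketch. The following is a generic NumPy/SciPy illustration, not the paper's implementation: spike trains are passed through a bank of bandpass filters and reduced to per-band power features for a downstream classifier. All band edges, rates, and population sizes are assumed values.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1000.0  # sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)

# Toy spike raster: 50 "mechanoreceptor" neurons over 1 second
spikes = (rng.random((int(fs), 50)) < 0.02).astype(float)

# Bank of bandpass filters; center bands are assumptions, not the paper's
bands = [(5, 15), (15, 50), (50, 150), (150, 400)]
features = []
for lo, hi in bands:
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = lfilter(b, a, spikes, axis=0)
    features.append(np.abs(filtered).mean(axis=0))  # mean power per neuron

x = np.concatenate(features)  # high-dimensional feature vector per window
```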
international symposium on neural networks | 2016
James Knight; Aaron Voelker; Andrew Mundy; Chris Eliasmith; Steve B. Furber
The biological brain is a highly plastic system within which the efficacy and structure of synaptic connections are constantly changing in response to internal and external stimuli. While numerous models of this plastic behavior exist at various levels of abstraction, how these mechanisms allow the brain to learn meaningful values is unclear. The Neural Engineering Framework (NEF) is a hypothesis about how large-scale neural systems represent values using populations of spiking neurons, and transform them using functions implemented by the synaptic weights between populations. By exploiting the fact that these connection weight matrices are factorable, we have recently shown that static NEF models can be simulated very efficiently using the SpiNNaker neuromorphic architecture. In this paper, we demonstrate how this approach can be extended to efficiently support both supervised and unsupervised learning rules designed to operate on these factored matrices. We then present a heteroassociative memory architecture built using these learning rules and prove that it is capable of learning a human-scale semantic network. Finally, we demonstrate a 100,000-neuron version of this architecture running on the SpiNNaker simulator with a speed-up exceeding 150x when compared to the Nengo reference simulator.
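The computational saving from factored weights is easy to see in a few lines. The sketch below (illustrative sizes, random encoders and decoders) checks that applying decoders and encoders separately gives the same synaptic input as the full weight matrix at a fraction of the cost, which is the property the learning rules above are designed to preserve.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post, d = 1000, 1000, 16

D = rng.standard_normal((d, n_pre)) / n_pre  # decoders: activities -> value
E = rng.standard_normal((n_post, d))         # encoders: value -> input currents

a_pre = rng.random(n_pre)  # firing activities of the presynaptic population

# Full-matrix approach: O(n_pre * n_post) operations per timestep
W = E @ D
j_full = W @ a_pre

# Factored approach: O(d * (n_pre + n_post)) operations per timestep
j_fact = E @ (D @ a_pre)

assert np.allclose(j_full, j_fact)
```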
Neural Computation | 2018
Aaron Voelker; Chris Eliasmith
Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.
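As a rough companion to the delay-line result, the sketch below constructs a low-order state-space approximation of a pure delay of theta seconds and simulates it in discrete time. The closed form used here follows the Legendre/Padé construction associated with this line of work, but the exact indexing and scaling reflect my reading and should be checked against the paper itself.

```python
import numpy as np
from scipy.signal import cont2discrete

def delay_lti(theta, order):
    """State-space (A, B) approximating a pure delay of `theta` seconds.

    Closed form per the Legendre/Pade construction; treat the exact
    indexing and scaling as assumptions to verify against the paper.
    """
    q = np.arange(order)
    A = np.zeros((order, order))
    for i in range(order):
        for j in range(order):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    B = ((2 * q + 1) * (-1.0) ** q).reshape(-1, 1)
    return A / theta, B / theta

A, B = delay_lti(theta=0.1, order=6)

# Discretize and run: the state x(t) encodes a rolling window of input history
dt = 0.001
Ad, Bd, *_ = cont2discrete((A, B, np.eye(6), np.zeros((6, 1))), dt=dt)
x = np.zeros((6, 1))
for t in range(1000):
    u = np.sin(2 * np.pi * 3 * t * dt)  # a 3 Hz test input
    x = Ad @ x + Bd * u
```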
PLOS ONE | 2017
Daniel Guldager Kring Rasmussen; Aaron Voelker; Chris Eliasmith
We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain’s general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model’s behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions.
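To make the HRL decomposition concrete, independently of the neural implementation in the paper, here is a deliberately tiny, non-neural illustration: a high-level controller sequences subgoals while a low-level policy issues primitive actions until each subgoal is reached. The states, subgoals, and policies are all invented for illustration.

```python
# Generic options-style HRL on a 1-D corridor (illustration only; the
# paper implements hierarchical RL in a biologically detailed neural model).
N = 10              # states 0..N-1; the task is to reach state N-1
subgoals = [4, 9]   # high-level "options": reach state 4, then state 9

def low_policy(s, goal):
    """Primitive action chosen by the low-level policy: step toward goal."""
    return 1 if goal > s else -1

def run_episode():
    s, steps = 0, 0
    for goal in subgoals:          # high level: iterate over abstract actions
        while s != goal:           # low level: run until the option terminates
            s = max(0, min(N - 1, s + low_policy(s, goal)))
            steps += 1
    return steps

print(run_episode())  # -> 9 primitive steps to reach the final state
```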
international symposium on circuits and systems | 2017
Aaron Voelker; Ben Varkey Benjamin; Terrence C. Stewart; Kwabena Boahen; Chris Eliasmith
The Neural Engineering Framework (NEF) is a theory for mapping computations onto biologically plausible networks of spiking neurons. This theory has been applied to a number of neuromorphic chips. However, within both silicon and real biological systems, synapses exhibit higher-order dynamics and heterogeneity. To date, the NEF has not explicitly addressed how to account for either feature. Here, we analytically extend the NEF to directly harness the dynamics provided by heterogeneous mixed-analog-digital synapses. This theory is successfully validated by simulating two fundamental dynamical systems in Nengo using circuit models validated in SPICE. Thus, our work reveals the potential to engineer robust neuromorphic systems with well-defined high-level behaviour that harness the low-level heterogeneous properties of their physical primitives with millisecond resolution.
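For background, the standard NEF mapping for the simplest case, a first-order lowpass synapse, which this paper generalizes to heterogeneous higher-order synapse models, can be sketched as follows; the oscillator and time constants are illustrative.

```python
import numpy as np

# To realize dx/dt = A x + B u through a synapse h(s) = 1/(tau*s + 1),
# the NEF drives the population recurrently with A' = tau*A + I and
# feeds the input through B' = tau*B.
tau = 0.1
A = np.array([[0.0, -2 * np.pi], [2 * np.pi, 0.0]])  # a 1 Hz oscillator
B = np.zeros((2, 1))

A_map = tau * A + np.eye(2)
B_map = tau * B

# Sanity check with a direct (non-spiking) simulation of the synapse:
dt, x, u = 1e-3, np.array([[1.0], [0.0]]), np.zeros((1, 1))
for _ in range(1000):
    # Discretized lowpass: tau * dx/dt = -x + input, input = A'x + B'u
    x += (dt / tau) * ((A_map @ x + B_map @ u) - x)
# After 1 second the state has completed one full cycle of the oscillator.
```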
Topics in Cognitive Science | 2017
Sean Aubin; Aaron Voelker; Chris Eliasmith
Archive | 2018
Aaron Voelker; Christopher David Eliasmith
international symposium on circuits and systems | 2017
Eric Kauderer-Abrams; Andrew Gilbert; Aaron Voelker; Ben Varkey Benjamin; Terrence C. Stewart; Kwabena Boahen
arXiv: Neurons and Cognition | 2017
Andreas Stöckel; Aaron Voelker; Chris Eliasmith