Bryan P. Tripp
University of Waterloo
Publications
Featured research published by Bryan P. Tripp.
Frontiers in Neuroinformatics | 2009
Terrence C. Stewart; Bryan P. Tripp; Chris Eliasmith
Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
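The NEF step described above, solving for synaptic weights (decoders) that recover a represented variable from neural activity, can be illustrated with a minimal NumPy sketch. This is our simplified caricature, not Nengo's implementation: it uses rectified-linear tuning curves rather than spiking LIF neurons, and all parameter ranges are assumptions.

```python
import numpy as np

# Minimal sketch of the NEF decoding step that Nengo automates:
# solve regularized least squares for linear decoders d such that A @ d ~ x,
# where A holds rate-coded tuning curves (rectified-linear here for simplicity).
rng = np.random.default_rng(0)
n_neurons, n_eval = 50, 200

x = np.linspace(-1, 1, n_eval)                  # represented variable
gains = rng.uniform(0.5, 2.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)   # preferred directions in 1-D
biases = rng.uniform(-1.0, 1.0, n_neurons)

# tuning curves: a_i(x) = max(0, gain_i * e_i * x + bias_i)
A = np.maximum(0.0, gains * encoders * x[:, None] + biases)

# regularized least squares for identity decoders
reg = 0.01 * n_eval
d = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ x)

x_hat = A @ d
rmse = np.sqrt(np.mean((x_hat - x) ** 2))
```

The same least-squares machinery can target any function of the represented variable, which is what makes the framework useful for deriving weights from desired transformations.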
Neural Computation | 2010
Bryan P. Tripp; Chris Eliasmith
Temporal derivatives are computed by a wide variety of neural circuits, but the problem of performing this computation accurately has received little theoretical study. Here we systematically compare the performance of diverse networks that calculate derivatives using cell-intrinsic adaptation and synaptic depression dynamics, feedforward network dynamics, and recurrent network dynamics. Examples of each type of network are compared by quantifying the errors they introduce into the calculation and their rejection of high-frequency input noise. This comparison is based on both analytical methods and numerical simulations with spiking leaky-integrate-and-fire (LIF) neurons. Both adapting and feedforward-network circuits provide good performance for signals with frequency bands that are well matched to the time constants of postsynaptic current decay and adaptation, respectively. The synaptic depression circuit performs similarly to the adaptation circuit, although strictly speaking, precisely linear differentiation based on synaptic depression is not possible, because depression scales synaptic weights multiplicatively. Feedback circuits introduce greater errors than functionally equivalent feedforward circuits, but they have the useful property that their dynamics are determined by feedback strength. For this reason, these circuits are better suited for calculating the derivatives of signals that evolve on timescales outside the range of membrane dynamics and, possibly, for providing the wide range of timescales needed for precise fractional-order differentiation.
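The feedforward strategy compared in the abstract can be sketched numerically (this is our toy illustration, not the paper's code): subtracting a low-pass-filtered copy of a signal from the signal itself yields the transfer function s/(τs + 1), which approximates d/dt for frequencies well below 1/τ.

```python
import numpy as np

# Feedforward derivative sketch: (x - lowpass(x)) / tau has transfer
# function s/(tau*s + 1), a differentiator for low frequencies.
dt, tau = 0.0005, 0.005
t = np.arange(0.0, 2.0, dt)
x = np.sin(2 * np.pi * 2.0 * t)                 # 2 Hz test signal

y = np.zeros_like(x)                            # low-pass state (like PSC decay)
for i in range(1, len(t)):
    y[i] = y[i - 1] + (dt / tau) * (x[i] - y[i - 1])

deriv_est = (x - y) / tau                       # high-pass approximation of dx/dt
deriv_true = 2 * np.pi * 2.0 * np.cos(2 * np.pi * 2.0 * t)

# compare after the initial transient has decayed
keep = t > 0.5
rel_err = np.max(np.abs(deriv_est[keep] - deriv_true[keep])) / np.max(np.abs(deriv_true))
```

As the abstract notes, accuracy degrades when the signal's frequency content is not well matched to the filter time constant; here the 2 Hz signal is well inside the passband of a 5 ms filter.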
Neural Computation | 2012
Bryan P. Tripp
Response variability is often positively correlated in pairs of similarly tuned neurons in the visual cortex. Correlated variability is widely thought to prevent postsynaptic neurons from averaging across large groups of inputs to obtain reliable stimulus estimates. However, a simple average of variability ignores nonlinearities in cortical signal integration. This study shows that feedforward divisive normalization of a neuron's inputs effectively decorrelates their variability. Furthermore, we show that optimal linear estimates of a stimulus parameter that are based on normalized inputs are more accurate than those based on nonnormalized inputs, due partly to reduced correlations, and that these estimates improve with increasing population size up to several thousand neurons. This suggests that neurons may possess a simple mechanism for substantially decorrelating noise in their inputs. Further work is needed to reconcile this conclusion with past evidence that correlated noise impairs visual perception.
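The decorrelating effect of divisive normalization can be demonstrated with a toy simulation (ours, not the paper's): a shared multiplicative gain fluctuation correlates a pair of neurons, and dividing each response by the summed activity of a normalization pool largely cancels the shared factor.

```python
import numpy as np

# Toy sketch: shared gain noise correlates a neuron pair; divisive
# normalization by the pooled population activity removes most of it.
rng = np.random.default_rng(1)
n_trials, n_pool = 20_000, 100

gain = np.exp(0.3 * rng.standard_normal(n_trials))        # shared per-trial gain
rates = np.maximum(0.0, 10.0 * gain[:, None]
                   + rng.standard_normal((n_trials, n_pool)))

corr_raw = np.corrcoef(rates[:, :2].T)[0, 1]              # correlated pair

sigma = 1.0                                               # semi-saturation constant
norm = rates / (sigma + rates.sum(axis=1, keepdims=True)) # divisive normalization
corr_norm = np.corrcoef(norm[:, :2].T)[0, 1]
```

Because the shared gain appears in both numerator and denominator, it approximately cancels, leaving mostly the independent noise, which is the intuition behind the decorrelation result.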
Neural Networks | 2016
Bryan P. Tripp; Chris Eliasmith
In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex.
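A linear caricature of the Parisien et al. (2008) transform discussed above can make the idea concrete (the real transform operates on spiking NEF populations; this simplified version and its variable names are ours): any mixed-sign weight matrix can be split into a purely excitatory direct projection plus a uniform inhibitory path through an interneuron pool.

```python
import numpy as np

# Linear sketch of splitting mixed-sign weights into excitation plus
# a parallel inhibitory path, preserving the original computation exactly.
rng = np.random.default_rng(2)
n_out, n_in = 4, 6
W = rng.standard_normal((n_out, n_in))          # mixed excitatory/inhibitory
x = rng.uniform(0.0, 1.0, n_in)                 # non-negative presynaptic rates

c = max(0.0, -W.min())                          # shift making all direct weights >= 0
W_exc = W + c                                   # purely excitatory projection
assert (W_exc >= 0).all()

# inhibitory interneuron pools the input (weight 1 each) and projects
# with weight c to every target, subtracting exactly the added excitation
interneuron = x.sum()
y = W_exc @ x - c * interneuron
```

In the spiking setting the subtraction is only approximate, which is why the papers quantify the function-approximation capacity of the transformed connections.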
International Conference on Artificial Neural Networks | 2016
Bryan P. Tripp
Convolutional neural networks have many parallels with the primate visual cortex, including deep structures with sparse retinotopic connections, and feature maps with increasing specificity and invariance along feedforward paths. The present study explores the possibility of specifically training convolutional networks to resemble the primate cortex more closely. In particular, in addition to supervised learning to minimize an output error function, a deep layer is directly trained to approximate primate electrophysiology data. This method is used to develop a model of the macaque monkey dorsal stream that estimates heading and speed from visual input.
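The training objective described above can be written schematically (our notation, not the paper's code): a standard supervised task loss plus a term that pushes one deep layer's activity toward recorded neural responses.

```python
import numpy as np

# Schematic combined objective: task error plus a neural-fitting penalty
# on one deep layer, weighted by a hypothetical coefficient lam.
def combined_loss(task_pred, task_target, layer_act, neural_data, lam=0.5):
    """Supervised task error plus a penalty tying one layer to neural data."""
    task_term = np.mean((task_pred - task_target) ** 2)
    neural_term = np.mean((layer_act - neural_data) ** 2)
    return task_term + lam * neural_term

loss = combined_loss(np.array([1.0, 2.0]), np.array([1.0, 2.0]),
                     np.zeros(3), np.ones(3))
```

Minimizing such a combined objective trades off task performance against physiological realism, with lam controlling the balance.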
BMC Neuroscience | 2014
Murphy Berzish; Bryan P. Tripp
The organization of neural systems reflects the specific complexities of the physical environments in which they operate. In order to address this relationship more directly, there is increasing interest in testing real-time neural simulations that interface with the physical world. We describe a new simulation approach that allows us to run large, sophisticated neural models on low-power embedded commodity hardware such as field-programmable gate arrays (FPGAs). A custom digital circuit was designed to approximate the collective outputs of populations of neurons that have correlated activity. These populations are taken to represent physical quantities in their spike rates. Information processing (e.g. function approximation) is taken to be determined by synaptic weights. This design is based on the Neural Engineering Framework (NEF), which bridges the gap between neural activity and higher-level behaviour [1,2]. Populations are grouped together on hardware execution components, which we call “population units”, that perform time-multiplexing in order to simulate 1024 populations per timestep. The population unit represents each population as a weighted sum of principal components of the neural tuning curves, summed with a model of the associated high-frequency spike-related fluctuations. These principal components span the functions that weighted sums of spikes can approximate without being dominated by spike-related noise. Populations running on the same population unit use the same principal components, which saves memory and improves the speed of the simulation. Clustering is performed prior to simulation to group together populations that can be accurately represented by shared principal components. The hardware does not need to be customized or regenerated in order to simulate different networks. It can be programmed with a network description generated by a compiler that operates as a backend to the Nengo simulator.
(Nengo was used to run the Spaun model [2].) The design was implemented on an FPGA and was able to run simulations of up to 45 thousand populations of neurons (a realistic surrogate model of about 1-5 million point neurons) in real-time at 12-bit accuracy on a 1 millisecond timestep. Input and output to the hardware are carried over Gigabit Ethernet and can be collected from a PC running Nengo for simulation control and visualization. This implementation allows real-time approximate simulation at about the same scale as the largest real-time GPU simulations in Nengo, but using much less power. Furthermore, the FPGA chip is suitable for embedded applications such as mobile robots, cameras, etc. This work greatly facilitates simulation of an essential feature of neural systems, their embodiment and interaction with the physical world.
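The surrogate idea underlying the population units, representing a population by a few principal components of its tuning curves rather than by individual neurons, can be sketched numerically (toy rectified-linear tuning curves here; the hardware uses more detailed neuron models).

```python
import numpy as np

# Sketch: most variance in a population's tuning curves is captured by a
# handful of principal components, so a few components can stand in for
# the whole population.
rng = np.random.default_rng(3)
n_neurons, n_x = 100, 128
x = np.linspace(-1, 1, n_x)

gains = rng.uniform(0.5, 2.0, n_neurons)
enc = rng.choice([-1.0, 1.0], n_neurons)
bias = rng.uniform(-0.9, 0.9, n_neurons)
A = np.maximum(0.0, gains * enc * x[:, None] + bias)   # (n_x, n_neurons)

# principal components of the tuning curves via SVD
U, S, Vt = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)

# fraction of tuning-curve variance captured by the top 8 components
k = 8
frac = (S[:k] ** 2).sum() / (S ** 2).sum()
```

Because a few components suffice, many populations can share them, which is what makes the time-multiplexed, memory-shared hardware design described above feasible.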
International Conference on Artificial Neural Networks | 2016
Murphy Berzish; Chris Eliasmith; Bryan P. Tripp
Models of neural systems often use idealized inputs and outputs, but there is also much to learn by forcing a neural model to interact with a complex simulated or physical environment. Unfortunately, sophisticated interactions require models of large neural systems, which are difficult to run in real time. We have prototyped a system that can simulate efficient surrogate models of a wide range of neural circuits in real time, with a field-programmable gate array (FPGA). The scale of the simulations is increased by avoiding simulation of individual neurons, and instead simulating approximations of the collective activity of groups of neurons. The system can approximate roughly a million spiking neurons in a wide range of configurations.
Canadian Conference on Computer and Robot Vision | 2016
Scott Huber; Ben Selby; Bryan P. Tripp
Biologically inspired vision systems, such as convolutional networks, have begun to rival humans in some vision tasks. A key difference between such systems and the human visual system is that the latter dedicates most of its resources to a small fraction of the visual field (the fovea), which it moves frequently and rapidly to acquire a series of rich representations of small parts of a scene. We are developing an artificial system to approximate this aspect of human vision on a robot. The main components are a novel foveating lens, and a novel gimbal that can perform rapid and precise stereo camera orientation. This paper focuses on the gimbal design, which we are releasing under an open hardware license. We expect that it will serve as both an advanced robot vision system, and a source of further insight into human vision.
International Symposium on Neural Networks | 2017
Bryan P. Tripp
Deep convolutional neural networks (CNNs) trained for object classification have a number of striking similarities with the primate ventral visual stream. In particular, activity in early, intermediate, and late layers is closely related to activity in V1, V4, and the inferotemporal cortex (IT). This study further compares activity in late layers of object-classification CNNs to activity patterns reported in the IT electrophysiology literature. There are a number of close similarities, including the distributions of population response sparseness across stimuli, and the distribution of size tuning bandwidth. Statistics of scale invariance, responses to clutter and occlusion, and orientation tuning are less similar. Statistics of object selectivity are quite different. These results agree with recent studies that highlight strong parallels between object-categorization CNNs and the ventral stream, and also highlight differences that could perhaps be reduced in future CNNs.
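Population response sparseness of the kind compared above is commonly quantified with the Treves-Rolls (Vinje-Gallant) measure; a minimal implementation (the choice of this particular measure here is our assumption):

```python
import numpy as np

# Treves-Rolls sparseness, rescaled to [0, 1]: 0 for uniform responses,
# 1 when a single unit carries all the activity.
def sparseness(r):
    r = np.asarray(r, dtype=float)
    n = r.size
    a = r.mean() ** 2 / np.mean(r ** 2)   # activity ratio in (0, 1]
    return (1 - a) / (1 - 1 / n)

s_one_hot = sparseness([1.0, 0.0, 0.0, 0.0])   # maximally sparse
s_uniform = sparseness([3.0, 3.0, 3.0, 3.0])   # maximally dense
```

Computing this statistic over many stimuli for a CNN layer and for recorded IT neurons yields the sparseness distributions whose similarity the study reports.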
Neural Networks | 2017
Salman Khan; Bryan P. Tripp
There are compelling computational models of many properties of the primate ventral visual stream, but a gap remains between the models and the physiology. To facilitate ongoing refinement of these models, we have compiled diverse information from the electrophysiology literature into a statistical model of inferotemporal (IT) cortex responses. This is a purely descriptive model, so it has little explanatory power. However it is able to directly incorporate a rich and extensible set of tuning properties. So far, we have approximated tuning curves and statistics of tuning diversity for occlusion, clutter, size, orientation, position, and object selectivity in early versus late response phases. We integrated the model with the V-REP simulator, which provides stimulus properties in a simulated physical environment. In contrast with the empirical model presented here, mechanistic models are ultimately more useful for understanding neural systems. However, a detailed empirical model may be useful as a source of labeled data for optimizing and validating mechanistic models, or as a source of input to models of other brain areas.
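The flavor of such a descriptive model can be illustrated with a toy sketch (all distributions and parameter ranges here are our assumptions, not the published model's fitted values): an IT-like unit whose size tuning is Gaussian on a log axis, with preferred size and bandwidth drawn from assumed distributions.

```python
import numpy as np

# Toy descriptive unit: Gaussian size tuning on a log2 axis, with tuning
# parameters sampled per unit. All numbers below are illustrative.
rng = np.random.default_rng(4)

def sample_unit():
    pref_log_size = rng.uniform(-1.0, 1.0)       # log2 preferred size (assumed)
    bandwidth = rng.uniform(0.5, 2.0)            # tuning width in octaves (assumed)
    def response(size):
        return np.exp(-0.5 * ((np.log2(size) - pref_log_size) / bandwidth) ** 2)
    return pref_log_size, response

pref, resp = sample_unit()
peak = resp(2.0 ** pref)           # response at the preferred size
off = resp(2.0 ** (pref + 3.0))    # response three octaves larger
```

Sampling many such units, with additional tuning dimensions for occlusion, clutter, position, and selectivity, yields a population whose response statistics can be matched to the electrophysiology literature, which is the role the descriptive model plays.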