
Publication


Featured research published by Chris Eliasmith.


Science | 2012

A Large-Scale Model of the Functioning Brain

Chris Eliasmith; Terrence C. Stewart; Xuan Choo; Trevor Bekolay; Travis DeWolf; Yichuan Tang; Daniel Rasmussen

Modeling the Brain: Neurons are pretty complicated cells. They display an endless variety of shapes that sprout highly variable numbers of axons and dendrites; they sport time- and voltage-dependent ion channels along with an impressive array of neurotransmitter receptors; and they connect intimately with near neighbors as well as former neighbors who have since moved away. Simulating a sizeable chunk of brain tissue has recently become achievable, thanks to advances in computer hardware and software. Eliasmith et al. (p. 1202; see the Perspective by Machens) present their million-neuron model of the brain and show that it can recognize numerals, remember lists of digits, and write down those lists: tasks that seem effortless for a human but that encompass the triad of perception, cognition, and behavior. Two-and-a-half million model neurons recognize images, learn via reinforcement, and display fluid intelligence.

A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains. Recently described, large-scale neural models have not bridged this gap between neural activity and biological function. In this work, we present a 2.5-million-neuron model of the brain (called “Spaun”) that bridges this gap by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks.


Neural Computation | 2005

A Unified Approach to Building and Controlling Spiking Attractor Networks

Chris Eliasmith

Extending work in Eliasmith and Anderson (2003), we employ a general framework to construct biologically plausible simulations of the three classes of attractor networks relevant for biological systems: static (point, line, ring, and plane) attractors, cyclic attractors, and chaotic attractors. We discuss these attractors in the context of the neural systems that they have been posited to help explain: eye control, working memory, and head direction; locomotion (specifically swimming); and olfaction, respectively. We then demonstrate how to introduce control into these models. The addition of control shows how attractor networks can be used as subsystems in larger neural systems, demonstrates how a much larger class of networks can be related to attractor networks, and makes it clear how attractor networks can be exploited for various information processing tasks in neurobiological systems.
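The control idea above can be illustrated outside a spiking model. The sketch below is our own rate-based illustration (not the paper's spiking implementation) of the simplest static attractor, a line attractor used as an integrator: the desired dynamics dx/dt = u are mapped onto a recurrent connection through a first-order synapse with time constant tau via the standard transforms A' = tau*A + I and B' = tau*B; with A = 0 and B = 1 this gives A' = 1 and B' = tau. All names and parameter values are our choices.

```python
import numpy as np

tau = 0.1   # synaptic time constant (s), illustrative
dt = 0.001  # simulation step (s)

def simulate_integrator(u, tau=tau, dt=dt):
    """Integrate the input signal u(t) with a recurrently connected state."""
    x = 0.0
    trace = []
    for ut in u:
        # Recurrent drive A'x + B'u, passed through a first-order synapse.
        drive = 1.0 * x + tau * ut
        x += (dt / tau) * (drive - x)  # low-pass filter dynamics
        trace.append(x)
    return np.array(trace)

# A constant unit input for 1 s ramps x toward 1, the integral of u.
u = np.ones(1000)
out = simulate_integrator(u)
```

With zero input the state holds its value, which is the attractor behavior exploited by eye-control and working-memory models; replacing the input transform implements the control discussed above.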


Frontiers in Neuroinformatics | 2014

Nengo: a Python tool for building large-scale functional brain models.

Trevor Bekolay; James Bergstra; Eric Hunsberger; Travis DeWolf; Terrence C. Stewart; Daniel Rasmussen; Xuan Choo; Aaron Voelker; Chris Eliasmith

Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.


Philosophical Psychology | 1996

The third contender: A critical examination of the Dynamicist theory of cognition

Chris Eliasmith

In a recent series of publications, dynamicist researchers have proposed a new conception of cognitive functioning. This conception is intended to replace the currently dominant theories of connectionism and symbolicism. The dynamicist approach to cognitive modeling employs concepts developed in the mathematical field of dynamical systems theory. They claim that cognitive models should be embedded, low-dimensional, complex, described by coupled differential equations, and non-representational. In this paper I begin with a short description of the dynamicist project and its role as a cognitive theory. Subsequently, I determine the theoretical commitments of dynamicists, critically examine those commitments and discuss current examples of dynamicist models. In conclusion, I determine dynamicism's relation to symbolicism and connectionism and find that the dynamicist goal to establish a new paradigm has yet to be realized.


International Conference on Artificial Neural Networks | 2012

Silicon neurons that compute

Swadesh Choudhary; Steven A. Sloan; Sam Fok; Alexander Neckar; Eric Trautmann; Peiran Gao; Terry C. Stewart; Chris Eliasmith; Kwabena Boahen

We use neuromorphic chips to perform arbitrary mathematical computations for the first time. Static and dynamic computations are realized with heterogeneous spiking silicon neurons by programming their weighted connections. Using 4K neurons with 16M feed-forward or recurrent synaptic connections, formed by 256K local arbors, we communicate a scalar stimulus, quadratically transform its value, and compute its time integral. Our approach provides a promising alternative for extremely power-constrained embedded controllers, such as fully implantable neuroprosthetic decoders.


The Journal of Neuroscience | 2006

Higher-Dimensional Neurons Explain the Tuning and Dynamics of Working Memory Cells

Ray Singh; Chris Eliasmith

Measurements of neural activity in working memory during a somatosensory discrimination task show that the content of working memory is not only stimulus dependent but also strongly time varying. We present a biologically plausible neural model that reproduces the wide variety of characteristic responses observed in those experiments. Central to our model is a heterogeneous ensemble of two-dimensional neurons that are hypothesized to simultaneously encode two distinct stimuli dimensions. We demonstrate that the spiking activity of each neuron in the population can be understood as the result of a two-dimensional state space trajectory projected onto the tuning curve of the neuron. The wide variety of observed responses is thus a natural consequence of a population of neurons with a diverse set of preferred stimulus vectors and response functions in this two-dimensional space. In addition, we propose a taxonomy of network topologies that will generate the two-dimensional trajectory necessary to exploit this population. We conclude by proposing some experimental indicators to help distinguish among these possibilities.
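The core idea, that each cell's response is a two-dimensional state-space trajectory projected onto that neuron's tuning, can be sketched in a few lines. This is our own illustration, not the paper's model: the example trajectory, gains, biases, and rectified-linear response are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
# Example 2-D trajectory: one dimension holds the stimulus, one ramps with time.
x = np.stack([0.8 * np.ones_like(t), t], axis=1)        # shape (200, 2)

n_neurons = 5
angles = rng.uniform(0, 2 * np.pi, n_neurons)
e = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit preferred vectors

gain, bias = 30.0, 5.0
# Each column: one neuron's rate over time. Diverse preferred vectors yield
# diverse response shapes (ramps, decays, nonmonotonic) from one trajectory.
rates = np.maximum(0.0, gain * (x @ e.T) + bias)        # shape (200, 5)
```

Varying the preferred vectors alone reproduces qualitatively different time courses from the same underlying trajectory, which is the explanatory point of the abstract.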


Journal of Computational Neuroscience | 2005

A Controlled Attractor Network Model of Path Integration in the Rat

John Conklin; Chris Eliasmith

Cells in several areas of the hippocampal formation show place specific firing patterns, and are thought to form a distributed representation of an animal’s current location in an environment. Experimental results suggest that this representation is continually updated even in complete darkness, indicating the presence of a path integration mechanism in the rat. Adopting the Neural Engineering Framework (NEF) presented by Eliasmith and Anderson (2003) we derive a novel attractor network model of path integration, using heterogeneous spiking neurons. The network we derive incorporates representation and updating of position into a single layer of neurons, eliminating the need for a large external control population, and without making use of multiplicative synapses. An efficient and biologically plausible control mechanism results directly from applying the principles of the NEF. We simulate the network for a variety of inputs, analyze its performance, and give three testable predictions of our model.


Frontiers in Neuroscience | 2012

Learning to Select Actions with Spiking Neurons in the Basal Ganglia

Terrence C. Stewart; Trevor Bekolay; Chris Eliasmith

We expand our existing spiking neuron model of decision making in the cortex and basal ganglia to include local learning on the synaptic connections between the cortex and striatum, modulated by a dopaminergic reward signal. We then compare this model to animal data in the bandit task, which is used to test rodent learning in conditions involving forced choice under rewards. Our results indicate a good match in terms of both behavioral learning results and spike patterns in the ventral striatum. The model successfully generalizes to learning the utilities of multiple actions, and can learn to choose different actions in different states. The purpose of our model is to provide both high-level behavioral predictions and low-level spike timing predictions while respecting known neurophysiology and neuroanatomy.
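Abstracting away the spiking neurons, the learning the model performs on cortex-striatum connections amounts to a reward-prediction-error update of action utilities. The toy bandit learner below is our own sketch of that computation; the epsilon-greedy selection, learning rate, and reward probabilities are illustrative assumptions, not taken from the paper.

```python
import random

def run_bandit(p_reward=(0.2, 0.8), trials=2000, lr=0.1, eps=0.1, seed=0):
    """Learn action utilities in a two-armed bandit with a delta rule."""
    rng = random.Random(seed)
    utility = [0.0, 0.0]            # learned value of each action
    for _ in range(trials):
        # Epsilon-greedy selection stands in for the basal ganglia.
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=lambda i: utility[i])
        reward = 1.0 if rng.random() < p_reward[a] else 0.0
        # Dopamine-like reward-prediction error modulates the update.
        utility[a] += lr * (reward - utility[a])
    return utility

u = run_bandit()  # utilities converge toward the reward probabilities
```

The learned utilities track the underlying reward probabilities, the same behavioral-level result the spiking model matches against rodent data.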


Cognitive Science | 2006

Is the Brain a Quantum Computer?

Abninder Litt; Chris Eliasmith; Frederick W. Kroon; Steven Weinstein; Paul Thagard

We argue that computation via quantum mechanical processes is irrelevant to explaining how brains produce thought, contrary to the ongoing speculations of many theorists. First, quantum effects do not have the temporal properties required for neural information processing. Second, there are substantial physical obstacles to any organic instantiation of quantum computation. Third, there is no psychological evidence that such mental phenomena as consciousness and mathematical thinking require explanation via quantum theory. We conclude that understanding brain function is unlikely to require quantum computation or similar mechanisms.


Frontiers in Neuroinformatics | 2009

Python scripting in the Nengo simulator

Terrence C. Stewart; Bryan P. Tripp; Chris Eliasmith

Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
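The weight-finding step mentioned above, determining connections that implement a desired transformation of represented variables, can be sketched with plain NumPy: sample the population's tuning curves over the represented range, then solve regularized least squares for linear decoders. This is our minimal illustration of the NEF procedure, not Nengo's internal code; all names and parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_samples = 50, 200
x = np.linspace(-1, 1, n_samples)           # represented variable samples

# Rectified-linear tuning curves with random encoders, gains, and biases.
encoders = rng.choice([-1.0, 1.0], n_neurons)
gains = rng.uniform(10.0, 50.0, n_neurons)
biases = rng.uniform(-20.0, 20.0, n_neurons)
A = np.maximum(0.0, gains * np.outer(x, encoders) + biases)  # (200, 50)

# Regularized least squares for decoders of the identity function;
# decoding a different function of x just changes the right-hand side.
sigma = 0.1 * A.max()
G = A.T @ A + sigma**2 * n_samples * np.eye(n_neurons)
d = np.linalg.solve(G, A.T @ x)

x_hat = A @ d    # decoded estimate of x from the population activity
```

Outer products of encoders and decoders then give the full synaptic weight matrix, which is how the NEF connects observed network function to connection strengths.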

Collaboration


Dive into Chris Eliasmith's collaborations.

Top Co-Authors

Jan Gosmann
University of Waterloo

Charles H. Anderson
Washington University in St. Louis

Xuan Choo
University of Waterloo