Jasmin Léveillé
Boston University
Publications
Featured research published by Jasmin Léveillé.
IEEE Computer | 2011
Greg Snider; Rick Amerson; Dick Carter; Hisham Abdalla; Muhammad Shakeel Qureshi; Jasmin Léveillé; Massimiliano Versace; Heather Ames; Sean Patrick; Benjamin Chandler; Anatoli Gorchetchnikov; Ennio Mingolla
In a synchronous digital platform for building large cognitive models, memristive nanodevices form dense, resistive memories that can be placed close to conventional processing circuitry. Through adaptive transformations, the devices can interact with the world in real time.
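The abstract does not include code; as a rough illustration of the core idea of storing a synaptic weight in a non-volatile device conductance, here is a minimal sketch. The bounds, pulse step, and update rule are illustrative assumptions, not the platform's actual device model.

```python
import numpy as np

# Minimal sketch: a synaptic weight stored as a bounded device conductance.
# The pulse-driven update below is an illustrative assumption, not the actual
# device physics or the hardware platform described in the paper.
G_MIN, G_MAX = 1e-6, 1e-3   # conductance bounds in siemens (illustrative)

def apply_pulse(g, n_pulses, step=1e-5):
    """Increase (n_pulses > 0) or decrease (n_pulses < 0) the stored conductance."""
    return np.clip(g + n_pulses * step, G_MIN, G_MAX)

g = 5e-4                      # current stored weight
g = apply_pulse(g, +3)        # potentiate with three write pulses
print(g)                      # device keeps its new state without power
```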
Journal of Computational Neuroscience | 2010
Jasmin Léveillé; Massimiliano Versace; Stephen Grossberg
How spiking neurons cooperate to control behavioral processes is a fundamental problem in computational neuroscience. Such cooperative dynamics are required during visual perception when spatially distributed image fragments are grouped into emergent boundary contours. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity occur in response to binary spikes with irregular timing across many interacting cells. Some models have demonstrated spiking dynamics in recurrent laminar neocortical circuits, but not how perceptual grouping occurs. Other models have analyzed the fast speed of certain percepts in terms of a single feedforward sweep of activity, but cannot explain other percepts, such as illusory contours, wherein perceptual ambiguity can take hundreds of milliseconds to resolve by integrating multiple spikes over time. The current model reconciles fast feedforward with slower feedback processing, and binary spikes with analog network-level properties, in a laminar cortical network of spiking cells whose emergent properties quantitatively simulate parametric data from neurophysiological experiments, including the formation of illusory contours; the structure of non-classical visual receptive fields; and self-synchronizing gamma oscillations. These laminar dynamics shed new light on how the brain resolves local informational ambiguities through the use of properly designed nonlinear feedback spiking networks which run as fast as they can, given the amount of uncertainty in the data that they process.
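As a minimal illustration of the kind of spiking unit such a network is built from, the following leaky integrate-and-fire sketch in Python (numpy) shows binary spikes arising from analog integration. The time constant, threshold, and input are illustrative assumptions, not parameters of the published laminar model.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) unit: a generic spiking element, not
# the paper's laminar circuit. Parameters are illustrative.
dt, tau, v_rest, v_thresh, v_reset = 0.1, 10.0, 0.0, 1.0, 0.0  # ms and arbitrary units

def simulate_lif(input_current):
    v, spikes = v_rest, []
    for t, i_ext in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + i_ext)   # leaky integration of input
        if v >= v_thresh:                          # threshold crossing emits a spike
            spikes.append(t * dt)
            v = v_reset
    return spikes

spike_times = simulate_lif(np.full(1000, 1.5))     # constant drive for 100 ms
print(len(spike_times), "spikes")
```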
Attention Perception & Psychophysics | 2011
Stephen Grossberg; Jasmin Léveillé; Massimiliano Versace
How do spatially disjoint and ambiguous local motion signals in multiple directions generate coherent and unambiguous representations of object motion? Various motion percepts, starting with those of Duncker (Induced motion, 1929/1938) and Johansson (Configurations in event perception, 1950), obey a rule of vector decomposition, in which global motion appears to be subtracted from the true motion path of localized stimulus components, so that objects and their parts are seen as moving relative to a common reference frame. A neural model predicts how vector decomposition results from multiple-scale and multiple-depth interactions within and between the form- and motion-processing streams in V1–V2 and V1–MST, which include form grouping, form-to-motion capture, figure–ground separation, and object motion capture mechanisms. Particular advantages of the model are that these mechanisms solve the aperture problem, group spatially disjoint moving objects via illusory contours, capture object motion direction signals on real and illusory contours, and use interdepth directional inhibition to cause a vector decomposition, whereby the motion directions of a moving frame at a nearer depth suppress those directions at a farther depth, and thereby cause a peak shift in the perceived directions of object parts moving with respect to the frame.
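The vector-decomposition rule itself can be illustrated numerically: a part's perceived motion is its retinal motion minus the common (frame) motion. The specific vectors below are made up for illustration and are not taken from the paper's simulations.

```python
import numpy as np

# Vector decomposition illustrated numerically: perceived part motion equals
# retinal motion minus the common frame motion. Example vectors only.
frame_motion = np.array([2.0, 0.0])        # frame translates rightward
part_retinal = np.array([2.0, 1.0])        # a dot on the frame also drifts upward

part_relative = part_retinal - frame_motion
print(part_relative)                        # [0. 1.] -> seen as moving straight up
```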
International Symposium on Neural Networks | 2011
Anatoli Gorchetchnikov; Massimiliano Versace; Heather Ames; Ben Chandler; Jasmin Léveillé; Gennady Livitz; Ennio Mingolla; Greg Snider; Rick Amerson; Dick Carter; Hisham Abdalla; Muhammad Shakeel Qureshi
Realizing adaptive brain functions subserving perception, cognition, and motor behavior on biological temporal and spatial scales remains out of reach for even the fastest computers. Newly introduced memristive hardware approaches open the opportunity to implement dense, low-power synaptic memories of up to 10^15 bits per square centimeter. Memristors have the unique property of “remembering” the past history of their stimulation in their resistive state and do not require power to maintain their memory, making them ideal candidates to implement large arrays of plastic synapses supporting learning in neural models. Over the past decades, many learning rules have been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. To ensure an optimal implementation of a large variety of learning rules in hardware, some general and easily parameterized form of learning rule must be designed. This general-form learning equation would allow instantiation of multiple learning rules through different parameterizations, without rewiring the hardware. This paper characterizes a subset of local learning rules amenable to implementation in memristive hardware. The analyzed rules belong to four broad classes: Hebb rule derivatives with various methods for gating learning and decay; threshold rule variations, including the covariance and BCM families; input-reconstruction-based learning rules; and explicit temporal trace-based rules.
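To make the idea of a single parameterized local update concrete, here is a sketch in which different parameter settings recover several classical rules. This is an illustration of the approach, not the paper's actual general-form equation; the parameterization and constants are assumptions.

```python
import numpy as np

# Sketch of one parameterized local update that collapses to several classical
# rules. Illustrative only; not the general form derived in the paper.
def general_update(w, x, y, lr=0.01, theta=0.0, a=1.0, b=0.0, c=0.0):
    """dw = lr * [ a*x*(y - theta) - b*y*w - c*x*w ]"""
    return w + lr * (a * x * (y - theta) - b * y * w - c * x * w)

x, y, w = 0.8, 0.6, 0.1                                # pre, post, current weight
w_hebb   = general_update(w, x, y)                     # plain Hebb: a=1, b=c=0
w_instar = general_update(w, x, y, b=1.0)              # post-gated decay: dw = lr*y*(x - w)
w_thresh = general_update(w, x, y, theta=0.5)          # threshold (covariance-style) rule
print(w_hebb, w_instar, w_thresh)
```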
Neuroinformatics | 2008
Massimiliano Versace; Heather Ames; Jasmin Léveillé; Bret Fortenberry; Anatoli Gorchetchnikov
Making use of very detailed neurophysiological, anatomical, and behavioral data to build biologically-realistic computational models of animal behavior is often a difficult task. Many software packages have tried to resolve this mismatch in granularity with different approaches. This paper presents KInNeSS, the KDE Integrated NeuroSimulation Software environment, as an alternative solution to bridge the gap between data and model behavior. This open source neural simulation software package provides an expandable framework incorporating features such as ease of use, scalability, an XML based schema, and multiple levels of granularity within a modern object oriented programming design. KInNeSS is best suited to simulate networks of hundreds to thousands of branched multi-compartmental neurons with biophysical properties such as membrane potential, voltage-gated and ligand-gated channels, the presence of gap junctions or ionic diffusion, neuromodulation of channel gating, the mechanism for habituative or depressive synapses, axonal delays, and synaptic plasticity. KInNeSS outputs include compartment membrane voltage, spikes, local-field potentials, and current source densities, as well as visualization of the behavior of a simulated agent. An explanation of the modeling philosophy and plug-in development is also presented. Further development of KInNeSS is ongoing with the ultimate goal of creating a modular framework that will help researchers across different disciplines to effectively collaborate using a modern neural simulation platform.
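For orientation, the kind of equation each compartment integrates can be sketched as a single conductance-based membrane update. This is illustrative Python, not KInNeSS itself (KInNeSS is a C++/KDE application), and the parameter values are placeholders.

```python
# Minimal sketch of the membrane equation a single compartment integrates:
#   C dV/dt = -g_leak*(V - E_leak) - g_syn*(V - E_syn)
# Illustrative Python only; parameter values are placeholders.
C, g_leak, E_leak = 1.0, 0.1, -65.0          # capacitance, leak conductance, leak reversal
g_syn, E_syn = 0.05, 0.0                     # excitatory synaptic conductance and reversal
dt, steps = 0.1, 2000                        # ms

V = E_leak
for _ in range(steps):
    I_ion = -g_leak * (V - E_leak) - g_syn * (V - E_syn)
    V += dt / C * I_ion                      # forward-Euler integration
print(round(V, 2))                           # settles between E_leak and E_syn
```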
IEEE Pulse | 2012
Heather Ames; Ennio Mingolla; Aisha Sohail; Benjamin Chandler; Anatoli Gorchetchnikov; Jasmin Léveillé; Gennady Livitz; Massimiliano Versace
The researchers at Boston University's (BU) Neuromorphics Laboratory, part of the National Science Foundation (NSF)-sponsored Center of Excellence for Learning in Education, Science, and Technology (CELEST), are working in collaboration with the engineers and scientists at Hewlett-Packard (HP) to implement neural models of intelligent processes for the next generation of dense, low-power computer hardware that will use memristive technology to bring data closer to the processor where computation occurs. The HP and BU teams are jointly designing an optimal infrastructure, simulation, and software platform to build an artificial brain. The resulting Cog Ex Machina (Cog) software platform has been successfully used to implement a large-scale, multicomponent brain system that is able to simulate some key rat behavioral results in a virtual environment and has been applied to control robotic platforms as they learn to interact with their environment.
International Symposium on Neural Networks | 2011
Zlatko Vasilkoski; Heather Ames; Ben Chandler; Anatoli Gorchetchnikov; Jasmin Léveillé; Gennady Livitz; Ennio Mingolla; Massimiliano Versace
In the foreseeable future, synergistic advances in high-density memristive memory, scalable and massively parallel hardware, and neural network research will enable modelers to design large-scale, adaptive neural systems to support complex behaviors in virtual and robotic agents. A large variety of learning rules has been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. A generalized, parameterizable form for many of these rules is proposed in a satellite paper in this volume [1]. Implementing these rules in hardware raises a concern about the stability of the memories they create when learning proceeds continuously, which affects performance in a network controlling freely behaving agents. This paper can serve as a reference document: it concisely summarizes, in a uniform notation, the stability properties of the rules covered by the general form in [1].
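The stability concern can be illustrated numerically with a standard textbook contrast: under continuous learning, plain Hebbian updates grow without bound, while a decay-stabilized (Oja-style) variant stays bounded. This sketch is not the paper's own analysis, and the learning rate and input statistics are assumptions.

```python
import numpy as np

# Stability illustration: plain Hebb diverges under continuous learning,
# while Oja's decay-stabilized rule keeps the weight vector bounded.
rng = np.random.default_rng(0)
w_hebb = np.array([0.1, 0.1])
w_oja = w_hebb.copy()
lr = 0.05
for _ in range(500):
    x = rng.normal(size=2)                               # stream of random inputs
    y_h, y_o = w_hebb @ x, w_oja @ x
    w_hebb = w_hebb + lr * y_h * x                       # plain Hebb: unbounded growth
    w_oja  = w_oja  + lr * y_o * (x - y_o * w_oja)       # Oja: built-in weight decay
print(np.linalg.norm(w_hebb), np.linalg.norm(w_oja))     # very large vs. roughly 1
```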
BMC Neuroscience | 2010
Anatoli Gorchetchnikov; Massimiliano Versace; Heather Ames; Ben Chandler; Arash Yazdanbakhsh; Jasmin Léveillé; Ennio Mingolla; Greg Snider
The DARPA Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) initiative aims to create a new generation of high-density, low-power consumption chips capable of replicating adaptive and intelligent behavior observed in animals. To ensure fast speed, low power consumption, and parallel learning in billions of synapses, the learning laws that govern the adaptive behavior must be implemented in hardware. Over the past decades, a multitude of learning laws has been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. In order to implement as many of these laws as possible on the hardware, some general and easily parameterized form of learning law has to be designed and implemented on the chip. Such a general form would allow instantiation of multiple learning laws through different parameterizations without rewiring the hardware. From the perspectives of usefulness, stability, homeostatic properties, and spatial and temporal locality, this project analyzes four categories of existing learning rules: (1) Hebb rule derivatives with various methods for gating learning and decay; (2) threshold rule variations, including the covariance and BCM families; (3) error-based learning rules; and (4) reinforcement rules. For each individual category a general form that can be implemented in hardware was derived. Even more general forms that include multiple categories are further suggested.
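The last two categories can be sketched in the same spirit as the Hebbian and threshold examples above. The one-line updates below are standard textbook forms (a delta/LMS error rule and a reward-modulated Hebbian rule), not the general hardware forms derived in this project, and all values are illustrative.

```python
# Illustrative one-line forms for the error-based and reinforcement categories.
# Textbook updates only; not the general hardware forms derived in the project.
lr = 0.01
x, y, target, reward = 0.8, 0.3, 1.0, 1.0
w = 0.2

w_error = w + lr * (target - y) * x          # error-based (delta/LMS) rule
w_reinf = w + lr * reward * y * x            # reward-modulated Hebbian rule
print(w_error, w_reinf)
```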
International Conference on Neural Information Processing | 2014
Kunihiko Fukushima; Isao Hayashi; Jasmin Léveillé
The neocognitron is a hierarchical, multi-layered neural network capable of robust visual pattern recognition, and it acquires the ability to recognize visual patterns through learning. Winner-kill-loser is a competitive learning rule recently shown to outperform standard winner-take-all learning when used in the neocognitron for a character recognition task. In this paper, we improve on the winner-kill-loser rule by introducing a third threshold in addition to the two used in the original version. It is shown theoretically, and also by computer simulation, that the use of a triple threshold makes the learning process more stable. In particular, a high recognition rate can be obtained with a smaller network.
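As a rough sketch of the competitive logic behind winner-kill-loser, the following shows a best-matching cell learning while redundant losing cells are removed. The similarity measure, threshold values, and how the paper's third threshold is used are placeholders, not the published rule.

```python
import numpy as np

# Rough sketch of winner-kill-loser competitive learning: the best-matching
# cell learns, and losers that nevertheless responded strongly are removed.
# Thresholds and the similarity measure are placeholders, not the paper's.
def wkl_step(weights, x, lr=0.5, theta_win=0.3, theta_kill=0.25):
    sims = weights @ x / (np.linalg.norm(weights, axis=1) * np.linalg.norm(x) + 1e-9)
    winner = int(np.argmax(sims))
    if sims[winner] < theta_win:                        # no good match: recruit a new cell
        return np.vstack([weights, x])
    weights[winner] += lr * (x - weights[winner])       # winner moves toward the input
    losers = [i for i, s in enumerate(sims) if i != winner and s > theta_kill]
    return np.delete(weights, losers, axis=0)           # "kill" redundant losing cells

cells = np.array([[1.0, 0.0], [0.9, 0.1]])
cells = wkl_step(cells, np.array([1.0, 0.05]))
print(cells.shape)                                       # one redundant cell removed
```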
Neural Computation | 2013
Jasmin Léveillé; Thomas Hannagan
Convolutional models of object recognition achieve invariance to spatial transformations largely because of the use of a suitably defined pooling operator. This operator typically takes the form of a max or average function defined across units tuned to the same feature. As a model of the brain's ventral pathway, where computations are carried out by weighted synaptic connections, such pooling can lead to spatial invariance only if the weights that connect similarly tuned units to a given pooling unit are of approximately equal strengths. How identical weights can be learned in the face of nonuniformly distributed data remains unclear. In this letter, we show how various versions of the trace learning rule can help solve this problem. This allows us in turn to explain previously published results and make recommendations as to the optimal rule for invariance learning.
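A minimal sketch of the trace-rule idea: the pooling unit learns from a low-pass temporal trace of its own activity, so a feature swept across positions in successive frames strengthens weights to all of those positions. The parameters, the exact trace form, and the three-position toy input are illustrative assumptions, not the specific rule variants compared in the paper.

```python
import numpy as np

# Trace-rule sketch: the Hebbian update is gated by a temporal trace of the
# postsynaptic activity rather than its instantaneous value. Illustrative only.
lr, eta = 0.1, 0.8                        # learning rate, trace persistence
w = np.array([1.0, 0.0, 0.0])             # unit initially tuned only to position 0
trace = 0.0
# the same feature appears at positions 0, 1, 2 in consecutive time steps
for x in (np.eye(3)[0], np.eye(3)[1], np.eye(3)[2]):
    y = w @ x                             # instantaneous response
    trace = eta * trace + (1 - eta) * y   # low-pass trace of postsynaptic activity
    w += lr * trace * x                   # presynaptic input gated by the trace
print(w)   # w[1], w[2] > 0; with plain Hebb (y instead of trace) they would stay 0
```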